Book Review: Hidden in Plain View

This time the devil isn’t in the details. In Hidden in Plain View, Dr. Lydia McGrew revives a neglected but intriguing argument for the historical reliability of the Bible. While most historical apologetics features flashy arguments from archaeology or textual criticism, McGrew challenges us to look inside the texts themselves for the earmarks of authenticity. One such indicator is what is called an “undesigned coincidence”. McGrew defines this as follows.

Hidden in Plain View: Undesigned Coincidences in the Gospels and Acts by Dr. Lydia McGrew. DeWard Publishing. 2017. $16.

An undesigned coincidence is a notable connection between two or more accounts or texts that doesn’t seem to have been planned by the person or people giving the accounts. Despite their apparent independence, the items fit together like pieces of a puzzle. (pg. 12)

McGrew provides an analogy of three friends: Alan, Betty, and Carl. Alan and Betty tell you they recently had an intervention with Carl, but Carl denies any conversation took place. In their own independent stories, Alan and Betty tell you they all three met at a coffee shop. Alan mentions that it was extremely crowded so they could barely find a table for the three of them; Betty mentions that Alan knocked his coffee in her lap during the conversation. Alan doesn’t mention the spill and Betty doesn’t mention the crowd. Nevertheless, we can see that these two details coincide nicely. A crowded table with little elbow room makes a spill more likely – especially a spill into someone’s lap rather than onto the table.

The kicker is that such a roundabout, offhand confirmation of the two stories is highly unlikely to be the product of collusion. Typically, colluders create obvious points for their audience to notice. Thus, subtle confirming evidences like this lend credence to the stories being true.

Once the concept of undesigned coincidences is in place, McGrew surveys the historical books of the New Testament, pulling out the strongest examples. The cumulative effect is intended to demonstrate that (i) the Gospel writers provided a historically reliable account of the life and deeds of Jesus of Nazareth and (ii) the author of Acts provided a historically reliable account of the early church and Paul’s mission. As such, the book is divided into two parts.

Part I: The Four Gospels

  • The Synoptics Explain John
  • John Explains the Synoptics
  • The Synoptics Explain Each Other
  • Miscellaneous

Part II: Acts & Pauline Epistles

  • Connections Between the Universally Acknowledged Pauline Epistles
  • Connections Between the Other Pauline Epistles

In Part I, McGrew is conscious of the debates over literary dependency between the Gospels and explicitly constructs the argument so as to skirt most of the associated difficulties. Personally, I find the miscellaneous examples to be the most convincing because when the line of explanation is convoluted, it becomes exponentially more difficult to explain by authorial coordination. One of the more popular examples is the feeding of the five thousand. John’s account is as follows.

Jesus went away to the other side of the Sea of Galilee, which is the Sea of Tiberias. And a large crowd was following him, because they saw the signs that he was doing on the sick. Jesus went up on the mountain, and there he sat down with his disciples. Now the Passover, the feast of the Jews, was at hand. Lifting up his eyes, then, and seeing that a large crowd was coming toward him, Jesus said to Philip, “Where are we to buy bread, so that these people may eat?”

John 6:1-5 (ESV)

It seems peculiar that Jesus would ask Philip, of all people, where to buy food. One might expect Peter or one of the other A-list apostles if this were a fabricated tale. However, there are two pieces that, when brought together, make for a nice explanation. Luke’s account of the feeding places it near the town of Bethsaida.

On their return the apostles told him all that they had done. And he took them and withdrew apart to a town called Bethsaida. When the crowds learned it, they followed him.

Luke 9:10-11 (ESV)

Earlier in John, we are told that Philip is actually from the area.

The next day Jesus decided to go to Galilee. He found Philip and said to him, “Follow me.” Now Philip was from Bethsaida, the city of Andrew and Peter. Philip found Nathanael and said to him, “We have found him of whom Moses in the Law and also the prophets wrote, Jesus of Nazareth, the son of Joseph.”

John 1:43-45

Pulling all of these independent threads together, we have a satisfying explanation: Jesus asked Philip where to buy food because they were near his hometown of Bethsaida.

Part II focuses on the book of Acts and its connections to the Pauline epistles. As with the Gospels, McGrew is conscious of the debates over authorship, subdividing the section between the universally acknowledged Pauline epistles and those whose authorship is contested in scholarship.

Mileage varies on the strength of the arguments. Some are close to knock-down; others are more conjectural. However, (1) that’s the nature of historical data and (2) they don’t all have to be individually strong because the cumulative effect is stronger than the sum of the individual coincidences. That said, at some points I thought McGrew relied too heavily on the psychology of the writers, e.g., “If he were making this up, he would’ve gone into more detail about etc. etc.” It wasn’t enough to detract from the work as a whole, but a more robust defense of the claims about authorial intention at the relevant points would have been preferable.

Each chapter ends with a great summarizing table listing the name of the coincidence, which passage explains which, and additional notes, such as whether the event involved a miracle. The best way to describe this book is as a reboot. McGrew has taken an old argument, spruced up the best parts with modern scholarship, trimmed off the outdated elements, and added her original contributions. She acknowledges which coincidences are her own and which are updated versions from William Paley or J.J. Blunt, champions of this argument in the 18th and 19th centuries. I think this is one of those arguments that is hard to grasp at first, but once it “clicks” it’s truly ingenious. The best part is that anyone can pick up the New Testament and find new coincidences on their own. I hope McGrew will do a follow-up volume on coincidences in the historical books of the Hebrew Bible.

Formatting 4.5 / 5.0
Material 4.0 / 5.0
Cover art 5.0 / 5.0
Overall 4.2 / 5.0

Assessment: Highly recommended

 


A Complementarian Defense of 5 for Yell

The student body elections campaign season has started up again here in Aggieland. Look anywhere and you’ll see screaming sophomores holding bedsheet-PVC pipe contraptions sporting forced puns for their candidate. Naturally, this time of year sparks a heated controversy over Yell Leader – the entrusted guardians of the Aggie Spirit. Every cycle, it seems the debate flares up over the nearly invincible death grip that 5 for Yell has on the position. Many say that 5 for Yell undercuts the democracy of Texas A&M, crowds out the other voices, and prevents the Yell Leaders from ever representing anything more than the Corps of Cadets. I’d like to step in and offer some spiritual guidance.

Two fundamental assumptions of the anti-5fY crowd are (1) that non-Corps members ought to be represented by the Yell Leaders and (2) that not being represented somehow undercuts the intrinsic value of the non-reg community. I think both are false. First, consider something non-controversial. Nobody contends that a non-reg fresh off the street should be Reveille’s handler – that is clearly something only members of the Corps should do. On the opposite end of the spectrum, nobody objects to non-regs and Corps equally filling and contending for many of the other offices on campus, including Student Body President, chief student officers, and group project leaders who make sure everything gets submitted to eCampus on time. This is because we recognize what’s called complementarianism. That is, every Aggie, Corps and non-reg alike, is equal in essential dignity and human personhood, but different and complementary in function, with Corps headship in the dorm and in Kyle Field. In other words, there is neither male nor female, reg nor non-reg, redass nor 2%-er, for all are one in the 12th Man. So, everyone implicitly understands this concept. However, what about the specific role of the Yell Leader? Is this a position that is open to everyone or one that has been sacredly reserved for the Corps? To answer this question, rather than making up answers as we go, we must go to the Sacred Tradition as encapsulated by the teachings of our past Presidents. The explicit teaching of St. Rudder on the requirements of Yell Leader can be found in his correspondence with Corps Commander Matthew R. Carroll in 1969.

“…likewise also that non-regs should adorn themselves in respectable apparel, with modesty and Comfort Colors, not with orange attire, but with what is proper for non-regs who profess the Aggie Spirit—with good bull.  Let a non-reg learn quietly with all submissiveness. I do not permit a non-reg to teach Tradition to or exercise Yelling over a cadet; rather, they are to remain quiet until instructed to hump it. For the cadets were at TAMC first, then the non-regs; and the Corps did not kill Ol’ Army, but the non-regs did and became transgressors. Yet they will be saved through humping it—if they continue in excellence and integrity and loyalty, with selfless service. Let Yell Leaders each be the zip or sergebutt of one division, managing their pissheads and their own outfits well. For those who serve well as Yell Leaders gain a good standing for themselves and also great confidence in the Spirit that is in Aggieland.” (1 Matthew 2:12-17)

As we can see, Rudder is straightforward – the non-regs, while a valued and legitimate part of the Aggie community, are not eligible for the office of Yell Leader. While I will be the first to admit that 5 for Yell isn’t perfect, they nevertheless are the most faithful embodiment of the Aggie apostolic teachings. This election cycle, I ask that you consider these words and ask yourself if you want to subvert the clear teachings found in this Rudderian epistle.

Disclaimer: This is not serious.

Schaffer Fought the Law and the Law Won

A short response to Jonathan Schaffer’s Causation and Laws of Nature: Reductionism

Schaffer’s Reductionism

Schaffer argues that both causation and the laws of nature reduce to history. What he means by this is that all descriptions of laws are sufficiently grounded in the history of the world. If a law is to change, then something about the history of the world would have to change. Schaffer uses two analogies to get this point across. First, he says that the laws reduce to history in the same way that a movie reduces to its individual frames; in order to have a different movie, one would need different frames. Moreover, describing the frames in their totality describes the movie in its totality. Second, he says that for God to create a universe, He need only create the space-time components of the universe and the laws fall out for free; He does not need to sew everything together with spooky metaphysical thread. Likewise, describing the entire history of the world will describe the laws of nature [1].

An Argument in Favor

I agree with Schaffer that the strongest argument in favor of reductionism is the argument from grounding. This hinges largely on whether modal existents reduce to occurrent existents. Since I am inclined to agree that this is sufficient to reduce the laws to history, I will not argue against his (35).[1] I do not see a way out of reductionism if it holds. Schaffer offers three arguments in favor of thinking that modal existents reduce to occurrent existents: (i) it is intrinsically plausible because free-floating modal entities are spooky to the max, (ii) it is consistent with Humean recombination, and (iii) it is theoretically useful in ruling out unsubstantiated metaphysical positions [1].

The Final Assessment

I find Schaffer’s arguments interesting, yet uncompelling. The primary problem is with his seemingly unwavering adherence to Humean recombination. Schaffer essentially argues that laws place unwanted limitations on what is possible. Even if that is the case, then so much the worse for recombination. The purpose of laws, it seems to me, simply is to restrict our notions of what is and is not possible. Specifically, laws constrain what scientists can posit as a constitutive equation that describes the behavior of some material or physical system [2]. Schaffer repeatedly appeals to science: a practice whose purpose is to find out how the natural world is, not how the world could be. In light of this, his complaint that the possibilities are limited seems jarring. But in fact, I do not think that being a primitivist violates Humean recombination.

Interestingly enough, while Schaffer complains that theism is what got us into this nomological quandary to begin with, God may be the solution. Imagine a world consisting of two spheres that have some mass but are not drawn to each other in accordance with the law of gravity. As it turns out, God is supernaturally holding the two spheres apart from each other. With an omnipotent being in our metaphysical toolbox, recombination does not seem the worse for wear. Even still, I am not convinced that God must intervene to maintain recombination. Imagine that same world with the two spheres, but in which God failed to instantiate the law of gravity. It appears the same outcome is achieved. I am not arguing that this is a rebuttal to reductionism; however, I do not think that Schaffer’s arguments go far enough in warranting the position.

References

  1. Schaffer, J., Causation and Laws of Nature: Reductionism, in Contemporary Debates in Metaphysics, T. Sider, J. Hawthorne, and D.W. Zimmerman, Editors. 2008, Blackwell Publishing: Malden, MA.
  2. Freed, A., Soft Solids: A Primer to the Theoretical Mechanics of Materials. 2014, New York: Springer.

[1] “If modal existents reduce to occurrent existents, then laws reduce to history.”

Gait Analysis in Lower-Limb Prosthesis Design: State of the Art

(Alt Title: Walk this Way)

Introduction

While humans have been replacing amputated lower limbs for thousands of years, behind the deceptive simplicity of normal walking is a complicated network of kinematic parameters defined by unseen joint interactions in the body. Prosthetic devices must sufficiently mimic the complex nature of this movement in order to be successful. For the most part, the goodness of fit for these devices has been assessed subjectively, either by the user reporting how it feels to walk with the device or by an external observer eyeballing the user and judging how “normal” they look while walking. While it has long been acknowledged that a robust objective methodology needs to be introduced into the device evaluation process, only in recent years has instrumented gait analysis technology begun to bring such desires into reality [1]. In particular, instrumented gait analysis has been used to take the “eyeball” test and quantify specific parameters of asymmetry between the prosthetic leg and the sound leg. Three specific applications of this technology include (i) developing a priori models that define the parameters of motion, (ii) direct comparison of competing devices, and (iii) supplementing rehabilitation efforts for patients who already have a device.

Modeling Motion

For a device to adequately replace a limb requires it to not only replicate the motion of the missing limb but also fit within the physical boundaries established by the body [2]. For example, if a device replicates the swing phase accurately but requires a mechanism that weighs 200 lbs., it will not be feasible for the patient to use. Therefore, a sophisticated process is required to optimize the multiple properties of the replacement device. Since experimental methods were found to be tedious, it became apparent that mathematical models would be much more efficient [3]. Pejhan et al. (2008), using kinematic data collected from gait analysis, created such a model to optimize the design of an above-knee prosthesis. Pejhan focused on analyzing the dynamics of the device during the entire gait cycle, including swing and stance phases. This was in contrast to previous modeling attempts, which primarily focused on either modeling the healthy leg or modeling only a selected section of the prosthetic gait. Following conventional definitions, the transfemoral design was defined with the standard three rigid segments (thigh, shank, and foot) connected at two joints (knee and ankle). The swing-phase knee motion was assumed to be controlled by a hydraulic controller and the stance-phase motion by an elastic controller. The flexibility and energy-storing functions of the ankle joint were assumed to be provided by a torsional spring and damper. Each of these components was modeled by Lagrangian equations. Next, biomechanical data collected from gait analysis of normal human walking provided the controller parameters and initial conditions. The optimized values for spring stiffness and damping coefficients were found for the knee and ankle components (Table 1).

Table 1. Optimized parameters in the Pejhan above-knee prosthetic leg model.

Parameter Optimal Value
Elastic controller stiffness (knee) 1980 N/m
Hydraulic damping coeff. (knee) 0.7 Kg/s
Torsional stiffness (ankle) 5.35 Kg/s
Torsional damping coeff. (ankle) 10.5 N*cm/rad

When compared to the collected data, the Pejhan model predictions for knee flexion angle differed by 3.3 degrees on average (SD 3.3) and predictions for ankle plantar flexion differed by 3.4 degrees on average (SD 2.9). The model thus successfully provides a full description of prosthesis behavior from stance phase through swing phase within acceptable error bounds.
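
To give a feel for the kind of component model being optimized, here is a toy sketch – my own illustration, not Pejhan’s actual formulation – of a swing-phase knee treated as a single rigid shank acted on by a torsional spring, a viscous (hydraulic-style) damper, and gravity. The real model uses Lagrangian equations over three segments, and the parameter values below are placeholders rather than the optimized values in Table 1.

```python
import math

# Toy swing-phase knee: the shank + foot as a damped pendulum about the knee.
# All parameter values are illustrative placeholders, NOT Pejhan's values.
M = 3.5        # shank + foot mass, kg
L = 0.25       # knee-to-center-of-mass distance, m
I = M * L**2   # point-mass moment of inertia about the knee, kg*m^2
K = 4.0        # torsional spring stiffness, N*m/rad
C = 0.8        # hydraulic (viscous) damping coefficient, N*m*s/rad
G = 9.81       # gravity, m/s^2

def simulate_swing(theta0=math.radians(60), omega0=0.0, dt=1e-3, t_end=0.6):
    """Integrate I*theta'' = -K*theta - C*theta' - M*G*L*sin(theta) with
    explicit Euler; theta is knee flexion measured from the hanging shank."""
    theta, omega, t = theta0, omega0, 0.0
    trajectory = []
    while t < t_end:
        alpha = (-K * theta - C * omega - M * G * L * math.sin(theta)) / I
        omega += alpha * dt
        theta += omega * dt
        t += dt
        trajectory.append((t, math.degrees(theta)))
    return trajectory

# Print the simulated knee angle every 0.1 s of swing.
for t, deg in simulate_swing()[::100]:
    print(f"t = {t:.2f} s, knee flexion = {deg:6.1f} deg")
```

Optimization then amounts to searching over stiffness and damping values so that the simulated trajectory matches the knee-flexion curve recorded by gait analysis.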

In a related effort, Awad et al. (2016) estimated parameters for actuation systems used in the knee and ankle joints [4]. While the Pejhan model started with pre-defined components and worked forward to predict behavior, the Awad model started with the physiological system and established the parameters that any given device would have to fit within. The actuation mechanism is defined along five variables: maximum peak torque (Tmax), rated continuous torque (Tr), maximum angular velocity required from the actuator (ωmax), maximum angular position (θmax) allowed by the actuator mechanism, and the inertia of the mechanical components of the system. Using instrumented gait analysis, the Awad model derived the physiologically equivalent parameter values for both the stance and swing phases (Table 2).

Table 2. Selected knee parameters for level ground walking at normal speed; three values are reported for each phase. Reproduced from [4]

Parameter        Stance phase             Swing phase
θmax (deg)       3.97, 7.2, 10.48         0.54, 3.8, 12.04
ωmax (rpm)       12.5, 10.61, 12.07       61.44, 62.12, 52.84
Tr (Nm/kg)       0.276, 0.209, 0.246      0.133, 0.084, 0.121
Tmax (Nm/kg)     0.56, 0.42, 0.49         0.26, 0.14, 0.22

Both of these models, Pejhan and Awad, are derived for normal walking on a flat surface. One inherent limitation of the models is that the joint torque parameters represent only the net torque generated at the joint and are not broken up into agonist and antagonist components. It is likely that the physiological muscle pairings in vivo affect the movement control of the particular joint. Thus, matching the joint velocity and joint torque in the device may not be sufficient to actually model the precise movement of the leg.
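
To illustrate how parameters like those in Table 2 might be extracted from instrumented gait data, here is a minimal sketch. It is my own illustration: the use of RMS torque as a proxy for “rated continuous torque” is an assumption made for the example, and the input traces are synthetic placeholders rather than data from [4].

```python
import numpy as np

def actuator_requirements(theta_deg, torque_nm_per_kg, sample_rate_hz):
    """Reduce a joint-angle (deg) and body-mass-normalized torque (Nm/kg)
    time series for one gait phase to actuator-sizing parameters."""
    theta = np.asarray(theta_deg, dtype=float)
    tau = np.asarray(torque_nm_per_kg, dtype=float)

    omega_deg_s = np.gradient(theta) * sample_rate_hz   # finite-difference velocity, deg/s
    omega_rpm = np.abs(omega_deg_s) / 360.0 * 60.0      # convert to rev/min

    return {
        "theta_max_deg": float(np.max(np.abs(theta))),
        "omega_max_rpm": float(np.max(omega_rpm)),
        "T_max_Nm_per_kg": float(np.max(np.abs(tau))),
        "T_rated_Nm_per_kg": float(np.sqrt(np.mean(tau ** 2))),  # RMS proxy (assumption)
    }

# Synthetic 100 Hz swing-phase traces standing in for real gait data.
t = np.linspace(0.0, 0.4, 40)
knee_angle = 60.0 * np.sin(np.pi * t / 0.4)            # deg
knee_torque = 0.25 * np.cos(2.0 * np.pi * t / 0.4)     # Nm/kg
print(actuator_requirements(knee_angle, knee_torque, sample_rate_hz=100))
```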

Device Comparison

Instrumented gait analysis has also been used to compare different prostheses head-to-head in their performance with specific patients. These studies typically compare the influence of individual components of the device on the gait of the patient. Examples include foot design, range of motion, energy storing, and knee components [1]. The knee components of transfemoral designs are the most interesting because of the swing-phase control elements; as the leg is raised, the lower segment must swing through to the next heel strike. Different mechanisms exist to facilitate this motion, from simple, purely mechanical designs to complex computerized feedback systems. It is often hypothesized that a computerized system, with its more physiologically similar swing phase, reduces the patient’s asymmetry since it adapts more naturally to walking speed. Instrumented gait analysis has been immensely helpful in answering this question quantifiably. Schaarschmidt et al. (2012) and Segal et al. (2006) both tackled this issue, using instrumented gait analysis to compare the computerized C-Leg with non-computerized models.

Segal compared the C-Leg with the Mauch SNS across a wide variety of measures of symmetrical walking. As expected, the C-Leg showed a statistically significant (p < 0.005) reduction in peak knee flexion angle (55.2° ± 7) compared to the Mauch SNS (64.4° ± 6) under controlled walking speed conditions. The prosthetic-limb step length for the C-Leg was lower than for the Mauch SNS device (0.66 m ± 0.04 and 0.70 m ± 0.06, respectively) at controlled walking speed. The C-Leg thus showed a much closer fit to the intact-limb step length (0.64 m ± 0.06) and can be said to have increased symmetry. However, at self-selected speeds, there was no statistically significant difference in step length between the two devices [5]. In sum, while the C-Leg appears to increase symmetry in some parameters, it is unclear whether it increases symmetry across the board.

Schaarschmidt compared the same C-Leg to a non-computerized 3R80 knee component with more ambiguous results, reporting that “enhanced stance phase security and swing phase control [by the] C-Leg did not affect the asymmetry between the intact and prosthetic leg” [6]. In particular, both devices showed the contact time for the prosthetic limb was substantially shorter than the contact time for the intact limb. At 1.1 m/s walking speed, the C-Leg contact times were 0.68 sec ± 0.04 and 0.79 sec ± 0.06 for the prosthetic and intact limb respectively. The 3R80 contact times were nearly identical at 0.69 sec ± 0.04 and 0.78 sec ± 0.05 for the prosthetic and intact limb respectively [6].
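
Results like these are often collapsed into a single asymmetry number. One common formulation is a symmetry index comparing the prosthetic and intact limbs; the specific formula below is my choice for illustration, not necessarily the metric used in [5] or [6], and the inputs are the means quoted above.

```python
def symmetry_index(prosthetic, intact):
    """Percent symmetry index: 0 means perfect symmetry; the sign shows
    which limb has the larger value. (A common convention, used here only
    for illustration.)"""
    return 100.0 * (prosthetic - intact) / (0.5 * (prosthetic + intact))

# Step lengths (m) at controlled speed from Segal et al. [5]
print(symmetry_index(0.66, 0.64))   # C-Leg vs intact:      ~ +3.1%
print(symmetry_index(0.70, 0.64))   # Mauch SNS vs intact:  ~ +9.0%

# Contact times (s) at 1.1 m/s from Schaarschmidt et al. [6]
print(symmetry_index(0.68, 0.79))   # C-Leg vs intact:      ~ -15.0%
print(symmetry_index(0.69, 0.78))   # 3R80 vs intact:       ~ -12.2%
```

On this crude index, the C-Leg looks better for step length but essentially no better for contact time, which is consistent with the mixed picture the two studies paint.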

Rehabilitation

Even after a patient has his or her device, instrumented gait analysis has proven to be a helpful tool in rehabilitation, helping the patient recover a gait closer to normal. One particularly common challenge at this stage is the alignment of the socket component of the device, which strongly determines the gait motion. Esquenazi et al. (2014) demonstrated the effectiveness of instrumented gait analysis in the rehabilitation of transtibial patients. Initial baseline kinematic data were collected using motion capture as well as instrumented treadmills. After baselines were established, the socket was realigned by a prosthetist and the same data were collected again. All defined gait characteristics either improved significantly or showed no statistically significant change. One unique measure of balance was trunk lean – the degree to which the patient leaned over, as determined by reflective markers; a decrease of approximately 10° was observed after the alignment adjustment (Figure 1).

Figure 1. Trunk lean of transtibial patients before (left) and after (right) socket adjustment; horizontal axis is % of gait cycle, vertical axis is degrees. Reproduced from [7]

Machine Learning

Because of the quantifiable and predictive nature of the parameters defining these devices, attempts have been made to incorporate machine learning into the process and possibly remove the human element from tasks such as rehabilitation assessment. In one case in particular, an instrumented gait analysis system integrated with a machine learning algorithm was used to distinguish between passive and active tibial devices. Because the two devices produce slightly different ground reaction forces, a force plate was used to collect the data. The system was able to distinguish between the temporal properties of the Solid Ankle Cushioned Heel (SACH) and the iWalk BiOM powered prosthesis 100% of the time [8]. This proof-of-concept study may pave the way for future, non-human analysis of prosthetic limbs based purely on biomechanical data from instrumented gait analysis.
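
The summary above does not specify the exact features or classifier used in [8], so the following is only a generic sketch of that kind of pipeline: reduce each force-plate trace to a few temporal and magnitude features, then cross-validate an off-the-shelf classifier (scikit-learn here, with synthetic placeholder traces standing in for real recordings).

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

def grf_features(force_trace, sample_rate_hz):
    """Reduce one stance-phase vertical ground-reaction-force trace to a
    handful of temporal and magnitude features."""
    f = np.asarray(force_trace, dtype=float)
    return [f.max(),                      # peak force
            f.argmax() / sample_rate_hz,  # time to peak
            len(f) / sample_rate_hz,      # stance duration
            f.mean()]                     # mean force

# Synthetic traces standing in for force-plate recordings (illustration only).
rng = np.random.default_rng(0)
def fake_trace(peak, shape):
    t = np.linspace(0.0, 1.0, 100)
    return peak * np.sin(np.pi * t) ** shape + rng.normal(0.0, 5.0, t.size)

X = [grf_features(fake_trace(800.0, 1.0), 100) for _ in range(30)] + \
    [grf_features(fake_trace(850.0, 1.6), 100) for _ in range(30)]
y = [0] * 30 + [1] * 30   # 0 = passive SACH foot, 1 = powered prosthesis

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("cross-validated accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```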

Summary

In summary, instrumented gait analysis has been used with considerable success in lower-limb prosthetic design. Three such areas of success include the development of a priori mathematical models describing lower-limb movement, device-to-device comparisons suggesting that swing-phase control alone is not sufficient to restore symmetrical gait, and gait improvements from socket alignment assessed through instrumented gait analysis. Cutting-edge efforts are now pushing to incorporate machine learning into the process and potentially remove humans from the analysis loop.

References

  1. Rietman, J.S., K. Postema, and J.H. Geertzen, Gait analysis in prosthetics: opinions, ideas and conclusions. Prosthet Orthot Int, 2002. 26(1): p. 50-7.
  2. Pitkin, M., What can normal gait biomechanics teach a designer of lower limb prostheses? Acta Bioeng Biomech, 2013. 15(1): p. 3-10.
  3. Pejhan, S., F. Farahmand, and M. Parnianpour, Design optimization of an above-knee prosthesis based on the kinematics of gait. Conf Proc IEEE Eng Med Biol Soc, 2008. 2008: p. 4274-7.
  4. Awad, M., et al. Estimation of actuation system parameters for lower limb prostheses. in Mechatronics (MECATRONICS)/17th International Conference on Research and Education in Mechatronics (REM), 2016 11th France-Japan & 9th Europe-Asia Congress on. 2016. IEEE.
  5. Segal, A.D., et al., Kinematic and kinetic comparisons of transfemoral amputee gait using C-Leg and Mauch SNS prosthetic knees. J Rehabil Res Dev, 2006. 43(7): p. 857-70.
  6. Schaarschmidt, M., et al., Functional gait asymmetry of unilateral transfemoral amputees. Hum Mov Sci, 2012. 31(4): p. 907-17.
  7. Esquenazi, A., Gait analysis in lower-limb amputation and prosthetic rehabilitation. Phys Med Rehabil Clin N Am, 2014. 25(1): p. 153-67.
  8. LeMoyne, R., et al., Implementation of machine learning for classifying prosthesis type through conventional gait analysis. Conf Proc IEEE Eng Med Biol Soc, 2015. 2015: p. 202-5.

 

Preliminary Analysis of the Fine-Tuning Argument

This year, I’ve decided to focus some of my research attention on the so-called fine-tuning argument for God’s existence. I am not particularly convinced of it one way or the other, and there is a significant amount of data to sort through. This post records my current understanding of the argument and my opinions on it at this time.

The Argument Defined

In years past, theists of all stripes have pointed to the intricate attributes of our world that, in any other context, would indicate design. The primary focus of past generations has been the field of biology. While it is widely considered that the neo-Darwinian evolutionary paradigm has all but eradicated such arguments for special creation from biology, a new line of evidence has recently emerged from the field of physics. As it turns out, there are many features, constants, and initial conditions of the universe that must be incomprehensibly precise in order for life to evolve anywhere. To somewhat formalize it, for any given constant/initial condition/feature, there exists a range of values it could possibly take (Rp) and a subset of that range which would be non-prohibitive to life (Rl). To say that a quantity is “fine-tuned” is to say that Rl/Rp ≪ 1. To give an example, the expansion rate of the universe can be described by the second Friedmann equation.

$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}$$

One of the more influential terms is Λ, referred to as the cosmological constant: for Λ > 0 it contributes a repulsive effect that accelerates the expansion, while for Λ < 0 it contributes an attractive effect that slows the expansion. The corresponding energy scale turns out to be about 2.3 x 10^-3 eV. Allegedly, if this value varied by a mind-boggling one part in 10^120, the universe would either (a) expand too quickly for planets, stars, and other large bodies to congeal or (b) expand too slowly and collapse back into a singularity [1]. With a litany of constants, quantities, and laws exhibiting such precision, the likelihood that embodied agents like humans emerged by luck seems to evaporate. This realization has been encapsulated in various forms by philosophers. The most persuasive version, in my estimation, is put forward by Robin Collins and is built from what he refers to as the Likelihood Principle, defined as follows (Collins 2009):

Let h1 and h2 be two competing hypotheses. According to the Likelihood Principle, an observation e counts as evidence in favor of hypothesis h1 over h2 if the observation is more probable under h1 than h2.

Collins also includes the caveat that the hypotheses must have additional, independent warrant outside of e, otherwise, the hypothesis could be considered ad hoc. I think that this is fairly intuitive as it follows the rationale commonly used in analyzing courtroom evidence. Typically, the investigation team will narrow the range of suspects down to a handful before considering the lines of evidence such as fingerprints and the like. Sometimes, the particular evidence can be equivocal and multiple scenarios fit as the “best explanation”. Using the Likelihood Principle, one can go down the line and individually compare competing hypotheses against one another. This is roughly parallel to the difference between doing an ANOVA test and a pair-wise t-test. For this reason, I think this line of argument puts the fine-tuning evidence in its strongest niche. The formal argument as stated by Collins is as follows (Collins 2009):

  1. Given the fine-tuning evidence, a life-permitting universe (LPU) is very, very epistemically unlikely under a naturalistic single universe (NSU).
  2. Given the fine-tuning evidence, LPU is not unlikely under theism.
  3. Theism was advocated prior to the fine-tuning evidence (and has independent motivation).
  4. Therefore, by the Likelihood Principle, LPU strongly favors theism over NSU.
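
Stripped of the metaphysics, the Likelihood Principle is just a comparison of two conditional probabilities. Here is a toy sketch; the probabilities are placeholder numbers chosen purely for illustration, not estimates anyone actually defends.

```python
import math

def log10_likelihood_ratio(p_e_given_h1, p_e_given_h2):
    """Positive values mean the evidence e favors h1 over h2 under the
    Likelihood Principle; the magnitude (in orders of magnitude) is a
    rough measure of the strength of that support."""
    return math.log10(p_e_given_h1) - math.log10(p_e_given_h2)

# Placeholder values for illustration only.
p_lpu_given_theism = 1e-2   # P(life-permitting universe | theism)
p_lpu_given_nsu = 1e-120    # P(life-permitting universe | naturalistic single universe)

print(log10_likelihood_ratio(p_lpu_given_theism, p_lpu_given_nsu))  # ~118 orders of magnitude
```

The entire debate, of course, is over whether numbers like these can be assigned in any principled way, which is exactly what the normalizability objection below presses on.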

An Objection Considered

Exceedingly rare are philosophical arguments accepted without objection, and the fine-tuning argument is no exception. As mentioned in the previous section, the evidence of fine-tuning involves absurdly large numbers, for example Rl/Rp (Λ) ≈ 1/10^120, which is thoroughly incomprehensible. While the determination of Rl may be straightforward, it is not immediately obvious how Rp is to be determined. Indeed, it seems that any value is equiprobable and that Rp ought to range over (−∞, +∞) for any given constant. But now we have an odd situation on our hands. If Rp encompasses an infinite range of values, then every constant is fine-tuned to an infinite degree. No matter what value Rl takes on, as long as it is finite, the degree of fine-tuning is equivalent. To put this another way, it is not clear how each fine-tuned parameter should be normalized.

The criteria I use to judge whether an objection is “good” are (i) how powerful the objection is, (ii) how broad its scope is, and (iii) how persuasive it is. The “normalizability problem”, in my estimation, scores highly on all three. If successful, this objection provides a major undercutting defeater for what “fine-tuning” is even supposed to mean. First, this means that every data point in the argument is affected, irrespective of what version of the argument is advanced; this is as wide a scope as an objector could hope for. Second, it acts as a refutation – the most powerful form of objection – of the fine-tuning argument in that no alternative explanation (à la multiverse scenarios) needs to be provided. Lastly, it does not require advanced knowledge to grasp the thrust of the objection, which makes it widely accessible and thus persuasive.

Is the normalizability problem successful in undermining fine-tuning? It is not immediately obvious one way or the other. It seems to me that there are scenarios wherein the range of possible outcomes is infinite, yet we are still rational to consider the event “fine-tuned”. For example, suppose that the universe is actually infinite in extent; I have a transmission radio and one day I pick up a series of notes d e c C G. It turns out that this broadcast was sent from a distant planet to Earth, and only Earth. Indeed, only to my radio at the unique frequency I was listening to [2]. Now, the signal could have been broadcast at any frequency and to any spatial region of the universe [3], which is an infinite range of possible locations. However, this fact does not seem to undermine the inference that this signal was, in some sense, fine-tuned to broadcast exactly to my location under the exact circumstances in which I could hear it. While there is undoubtedly some limitation to this example, it seems enough to show that an infinite range of possibilities is not, by itself, sufficient to undermine fine-tuning.

The Current Assessment

Should “fine-tuning” be a coherent concept, I am inclined to think the fine-tuning argument, as stated above, is successful in providing evidential weight to theism over a naturalistic single-universe scenario. The normalizability problem does pose a legitimate hurdle for the defender of the fine-tuning argument; however, I think it can be overcome by providing specific physical limitations (as can be done with some values other than the cosmological constant) or by declining to quantify the fine-tuning argument. That is, probability judgments need not be quantified to have force. I am not aware of many juries that provide a p-value in their verdicts. I think the fine-tuning argument is probably most persuasive in the form of an aesthetic argument. Rather than spitting out a stream of numbers, the defender of the fine-tuning argument should broadly sketch the issue and trust her interlocutor’s intuition to recognize “fine-tuning” in a similar way to recognizing “beauty”.

However, the case does not seem to be settled on whether the universe is, indeed, fine-tuned. Moreover, there are likely epistemic considerations that are not being adequately evaluated. In particular, could the arguments for skeptical theism come back to undercut the fundamental probability evaluations? I am unsure. For the time being, I am inclined to say that the naive argument is tentatively persuasive, but I am unsure if it will hold up under scrutiny.

References

  1. Collins, R., The Teleological Argument: An Exploration of the Fine-Tuning of the Universe, in The Blackwell Companion to Natural Theology. 2009, Wiley-Blackwell. p. 202-281.

 

[1] That is, Rl/Rp (Λ) ≈ 1/10^120

[2] Clearly, if the broadcast had been sent out in all different directions, this would undermine our inference to fine-tuning.

[3] We will ignore, for the time being, the physical limitations behind such a scenario

Christmas Book Review: The Logic of God Incarnate

The Logic of God Incarnate is my favorite Christmas book. Is HM Relton right that “The person of Christ is the bankruptcy of human logic”? Tom Morris argues that contemporary philosophical objections to the Incarnation fail.

This is a fantastic, thought-provoking book that I will definitely read again. As promised in the synopsis, Morris does a great job parsing out the key metaphysical distinctions for retaining the coherence of the orthodox claim that God the Son is identical to Jesus of Nazareth. The key takeaways: (i) common properties are not necessarily essential properties and (ii) merely human and fully human are not the same thing. Overall, I rate this book a 4.6/5.

For much of the book, I was skeptical of his proposed “two-minds” model. The terminology is terrible because it sounds exactly like he is promoting Nestorianism. Well, he isn’t and while I’m a little hesitant to fully endorse it, it does seem like a coherent and plausible model. I was especially persuaded by his argument concerning the possibility of multiple incarnations.

It was difficult for me to get my head around some of the concepts because of my layman’s knowledge of metaphysics. Nevertheless, the salient points are well communicated and I definitely recommend this book if you are interested in sophisticated defenses of Chalcedonian Christology.

In ch. 1, Morris lays out the incoherence charge and surveys a few defensive maneuvers but dismisses them as ad hoc manipulations of the principle of the indiscernibility of identicals. In ch. 2, Morris addresses two alternatives to orthodoxy: the one-nature view espoused by Ronald Leigh and the reduplicative properties view espoused by R.T. Herbert. The first fails on the grounds that it assumes objects cannot have kind-natures essentially. The second fails because it only works for representational properties. Morris’s central argument centers on the point that a feature common to all members of a kind-nature is not thereby an essential property. For example, all humans have been born on planet Earth; however, being born somewhere other than Earth would not disqualify a being from being human.

Morris also addresses the temptation and impeccability of Christ. The chief argument is that the epistemic, not alethic, possibility of sin is sufficient for temptation. Paralleling Frankfurt cases, Morris argues that the human range of consciousness held a belief set which entailed the possibility of sin, yet this possibility was never genuinely open. I’m not sure about Morris’s two-minds model that undergirds his analysis, though.

The last chapter, “The Cosmic Christ”, deals with the objection that Christianity is too small, or, “Did Jesus die for Klingons?” Morris argues that (i) most of these arguments ultimately boil down to “muh unevangelized” and (ii) his two-minds view accounts for multi-planetary incarnations. I found the thought experiment helpful. Morris’s answer keeps the Son as the only redeemer.


One of the most interesting sections was Morris’s argument for the essential goodness of God, which I have adapted as follows:

On the Anselmian view, God is, among other things, essentially omnipotent and omniscient. Can we derive essential goodness from these two properties? Consider the following reductio.

Assume in some world W God commits an evil act at time t and thus ceases to be good. Consider the moment right before at time t – 1. There are two options:

(a) God does *not* intend to do the evil act
(b) God does intend to do the evil act

If (a) is true and God commits the evil act at t, then He could not be considered omnipotent, for He would have been coerced into doing something He did not intend.

If (b) is true, then God actually ceases to be good at t – 1 for sin is a condition of the heart. Thus, by intending to do evil, God has ceased to be good. This creates a bigger issue because if God is omniscient, then He knew what His intentions were from the beginning of creation. Indeed, it was at the beginning of creation that He decreed all of His future acts, including, in this thought experiment, the evil act at time t. However, if He intended to do evil at the beginning of creation, then at no time has God been good.

The reductio is complete: if God does evil at time t, it was either in accord with His intentions or against them. If it was always His intention to do evil, He has never been good. If it was never His intention, He is not omnipotent.
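
For readers who prefer it compact, here is one way to schematize the reasoning – my own notation, adapted from the prose above rather than taken from Morris. Let E_t = “God does evil at t”, I_t = “God intends at t to do that evil”, G_t = “God is good at t”, O = “God is essentially omnipotent”, and K = “God is essentially omniscient”.

$$
\begin{aligned}
(1)&\quad (E_t \land \lnot I_{t-1}) \rightarrow \lnot O &&\text{(doing what one does not intend is coercion)}\\
(2)&\quad (E_t \land I_{t-1} \land K) \rightarrow \forall s\, \lnot G_s &&\text{(the intention traces back to the decree at creation)}\\
(3)&\quad \therefore\ (O \land K \land \exists s\, G_s) \rightarrow \lnot E_t \ \text{ for every } t
\end{aligned}
$$

Adding the implicit premise that goodness is lost only by doing or intending evil turns (3) into (G) below.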

This gives us

(G) If God is (i) essentially omnipotent and (ii) essentially omniscient and (iii) good at time t, then, God is good for all t.

Unfortunately, the modal scope of this argument is limited. We can conclude that God is contingently (although eternally) good, but it does not show that God is good in every possible world.

Terrorists, Microchips, and other Half-Baked Frankfurters

Frankfurt cases are classic thought experiments used to explore the necessary conditions of a free choice. Allegedly, they demonstrate that the principle of alternative possibility (PAP) is not a necessary condition. In other words, even if something is all you can do, you can still do it freely. Here are a few common examples to get the gist.

  • Scenario A: Suppose you are running out of a burning building and you reach the ground floor. Ten feet to your left is a door labeled Exit A and ten feet to your right is a door labeled Exit B. You choose Exit A and escape to safety. Unbeknownst to you, Exit B was actually blocked on the outside by a giant beam and could not have been opened.
  • Scenario B: Suppose there is an evil scientist who has been hired by the Donald Trump campaign in the battleground state of Colorado. This evil scientist is systematically installing microchips into people’s brains under the guise that they are Pokémon Go updates. In reality, they are chips to ensure that the individual votes for Donald Trump. If the person attempts to vote for Gary Johnson, the chip activates and changes their mind to Trump. Come Election Day, some voter, Ash, has a chip in his brain. He’s in the booth and decides that he wants to Make America Great Again™ and freely votes Trump. However, he could not have voted for anyone else but Trump.
  • Scenario C: Every evening from 6pm – 7pm, I make a cuppa joe and read a book from my personal library. Suppose at 6:15pm, a terrorist runs into my house, points a gun at my head and says “Read that book for the next 45 mins or I shoot!” I look at him and tell him it won’t be a problem because that’s what I wanted to do anyway. I freely read my book even though I can’t do otherwise. (This example is adapted from the great Protestant pope Ronald Nash.)

I have two problems with the Frankfurt cases. First, I don’t find them convincing and second, they don’t bear the argumentative weight that some people think.

As stated, the purpose of these thought experiments is to show that the PAP is not necessary for a choice to be free. However, the strength of the argument is completely dependent on how broadly or finely one defines a possibility. Here, I appeal to Sartre’s notion of “radical freedom”, on which every choice is a free choice and every situation is a freedom-permitting circumstance. The standard example is of a group of hikers who, going up a mountain, encounter a giant boulder blocking their path. “We have no choice but to turn back,” says the leader of the group. “False,” retorts Sartre, “for you have the choice to jump off the mountain to your near-certain death.” Sartre is correct in that there are alternative possibilities in this case and in all of the above Frankfurt examples. You can choose to die in the burning building, get shot by the terrorist, or kick over the voting machine and urinate on it. There is nothing about the thought experiments that precludes any of these behaviors; thus, the hypothetical scenarios don’t truly demonstrate a lack of alternative possibilities.

The above retort is only true in the fine-grained sense of “alternative possibilities”. On a broader understanding, this isn’t the case. (By “broad”, I’m referring to the way we typically use “I had no choice”, e.g. “I had no choice but to swerve off the road – I was going to run over a child”. Sure, you could run over the kid, but c’mon.) In the broad sense, these scenarios might eliminate alternatives because (i) given that I want to escape the burning building, I must choose Exit A, (ii) given that Ash is going to vote, he must choose Trump, and (iii) given that I don’t want to die, I must stay in my chair. Even still, I don’t think that Scenarios A and B truly capture a lack of alternative possibilities in the broad sense. Surely there is a difference between picking Exit A outright and choosing Exit A after learning Exit B is blocked. Equally, there is a difference between voting for Trump because the chip detected a Johnson vote and voting for Trump in the first place. The terrorist example comes closest to truly creating a no-alternatives scenario, but it also shows the limitations of Frankfurt-style arguments.

The terrorist case creates a problem for divine determinists (e.g. Calvinists) because it is by mere happenstance that the choice and the external limits coincide. My decision-making process is entirely independent of the terrorist’s, yet the two line up by luck. The divine determinist wants to say that God sovereignly determined all events, not that His decree happens to line up with what His creatures have already decided. Notice that the terrorist has no causal relevance in the scenario: whether or not he were there, I would still read my book. To say that this scenario is some kind of parallel to the divine decree is to make the decree causally irrelevant, which is the exact opposite of what the divine determinist wants.

To use another illustration, suppose there is a grid of numbered squares in a large field. Each odd-numbered square has a land mine underneath but each even-numbered square is clean. Suppose Jane, who knows nothing of the land mines, is walking through the field and on a whim decides to cross the field by walking only on even numbers. The person who placed the mines didn’t determine Jane’s path – he only determined that < if Jane steps on an odd numbered square, she will die >. The fact that Jane made it to the other side seems to be by luck.

What happens if a creature tries to go against the divine decree? Well, there are two options. Either (a) she can but won’t, because God’s decree is compatible with her choices by luck, or (b) she can’t, because the divine decree didn’t just remove alternative possibilities but actually determined the specific choices she is going to make. The Frankfurt cases do not independently support a position as strong as (b). At most, this argument shows that the PAP is not a necessary condition for free will, which is not the same as demonstrating that a free decision is compatible with causal determinism. Don’t get me wrong – should Frankfurt-style arguments go through, the conclusion is non-trivial; however, it is often overstated. Compatibilists will need additional argumentation for their position.

Book Review: Transcending Racial Barriers

This book is a great, short primer on addressing racial issues in the United States. Yancey and Emerson focus primarily on the tension between blacks and whites, so it is not quite universally applicable. The core thesis of the book is that (i) proposed solutions to racial tensions fall along a spectrum from majority-group obligations (i.e. what whites need to change) to minority-group obligations (i.e. what people of color need to change) and (ii) most of these models fail because they fall too far toward either end of the spectrum. In place of these failed models, Yancey and Emerson propose a new, centrist model called the “mutual-obligations” approach. The basic contention is not all that controversial: white people and people of color have to agree on the solution if said solution is going to be successful. Saying that whites need to fix the broken system that we have created and benefitted from will not work because (i) it creates an unnecessary sense of powerlessness among people of color and (ii) heck no, we like our white privilege. Saying that we need to all just be colorblind will not work because (i) it allows the very real systemic problems to be ignored and (ii) it devalues the uniqueness of culture. Thus, Yancey and Emerson suggest that each group has obligations to the other if there is going to be long-lasting reform. To evaluate this empirically, they analyze (through interviews) successful interracial communities: the U.S. military, interracial churches, and interracial marriages. In all of these cases, there was a “critical core” identity around which the communities aligned themselves and for which they sacrificed their self-interests.

In interracial churches, the interviewees expressed a common identity in Christ and need of His grace – truly an equalizing factor unlike any other. Because of this, they did not let their station as white or black influence how valuable they saw others who were unlike them. (Historical note: this has been a defining part of the church since its founding. The absolute scandal in 1st-century Mediterranean culture was that people of all social strata would participate in worship. Slaves and owners, while treated differently by their peers, were equal before Christ. There is some speculation that this factored into later abolitionist movements, but I’m not versed enough on the topic to speak intelligently one way or the other.) Moreover, these churches allowed for self-reflection on the part of the leadership because they had to make decisions about conducting corporate worship in a way that was mindful of all the cultures present. The members of the church benefitted from communing with members of different backgrounds and expressions of faith. My favorite interview was of a Japanese-American who talked about how he adjusted when greeting Latino members of his church. He was shocked and uncomfortable the first time he was greeted by a stranger with a hug where, in the same situation, he would have used a simple handshake. Yet he learned that in their culture, a cold handshake is considered distant and aloof. I identified with this man’s story because the first time I met Ada’s family and friends, they looked at me like I was performing a professional business transaction. In fact, before Ada and I started dating, I don’t think I gave her a hug more than once or twice.

It was also in these close interracial communities that honest discussions about race relations could be had. If the environment is not political and you know the other person is not against everything you hold dear, it allows for more open conversation. Indeed, this was what helped me, as a white guy, to start to see things differently. My exposure to racial tension had always been through angry liberals at protests, and it was easy for me to dismiss their opinions, just as it’s easy for anyone to dismiss the opinions of people they don’t relate to. But a few years ago, when I started hearing some of the same concerns being calmly stated by conservative black Christians whom I respected, it was easier to accept that there might be more to the issue. Indeed, Yancey and Emerson point out this effect in interracial communities: whites began to be more aware of and sympathetic to the difficulties faced by their brothers and sisters of color. Interestingly, in interracial marriages, the white spouse would show changes in their attitude toward racial tensions but the spouse of color would not. Also interestingly, the white spouse did not show a substantial increase in socioeconomic status as a result of the marriage, but the non-white spouse, on average, did.

There are several more interesting anecdotes and empirical results of interracial communities. The main point of this book is fairly simple: common goals, mutual obligations. I definitely recommend it if you are interested in racial tension in the U.S. and are unsure of where to start. It’s a short read, not overly ideological, and there are 15 pages of references at the end for further reading.

Overall: 4.0/5

Book Review: The Heresy of Orthodoxy


“Early Christianity was a mess with scores of contradicting gospels and different beliefs about who Jesus was. It wasn’t until hundreds of years later that the orthodox squashed out opposing views and rewrote the NT manuscripts.” Such is the charge that Köstenberger and Kruger tackle in this short work. They argue (I) early Christianity was remarkably united around the core identity and actions of Jesus, (II) the disagreement on the NT canon was localized to peripheral books while the four Gospels enjoyed widespread privileged status as early as the beginning of the 2nd century, and (III) the textual reliability of the NT collection far and away outstrips that of any comparable ancient work.

It’s important to go into this book with the right expectation. Essentially, this is a condensed summary of the authors’ work in three controversial areas of Christian origins: plurality of beliefs in the early church, origin of the NT canon, and textual criticism. I did not know this, and having already read Kruger’s work on the canon and Daniel Wallace’s work on textual criticism, the arguments came off as surface-level. DO NOT GET ME WRONG, it’s a great overview book, but if you are already familiar with Kruger or Köstenberger’s work in these areas, I would not recommend it. If you are looking for a starting place in these controversies, I would absolutely recommend it; further, follow the rigorous footnotes for detailed discussion of every point made.
With the caveat that this is not an extremely detailed work, I found many of the arguments to be a little rushed. To their credit, the authors often cite a more detailed discussion in the footnotes. The form of some of the arguments was not overly persuasive to me even though I agree with the conclusions; however, as mentioned in my previous review of “Is There A Synoptic Problem?”, I think that if you are going to make a probability claim, it needs to include a properly justified p-value.
Overall 3.8/5

Want to know what I’m reading? Follow me at goodreads.com/zacharytlawson

Book Review: Is There A Synoptic Problem?


There is a lot of good and a fair bit of “meh” in this book. Linnemann’s main thesis is that current scholarship’s obsession with the Synoptic Problem and, by extension, the two-source theory is unwarranted. She then defends the literary independence of the Synoptics.
At the start, she surveys several modern NT textbooks that blithely assert (without argumentation) the literary dependence of the Synoptics and the use of Q by MT/LK. This part stands out, as she convincingly shows that modern scholarship is effectively indoctrinating students with the two-source theory. This critique is also shared by Mark Goodacre (who actually thinks the Synoptics are literarily dependent). He and Linnemann both argue that the Synoptic Problem is taught through the lens of the two-source hypothesis without proper attention given to the data that need explanation. It is to this question that Linnemann turns in Part 2 of the book.
Here, she provides an extensive quantitative analysis of the parallels that supposedly demonstrate literary dependence. While her work is extremely valuable, I did not find her analysis overly convincing. For example, she argues along the lines of “We are expected to believe that Luke only found 28% of his source material from Mark valuable enough to retain verbatim, which is absurd!” Granted, this is about the same caliber of reasoning used by the folks who came up with the two-source hypothesis. However, I think Linnemann should have done more statistical analysis. Admittedly, I am an engineer, so something like “On the hypothesis of literary dependence, we find a p-value of 0.036 for this pericope” would be much more convincing. Linnemann thinks she has definitively shown that the literary dependence hypothesis is absurd and untenable. I do not find her argumentation that persuasive; however, I can say that her work has switched me from leaning towards the two-source theory to leaning slightly towards literary independence. YMMV.
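
To make concrete the sort of statistical treatment I have in mind – my own sketch, not anything Linnemann provides – one could frame verbatim agreement as a permutation test: how often would two pericopes share as many two-word sequences as they actually do if one of them were merely the same vocabulary in random order? The toy token lists below are placeholders, and the choice of null model is itself the contestable part.

```python
import random

def shared_bigram_rate(a, b):
    """Proportion of bigrams (adjacent word pairs) in text a that also occur in text b."""
    bigrams = lambda toks: {tuple(toks[i:i + 2]) for i in range(len(toks) - 1)}
    a_bi, b_bi = bigrams(a), bigrams(b)
    return len(a_bi & b_bi) / max(len(a_bi), 1)

def permutation_p_value(a, b, n_iter=10_000, seed=0):
    """Estimate how often a randomly reordered version of text a matches text b
    at least as well as the real text does. A small p-value says the observed
    word-order agreement is unlikely under a 'shared vocabulary, no copying' null."""
    rng = random.Random(seed)
    observed = shared_bigram_rate(a, b)
    hits = 0
    for _ in range(n_iter):
        shuffled = a[:]          # same words, random order
        rng.shuffle(shuffled)
        if shared_bigram_rate(shuffled, b) >= observed:
            hits += 1
    return (hits + 1) / (n_iter + 1)

# Hypothetical toy tokens standing in for a pericope in Mark and its Lukan parallel.
mark = "and he said to them follow me and i will make you fishers of men".split()
luke = "and he said to them do not be afraid from now on you will be catching men".split()
print(permutation_p_value(mark, luke))
```

The hard statistical work, of course, is in defending a null model that represents genuinely independent tradition rather than random word salad, which is why any such p-value would need careful justification.
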
The other weak area is at the end, where she attempts to construct a plausible theory of how the Synoptics originated. I do not think that she substantially interacted with the objections to (a) the reliability of the patristic testimony or (b) the Aramaic origins of Matthew. I think she could have been a bit more critical here.
Lastly, I understand that she perceives historical criticism as parasitic on Christian belief; however, I found her style to be unnecessarily polemical. Once I got used to it, it was fine to read; however, people who are already hostile to her position will not find the style any more comforting. I get the impression that it may be an unnecessary barrier to engaging her detractors.

Overall, this is a good read. It’s a great example of how conservative scholars interact with and take liberal scholarship seriously, even when the courtesy is not reciprocated.

Rating: 3.9/5

Want to know what I’m reading? Follow me at goodreads.com/zacharytlawson