
Preliminary Analysis of the Fine-Tuning Argument

This year, I’ve decided to focus some of my research attention on the so-called fine-tuning argument for God’s existence. I am not particularly convinced of it one way or the other, and there is a significant amount of data to sort through. What follows is my understanding of the argument, and my opinions on it, at the present time.

The Argument Defined

In years past, theists of all stripes have pointed to intricate attributes of our world that, in any other context, would indicate design. The primary focus of past generations was the field of biology. While it is widely held that the neo-Darwinian evolutionary paradigm has all but eradicated such arguments for special creation from biology, a new line of evidence has recently emerged from the field of physics. As it turns out, there are many features, constants, and initial conditions of the universe that must be incomprehensibly precise in order for life to evolve anywhere. To somewhat formalize it: for any given constant, initial condition, or feature, there exists a range of values it could possibly take (Rₚ) and a subset of that range which would be non-prohibitive to life (Rₗ). To say that a quantity is “fine-tuned” is to say that Rₗ/Rₚ ≪ 1. To give an example, the expansion rate of the universe is described by the second Friedmann equation:

$$\frac{\ddot{a}}{a} = -\frac{4\pi G}{3}\left(\rho + \frac{3p}{c^2}\right) + \frac{\Lambda c^2}{3}$$

One of the more influential terms is Λ, referred to as the cosmological constant; for Λ > 0, a repulsive force results which accelerates the expansion, while for Λ < 0, an attractive force results which decelerates it. The associated energy scale turns out to be about 2.3 × 10⁻³ eV. Allegedly, if this value varied by a mind-boggling one part in 10¹²⁰, the universe would either (a) expand too quickly for planets, stars, and other large bodies to congeal, or (b) recollapse into a singularity before life could evolve [1]. With a litany of constants, quantities, and laws exhibiting such precision, the likelihood that embodied agents like humans emerged by luck seems to evaporate.
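To make the alleged ratio concrete, here is a minimal arithmetic sketch in Python. The one-part-in-10¹²⁰ window follows the figure quoted above, but treating the observed scale itself as the comparison range Rₚ is purely an illustrative assumption; as discussed below, fixing Rₚ is precisely the contested step.

```python
# Quoted vacuum-energy scale associated with the cosmological constant (eV).
observed_scale = 2.3e-3

# Alleged life-permitting window: one part in 10^120 of that scale.
# Both R_l and R_p below are illustrative assumptions, not measurements.
R_l = observed_scale / 1e120   # life-permitting width
R_p = observed_scale           # assumed comparison range

print(f"R_l / R_p = {R_l / R_p:.1e}")   # 1.0e-120, i.e. R_l/R_p << 1
```

This realization has been encapsulated in various forms by philosophers. The most persuasive version, in my estimation, is put forward by Robin Collins and is built from what he refers to as the Likelihood Principle, defined as follows (Collins 2009):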

Let h₁ and h₂ be two competing hypotheses. According to the Likelihood Principle, an observation e counts as evidence in favor of hypothesis h₁ over h₂ if the observation is more probable under h₁ than h₂.

Collins also includes the caveat that the hypotheses must have additional, independent warrant outside of e; otherwise, the hypothesis could be considered ad hoc. I think this is fairly intuitive, as it follows the rationale commonly used in analyzing courtroom evidence. Typically, the investigative team will narrow the range of suspects down to a handful before considering lines of evidence such as fingerprints and the like. Sometimes the particular evidence is equivocal and multiple scenarios fit as the “best explanation”. Using the Likelihood Principle, one can go down the line and individually compare competing hypotheses against one another; this is roughly parallel to the difference between running a single ANOVA and running pairwise t-tests.
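To illustrate that analogy with a toy example (the data are synthetic and the group labels hypothetical): an ANOVA asks only whether some difference exists among all the candidates at once, whereas pairwise t-tests, like the Likelihood Principle, compare candidates head-to-head.

```python
from itertools import combinations

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Three hypothetical groups of measurements (synthetic data).
groups = {
    "h1": rng.normal(loc=0.0, scale=1.0, size=30),
    "h2": rng.normal(loc=0.5, scale=1.0, size=30),
    "h3": rng.normal(loc=2.0, scale=1.0, size=30),
}

# Omnibus test: is there any difference among the groups at all?
f_stat, p_omnibus = stats.f_oneway(*groups.values())
print(f"ANOVA: F = {f_stat:.2f}, p = {p_omnibus:.4f}")

# Pairwise tests: compare candidates head-to-head, one pair at a time
# (the analogue of applying the Likelihood Principle).
for (name_a, a), (name_b, b) in combinations(groups.items(), 2):
    t_stat, p_pair = stats.ttest_ind(a, b)
    print(f"{name_a} vs {name_b}: t = {t_stat:.2f}, p = {p_pair:.4f}")
```

For this reason, I think this line of argument puts the fine-tuning evidence in its strongest niche. The formal argument as stated by Collins is as follows (Collins 2009):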

  1. Given the fine-tuning evidence, a life-permitting universe (LPU) is very, very epistemically unlikely under a naturalistic single universe (NSU).
  2. Given the fine-tuning evidence, LPU is not unlikely under theism.
  3. Theism was advocated prior to the fine-tuning evidence (and has independent motivation).
  4. Therefore, by the Likelihood Principle, LPU strongly favors theism over NSU.
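A minimal numerical sketch of how the Likelihood Principle operates on these premises follows; both probabilities are placeholders invented for illustration, not values anyone defends.

```python
# Made-up epistemic probabilities for the observation
# e = "a life-permitting universe (LPU) obtains" under each hypothesis.
p_e_given_nsu = 1e-100    # premise 1: very, very unlikely under NSU
p_e_given_theism = 0.1    # premise 2: not unlikely under theism

# The Likelihood Principle compares the hypotheses directly: e favors
# whichever hypothesis makes it more probable, by this ratio.
likelihood_ratio = p_e_given_theism / p_e_given_nsu
print(f"LPU favors theism over NSU by a factor of ~{likelihood_ratio:.0e}")
```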

An Objection Considered

Exceedingly rare are philosophical arguments accepted without objection, and the fine-tuning argument is no exception. As mentioned in the previous section, the evidence of fine-tuning involves numbers of absurd magnitude, for example Rₗ/Rₚ (Λ) ≈ 1/10¹²⁰, which is thoroughly incomprehensible. While the determination of Rₗ may be straightforward, it is not immediately obvious how Rₚ is to be determined. Indeed, it actually seems that any real value is equiprobable and Rₚ ought to range from −∞ to +∞ for any given constant. But now we have an odd situation on our hands. If Rₚ encompasses an infinite range of values, then every constant is fine-tuned to an infinite degree: no matter what finite value Rₗ takes on, the degree of fine-tuning is equivalent. To put this another way, it is not clear how each fine-tuned parameter should be normalized.
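The structure of the problem can be seen in a short sketch: hold any finite life-permitting width Rₗ fixed, let the comparison range Rₚ grow without bound, and the ratio collapses toward zero no matter which finite width was chosen (all numbers here are arbitrary placeholders).

```python
# Hold a finite life-permitting width R_l fixed and let the comparison
# range R_p grow without bound. The ratio tends to zero for every
# finite R_l, so all constants come out "infinitely fine-tuned" and
# their degrees of fine-tuning cannot be distinguished. Equivalently,
# a uniform probability density over the whole real line cannot be
# normalized to integrate to 1.
for R_l in (1.0, 1e6, 1e60):              # arbitrary finite widths
    for R_p in (1e100, 1e200, 1e300):     # ever-larger comparison ranges
        print(f"R_l = {R_l:.0e}, R_p = {R_p:.0e}  ->  R_l/R_p = {R_l / R_p:.0e}")
```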

The criteria I use to judge whether an objection is “good” are (i) how powerful the objection is, (ii) how broad its scope is, and (iii) how persuasive it is. The “normalizability problem”, in my estimation, scores well on all three. If successful, this objection provides a major undercutting defeater for what “fine-tuning” is even supposed to mean. First, it acts as a refutation, the most powerful form of objection, in that no alternative explanation (à la multiverse scenarios) needs to be provided. Second, every data point in the argument is affected, irrespective of which version of the argument is advanced; this is as wide a scope as an objector could hope for. Lastly, it does not require advanced knowledge to grasp the thrust of the objection, which makes it widely accessible and thus persuasive.

Is the normalizability problem successful in undermining fine-tuning? It is not immediately obvious one way or the other. It seems to me that there are scenarios wherein the range of possible outcomes is infinite, yet we are still rational to consider the event “fine-tuned”. For example, suppose that the universe is actually infinite in extent; I have a radio receiver and one day I pick up a series of notes: d e c C G. It turns out that this broadcast was sent from a distant planet to Earth, and only to Earth; indeed, only to my radio at the unique frequency I was listening to [2]. Now, the signal could have been broadcast at any frequency and to any spatial region of the universe [3], an infinite range of possibilities. However, this fact does not undermine the inference that the signal was, in some sense, fine-tuned to broadcast exactly to my location under exactly the circumstances in which I could hear it. While there is undoubtedly some limitation to this example, it seems sufficient to show that an infinite range of possibilities is not, by itself, enough to undermine fine-tuning.

The Current Assessment

Should “fine-tuning” be a coherent concept, I am inclined to think the fine-tuning argument, as stated above, succeeds in providing evidential weight to theism over a naturalistic single-universe scenario. The normalizability problem does pose a legitimate hurdle for the defender of the fine-tuning argument; however, I think it can be overcome by providing specific physical limitations on Rₚ (as can be done with some values other than the cosmological constant) or by advancing a non-quantified version of the argument. That is, probability judgements need not be quantified to have force; I am not aware of many juries that provide a p-value with their verdicts. I think the fine-tuning argument is probably most persuasive in the form of an aesthetic argument: rather than spitting out a stream of numbers, the defender should broadly sketch the issue and trust her interlocutor’s intuition to recognize “fine-tuning” in much the way one recognizes “beauty”.

However, the case does not seem settled on whether the universe is, indeed, fine-tuned. Moreover, there are likely epistemic considerations that have not been adequately evaluated; in particular, could the arguments for skeptical theism come back to undercut the fundamental probability judgements? For the time being, I am inclined to say that the naive argument is tentatively persuasive, but I am unsure whether it will hold up under scrutiny.

References

  1. Collins, R., “The Teleological Argument: An Exploration of the Fine-Tuning of the Universe,” in The Blackwell Companion to Natural Theology. Wiley-Blackwell, 2009, pp. 202–281.


[1] That is, Rₗ/Rₚ (Λ) ≈ 1/10¹²⁰

[2] Clearly, if the broadcast had been sent out in all different directions, this would undermine our inference to fine-tuning.

[3] We will ignore, for the time being, the physical limitations behind such a scenario.