Tuesday, 17 November 2020

I was wrong about fine-tuning

Warning: this post will only make sense if you have read my last two posts. I may take it down after a while, as it's more of a reference to keep track of an ongoing discussion.

I've been playing a bit of analogy tennis with Philip Goff and Keith Frankish on Twitter. You know how it goes: one person offers an analogy to explain their way of thinking, another proposes an alternative, and so on. It's been interesting, and I think I'm starting to get some clarity on how we should reason about observing unlikely events and guessing how many chances the event had to arise.

For a start, it seems clear that the Total Evidence Requirement originally proposed by Roger White is inadequate. It just doesn't work in general, even if it does seem to suggest the right answer in some cases.

Another outcome is that I am less certain that Goff's/White's arguments from analogy fail, as long as we grant the idea of essential identity of the universe. In fact I think I was most likely mistaken in my analysis in earlier posts.

Let's grant essential identity for now. I'm going to assume for the sake of argument that it makes sense to talk of "this universe" independently of its particulars, though I don't actually accept this. This means accepting that had this universe (U) been utterly different, and had another universe (V) been structurally identical to this one (so with its own copies of you and me and everything else we can see), then it makes sense to say that U is still this universe and V is not.

In general, for these analogies, I think we can better see what's going on if we parameterise them with variables of the form:

  • T: some type of improbable event
  • M (hypothesis): There are lots of opportunities for events of type T to occur.
  • S (hypothesis): There is only one (or very few) opportunities for events of type T to occur.
  • E1 (observation): Some event of type T has occurred
  • E2 (observation): Some specific opportunity has resulted in an event of type T
  • X: some optional selection effect

This should help us judge whether the analogies are any good, and if not, why not. In none of these analogies are we supposed to have any specific prior beliefs about M or S; or at least, the issue is not our absolute credence but whether our observations should cause us to update it one way or the other, or not at all. I'm not going to present these analogies in chronological order of proposal; rather, I'll try to put some structure on them.
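For what it's worth, the E1/E2 contrast can be made concrete with a toy Bayes calculation. This is only a sketch: the values of p and N are made up, and the modelling choice that P(E2|M) = P(E2|S) is precisely the contested point about essential identity, not a settled fact.

```python
from fractions import Fraction

def posterior_M(prior_M, p_E_given_M, p_E_given_S):
    """Bayes' rule for two rival hypotheses M and S."""
    num = prior_M * p_E_given_M
    return num / (num + (1 - prior_M) * p_E_given_S)

p = Fraction(1, 1000)   # chance a single opportunity yields a type-T event
N = 500                 # number of opportunities under M (one under S)
prior = Fraction(1, 2)  # no prior preference between M and S

# E1: "some opportunity produced a type-T event"
p_E1_M = 1 - (1 - p) ** N
p_E1_S = p
print(posterior_M(prior, p_E1_M, p_E1_S))  # well above 1/2: E1 favours M

# E2: "this particular opportunity produced a type-T event",
# modelled so that the specific opportunity exists under both M and S
p_E2_M = p
p_E2_S = p
print(posterior_M(prior, p_E2_M, p_E2_S))  # exactly 1/2: E2 is neutral
```

On this way of carving things up, the whole debate is over whether the evidence we actually have is E1-shaped or E2-shaped.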

Fine-tuning

  • T: A universe is improbably finely-tuned for life
  • M: There are many actual universes with random tuning
  • S: There is only one actual universe
  • E1: Some universe is fine-tuned
  • E2: This universe is fine-tuned
  • X: We can only observe any universe at all if E2 occurs. On M, other observers could have made observations if E1 occurred but not E2.
Many people, including cosmologists and philosophers, assume that E2 should increase credence in M. Philip Goff and Roger White think it does not.

Analogy: Monkey typing (from Philip Goff on twitter)
  • T: A monkey types an improbably coherent sentence of English due to random chance
  • M: Lots of monkeys are attempting to type sentences.
  • S: Only one monkey (Joey) is attempting to type sentences
  • E1: Some monkey has typed an improbably coherent sentence of English
  • E2: Joey has typed an improbably coherent sentence of English
  • X: Unspecified.
The point of this analogy is not to map directly onto the case of fine-tuning but to establish some common ground. Goff suggests that surely we can all agree that it would be absurd to conclude that E2 should increase credence in M in this case. That sounds plausible, but the selection effects, if any, would need to be thought through.

It's a bit too loose in general, lacking some of the structure of the other analogies. It's not clear why M should be lots of monkeys as opposed to lots of attempts by Joey, or lots of generic opportunities for improbable coincidences that have nothing to do with monkeys. The issue is clouded somewhat because, if all that has happened is that you've seen a monkey typing, there may be more natural explanations, such as that it was trained. We can stipulate that it must be chance, but when we imagine the event it's hard to take that stipulation to heart in the face of such improbable evidence, so intuitions are liable to be pushed the wrong way.

Analogy: Joker Scenario B (in Philip Goff's paper)

  • T: A monkey types an improbably coherent sentence of English due to random chance
  • M: Lots of monkeys are attempting to type sentences. Each observer can observe only one monkey. Your monkey is Joey.
  • S: Only one monkey (Joey) is attempting to type sentences
  • E1: Some monkey has typed an improbably coherent sentence of English
  • E2: Joey has typed an improbably coherent sentence of English
  • X: Your observation of any result at all is conditional on E2 occurring. On M, other observers could have made observations if E1 occurred but not E2.
This seems to be a good analogy to fine-tuning as long as we think the universe has an essential identity. It improves on the previous analogy by adding more structure in the text explaining the setup. It is not a mere translation of fine-tuning, because it helps us see what the fine-tuning scenario looks like if we take essential identity seriously, which some do not. But it doesn't intuitively settle the question for me, so beyond that it doesn't help much. It seems plausible to me that we should increase credence in M; it seems obvious to Goff and White that we should not.

Analogy: Joe Bloggs wins the lottery (my previous post)
  • T: A lottery ticket happens to have the winning numbers
  • M: Lots of lottery tickets were sold this week
  • S: One lottery ticket was sold this week
  • E1: Some lottery ticket has the winning numbers this week
  • E2: Joe Bloggs' lottery ticket has the winning numbers this week
  • X: You will learn of any winning lottery ticket.
In this analogy, there is wide agreement that E2 should raise credence in M. This analogy serves to show the problems with the Total Evidence Requirement (TER) proposed by White, but X seems to be disanalogous to fine-tuning, so it doesn't help to resolve the argument. Keith Frankish also proposed a couple of good analogies involving umbrellas and penguins to make this point about TER, but they seem less relevant to fine-tuning so I'll leave them out.

Analogy: You win the lottery (I) (my version)

As before, except:
  • E2: Your lottery ticket has the winning numbers this week
The purpose of this analogy is to illustrate that it shouldn't necessarily matter whether E2 involves the observer directly. If an analysis works from the third-person perspective, why not from the first? I am convinced that E2 should raise credence in M, but I'm not sure how convincing this is to others. In any case, it's still not analogous to fine-tuning given essential identity, because the selection effect is different.

Analogy: You win the lottery (II) (Goff's version)

As before, except:
  • X: You will only learn of a win if you win yourself. On M, other observers could have observed wins if E1 occurred but not E2.
The point of this analogy is to correct my lottery analogies by bringing them back into line with fine-tuning. It does so up to a point, but not perfectly. X is subtly different, as you can observe your failure to win, which may or may not affect matters. Even so, I can't help but notice that if S is true some weeks and M is true other weeks, then most if not all winners would be correct to guess M rather than S. But I suspect there may be some confounding issues here. More on this later.

I had some analogies that were trying to push the point that the universe doesn't have an essential identity, but I want to leave those out for now because I'm more interested in understanding how we should think about cases where there is an identity or where the observer is pre-selected.

Revisiting the lottery analogy

The crux of the issue for me lies in how we should interpret "You win the lottery (II)". Goff thinks that on winning the lottery, it is obvious that you should not raise your estimate of how many people entered. My initial analysis of "You win the lottery (I)" suggested otherwise. But on considering Goff's version of the analogy, I see that my analysis was based on the assumption that you would become aware of the lottery results regardless of who won, which is disanalogous to fine-tuning even if it does seem to cast doubt on TER.

The lottery is a useful analogy because it occurs repeatedly, with varying numbers of participants. It's not too much of a stretch to imagine some weeks having very many tickets sold and some having very few. We can push that to an unrealistic extreme by assuming that M is true some weeks and S is true others. As noted, most if not all winners will find themselves in weeks where M is true. Winners should therefore guess that M is true.
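A quick simulation illustrates why winners cluster in M-weeks. The numbers are made up, and the per-ticket win chance is far more generous than any real lottery, just to keep the run small:

```python
import random

random.seed(0)
P_WIN = 1 / 50       # hypothetical per-ticket win chance (unrealistically high)
WEEKS = 10_000
N_M, N_S = 100, 1    # tickets sold in an M-week vs an S-week

winners = {"M": 0, "S": 0}
for _ in range(WEEKS):
    week = "M" if random.random() < 0.5 else "S"  # M- and S-weeks equally common
    tickets = N_M if week == "M" else N_S
    # count how many of this week's tickets happen to win
    winners[week] += sum(random.random() < P_WIN for _ in range(tickets))

print(winners)  # almost every winner finds themself in an M-week
```

The exact counts don't matter; the point is that the ratio of M-week winners to S-week winners tracks the ratio of tickets sold, N_M to N_S.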

This is very unintuitive, though, because it seems wrong to suggest that the number of other people who bought tickets should have any effect on whether your own ticket wins. Perhaps intuition is onto something.

In the real world, the only way the lottery could swing between extremes with a pool of millions of players is if the decisions of players to play or not to play are correlated. It may be driven by the weather, or by the size of the jackpot, or the economy, or pandemic lockdowns, who knows. So whether other people play or not may not have much to do with why you have won so much as that you have played at all. The basic fact that you have bought a ticket suggests that M is probably true. This is enough to account for why most winners would be right to guess M. Most ticket purchasers will be right to guess M.
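To put a number on that last claim: if we (questionably) treat you as a ticket drawn at random from all tickets ever sold, then merely holding a ticket already favours M. A sketch with hypothetical ticket counts:

```python
from fractions import Fraction

q = Fraction(1, 2)   # prior: a given week is an M-week
N_M, N_S = 100, 1    # hypothetical tickets sold in M-weeks vs S-weeks

# Probability that a randomly sampled ticket was sold in an M-week,
# i.e. Bayes' rule with "I bought a ticket" as the evidence.
p_M_given_ticket = q * N_M / (q * N_M + (1 - q) * N_S)
print(p_M_given_ticket)  # 100/101: merely holding a ticket points to M
```

The questionable step, of course, is treating yourself as a random purchaser, which is exactly what the next paragraph challenges.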

The only way we can rule out this effect is to be sure that your decision to purchase a ticket has nothing to do with the rest of the world. The simplest way to do this is to stipulate that for some reason you are in a very special position such that you alone always buy a ticket. When S is true, you must then be the sole purchaser. It seems you are as likely to win in an S-week as an M-week, so you alone should not update your credence for M based on E2, even if other winners should.
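In that special position, a quick Bayes check (again with made-up numbers) confirms that winning is equally likely either way, so the posterior for M equals the prior:

```python
from fractions import Fraction

p = Fraction(1, 1000)  # hypothetical chance your ticket wins in any week
q = Fraction(1, 2)     # prior that a given week is an M-week

# You buy a ticket every week regardless of what anyone else does, so
# your chance of winning is p whether the week is M or S.
posterior = q * p / (q * p + (1 - q) * p)
print(posterior)  # equals the prior q: your win carries no news about M
```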

How to relate these issues back to fine-tuning? As noted previously, we already have the disanalogous point that it is possible for you to see an unsurprising outcome. It's also unclear whether we can port over our understanding of the correlations between the decisions of purchasers to play or not to play to fine-tuning.

When it comes to fine-tuning, we know nothing at all about the prior probability for a universe to form. To take this back to our lottery example, this would be like not knowing how many players like you buy a ticket every single week. Though you are in a special position, you don't know that you are. For this point we can scrap most of the S-week and M-week stuff and just ask how special you think you are -- are you really likely to be the only player in the national lottery in a given week? That question also seems to have nothing to do with winning. Maybe the best reason to believe M is the Copernican principle of mediocrity -- you probably shouldn't believe you're special without good evidence. Translated back to the multiverse scenario, this is just the argument that you shouldn't believe that your own possible world alone is so special as to be singled out for actual existence. That may be an argument for the multiverse, but it has nothing to do with fine-tuning.

I think I'm ready to call it a wash or concede the point. I still think that fine-tuning suggests a multiverse, but perhaps only because of the identity issues.

Sunday, 15 November 2020

More on Fine-Tuning from Philip Goff


To follow on from my previous post, Philip Goff has now released a preview of an academic paper in progress on this topic. Currently titled "Is the Fine-Tuning Evidence for a Multiverse?", this paper is a defence of the very same Roger White paper I had been critiquing, and the main argument it tackles relates to some points I made.

I thought I should give my take, but before I do I'll note that the paper preview link may become stale at some future date, and the paper may be revised in ways that make my comments obsolete. In any case, if you're interested, you should read the paper and my previous post if you haven't already.

Friday, 6 November 2020

On Fine-Tuning and the Multiverse

The Fine-Tuning Problem


I've been thinking a lot lately about the problem of fine-tuning and how it may or may not be resolved by appeal to a multiverse. This has been prompted by recent correspondence with the philosopher Philip Goff and by the book I am currently reading, The Beginning of Infinity by David Deutsch. I hope to get around to discussing Deutsch's ideas in future, but for now I'm focusing on an argument put forth by Roger White and endorsed by Goff.

The fine-tuning problem is just that the laws of physics describing the universe seem to be fine-tuned so as to allow life to exist. While trying not to be too parochial in our assumptions about what life requires to exist, at least in a universe even vaguely like ours, it seems reasonable to assume that life must at least have time and a stable environment in which to evolve, access to sources of energy, and a varied assortment of building blocks from which to assemble itself.

But if the physical constants were slightly different, then perhaps stars, heavier elements or atoms could not have formed. Perhaps the universe would never have expanded from its initial hot, dense state, or would have recollapsed too quickly, or perhaps it would have expanded too quickly for particles to come together at all. When we say the universe appears to be fine-tuned, we are claiming that it appears to be delicately balanced so as to allow complex evolving self-replicators (i.e. life) to exist -- that almost any tweak we imagine to the constants or the laws of physics would result in a simpler, lifeless universe. This curious observation seems to demand an explanation.

Tuesday, 8 March 2016

Rescuing Computationalism with Platonism

In my last post I discussed some issues with identifying objectively which computations a physical system could legitimately be interpreted as instantiating. Computationalism is usually taken to be the view that all it takes to create a conscious mind is to implement the right computation, so the idea that we can't tell objectively when a computation is implemented implies either that there is no objective fact of the matter regarding when minds exist, that all minds exist (everywhere) or that no minds exist. None of these conclusions is particularly appealing!

I find the arguments discussed in the last post to be somewhat persuasive. Indeed, I had had similar concerns before becoming aware of them. What's more, I think the problem may be worse than even Putnam, Searle and Bishop have suggested.

Wednesday, 24 February 2016

Putnam, Searle and Bishop: The Failure of Physicalist Computationalism



I wanted to come out of blog dormancy to write up my thoughts on what I feel is a very important argument against computationalism. The argument advances the view that there is no objective fact of the matter about which computations a physical system is computing, and if this is the case it would certainly seem to problematise computationalism (the view that what it is to be a conscious mind is just to perform the right kind of computation).

In this post I will explain the argument and some of the common responses to it. I'll reserve my own response (which is quite different from that of most computationalists) for a future post.

Monday, 9 June 2014

Book Review: Longing to Know

As a pretty convinced atheist, I am not the target audience for Longing to Know, by Esther Lightcap Meek, which seeks to explain a view of knowledge in general, but in particular of how we might come to know God. When it was nominated for discussion by my philosophy reading group, I hoped that I would at least get some interesting discussion of epistemology out of it, and to an extent I did, but my experience of the book could be more broadly characterised by frustration and a Longing to Go (away and do something else).


Thursday, 1 May 2014

The Ontological Argument

A frustrating interruption of Internet service has rendered me unable to work, so I might as well put my time to good use by catching up on blogging on the assumption that I will be able to upload this text at some later time. It also means, unfortunately, that I am unable to use references as I write. As such, I might get some things wrong.

The issue I want to address is the so-called Ontological Argument for the existence of God, as first proposed by St. Anselm almost a thousand years ago and further developed and promoted by Muslim and Christian philosophers, including relatively recent versions by Alvin Plantinga, Kurt Gödel and others.

I was asked to write about this some time ago by a fellow commenter on Massimo Pigliucci's (now no longer active) blog 'Rationally Speaking', and I said I would. However I have been slow to do so for a couple of reasons (in addition to the usual procrastination!). The first is that the argument is so obviously nutty that it seems to be scarcely worth the time to address it. The second is that it is actually quite difficult to point out precisely what is wrong with it!