Tuesday, 9 May 2023

A Multi-Level view of LLM Intentionality

Bing AI's interpretation of a mind within a mind within a mind

Prompted by Keith Frankish's recent streamed discussion of LLM intentionality on YouTube, I want to share a particular idea which I'm not sure is widely enough appreciated, but which I think gives a valuable perspective from which to think about LLMs and what kinds of intentions they may have. This is not an original idea of my own -- at least some of my thinking on this was sparked by reading about the Waluigi Effect on LessWrong.

In this post, I'm going to be making the case that LLMs might have intentionality much like ours, but please understand that I'm making a point in principle and not so much arguing for the capabilities of current LLMs, which are probably not there yet. I'm going to be talking about what scope there is to give the benefit of the doubt to arbitrarily competent future LLMs, albeit ones that follow more or less the same paradigms as those of today. I'm going to try to undermine some proposed reasons for skepticism about the intentions or understanding of LLMs, not because I think the conclusions are wrong but because I think the arguments are too weak to support them.

I should also note that I will follow Keith (and Daniel Dennett, and others) in assuming an interpretivist account of intentions. That is, we should ascribe intentions to a system if and only if doing so helps us to predict and explain the system's behaviour. Whether it *really* has intentions beyond this is not a question I am attempting to answer (and I think it is probably not determinate in any case).

The basic idea I want to introduce is that LLMs might have intentions and agency on multiple levels, so we may be missing something if we restrict our analysis to one level alone.

Sunday, 12 February 2023

Psychophysical Harmony

I wanted to write about the argument from psychophysical harmony against naturalism, even though it has no force for those like me who think that talk of qualia and phenomenality in general is confused. For the argument to make any sense at all, we first have to assume that considerations such as the Knowledge Argument (i.e. Frank Jackson's Mary's Room thought experiment) have persuaded us that physicalism is inadequate for the task of accounting for phenomenal experience. The argument from psychophysical harmony then goes further, suggesting that there cannot be any sort of neutral natural explanation at all for the mysterious appropriateness of our experience in representing the physical world. Instead we might need to invoke God, or if not God then perhaps some other benevolent purposeful force or principle, such as John Leslie's axiarchism.

Tuesday, 12 April 2022

The Distributed Brain Thought Experiment

I wanted to give a quick take on a thought experiment that has been doing the rounds on Twitter lately.

It's also worth reading this excellent write-up by my friend at selfawarepatterns.com. That post also links to a paper, which I confess I have not read; I'm going on the Twitter thread only.

The setup is to imagine being able to record and replay exactly what all the neurons in a brain are doing while experiencing something. When we replay, is the experience reproduced? What if the neurons are separated in space and in time?

I agree with whoever originally framed this thought experiment (Camilo?) that it poses a serious problem for physicalist functionalism. But it poses no problem at all for my view.

Tuesday, 29 March 2022

Strong Emergence and Free Will


In this post I will discuss whether the idea of strong emergence can help us to make sense of free will and a few related issues. These ideas have been on my mind lately largely due to the above discussion, but I've been thinking about this for a while.

Saturday, 5 February 2022

Applying Illusionism to Physical Reality

Many of you would already be aware of illusionism, if in fact there were many readers of this blog. But in case it's needed: illusionism, exemplified by Keith Frankish and Daniel Dennett, is an approach to the philosophy of consciousness which claims that qualia (the ineffable, irreducible feelings associated with conscious experience, e.g. the redness of red) are entirely illusory and do not actually exist. As such, on this view the Hard Problem of Consciousness as defined by David Chalmers simply dissolves -- we need only explain why we believe we experience qualia; we do not need to explain how qualia can be produced by physical stuff. As a bonus, illusionism may claim that the very idea of qualia is incoherent.

Illusionism is, I would say, a species of functionalism, and so is compatible with and largely overlaps with other glosses on functionalism such as computationalism. There may be corner cases where some illusionists disagree with some computationalists, especially on how best to describe things, but the stories they tell about human consciousness are mostly compatible. As such, I count myself as an illusionist, a functionalist and a computationalist.

What I wanted to write about today is a strong rhyme I've noticed between the illusionist view of consciousness and how I think of the stuff of physical reality.

Thursday, 16 December 2021

The Computer Metaphor

Is the brain actually a computer? Or is this a metaphor or analogy which is often taken too literally? This issue has come up a few times in the past couple of years, and I wanted to give my take because some common responses do not seem right to me.

I'm going to argue that the brain really is, literally, a computer, and that this is not a metaphor or analogy at all. Many of the experts who disagree are neuroscientists, biologists and philosophers. We have much to learn from them about how the brain works, but that's not the only issue: to answer the question of whether a brain is a computer, we also need to consider what a computer is, and I feel this side of the equation is too often neglected.

Tuesday, 17 November 2020

I was wrong about fine-tuning

Warning: this post will probably only make sense if you have read my last two posts. I may take it down after a while, as it's more of a reference to keep track of an ongoing discussion.

I've been playing a bit of analogy tennis with Philip Goff and Keith Frankish on Twitter. You know how it goes: one person provides an analogy to explain their way of thinking, then another person proposes an alternative, and so on. It's been interesting, and I think I'm starting to get some clarity on how we should reason about observing unlikely events and guessing how many chances the event had to arise.

For a start, it seems clear that the Total Evidence Requirement originally proposed by Roger White is inadequate. It just doesn't work in general, even if it does suggest the right answer in some cases.

Another outcome is that I am less certain that Goff's and White's arguments from analogy fail, as long as we grant the idea of the essential identity of the universe. In fact, I think I was most likely mistaken in my analysis in earlier posts.

Let's grant essential identity for now. I'm going to assume for the sake of argument that it makes sense to talk of "this universe" independently of its particulars, though I don't actually accept this. It means accepting that had this universe (U) been utterly different, and had another universe (V) been structurally identical to this one (so with its own copies of you and me and everything else we can see), it would still make sense to say that U is this universe and V is not.

In general, for these analogies, I think we can better see what's going on if we parameterise them with variables of the form:

  • T: some type of improbable event
  • M (hypothesis): There are lots of opportunities for events of type T to occur.
  • S (hypothesis): There is only one opportunity (or very few) for events of type T to occur.
  • E1 (observation): Some event of type T has occurred
  • E2 (observation): Some specific opportunity has resulted in an event of type T
  • X: some optional selection effect

This should help us to judge whether the analogies are any good, and if not, why not. In all of these analogies, we're not supposed to have any specific prior beliefs about M or S; or at least, the issue is not our absolute credence but whether our observations should cause us to update it one way or the other, or not at all. I'm not going to present the analogies in chronological order of proposal; rather, I'll try to put some structure on them.
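For what it's worth, the kind of update at issue can be written down explicitly. This is just my own framing of the standard Bayesian bookkeeping, not anything proposed in the Twitter exchange:

\[
\frac{P(M \mid E)}{P(S \mid E)} = \frac{P(E \mid M)}{P(E \mid S)} \cdot \frac{P(M)}{P(S)}
\]

An observation E shifts credence towards M exactly when E is more probable given M than given S. For E1 that typically holds, since more opportunities mean more chances for some event of type T to occur somewhere. For E2 it may not, since a specific opportunity's chance of producing an event of type T need not depend on how many other opportunities there are. The selection effect X bears on which of E1 or E2 we are entitled to condition on, and that seems to be where much of the disagreement lies.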

Fine-tuning

  • T: A universe is improbably finely-tuned for life
  • M: There are many actual universes with random tuning
  • S: There is only one actual universe
  • E1: Some universe is fine-tuned
  • E2: This universe is fine-tuned
  • X: We can only observe any universe at all if E2 occurs. On M, other observers could have made observations if E1 occurred but not E2.
Many people, including cosmologists and philosophers, assume that E2 should increase credence in M. Philip Goff and Roger White think it does not.

Analogy: Monkey typing (from Philip Goff on twitter)
  • T: A monkey types an improbably coherent sentence of English due to random chance
  • M: Lots of monkeys are attempting to type sentences.
  • S: Only one monkey (Joey) is attempting to type sentences
  • E1: Some monkey has typed an improbably coherent sentence of English
  • E2: Joey has typed an improbably coherent sentence of English
  • X: Unspecified.
The point of this analogy is not to map directly onto the case of fine-tuning but to establish some common ground. Goff suggests that surely we can all agree that it would be absurd to conclude that E2 should increase credence in M in this case. That sounds plausible, but the selection effects, if any, would need to be thought through.

It's a bit too loose in general, lacking some of the structure of the other analogies. It's not clear why M should be lots of monkeys as opposed to lots of attempts from Joey, or lots of generic opportunities for improbable coincidences that have nothing to do with monkeys. The issue is clouded somewhat because if all that has happened is that you've seen a monkey typing, there may be more natural explanations such as that it was trained. We can stipulate that it must be chance, but when we imagine the event it's hard to take that stipulation to heart in the face of such improbable evidence, so intuitions are liable to be pushed the wrong way.

Analogy: Joker Scenario B (in Philip Goff's paper)

  • T: A monkey types an improbably coherent sentence of English due to random chance
  • M: Lots of monkeys are attempting to type sentences. Each observer can observe only one monkey. Your monkey is Joey.
  • S: Only one monkey (Joey) is attempting to type sentences
  • E1: Some monkey has typed an improbably coherent sentence of English
  • E2: Joey has typed an improbably coherent sentence of English
  • X: Your observation of any result at all is conditional on E2 occurring. On M, other observers could have made observations if E1 occurred but not E2.
This seems to be a good analogy to fine-tuning as long as we think the universe has an essential identity. It improves on the previous analogy by adding more structure in the text explaining the setup. The analogy is not a mere translation of fine-tuning, because it helps us to see what the fine-tuning scenario looks like if we take essential identity seriously, which some do not. But it doesn't intuitively settle the question for me, so beyond that it doesn't help much. It seems plausible to me that we should increase credence in M. It seems obvious to Goff and White that we should not.

Analogy: Joe Bloggs wins the lottery (my previous post)
  • T: A lottery ticket happens to have the winning numbers
  • M: Lots of lottery tickets were sold this week
  • S: One lottery ticket was sold this week
  • E1: Some lottery ticket has the winning numbers this week
  • E2: Joe Bloggs' lottery ticket has the winning numbers this week
  • X: You will learn of any winning lottery ticket.
In this analogy, there is wide agreement that E2 should raise credence in M. This analogy serves to show the problems with the Total Evidence Requirement (TER) proposed by White, but X seems to be disanalogous to fine-tuning, so it doesn't help to resolve the argument. Keith Frankish also proposed a couple of good analogies to make this point about TER, involving umbrellas and penguins, but they seem less relevant to fine-tuning so I'll leave them out.

Analogy: You win the lottery (I) (my version)

As before, except:
  • E2: Your lottery ticket has the winning numbers this week
The purpose of this analogy is to illustrate that it shouldn't necessarily matter whether E2 involves the observer directly. If an analysis works from the third-person perspective, why not from the first? I am convinced that E2 should raise credence in M, but I'm not sure how convincing this is to others. In any case, it's still not analogous to fine-tuning given essential identity, because the selection effect is different.

Analogy: You win the lottery (II) (Goff's version)

As before, except:
  • X: You will only learn of a win if you win yourself. On M, other observers could have observed wins if E1 occurred but not E2.
The point of this analogy is to correct my lottery analogies by bringing them back into line with fine-tuning. It does so to a point, but not perfectly. X is subtly different, as you can observe your failure to win, and this may or may not affect matters. Even so, I can't help but notice that if S is true some weeks and M is true other weeks, then most if not all winners would be correct to guess M rather than S. But I suspect there may be some confounding issues here. More on this later.

I had some analogies that were trying to push the point that the universe doesn't have an essential identity, but I want to leave those out for now because I'm more interested in understanding how we should think about cases where there is an identity or where the observer is pre-selected.

Revisiting the lottery analogy

The crux of the issue for me lies in how we should interpret "You win the lottery (II)". Goff thinks that on winning the lottery, it is obvious that you should not raise your estimate of how many people entered. My initial analysis of "You win the lottery (I)" suggested otherwise. But on considering Goff's version of the analogy, I see that my analysis was based on the assumption that you would become aware of the lottery results regardless of who won, which is disanalogous to fine-tuning even if it does seem to cast doubt on TER.

The lottery is a useful analogy because it occurs repeatedly, with varying numbers of participants. It's not too much of a stretch to imagine some weeks having very many tickets sold and some having very few. We can push that to an unrealistic extreme by assuming that M is true some weeks and S is true in others. As noted, most if not all winners will find themselves in weeks where M is true. Winners should therefore guess that M is true.
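To make that concrete, here's a minimal simulation sketch. The numbers (the odds of a ticket winning, the tickets sold in an M-week, the 50/50 split between M-weeks and S-weeks) are made up purely for illustration and are not from the Twitter discussion; the only point is where the winners end up.

    import numpy as np

    rng = np.random.default_rng(0)
    WEEKS = 100_000
    P_WIN = 1e-4          # chance that any single ticket wins (illustrative)
    TICKETS_M = 10_000    # tickets sold in an M-week (illustrative)
    TICKETS_S = 1         # tickets sold in an S-week

    m_week = rng.random(WEEKS) < 0.5                   # prior: M-weeks and S-weeks equally likely
    tickets = np.where(m_week, TICKETS_M, TICKETS_S)   # tickets sold each week
    wins = rng.binomial(tickets, P_WIN)                # winning tickets each week

    winners_in_m = wins[m_week].sum()
    winners_in_s = wins[~m_week].sum()
    print(f"winners in M-weeks: {winners_in_m}, in S-weeks: {winners_in_s}")
    print(f"fraction of winners in M-weeks: {winners_in_m / (winners_in_m + winners_in_s):.4f}")

With numbers like these, the fraction comes out very close to 1: nearly every winner is in an M-week, so a winner who knew nothing else about the week would do well to guess M.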

This is very unintuitive, though, because it seems wrong to suggest that the number of other people who bought tickets should have any effect on whether you yourself have won. Perhaps intuition is onto something.

In the real world, the only way the lottery could swing between extremes with a pool of millions of potential players is if the decisions of players to play or not to play are correlated. That might be driven by the weather, or by the size of the jackpot, or the economy, or pandemic lockdowns, who knows. So the relevance of other people's decisions may have less to do with why you have won than with the fact that you played at all: the basic fact that you have bought a ticket suggests that M is probably true. This is enough to account for why most winners would be right to guess M, because most ticket purchasers, winners or not, would be right to guess M.
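Here's another minimal sketch with made-up numbers, this time for the correlation point: if your decision to play is driven by the same background conditions that drive everyone else's, then merely having bought a ticket is already evidence for M, before any draw takes place.

    import numpy as np

    rng = np.random.default_rng(1)
    WEEKS = 1_000_000
    P_BUY_GIVEN_M = 0.9   # you usually play under the conditions that produce M-weeks (illustrative)
    P_BUY_GIVEN_S = 0.1   # you rarely play under the conditions that produce S-weeks (illustrative)

    m_week = rng.random(WEEKS) < 0.5                          # prior: M-weeks and S-weeks equally likely
    p_buy = np.where(m_week, P_BUY_GIVEN_M, P_BUY_GIVEN_S)
    you_buy = rng.random(WEEKS) < p_buy                       # your decision tracks the same conditions

    print(f"P(M | you bought a ticket) ~ {m_week[you_buy].mean():.3f}")   # around 0.9, versus a prior of 0.5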

The only way we can rule out this effect is to be sure that your decision to purchase a ticket has nothing to do with the rest of the world. The simplest way to do this is to stipulate that for some reason you are in a very special position such that you alone always buy a ticket. When S is true, you must then be the sole purchaser. It seems you are as likely to win in an S-week as an M-week, so you alone should not update your credence for M based on E2, even if other winners should.
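And a contrasting sketch for that pre-selected purchaser, again with made-up numbers: if you always buy exactly one ticket, your ticket's chance of winning is the same in S-weeks as in M-weeks, so conditioning on your win leaves your credence in M at the prior.

    import numpy as np

    rng = np.random.default_rng(2)
    WEEKS = 2_000_000
    P_WIN = 1e-4     # your single ticket's chance of winning (illustrative)

    m_week = rng.random(WEEKS) < 0.5      # prior: M-weeks and S-weeks equally likely
    you_win = rng.random(WEEKS) < P_WIN   # your ticket's fate is independent of how many others played

    print(f"weeks in which you won: {you_win.sum()}")
    print(f"P(M | you won) ~ {m_week[you_win].mean():.3f}")   # around 0.5: no update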

How do these issues relate back to fine-tuning? As noted previously, we already have the disanalogous point that it is possible for you to observe an unsurprising outcome. It's also unclear whether our understanding of the correlations between purchasers' decisions to play or not to play can be carried over to fine-tuning.

When it comes to fine-tuning, we know nothing at all about the prior probability of a universe forming. To take this back to our lottery example, this would be like not knowing how many players like you buy a ticket every single week. Though you are in a special position, you don't know that you are. For this point we can scrap most of the S-week and M-week stuff and just think about the question of how special you think you are: are you really likely to be the only player in the national lottery in a given week? This also seems to have nothing to do with winning. Maybe the best reason to believe M is the Copernican principle of mediocrity -- you probably shouldn't believe you're special without good evidence. If I translate this back to the multiverse scenario, this is just the argument that you shouldn't believe that only your own possible world is so special as to be singled out for actual existence. That may be an argument for the multiverse, but it has nothing to do with fine-tuning.

I think I'm ready to call it a wash or concede the point. I still think that fine-tuning suggests a multiverse, but perhaps only because of the identity issues.