Tuesday 12 April 2022

The Distributed Brain Thought Experiment

I wanted to give a quick take on a thought experiment that has been doing the rounds on Twitter lately.

It's also worth reading this excellent write-up by my friend at selfawarepatterns.com. That post also links to a paper, which I confess I have not read; I'm going on the Twitter thread only.

The setup is to imagine being able to record and replay exactly what all the neurons in a brain are doing while experiencing something. When we replay, is the experience reproduced? What if the neurons are separated in space and in time?

I agree with whoever originally framed this thought experiment (Camilo?) that it poses a serious problem for physicalist functionalism. But it poses no problem at all for my view.

My view, if you recall, is that the mind is not a physical thing but an abstract pattern that evolution has happened upon and is exploiting in order to get a bag of meat to navigate its environment successfully and reproduce. To say that the mind is the brain is, in my view, mistaken -- it is like saying that the hexagon just is the shape of a cell in a beehive. Both the mind and the hexagon are abstract structures, and neither is strictly identical to any physical realisation. Bees have happened upon the hexagon; humans have happened upon the mind.

There's some room for quibbling over what exactly it is we're recording and replaying. Setting aside what is technically possible in practice, I think the thought experiment is most interesting if we assume that we can record and replay absolutely everything about what a neuron is doing during the experience. This means not only firing but also strengthening its response to some inputs and weakening its response to others (what corresponds in a normal brain to strengthening and weakening synaptic connections).
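Since I'm speaking in the abstract, here is a minimal toy sketch of the distinction I mean between computing activity and replaying it. Everything in it (the update rule, the numbers, the function names) is my own illustrative invention, not a model of real neurons: the point is only that a replay reproduces the exact same outward activity, weight changes included, with no causal dependence on any inputs.

```python
# Toy sketch, not a neuroscience model. A "live" neuron computes its next
# state from its inputs, including plasticity (weight changes); a replay
# just reads the log back, reproducing identical activity causally
# disconnected from any inputs.

def live_step(weights, inputs, threshold=1.0):
    """Compute a firing decision and crudely updated weights."""
    activation = sum(w * x for w, x in zip(weights, inputs))
    fired = activation >= threshold
    # Strengthen weights on active inputs when the neuron fires, weaken
    # them slightly otherwise -- a stand-in for synaptic plasticity.
    new_weights = [
        w + (0.1 if fired else -0.01) * x for w, x in zip(weights, inputs)
    ]
    return fired, new_weights

def record(weights, input_stream):
    """Run the live neuron, logging everything it does, weight changes included."""
    log = []
    for inputs in input_stream:
        fired, weights = live_step(weights, inputs)
        log.append((fired, list(weights)))
    return log

def replay(log):
    """Reproduce the recorded behaviour without consulting any inputs."""
    return [(fired, weights) for fired, weights in log]

stream = [[1, 0], [1, 1], [0, 1]]
recorded = record([0.6, 0.6], stream)
# The replay is behaviourally indistinguishable from the original run.
assert replay(recorded) == recorded
```

Whether that replayed log still counts as the same experience is, of course, exactly what the thought experiment is asking.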

The relevance to this problem is that, on my view, the conscious experience of a mind is something that exists timelessly and platonically in a space of all possible minds and all possible experiences. So dividing up the brain and replaying the action of the neurons makes no difference to the experience of the mind. But doing so in the fashion posited in the thought experiment causally disconnects that mind from interaction with the rest of the physical world, so while the experience still exists (as it would even if the brain had never existed in our physical world in the first place), it is no longer part of our particular world in any meaningful sense. As such it is not of moral concern to us, and it doesn't matter whether we regard it as conscious or not.

I disagree with the analysis over at SelfAwarePatterns, which brings up a point about whether we can be conscious of anything if we don't remember it. I think this misses the point: if we are conscious of something, we are conscious of it in the moment, regardless of whether we remember it later. None of us will remember, two hundred years from now, anything that is happening at this moment, but that doesn't mean we are not conscious now! No time limit is mentioned for the duration of the conscious experience; it could be on the order of several minutes, which I think is certainly enough time to regard it as conscious even if it is forgotten immediately afterward.

That post argues that in step 3, when we block neurotransmitters, we are now blocking the ability to remember earlier attempts, so this is where consciousness is lost. This departs from my interpretation, as I allow that we can record and replay the effect of neurotransmitters in strengthening and weakening synaptic connections.

Even so, I see no important difference here from step 2. The activity of the neurons is replayed exactly, so there is no room for the subject to feel confused or to experience any difference in memory recall in the moment. The neurons fire exactly as they did in step 1, so if the subject is not confused in step 1, then she is not confused in step 2. If the subject does not recall earlier attempts in step 1, she does not recall earlier attempts in step 2. It cannot be that memory formation is completely lost, because at later times during the experience, the subject's neurons will fire as if earlier times in the experience are remembered.

(Perhaps a nitpick on this point: the post suggests that the problem is that in step 3 the subject cannot form new memories. On my interpretation of the thought experiment, what makes the replay weird with respect to memory is not so much that new memories are not laid down, but rather that memories of the original experience and of any repeat experiments are blocked. Presumably the original experience allowed new memories to form, so the new experience should produce all the neural activity required for the same -- but without access to those original memories, so it's basically overwriting identical memories on top of the originals.)

The intuition that consciousness must vanish along with the potential to form or recall memories seems to be related to the fact that the causal structure of the brain has been interrupted, and to the idea that causal structure is important for consciousness. I agree, but only because I already think consciousness is an entirely abstract structure. Relying on actual cause and effect is a problem for a physicalist, because causal structure is a pattern we impose on nature. For actual physical causation to matter for consciousness, there would have to be an objective fact of the matter about what causal structure is implemented by a physical system, and this does not seem to be on a sound scientific or philosophical footing (at least in my humble opinion). But if, as I believe, causal structure is entirely abstract, then there is no problem for me in identifying the mind with some causal structure. That causal structure is a reasonable interpretation of the action of an intact brain undergoing an experience, but it doesn't really describe what is happening when the neurons are separated in time and space and are just replaying their individual behaviour.

The advantage for my view is that I can regard a brain as conscious without requiring that there be a definitive objective fact of the matter about what causal structure it is implementing. It is enough that there be a reasonable causal interpretation which captures how we see it interact with the world around it. I, and you, should reasonably interpret your brain to be implementing the causal structure of your mind. But the existence of your conscious experience is not dependent on that interpretation -- it always exists regardless of whether it is physically instantiated.

Unfortunately, I expect none of this makes very much sense to anyone who has not been steeped in the ramifications of my idiosyncratic views for as long as I have.

4 comments:

  1. Including the synapses in the recording is interesting, although playing the recording back in the same substrate really pushes things much further past the bounds of feasibility. Not that 86 billion neural clamps weren't already well past it anyway. Along those lines, we could take it all the way down to the molecular level, and say we recorded all the state transitions that happen at that level.

    It focuses things on the causal structure, and how crucial it might be. I think it is, but as I noted in my post, it really depends on how you conceive of consciousness.

    A deleted passage from my post included an analogy of recording a video game where the computer was playing itself. If we play back a recording of that game, a recording made at the virtual machine level and including all the machine states, but where the playback only moves from one state to another because that's what's next in the recording, is a game still happening? I don't think it is. But someone could insist it is because in the original event the causality was right, or with some other reasoning.

    Strictly speaking, there may not be a fact of the matter answer on the game. It depends on what we require to be there for it to be a “game”. That’s easy to accept in the case of a game, but very hard to accept in the case of consciousness. But really, there’s no good reason to see them as different in this regard. We only have a strong intuition that they are because of the instinctive dualism we’re all born with.

    As usual, I don’t see platonism making a difference here. Even if we’re dealing with platonic structures, it seems like we still have two different platonic structures here. And the issue remains what must be preserved in the second version for it to be in the same category as the first. But as always, I may be missing something with the platonic view.

    1. Hi Mike,

      There is no fact of the matter about the game. My view is that there is no fact of the matter about consciousness either.

      So, my view is that the conscious experience exists regardless. What there is no fact of the matter about is whether some physical system implements a particular mind. There are only more and less useful or reasonable interpretations.

      We can imagine some sort of gradual process whereby the normal brain you regard as conscious is very smoothly disassembled into the final separated neuron state you regard as unconscious. I'll leave out the details, but let's just assume it is as smooth as in something like the fading qualia thought experiment.

      This is usually presented as a dichotomy: either consciousness switches off abruptly at some point, or else the qualia fade out smoothly without the subject realising. Neither is very plausible.

      My view presents a third option. The conscious experience continues to exist (and always exists, platonically). What smoothly fades out is the utility of identifying that mind with that physical system. We start out with a system we should regard as implementing that mind. Especially before any intervention, we can interact with it and so on, and so it is meaningfully a part of our world and we are part of its. But we end up with a system that doesn't interact with anything in our world. So there's no utility in regarding it as a conscious agent inhabiting our world. That conscious agent still exists, but not in our world any more.

    2. Thanks DM. It seems like where we agree is on the extent to which the experience exists in this world, for which there's no fact of the matter as we start altering aspects of the physical implementation.

      But I would think platonism requires more than just the conscious experience having an independent existence. Every variation of the experience has its own independent existence. And every variation on the border between being the experience and not being the experience. And then every variation of not being the experience. And which of these structures is in the category of experience, and which is in the category of non-experience seems like the distinction where there’s no sharp fact of the matter. (Admittedly, there will be structures where it’s a much easier judgment call than the edge cases. It’s only these edge cases where the conundrums arise.)

      So with platonism, I can see the case for the experience always existing. But of all the abstract structures existing, what is or isn't the experience still seems like an issue. Again, I may well be missing something with all this.

    3. Hi Mike,

      There's one thing you may be forgetting, which is that there is no need to interpret an abstract information processing system in one way or another. The system is what it is. The causal flow is defined as part of the system. So we don't have the same problem where gradually changing the physical system nudges us towards interpreting it as a different causal system.

      To gradually change an abstract causal information processing system, you would necessarily need to change how it processes information and so how it behaves. As you degrade the functionality, then it's no surprise that you would degrade the consciousness. You could smoothly make it more and more simple, as if evolving in reverse from human to ape to monkey all the way back to bacterial level intelligence.

      So, yes, there is no sharp fact of the matter with regard to where consciousness starts or stops. But the degrading of consciousness goes hand in hand with the degrading of functionality, which you don't get in fading qualia experiments or in this neural firing experiment, where what functionality it implements is in the eye of the beholder.
