Is the brain actually a computer? Or is this a metaphor or analogy which is often taken too literally? This issue has come up a few times in the past couple of years, and I wanted to give my take because some common responses do not seem right to me.
I'm going to argue the brain really is literally a computer and that this is not a metaphor or analogy at all. Many of the experts who disagree with this are neuroscientists, biologists and philosophers. We have much to learn from them about how the brain works, but that's not the only issue. To answer the question of whether a brain is a computer we also need to consider what a computer is, and I feel this side of the equation is too often neglected.
Reasons to reject my thesis tend to fall into a number of common categories.
It's just the latest analogy
Matthew Cobb is an excellent communicator on the brain and a proponent of the view that analogies to various types of technology such as hydraulic control, telegraphs and now computers have both helped and hindered our understanding of the brain over the centuries.
Descartes proposed that the brain could cause distant body parts to move, much as occurs in hydraulic automatons. This may have moved the discussion on by sketching an idea of how the brain might control the body, but we are clearly not literally hydraulic automatons. The telegraph, which sends electrical signals along wires, was later proposed as a better analogy for how the nervous system transmits signals. But the nervous system is not a telegraph network either. Similarly, while the brain as computer may be an informative analogy up to a point, for Cobb the brain is not literally a computer, and thinking it is may be holding us back. Analogies are never perfect, and taking them too seriously can lead us in the wrong direction.
I enjoyed his book "The Idea of the Brain" a great deal when I read it, and I think he makes a number of excellent points, but I have a nit to pick with the part discussing this subject, an extract of which forms this article in the Guardian. Cobb convincingly argues that the computer analogy may lead researchers astray when they take concepts from computer science such as encoding/decoding too literally and search for analogues in the brain. Treating neuroscience as akin to the project of reverse-engineering a piece of computing technology is not likely to be fruitful.
Where we disagree is in what we mean by the term "computer". It seems to me that what Cobb has in mind is something like a twenty-first-century technological artifact, but when I say the brain is a computer I mean something more general by the term, encompassing everything from the Antikythera mechanism to calculators to iPads to DNA computers, quantum computers and beyond. To me, a computer is anything that processes information in a way that could in principle be accomplished by a Turing Machine, or more specifically, anything which has this kind of information processing as its primary purpose. (This article from Blake Richards goes into greater detail on this idea and is well worth a read.)
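For concreteness, here is a minimal sketch of that generic notion. It is my own toy illustration in Python, not anything from Richards' article; the rule-table format and function names are invented for the example. A Turing Machine is just a finite table of rules acting on a tape, and on my definition anything whose information processing such a table could in principle reproduce is doing computation.

```python
# A toy Turing Machine interpreter (my own illustration): a finite rule
# table acting on a tape is all that "computer" means in the generic sense.
from collections import defaultdict

def run_turing_machine(rules, tape, state="start", max_steps=1000):
    """rules maps (state, symbol) -> (new_state, new_symbol, move)."""
    cells = defaultdict(lambda: "_", enumerate(tape))  # blank cells read as "_"
    head = 0
    for _ in range(max_steps):
        if state == "halt":
            break
        state, cells[head], move = rules[(state, cells[head])]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Example rule table: flip every bit, then halt on reaching a blank cell.
flip = {
    ("start", "0"): ("start", "1", +1),
    ("start", "1"): ("start", "0", +1),
    ("start", "_"): ("halt", "_", 0),
}
print(run_turing_machine(flip, "1011"))  # prints "0100_"
```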
So, while thinking of the brain as something like a modern computing device may indeed be just the latest in a series of analogies, this does not mean that a brain is not actually a computer in the broader sense. I guess the reason people in this debate talk past each other is that the term is overloaded, and explaining exactly what we mean is cumbersome. Even so, it seems to me that anyone who says the brain is a computer is clearly using the generic term.
The same is not true for those who deny that the brain is a computer. There are ways to deny that the brain is even a generic computer, and so that's what I'd be looking for in an article entitled "Why your brain is not a computer". But imagine seeing an article called "Why your heart is not a pump" and then reading a description of all the ways the heart is unlike a bicycle pump. Based on the arguments in Cobb's article, what he wants to say could perhaps be more accurately summarised as "The brain is quite unlike contemporary computing technology in a number of ways", which is accurate but not exactly catchy. As such, I don't really begrudge Cobb his phrasing too much, but my feeling is that the same charity is often not extended to those like me who make the contrary claim that the brain is a computer.
The brain does not process information like a computer, though
Computers stereotypically process information by following algorithms, which are step-by-step "recipes" which can be decomposed into a sequence of many trivial atomic operations. The messy, chaotic and absurdly parallel computing environment of a biological brain is anything but classically algorithmic. When looking at how a brain "calculates" the predicted trajectory of a ball, it does not multiply variables or arrange information in neat data structures. Its behaviour is probabilistic and unpredictable in detail. Even if in principle a Turing Machine or some other computer could perform the same information processing function as the brain by following some algorithm, the fact is that this is not how the brain itself does it. To call it a computer obscures this fact.
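To make the stereotype vivid, here is roughly the kind of processing the objection has in mind: explicit variables, multiplications and a tidy data structure. This is a toy sketch of my own with invented numbers, not anything from the neuroscience literature.

```python
# The stereotype: predicting a ball's flight with explicit variables,
# multiplications and a neat data structure. All numbers are invented.

def predict_trajectory(x, y, vx, vy, g=9.81, dt=0.01):
    """Step simple projectile motion forward until the ball lands."""
    path = []
    while y >= 0.0:
        path.append((x, y))
        x += vx * dt   # one trivial atomic operation after another
        y += vy * dt
        vy -= g * dt
    return path

path = predict_trajectory(x=0.0, y=2.0, vx=8.0, vy=5.0)
print("lands near x =", round(path[-1][0], 2))
```

Nothing remotely like this tidy loop is visible when we look at neurons, which is the force of the objection.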
I don't accept this because, as long as the laws of physics are computable (a claim which I concede may be disputed), there is some algorithm which would simulate the operation of the brain. In virtue of running this algorithm, a classic Turing Machine could perform the same information processing function as the brain, and in much the same way as the brain, and do so by following an algorithm.
We can interpret the brain as an instantiation of a machine computing this algorithm -- as a computer simulating a brain. If there are variables and data structures in such an algorithm, they are not tracking the position or velocity of objects such as balls in flight but instead the state of neurons and neurotransmitters. If balls are represented at all in the algorithm (and I expect they are), it is at a higher emergent level far removed from the algorithm itself.
This "simulation" gambit is why I think it's generally fair to call any information processing system a computer even if computers follow algorithms and the system does not at first glance appear to be following an algorithm.
But then everything is a computer!
It's true that we can interpret anything as a computer, computing the function that describes its own behaviour, as I just did for the brain. It is generally agreed that such interpretations are not usually very useful. As such, it's important to note that I am not merely claiming that the brain can be interpreted as a computer; I am claiming that it is a computer. There's a subtle distinction here.
This distinction is partly teleological. As noted before, for me it is important that computation be a computer's primary purpose (or at least one of its purposes). I can foresee objections from some quarters that biological evolution is mindless and so has no purposes, but here I would just quote Dennett's writing on the Design Stance. If the heart is a pump, it is not just because it happens to push fluids around; it is because it has evolved to do so.
A secondary consideration is that computers are generally engaged in information processing which is in some way about more than themselves. Bracketing concerns from the likes of Putnam, Searle and Bishop, a rock is most easily interpreted as computing what it is to be that precise rock, but real computers are engaged in more diverse computing tasks, which may be about prime numbers, stock prices or even how to find a mate for the organism hosting it. As discussed for the brain, what the computation is about may not be obvious from inspection of the lowest-level descriptions of the system, but when we call something a computer we do so because we have reason to believe that the objects of the computation are nevertheless implicit in its behaviour.
Ultimately, there may be no absolute fact of the matter on what is and is not a computer. But it makes sense to regard as an actual computer anything which appears to be processing information about something else, especially when it has this as its purpose in some sense.
So what if the brain is a computer?
Even if I can justify the claim that the brain is a computer in a way a rock is not, it may yet seem a little anaemic. I seem to be saying little more than that the brain is processing information for the body, which is not news to most people.
I don't think the claim is so empty. There are a few ways in which we can imagine the claim might be false.
- If brains process information in ways that could not even in principle be matched by a Turing Machine (e.g. Roger Penrose thinks that human mathematicians can reason in ways no computer ever could).
- If brains have another primary purpose (e.g. Descartes' idea that the brain is an organ which communicates with the soul via the pineal gland).
These days, it's only really the first claim that is relevant. It seems that most people do agree that brains have evolved primarily to process information. I won't discuss these ideas further in this post other than to say that I disagree with them.
I can see a further charge that it is reductive to call the brain simply a computer because this ignores a number of non-computational functions it provides. A slightly facetious list might be something like:
- The brain acts as a gland secreting hormones
- The brain's mass aids in balance
- The brain helps keep us from getting overweight by burning calories
In light of such considerations, I'm happy to concede that the brain may not be just a computer. But for my purposes, in discussions pertaining to philosophy of mind, by far the most salient fact about the brain is that it is a computer. This is because I believe it is by virtue of its computations that we are conscious.
As an aside, I think it's still a computer even if this is not the case and panpsychism (for instance) happens to be true. The brain may be the almost miraculous site where a unified phenomenal consciousness comes together, but it hasn't evolved for this purpose. Evolution is blind to everything but behaviour. What evolution needs from the brain is just that it be a computer, so arguably this is its primary purpose even if its ability to produce qualia and phenomenal consciousness means more to us.
Comments

I agree across the board. I actually rarely debate this issue anymore, because it seems evident to me that computation, in and of itself, isn't the issue. The real issue is any mechanistic or functional understanding of the brain, anything that actually threatens to succeed in reducing it to physical processes.
If computation was the issue, it seems like the people opposed to the idea would have alternative theories. But they seldom do. And when they do, it's things like what you mentioned (computation different from what commercial computers do), or embodied cognition, control theory, or something else which is really more an elaboration than an alternative.
In one recent interview on the Brain Inspired podcast, a couple of anti-computationalists questioned the need to even supply an alternative, implying that any attempt to actually understand the brain is misguided. I have a hard time finding a productive way to engage with this attitude.
Thanks Mike.
On the "elaboration rather than alternative" point, I recall having a discussion with Massimo Pigliucci where he was very down on the "naive" idea that the brain could be a computer while being pretty open to the idea that connectionism might be true. This is vexing to me because connectionism is really just an elaboration of computationalism as far as I'm concerned. On my account, a connectionist computer is still a computer and can even be implemented on a classic digital symbolic computer running an algorithm, so it is in no way an alternative.
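To illustrate why I say connectionism is an elaboration rather than an alternative, here is a minimal sketch, entirely my own and with hand-picked weights, of a connectionist network implemented as an ordinary algorithm on a classic digital symbolic computer.

```python
# A connectionist network run as an ordinary algorithm on a digital
# computer. Weights are hand-picked for illustration; this tiny 2-2-1
# network approximates logical XOR.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def forward(layers, inputs):
    """One feedforward pass; each layer is a list of (weights, bias) pairs."""
    activations = inputs
    for layer in layers:
        activations = [
            sigmoid(sum(w * a for w, a in zip(weights, activations)) + bias)
            for weights, bias in layer
        ]
    return activations

net = [
    [([6.0, 6.0], -3.0), ([-6.0, -6.0], 9.0)],  # hidden layer: OR and NAND
    [([6.0, 6.0], -9.0)],                       # output: AND of the above
]
for pair in [(0.0, 0.0), (0.0, 1.0), (1.0, 0.0), (1.0, 1.0)]:
    print(pair, round(forward(net, list(pair))[0], 3))  # high only when inputs differ
```

The network's "knowledge" lives in the weights rather than in explicit rules, yet the whole thing is still just an algorithm.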
I suspect Massimo's objection comes from applying machine metaphors to biology. He's written articles objecting to that practice many times. He probably sees computation as just another instance of that. To some degree, I can see his point, but I don't think it survives the points you make in this post.
I've found that I can usually discuss the operations of the brain without explicitly mentioning computation. But once in a while some variation of it is just the right word to use. I usually catch grief about it, even if it's far from my main point. A lot of people really hate the idea.
Great post DM!
Ah good, I thought I couldn’t get logged in. The preview function for submitting doesn’t seem to work here.
Years ago at one of Massimo’s sites, maybe 2017, I recall him exclaiming “Brains ain’t computers!” and then citing what I thought was a lame article to that effect. This surprised me. I emphasized how divergent that stance seemed to be from modern science. He didn’t dig in his heels on this however, probably realizing that this was merely a terminological dispute that simply depended upon a definition for “computer”. Clearly brains work differently than the computers that we build, though it’s also clear that they evolved to accept input information and process it for output function. That’s their “teleonomy”.
The Matthew Cobb article seems reasonable to me. I don’t quite interpret him to mean that associating brains with computers has ruined things, but rather that horrible problems exist in brain science today and it’s possible that some of those problems stem from associating brains with computers. I personally consider this association to be tremendously appropriate, though I can’t blame him for being suspicious given how problematic things seem today.
In any case DM, I agree with what you’ve written here just as Mike does. But let’s take this further. I’m going to provide an evolutionary scenario that makes sense to me. Hopefully you (or anyone else) will then address where this perspective seems right and where it seems wrong.
(Continued…)
Just as our robots operate by means of non-conscious computers, biological machines must have taken this step by means of neuron-based central organism processors, or “brains”. And just as our robots can do well in reasonably closed environments, such as specific factory applications, it may be that it wasn’t possible for sufficient contingency programming to permit life to continue advancing non-consciously under progressively more diverse circumstances.
So how would a phenomenal experience dynamic add to the function of biological robots? It seems to me that this would originally have existed as an epiphenomenal mutation, quite like an originally functionless extra finger might ultimately become adaptive and continue evolving. So this experiencer, which earlier was simply along for the ride, would now be given a chance to actually decide some things regarding organism function. Here various things would feel good and bad to it as before, though in certain situations the organism would go with what the still amazingly primitive experiencer “chooses” in that regard. But theoretically this determinant wouldn’t improve upon the original situation until the experiencer were provided with a capacity to “think”, or in some sense to recall whether a past situation made it feel good or bad and so decide to do more or less of that. Theoretically the brain would correspond with such decisions.
That’s the bare bones of my thoughts on the evolution of consciousness. Making it to this point should have permitted the evolution of even the human. I’ll try one more as well that gets into my perception of what specifically evolved, as well as why I think modern science has had trouble here.
(Continued…)
Hi Eric,
Thanks!
In response to this and your email, I'll just say that personally I don't think we need any special physical explanation of phenomenal consciousness, such as anything to do with E.M. fields or what have you. I think we can have a purely functional account of phenomenal consciousness. It's possible that the implementation of this in brains might involve E.M. fields or something, but to me this is incidental. The same thing ought to be achievable in principle with only ordinary neural firing or any other Turing-complete substrate.
As I think the explanation is functional, I don't think epiphenomenalism is quite right. I think phenomenal consciousness is just the ability to reflect on what our senses are perceiving, which is a prerequisite for higher order planning, prediction, abstraction and learning. Evolution could easily have selected for these useful abilities.
Okay DM, I’d say that’s a pretty common position today. Popular consciousness theories in general seem to have adopted it despite the observations of people like Searle, Block, Bishop, and Schwitzgebel. Personally I think they might have done a better job illustrating unnatural implications of this view, or some observations which I’ve developed. But I was hoping to avoid that by providing an example of a natural and falsifiable way that phenomenal experience might exist.
Here’s what I sent through email given publishing difficulties. If this comment publishes then I guess the issue has been resolved:
So how specifically might a brain create something which subjectively experiences its existence? An effective starting point here I think would be to consider how it is that computers are known to function in general. They seem to accept the input information that they’re set up to accept, and then run it through algorithms to produce output function. This might be to alter its own algorithmic processes in various ways, or it might be to produce an algorithm that animates the function of a device such as a video screen or a robot.
What this suggests is that in order for the brain to create something that experiences its existence, it must animate various head-based mechanisms that are armed with the physics which produce subjective experience. If true, then what might these mechanisms be? A great deal is known today about brain function in a physiological sense, so we ought to be observing these mechanisms already without realizing that they’re also creating subjective experience. In any case, why not try manipulating various appropriate brain-based mechanisms to see if they alter a given person’s phenomenal experience?
I’m extremely bullish on Johnjoe McFadden’s proposal that certain varieties of synchronous neuron firing create an electromagnetic field that itself exists as a phenomenal experiencer. Observe that every bit of the complexity associated with neuron firing will also exist in the field of electromagnetic radiation that these neurons produce. I don’t know of any other brain based mechanism with such a correlation. So it could be that every bit of the complexity associated with your visual experience right now, for example, was formed by means of properly fired neurons that were in turn incited by input algorithms from your eyes.
McFadden has had some success using the experiments of others to demonstrate that his theory may be right. Apparently the best neural correlate for consciousness we have so far is found in certain kinds of neuron firing synchrony. For example when a person finally recognizes their glasses on a cluttered desk, extra synchrony has been observed. To settle this issue however I think more active experiments will be required. I propose that we implant an appropriate number of transmitters in a volunteer’s head to see if some variety of synchronous firing is able to alter that person’s existing stream of phenomenal experience for report. If phenomenal experience does exist as a certain kind of electromagnetic field, then a tool which we control that alters such a field should tell us whether or not McFadden’s theory seems valid.
(Continued…)
Yes, I read what you sent by email already, so my response is the same. I don't buy that phenomenal experience needs a special physical explanation. I think it's entirely functional and could in principle be realised on any substrate that can compute.
Hmm, my pasted comment doesn’t seem to go through. Maybe this…
This one is without any links, though all of McFadden’s stuff can be found here: https://johnjoemcfadden.co.uk/popular-science/consciousness/
And maybe there’s a word limit so I’ll break this up.
One question that I should ask about myself is, if someone were to credibly demonstrate that one of my most cherished beliefs depended upon otherworldly dynamics (perhaps if psychological egoism had such implications for example), would I be able to acknowledge this? Or instead would I deny that association given how cherished the belief is to me? In my mind there’s no question here — I would of course give up any belief that was at odds with worldly causal dynamics. But I do have to wonder about my ability to acknowledge such an association given how rare it is for effective contrary arguments to succeed in general among people who take diverging positions. As Julia Galef might say, though we may perceive ourselves to have scout mindsets, in practice soldier mindsets are generally observed.
If that is the case, and if it’s true that you’re highly invested in the idea that subjective experience exists when certain generic information is properly converted into other generic information, then there’s probably no point in me demonstrating that this position seems both spooky and unfalsifiable. Furthermore what chance would there be for iconic figures such as Dan Dennett and Steven Pinker to concede this given the massive investments they’ve made over the years? Mere unnatural implications will surely never convince such people to acknowledge the spookiness of what they endorse. So rather than rely upon arguments such as these exclusively, lately I’ve been thinking that it may be best to focus upon characteristics associated with McFadden’s apparently non-spooky proposal. How might it display an improvement?
One way that some have attempted to quickly dismiss his conscious electromagnetic information (cemi) proposal is to assert that the electromagnetic radiation associated with power lines, mobile phones, and so on should thus affect a person’s phenomenal experience. Of course such effects are not observed. So is this good reason to dismiss cemi? In his seminal 2002 paper, under his prediction 6, McFadden addressed the physics of why such radiation should not have this effect.
This got me thinking, however, about how the physics he laid out there might be used to test the validity of his theory empirically. Here I reasoned that enough brain-implanted transmitters, wired up to fire somewhat like actual synchronously fired neurons do, ought to do the trick, given that waves of a certain variety tend to affect other waves of that variety. Scientists would thus see if they could fire these fake neurons in ways that would tamper with the subject’s existing stream of phenomenal experience for oral report. If it were demonstrated that certain types of firing would reliably affect a person’s phenomenal experience, and this were validated in many ways with new experiments on that theme, I don’t see how McFadden’s theory could be disputed. Here a paradigm should shift on the order of Newton’s or Einstein’s.
I’ve been geeking out most of Sunday on a new paper that I’ve just noticed McFadden published. I’m not sure why he didn’t mention this on Twitter. Anyway, he now proposes 13 ways to test his theory, 8 of which are already experimentally verified. It seems to me however that none of what he mentions gets specific enough to provide something as indisputable as what I propose. Perhaps he doesn’t propose anything like this because he’s educated well enough to realize how technically challenging it would be for such an experiment to be attempted?
One interesting thing I see from his new paper is that, as in 2002, he offers an olive branch to global workspace theorists. Surely they’ll only accept it if stronger evidence emerges that consciousness exists in the form of the right kind of electromagnetic radiation, and so is more than certain generic information that’s properly converted into another set of generic information. Any otherworldly implications of the existing paradigm may simply be denied.
Hi Eric,
I think one reason people often don't change their mind easily about stuff they've thought about for decades is that they've been thinking about it for decades, meaning they've rehearsed many times the reasons to accept or reject it. I'm not saying they are just set in their ways, I'm saying that they have considered it from every angle and have developed a pretty robust model of how things work. The reason it isn't easy to persuade them they are wrong is that they have likely already considered arguments similar to those presented and found reasons to reject them. The point of debating with people is therefore not to change their minds but to better understand their model of how things work.
Of course, like anyone, I like to think that I have a scout mindset. I like to think that if you could credibly demonstrate the spookiness of subjective experience being functional I would change my mind (not so for unfalsifiable -- I accept it's probably unfalsifiable, but there's no guarantee that a true theory must be falsifiable). I just don't think that anyone has demonstrated this.
I accept as a possibility the idea that EM radiation could have something to do with consciousness in biological brains, although I don't think it likely. This is compatible with my views. I just don't think it is necessary for consciousness on other substrates, because I think any computational substrate can realise consciousness.