I've taken a long hiatus while I tried to work out whether I had anything substantive left to say about Strong AI. I didn't come up with much, so I'll wrap things up with this post.
I think I have shown that there is no good argument against Strong AI: the view that any sufficiently intelligent and self-reflective computer system would be conscious in the same sense that we are, or, put another way, that consciousness is fundamentally a computational process.
The arguments against this view come in a few different flavours.
There are those, such as John Searle, who seem to misunderstand Strong AI proponents as suggesting that the physical components of computers can be conscious. Searle takes the idea of a "conscious computer" too literally, as referring to a physical object. On my view, by contrast, it is not physical objects (brains) that are conscious but computational processes (minds).
Some arguments are effectively restatements of the hypothesis that Strong AI is false, as in the philosophical zombie thought experiment. These arguments prove nothing; they serve only to articulate and evoke the (false!) intuition that Strong AI is incorrect.
And then there are those arguments, such as the modal argument presented by Alvin Plantinga, that miss the point entirely.
I have not come across any argument specifically against Strong AI that is in any way convincing.
Unfortunately, the same cannot be said for Weak AI. The Gödel argument, as popularised by Roger Penrose, is much more formidable, and would seem to suggest that it is impossible to construct any truly intelligent system. Roughly, it runs: Gödel showed that any consistent formal system rich enough for arithmetic contains true statements it cannot prove; human mathematicians can supposedly see the truth of such statements; therefore human reasoning cannot be captured by any formal system, and so cannot be carried out by any computer.
To account for human intelligence, Penrose resorts to a belief in exotic, uncomputable physics for which there is at present no evidence, yet which must have a profound effect on the brain. Others have used the same argument to claim that Gödel has proven the existence of the soul. In any case, the conclusion that Weak AI is impossible is incompatible with physics as we know it, given that humans serve as examples of intelligent physical systems.
If we assume that physics as we know it is broadly correct, then the Gödel argument against Weak AI must be wrong, probably because we are not the transcendentally intelligent beings we think we are, but merely operate within the limits of what Gödel allows.
And while other arguments for AI appeal to me, such as the argument from evolutionary parsimony, it is this assumption of the primacy of physical laws that I find most convincing.
If physical laws govern the universe entirely, then human beings must be entirely driven by physics. Given that human beings are conscious, intelligent physical systems, it is obviously possible for physical systems to be intelligent and conscious. Of systems designed by humans, only computational processes seem to approach any semblance of intelligence. Furthermore, neuroscience has revealed that the brain does indeed seem to be carrying out a computational process of astonishing complexity, and that disruption of this process leads to impaired intelligence and/or unconsciousness.
In the face of this evidence, it is perfectly obvious to me that human intelligence is fundamentally computational in nature. Even if this were not the case, no credible alternative has been proposed. Even the idea that we have souls doesn't explain anything, as there is no account of how souls make decisions or influence the physical world, or why the ability of the mind to function correctly is so intimately tied to the condition of the physical brain.
I believe in Strong AI because if we accept the basic premise that the universe is governed entirely by physical laws, as it certainly seems to be, then any other conclusion seems to be perverse.
Hi Disagreeable,
I've never had much time for the Chinese Room argument.
Searle makes a related argument that a simulation of consciousness is not consciousness, just as a simulation of a storm isn't a storm and won't make us wet. Massimo endorses this argument, and I'd like to give a brief response to it.
First, I would say that a whole-brain emulation would be a simulation of a brain, not a simulation of consciousness. I'm not saying that a simulation of consciousness would be consciousness, or that a simulation of understanding would be understanding. I'm saying that a full simulation of a brain (at a sufficiently fine-grained level of detail) would have consciousness and understanding, in whatever sense the original brain has consciousness and understanding.
Second, when we're talking about logical processes, simulations clearly do involve the same processes as the original system. When a computer emulates another computer doing addition, the emulator is also doing addition. When a computer emulates another computer playing chess, the emulator is also playing chess. Similarly, a whole-brain emulation can do addition, play chess, etc. It would make no sense to say that the emulations are only doing simulated addition or playing simulated chess. So we can't jump too quickly to the conclusion that there is no consciousness and no understanding. Perhaps consciousness and understanding are more like logical processes than physical ones. I for one would say that they are.
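To make the point concrete, here is a minimal sketch in Python (the toy instruction format, register names, and program are my own invention, purely for illustration): a host program that emulates a guest machine executing an addition. The only way the emulator can produce the guest's result is by genuinely performing the addition itself.

```python
def emulate(program, registers):
    """Interpret a list of ('ADD', dst, src) instructions on a register dict."""
    for op, dst, src in program:
        if op == "ADD":
            # The guest's addition is carried out by a genuine host addition:
            registers[dst] = registers[dst] + registers[src]
    return registers

# Guest program: add register 'b' into register 'a'.
registers = emulate([("ADD", "a", "b")], {"a": 2, "b": 3})
print(registers["a"])  # 5 -- the emulator really did add 2 and 3
```

Nothing about the emulated addition is merely "pretend": the host performs the very same logical operation as the guest, one level down.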
I suspect that Searle sees consciousness and understanding as by definition involving something like subjective awareness, and that he sees subjective awareness as being more like a physical process than a logical one.
Hi Richard,
Thanks for the comment.
I agree completely with your analysis and have made similar comments on Massimo's posts on the subject and indeed on this blog.
In particular, even if we allow the biological naturalists the notion of simulated consciousness, my view is that we have no way to know that our own consciousness is not simulated in this way. As such, I think the distinction between simulated consciousness and real consciousness is incoherent.
This view is more fully explored here:
http://disagreeableme.blogspot.co.uk/2012/12/strong-ai-illusion-of-real-consciousness.html
Hi Disagreeable,
I don't think I can even allow the notion of simulated consciousness for the sake of argument, as it seems completely meaningless to me. To Searle et al consciousness seems to be an epiphenomenon (in the philosophical sense), i.e. it doesn't produce any effect. A non-conscious AI can have exactly the same behaviour as a conscious human. There's no difference between simulating one and the other. So the thing by which they allegedly differ--consciousness--is not the sort of thing that can be simulated.
I'm not sure that Searle and Massimo are really committed to the idea of simulated consciousness. I suspect what they have in mind is that the simulated brain is a simulation of a conscious brain. But the consciousness is irrelevant here. The simulation is equally well a simulation of a non-conscious system with the same behaviour.
Your argument in the linked post didn't talk about simulated consciousness. So let me interpret you as addressing the claim that a simulated brain doesn't have consciousness at all. Then I basically agree with you. I suspect a critic might respond that you (unlike the simulation) have the evidence of your subjective experience. That's what enables you to know that you're conscious. But an epiphenomenon can't cause you to have beliefs or thoughts. So even if it could constitute evidence, you would be incapable of responding to that evidence. Your beliefs and thoughts about being conscious cannot be the result of your being conscious.
Hi Richard,
Firstly, I think I should clarify what I mean by simulated consciousness. I mean the simulation of all the processes that lead to consciousness in a human being. The simulated entity will believe it is conscious, will believe it experiences qualia, etc. It will have consciousness in every empirically detectable way. Searle and Massimo claim that nevertheless it doesn't have "real" consciousness, so whatever consciousness it appears to have is merely "simulated".
The idea of simulated consciousness is therefore very similar to the notion of a philosophical zombie, although Massimo vehemently rejects this claim (I think this is because he narrowly interprets philosophical zombies as being physically identical to humans).
>I don't think I can even allow the notion of simulated consciousness for the sake of argument, as it seems completely meaningless to me.<
I agree of course, but in order to do a reductio ad absurdum you have to first assume that which you disagree with.
>A non-conscious AI can have exactly the same behaviour as a conscious human. There's no difference between simulating one and the other. So the thing by which they allegedly differ--consciousness--is not the sort of thing that can be simulated.<
Searle and Massimo would surely disagree. Massimo, in particular, would say that simulated photosynthesis would have exactly the same behaviour as real photosynthesis, but it's not going to give you sugar. For both of these guys (and for me too, actually), consciousness is more than behaviour. They seem to regard consciousness as the physical product of the physical processes going on in the brain, although I wouldn't be surprised if they denied this.
I personally think this is very confused, as consciousness, unlike sugar, is not a physical substance. Massimo in particular doesn't seem to get that, for some reason.
I, on the other hand, think that we need to take account of the logical processes that lead to the behaviour, so even I don't think behaviour is enough. The Turing Test is not, for me, a perfect test of consciousness; it's just very good evidence for it. I can just about conceive of an AI which appears to be conscious but isn't; however, such an AI would have to perform information processing in a way quite dissimilar to a human's (e.g. a massive, precomputed conditional lookup table of responses to every conceivable situation, as in the sketch below).
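To illustrate the kind of thing I have in mind, here is a toy sketch (the table and dialogue are entirely hypothetical; a real table of this kind would need an entry for every conceivable conversation history, which is astronomically infeasible):

```python
# Hypothetical toy example: a "conversation partner" that performs no
# reasoning at all, only retrieval from a precomputed table keyed on the
# entire conversation history so far.
PRECOMPUTED_RESPONSES = {
    ("Hello.",): "Hi there.",
    ("Hello.", "Are you conscious?"): "I believe so, yes.",
}

def respond(history):
    """Look up the canned response for the conversation so far."""
    return PRECOMPUTED_RESPONSES.get(tuple(history), "I have no answer to that.")

print(respond(["Hello.", "Are you conscious?"]))  # "I believe so, yes."
```

Outwardly, such a system could in principle match a human's responses, but the information processing behind them is nothing like a brain's, which is why I wouldn't credit it with consciousness.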
>Your argument in the linked post didn't talk about simulated consciousness. <
Well, although I don't use that phrase, that's precisely what the post is about. Maybe you mean something other than what I do by the phrase.
>Your beliefs and thoughts about being conscious cannot be the result of your being conscious.<
...unless consciousness is merely the conjunction of all of those abilities, perceptions and illusions that lead to those beliefs and thoughts!
Again, I think we're essentially in perfect agreement. We just interpret terms a little differently and might approach debating the issue from different angles.
Hi Disagreeable,
Yes, we're pretty much in agreement. But I'm objecting to your terminology in one respect.
You wrote: "Firstly, I think I should clarify what I mean by simulated consciousness. I mean the simulation of all the processes that lead to consciousness in a human being."
I get that this is what you mean. But I think this choice of terminology is misleading. We should be careful to distinguish between "consciousness" and "processes that lead to consciousness". "Simulated consciousness" implies that we're simulating consciousness, rather than simulating the processes that lead to consciousness.
BTW I've just read a paper (http://www.imprint.co.uk/pdf/searle-final.pdf) in which Searle denies being an epiphenomenalist. But I don't think he makes much sense.
He writes:
"Consciousness is thus an ordinary feature of certain biological systems, in the same way that photosynthesis, digestion, and lactation are ordinary features of biological systems."
But he also writes:
"What is the difference between consciousness and other phenomena that undergo an ontological reduction on the basis of a causal reduction, phenomena such as colour and solidity? The difference is that consciousness has a first-person ontology; that is, it only exists as experienced by some human or animal, and therefore, it cannot be reduced to something that has a third-person ontology, something that exists independently of experiences."
If consciousness differs from other phenomena in having a "first-person ontology", then it is hardly an "ordinary feature". It is exactly this alleged first-person ontology that exercises philosophers so much, and it seems rather cavalier to wave it away like this. More to the point, if consciousness only exists as first person experience, with no further effect, it seems to be epiphenomenal.
"But if consciousness has no causal powers in addition to its neurobiological base, then does that not imply epiphenomenalism? No. Compare: the solidity of the piston has no causal powers in addition to its molecular base, but this does not show that solidity is epiphenomenal (try making a piston out of butter or water)."
It's clear what causal powers solidity has. It's not clear what causal powers consciousness (as Searle sees it) has. It's no use saying that consciousness gets its causal powers from "its neurobiological base", if it doesn't actually have any causal powers! Searle needs to tell us what causal powers he thinks consciousness has, and then we can judge whether those powers could come from some neurobiological base.
Perhaps Searle will claim that consciousness has the power to cause understanding. But from his point of view understanding seems to be just another purely first-person phenomenon. It doesn't seem to have any effect on the outside world. If an AI (or Chinese Room) can answer questions just as well as a person, despite not understanding them (as Searle apparently maintains) then understanding is of no practical use!
Once again, I agree with you completely.
I can forgive Massimo on the grounds that he hasn't made his career thinking about the philosophy of mind, and so perhaps has not really thought sufficiently deeply on the subject to realise where he's going wrong, but it astounds me that Searle gets away with such blatantly flawed reasoning.