Tuesday 6 November 2012

In Defence of Strong AI: Philosophical Zombies Refuted

One concept that often comes up in discussions about consciousness and strong AI is that of the philosophical zombie.

Generally, a philosophical zombie is something which looks like and behaves like a human being, but has no conscious experience of the world. It may deceive us by protesting that it is conscious and claiming that it does feel pain and experience sensations just as we do, but it is just an automaton mimicking a real sentient being. It is no more aware than a character in a novel or a two-line computer program that prints to the screen the message "Hello World! (I'm conscious, by the way)".
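
For concreteness, such a trivial program might look like this (Python is assumed here purely for illustration):

    # The program's entire "mind": a single hard-coded claim of consciousness.
    print("Hello World! (I'm conscious, by the way)")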

Opponents of strong AI will often bring up the idea of a philosophical zombie to illustrate that it is conceivable that something can behave as though it were conscious while actually having no inner experiences whatsoever. David Chalmers, in The Conscious Mind, discusses the idea of an entire world, perhaps indistinguishable from our own, which is populated entirely by philosophical zombies.

While those who bring up the concept may not claim that philosophical zombies are possible in this universe, they do maintain that the concept is at least logically possible or coherent. I disagree.

If it is true that philosophical zombies are logically possible, then proponents of strong AI have a serious problem to overcome in arguing that sentient computers (e.g. a computer which can pass the Turing test) would actually be conscious. Until we discover some new scientific principle of consciousness, there is no reason to think that any artificial system we implement would be anything but a philosophical zombie.

I want to focus first on what it means for something to be logically possible or conceivable. I take conceivability to mean that something can be imagined and discussed, albeit imprecisely. It helps if the idea is not easily dismissed. We might raise the idea of a perfectly round square, but it turns out rather trivially that this idea is nonsensical, so we might call it inconceivable.

What about the concept of the greatest prime number? The concept is well-defined and perfectly reasonable on the face of it, so I would call it conceivable. As it happens, Euclid proved that there is no such thing: there are infinitely many prime numbers. The concept is therefore logically impossible.
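
For reference, here is a sketch of Euclid's argument (set in LaTeX notation purely as an illustration):

    % Euclid: there is no greatest prime.
    Suppose $p_1, p_2, \dots, p_n$ were all of the primes, and let
    $N = p_1 p_2 \cdots p_n + 1$. No $p_i$ divides $N$, since each
    leaves remainder $1$, so any prime factor of $N$ lies outside the
    list -- a contradiction. Hence there are infinitely many primes.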

So, I will allow that the concept of a philosophical zombie is conceivable. However, I do not accept that this means that it is logically possible, and in fact I strongly suspect that it is not.

For anything to be logically possible, it must first be well-defined. The concept of a philosophical zombie is a little vague in my view. What does it mean for it to behave like a human? We need at least a sketch of a definition of its behaviour, because if we don't define its behaviour then the concept itself is not well-defined. Let's assume this behaviour is not scripted -- the zombie is not a character in a novel, but an object in the real world which reacts dynamically to its environment and situation. To achieve this, there must be some incredibly sophisticated algorithm that determines its behaviour. This algorithm could be implemented by a carefully designed computer program, or even by a program that simulates a particular organic brain scanned with some future technology.

But the view of strong AI proponents is that such algorithms would be conscious. Perhaps consciousness arises naturally by executing algorithms of a certain complexity and organisation. Perhaps consciousness is simply a word which describes what it is like to actually be a complex, self-reflecting algorithm, and perhaps that's all we are.

Now, if this view is correct (and I clearly haven't demonstrated that it is), then philosophical zombies would appear to be logically impossible, for if the algorithm defining the behaviour of a zombie is sophisticated enough to behave exactly like a real human, then that algorithm would necessarily be of the sort which gives rise to consciousness. In other words, you couldn't have a zombie without having a conscious algorithm driving it, but since philosophical zombies are supposed to have no consciousness, they would be logically impossible.

So we have the result that if strong AI is correct, then philosophical zombies are most likely impossible, while if philosophical zombies are logically possible, then strong AI is most likely incorrect.

The two ideas are very probably perfect opposites. I say "probably" because there is also the dubious position that the same human behaviour could be achievable with either conscious or unconscious algorithms.

In any case, we can't use the concept of philosophical zombies to disprove strong AI without first proving that philosophical zombies are logically possible, and we probably can't do that without disproving strong AI. Any argument relying on philosophical zombies is therefore necessarily circular and useless.

8 comments:

  1. I like the recent posts on mind and defending strong AI!

    I agree with your conclusion here that the conceivability of philosophical zombies does not refute materialism and strong AI. However, I am still deeply troubled by the zombie argument, because I think it DOES PROVE that computational materialism is incomplete and has not (yet) captured the nature of consciousness with an adequate explanation.

    For, it seems, if we had done so--i.e. if a conception of how qualia emerge from materialistic computations existed--then conceiving of philosophical zombies ought to be as absurd as conceiving of the seasons as caused by the emotional states of Greek deities. I.e. we have explained, mechanically, how the seasons "emerge" from the physical dynamics of planetary atmosphere and tilt (in relation to the sun), etc., and so a conception which contradicts this, like that of Greek gods causing the seasons, becomes plainly absurd.

    But there is as yet no physical explanation which makes philosophical zombies seem similarly absurd. The trouble is that we can (as Frank Jackson points out with his "Mary's Room" argument) go through all the neuronal mechanics of a human brain, and we would never discover the existence of consciousness--without recognizing a causal correspondence of those mechanics to one's own subjective conscious states of mind.

    Thus, it seems that all we have is an inductive argument: because materialistic explanations of the world have won out (with the success of science) in all other major areas of philosophical inquiry, they will likely win out in explaining consciousness too. (But it remains seemingly impossible to imagine how this will be done, and so it remains possible that computational materialism is metaphysically flawed.)

    Replies
    1. This comment has been removed by the author.

    2. Hi Brad, I'd suggest that not only is computational materialism incomplete, it's quite unnecessary. We don't need it to develop a thoroughly naturalistic science and philosophy of the conscious mind. In The Rediscovery of the Mind, John Searle discusses how the materialist tradition (strictly understood, not as a synonym for naturalism) has made 'the same mistakes over and over again' in the philosophy of mind. In effect, computational materialism is trying to solve problems that don't exist, but are simply products of its misconceptions.

      I'd say that by looking at his arguments for biological naturalism, and separating them from the many misrepresentations of them that are passed about, we can clarify what the issues really are.

      Regards

    3. Brad is long gone, unfortunately!

  2. Thanks for the encouragement!

    It's no great surprise that computational materialism is incomplete - we have not yet created a sentient computer. I don't think that the philosophical zombie argument proves anything we didn't already know.

    Faster-than-light travel seems to be impossible, but we don't find the concept inherently absurd. Similarly, even if we understood consciousness completely in terms of computations, the concept of philosophical zombies wouldn't automatically become absurd.

    The Mary's Room thought experiment proves that there is a difference between knowing intellectually how the brain works and being familiar with the subjective experience of being that brain. It probably does show that qualia are phenomena that exist in some sense (even if not physically), but it doesn't show that qualia cannot be the result of computations.

    If you understood a human brain completely, you would be able to predict that that brain would report experiencing consciousness, and so you would in a sense have discovered consciousness even without inferring it from your own conscious experience - you just wouldn't know what it would feel like to be that brain.

    A counter-argument to that might be that if you understood a chimpanzee's brain completely, that brain would probably be conscious and yet be incapable of recognising and reporting the concept of consciousness, so I can't claim that reporting consciousness is the same thing as consciousness.

    So, another tack. We would perhaps never truly understand consciousness if we were not conscious ourselves, for the reason that consciousness is not a physical phenomenon present in the universe. By that I mean it is not something that can be measured in any sense, and not something that has any causal effect on anything. In a very real sense, it does not exist from the point of view of an unconscious observer. We should not expect it to be predicted by science simply because it is not a "real" phenomenon in this sense.

    And yet, subjectively, it does exist. This is going to get into some thoughts that come from years of pondering the consequences of the MUH (the Mathematical Universe Hypothesis), so I suspect I'm going to sound crazy, and it's going to raise more questions than it answers, but here's how I would resolve this:

    Mathematical constructs require nothing to sustain them. They just exist, realities unto themselves. The universe is such a mathematical construct.

    But the mind is also such a mathematical construct. Consciousness and qualia are real phenomena within this construct, just as atoms and photons are real phenomena within the construct of the universe. However, consciousness is not a real phenomenon with respect to the physical universe.

    In this view, the mind is not actually part of the physical universe, so anything which exists only within the reality of the mind (qualia etc.) does not exist and cannot be detected within the reality of the physical universe.

    When we see brains which are "conscious", this is like seeing heaps of sand which are "conical". The Platonic mathematical construct of a cone does not actually physically exist as an object within the universe, but there are many physical representations and uses of this concept within the universe. So, in a way, a mind doesn't really exist _within_ the universe but independently of it, and a brain is just a physical representation and exploitation of that mind which has been found by a process of evolution.

    Consciousness must simply be what it feels like to be a certain kind of algorithm. Studying the brain will no more tell you what it feels like to be conscious than studying geology will tell you what it feels like to be a rock. But that's OK, because consciousness doesn't "really" exist.

    Replies
    1. I think Brad means that it shows that computational materialism is incomplete in a conceptual sense, while your reply addresses the degree of our empirical knowledge, which is a different issue.

  3. I believe that consciousness is repeated feedback from sensory inputs, somewhat like an infinitely recursive mirror image. To have artificial intelligence, which is *different* from consciousness, we don't need consciousness. Therein lies the heart of the paperclip problem.

  4. This comment has been removed by a blog administrator.
