Wednesday 28 November 2012

In Defence of Strong AI: Meta-meaning

Yesterday's post was quite long and wide-ranging, with a lot of examples. It focused on how it might be possible to get semantics from syntax, particularly with reference to the Chinese Room thought experiment.

However, I feel that I can sum up my position more succinctly by looking at the meaning of the word "meaning" itself.

Tuesday 27 November 2012

In Defence of Strong AI: Semantics from Syntax

Is it possible to get semantics from syntax?

If it were impossible to get semantics (meaning) from syntax (symbols and formal rules for manipulating them), then we probably wouldn't have bothered broadcasting the Arecibo image to the universe in the hope of communicating with aliens.

Since we have no way to explain the meaning of this image to its recipients, it consists of syntax alone. Whether or not aliens could decipher it in practice, decipherment at least seems possible in principle, which suggests that in some cases semantics can be derived from syntax.
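
Indeed, the message's designers built in a purely syntactic decoding hint: it is 1,679 bits long, and 1,679 is the product of the primes 23 and 73, so the only sensible rectangular layouts are 23 by 73 or 73 by 23. Here is a minimal Python sketch of my own (the function name is just for illustration) showing how little is needed to recover that hint from the raw bit count:

    # The Arecibo message is 1679 bits, and 1679 = 23 * 73 with both
    # factors prime, so the only non-trivial rectangular layouts are
    # 23 x 73 and 73 x 23. The bit count alone carries a decoding hint,
    # with no shared language required.
    def candidate_grids(n: int) -> list[tuple[int, int]]:
        """Return every (rows, cols) pair with 1 < rows < n and rows * cols == n."""
        return [(d, n // d) for d in range(2, n) if n % d == 0]

    print(candidate_grids(1679))  # -> [(23, 73), (73, 23)]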

The assertion that you can't, that there is no way to get from syntax to semantics, is perhaps the strongest response from John Searle to the systems/virtual mind refutation of his Chinese Room thought experiment (as outlined in my post on the topic).

Searle illustrates his point by saying that if he is in the Chinese Room, equipped with a formal system of rules (syntax) for manipulating Chinese symbols, he has no way to get from this syntax to the semantics of what is being said. And if he has no way to do this, how can the system as a whole do it?
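
To picture the kind of rule-following involved, here is a toy Python sketch of my own devising (nothing here is from Searle's paper, and his imagined rule book is vastly more elaborate):

    # A toy stand-in for the room's rule book: inputs map to outputs
    # purely by pattern-matching, and whoever applies the rules need
    # not understand a single symbol on either side.
    RULES = {
        "你好吗？": "我很好，谢谢。",      # "How are you?" -> "Fine, thanks."
        "你叫什么名字？": "我叫小明。",    # "What's your name?" -> "I'm Xiaoming."
    }

    def chinese_room(message: str) -> str:
        """Return whatever response the rule book dictates for the input."""
        return RULES.get(message, "请再说一遍。")  # "Please say that again."

    print(chinese_room("你好吗？"))  # prints: 我很好，谢谢。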

In Searle's view, the systems/virtual mind reply to the Chinese Room is just hand-waving until this is explained.

Let's see if I can do just that.

Tuesday 20 November 2012

In Defence of Strong AI: The Modal Argument Refuted

In this video, philosopher and Christian theologian Alvin Plantinga outlines his modal argument, which in my view successfully establishes that he is not identical with his brain or body.

He interprets this to mean that he is something more than physical, and presumably that he has a soul. Physicalism refuted? Nope!

In Defence of Strong AI: Qualia and Mary's Room


The Mary's Room thought experiment considers a brilliant neuroscientist who has been raised and educated in an entirely colourless environment. It was originally proposed by Frank Jackson as follows:

Mary is a brilliant scientist who is, for whatever reason, forced to investigate the world from a black and white room via a black and white television monitor. She specializes in the neurophysiology of vision and acquires, let us suppose, all the physical information there is to obtain about what goes on when we see ripe tomatoes, or the sky, and use terms like ‘red’, ‘blue’, and so on. She discovers, for example, just which wavelength combinations from the sky stimulate the retina, and exactly how this produces via the central nervous system the contraction of the vocal cords and expulsion of air from the lungs that results in the uttering of the sentence ‘The sky is blue’. [...] What will happen when Mary is released from her black and white room or is given a color television monitor? Will she learn anything or not?

On the day she is allowed to emerge from her confinement, she sees the colourful world for the first time. "Wow!" she says, as she takes it all in. Though she knew everything there was to know about the human brain and everything there was to know about the physics of colour, she had never before known what it was like to experience colour first-hand.

But if she now knows something she didn't know before, then evidently she could not have known everything there was to know about colour before her release. This surely proves that conscious experience transcends physics and neuroscience. Physicalism (the view that everything is the result of physical interactions) must simply be wrong. Right?

Well, not really...

Tuesday 6 November 2012

In Defence of Strong AI: Philosophical Zombies Refuted

One concept that often comes up in discussions about consciousness and strong AI is that of the philosophical zombie.

Generally, a philosophical zombie is something that looks and behaves like a human being but has no conscious experience of the world. It may deceive us by protesting that it is conscious, claiming to feel pain and experience sensations just as we do, but it is merely an automaton mimicking a sentient being. It is no more aware than a character in a novel, or a two-line computer program that prints the message "Hello World! (I'm conscious, by the way)" to the screen.
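
In Python, for instance (my rendering; the language is incidental), such a program really is just two lines:

    # A program that claims consciousness without having any:
    print("Hello World! (I'm conscious, by the way)")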

Opponents of strong AI will often bring up the idea of a philosophical zombie to illustrate that it is conceivable that something can behave as though it were conscious while actually having no inner experiences whatsoever. David Chalmers, in The Conscious Mind, discusses the idea of an entire world, perhaps indistinguishable from our own, which is populated entirely by philosophical zombies.

While those who bring up the concept may not claim that philosophical zombies are possible in this universe, they do maintain that such zombies are at least logically possible, that the concept is coherent. I disagree.

Friday 2 November 2012

In Defence of Strong AI: The Chinese Room Refuted


It has become obvious to me that in order to fully explain my world view, I must first defend strong AI.

Strong AI is the position that it is possible in principle to have an artificial intelligence, implemented as a computing machine much like the machines we have today, which would be capable of consciousness, feeling and sensation. This contrasts with weak AI, the position that we could in principle make a computing machine that merely behaves as if it were conscious.

This is important not just for the ethics of how we might treat sentient computers, but also because it cuts to the heart of what our minds actually are. If our minds depend on something other than computation, this constrains the answers to certain questions about our universe. For example, if computation cannot produce consciousness, then we immediately know that we cannot all be living in a computer simulation.

Firstly, I would like to point out the problems I see with some of the most popular criticisms of strong AI, starting with perhaps the most famous: John Searle's "The Chinese Room". Later I will attempt to build a positive case of my own.