Monday, 17 December 2012

Strong AI: The Illusion of Real Consciousness


Perhaps we are no more conscious than the automatons we imagine in our thought experiments about artificial intelligence and the philosophy of mind.

On this view, all we have to explain is why we believe we are conscious, not why we actually are. "Real consciousness" is an illusion: consciousness simply consists of the functions it affords, namely the abilities to be self-reflective, to deliberate, to perceive the world, and so on, in such a way that the entity believes itself to have subjective experience or "real consciousness".

If we grant the assumption of Weak AI, namely that it should be possible in principle to create a computational artificial intelligence with all the functional abilities of a human brain, then all of these abilities can be achieved by a machine. If we deny Strong AI, then we must suppose that such an AI would falsely believe itself to be conscious.

If we understand how artificial intelligences might believe themselves to be conscious, then we can understand by analogy why biological intelligences might share those beliefs. In other words, if we have explained our belief that we are conscious (by analogy to our artificial intelligences), then what, exactly, remains to be explained?

Strong AI: Evolutionary Parsimony


In this post, I will attempt to make an argument for Strong AI from an evolutionary perspective. Please keep the assumptions stated in my previous post in mind.

Thursday, 13 December 2012

Strong AI: The Story So Far


Let's take Strong AI to be the view that the essential capabilities of the brain are computational, meaning that if we reproduced those same computational functions in an artificial machine such as a silicon computer, that machine would have a mind just as real and just as conscious as a human mind.

In my recent posts, I have argued against some common criticisms of this view.

I have shown how the Chinese Room may establish that a piece of computational hardware cannot itself become conscious, but that it does not show that the software running on that hardware is unconscious.

I have shown that the concept of a philosophical zombie does nothing to cast doubt on Strong AI, for if Strong AI is true then philosophical zombies are probably logically incoherent. The philosophical zombie is nothing more than a restatement of the view that Strong AI is false.

I have shown, with specific reference to Mary's Room, that qualia do not in themselves disprove physicalism, but are probably simply brain states that act as labels for certain percepts and concepts.

I have agreed with Alvin Plantinga that the mind and body are distinct entities (like software and hardware), but argued that this conclusion is of little consequence: it should surprise no thoughtful person, and it does not imply that the soul can survive the destruction of the body.

I have also explained why I disagree with the assertion that semantics cannot be derived from syntax, and sketched how I think semantics can be viewed as a special case of syntax.

What I have not yet done is give positive reasons for suspecting that Strong AI is true. In the following posts, I will attempt to do so.