Let's take Strong AI as the viewpoint that the essential capabilities of the brain are computational, meaning that were we to reproduce these same computational functions in an artificial machine such as a silicon computer, then that machine would have a mind just as real, just as conscious as a human mind.
In my recent posts, I have argued against some common criticisms of this view.
I have shown that The Chinese Room may at most establish that a piece of computational hardware cannot become conscious; it does not show that the software running on that hardware is unconscious.
I have shown that the concept of a philosophical zombie does nothing to cast doubt on Strong AI, for if Strong AI is true then philosophical zombies are probably logically incoherent. Philosophical zombies are nothing more than a restatement of the viewpoint that Strong AI is not true.
I have shown that qualia, with specific reference to Mary's Room, do not in themselves disprove physicalism, but are probably simply brain states which act as labels for certain percepts and concepts.
I have agreed with Alvin Plantinga that the mind and body are distinct entities (like hardware and software), but have argued that this conclusion is of little consequence: it should surprise no thoughtful person, and it does not imply that the soul can survive the destruction of the body.
I have also explained why I disagree with the assertion that semantics cannot be derived from syntax, and sketched how I think semantics can be viewed as a special case of syntax.
What I have not yet done is give some positive reasons for suspecting that Strong AI is true. In my following posts I will attempt to do so.
If you are a theist or believe in other supernatural phenomena, my arguments are not for you. If you doubt that it is possible even in principle to design a computer program that behaves as if it were conscious, then I'm not going to persuade you either. These arguments are intended for an audience that is committed to a materialist worldview and that allows the possibility of making a machine that at least appears to be intelligent (weak AI).
Let's clearly spell out our foundational assumptions.
In the following discussions, let's take materialism/naturalism/physicalism as a given, meaning that all phenomena we observe are the result of physical interactions according to natural law.
Let's also assume that all physical interactions can in principle be simulated on a computer (even if such a simulation would be unfeasible to carry out in practice).
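To make the idea of "simulating physical interactions" concrete, here is a toy sketch: a numerical simulation of Newtonian free fall. The function name, step size, and parameters are illustrative choices of mine, not anything from the argument itself; the point is only that a physical law, expressed as equations, can be stepped forward mechanically by a computer.

```python
# Toy illustration: stepping a physical law (free fall under gravity)
# forward in time numerically. All names and values are illustrative.

def simulate_fall(height: float, dt: float = 0.001, g: float = 9.81) -> float:
    """Return the approximate time for an object to fall from `height` metres."""
    y, v, t = height, 0.0, 0.0
    while y > 0.0:
        v += g * dt   # update velocity from acceleration
        y -= v * dt   # update position from velocity
        t += dt
    return t

print(round(simulate_fall(10.0), 2))  # close to sqrt(2*10/9.81), about 1.43 s
```

The simulation is approximate, and finer time steps make it more faithful; the in-principle claim above is that nothing stops this refinement from being carried arbitrarily far, feasibility aside.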
Now, if this is so, it must be the case that we could in principle construct a computer program which would perfectly simulate human consciousness. This computer program would be capable of passing the Turing Test, would report that it is conscious, and would demonstrate human qualities such as creativity, ingenuity, humour, etc.
In particular, I want to stress that this simulated intelligence would have beliefs about itself and about the world. It would also be able to perceive the world through senses, have the appearance of emotional states, etc.
None of these attributes or abilities assume that it is actually conscious. It is trivial to write a computer program that has beliefs, for example. In the simplest form, all that is needed for a belief is some internal representation of the state of the world that may or may not be accurate. A mobile phone believes it is 3.54 pm. A flight computer believes it is travelling at 654 kilometres per hour.
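The mobile-phone example above can be made literal with a few lines of code. This is a minimal sketch, with hypothetical names of my choosing: a "belief" is just a stored internal representation of the world that may or may not match reality.

```python
# A minimal sketch of "belief" as internal state: a stored representation
# of the world that may or may not be accurate. Names are illustrative.

class Clock:
    """A device whose 'belief' about the time is just a stored value."""

    def __init__(self, believed_time: str):
        self.believed_time = believed_time  # internal representation

    def report(self) -> str:
        return f"I believe it is {self.believed_time}"

phone = Clock("3.54 pm")          # the belief may be accurate...
print(phone.report())
phone.believed_time = "9.00 am"   # ...or, if the clock drifts, false
print(phone.report())
```

Nothing here requires consciousness: the phone "believes" the time in exactly the thin, functional sense the paragraph describes.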
Let's assume that the way the computer achieves the appearance of consciousness is closely modelled on neuroscience, philosophy of mind, etc., so it's not simply reading from a script or performing any similar trickery. The way it works appears to be directly analogous to the way a human mind works.
If we allow ourselves to be extravagant, we could even suppose that the artificial intelligence is implemented by direct simulation of a specific human brain, whether by modelling individual neurons or even down to the level of molecules and particles if necessary.
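To give a flavour of what "modelling individual neurons" means in practice, here is a toy leaky integrate-and-fire neuron, one of the simplest units used in neural simulation. The parameters are illustrative, not biological fits, and a brain-scale simulation would involve billions of far richer units; the sketch only shows that the basic operation is mechanical.

```python
# A toy leaky integrate-and-fire neuron. Threshold and leak values are
# illustrative, not fitted to biology.

def lif_spikes(inputs, threshold=1.0, leak=0.9):
    """Leakily integrate input over time; emit a spike (1) when the
    membrane potential crosses threshold, then reset to zero."""
    v, spikes = 0.0, []
    for x in inputs:
        v = v * leak + x       # leaky integration of input current
        if v >= threshold:
            spikes.append(1)   # fire
            v = 0.0            # reset potential
        else:
            spikes.append(0)
    return spikes

print(lif_spikes([0.5, 0.5, 0.5, 0.0, 0.9, 0.9]))  # fires on sustained input
```

Whether simulating at this level (or deeper, down to molecules) suffices for consciousness is exactly the question the following arguments address.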
What the following arguments will attempt to show is that we have reason to believe that such a computer program would be conscious.