In this post, I will attempt to make an argument for Strong AI from an evolutionary perspective. Please keep the assumptions stated in my previous post in mind.
If a simulated brain can exhibit all the intelligence and emotions of a human brain, then a simulated consciousness of this sort is all we would have needed to succeed as a species. There is no need for evolution to invent something so extravagant as "real" consciousness when simulated consciousness would do.
So, assuming we really do have this mysterious property of real consciousness, we need to account for this. Some possibilities might include:
- We have real consciousness as the result of an evolutionary accident, or independently of evolutionary justification.
- The simplest way to get intelligent entities which appear to be conscious is for them to actually be conscious.
- Consciousness is a necessary side effect of the computations performed by our brains.
The first option does not seem to do justice to the unique and mysterious phenomenon of consciousness, which doesn't seem to be solving any particular problem at all (recall our assumption that we could in principle design a simulated mind which was behaviourally identical but lacked real consciousness). No other example occurs to me of something so remarkable yet entirely non-adaptive having arisen during the course of evolution, although this may be a failure of imagination on my part.
But if our consciousness has no impact on our behaviour, then the assertion that we are conscious is no more or less justified than the assertion that bacteria are conscious. If consciousness cannot be detected by behaviour and has no function, having evolved by accident, then might this accident not have occurred long before the development of brains?
In fact, might it not be the case that all matter is conscious? Perhaps consciousness has nothing to do with life at all, but is intrinsic to reality? This view, which I do not hold myself, is called panpsychism, and is held seriously by some philosophers. If you rely on option one for your explanation, then I think you may want to consider joining their ranks.
But if you are a panpsychist, then it could well be that artificial intelligences could also be conscious.
There are two ways to interpret the second option. One is that nature has short-circuited the difficult task of evolving a purely computational intelligence by exploiting some as yet unknown principle of nature by which it can achieve consciousness. The problem with this view is that, by assumption, our simulation is modelled on the human brain -- the simulated brain has the same functional behaviour as a natural brain but is no more complex. Clearly consciousness is not required for its function.
Another way of interpreting the second option is that consciousness is simply a natural result of certain types of brain organization that lead to intelligent behaviour -- perhaps you would have to design something even more expensive and complex to get all this behaviour without the associated consciousness.
The third option can be seen as a more strongly worded version of this interpretation of the second. In this view, it is impossible to carry out the kind of computations the brain performs without consciousness being experienced by a real mind. For an entity to believe it is conscious and behave as if it is conscious (passing all tests we might devise), it must really be conscious.
As far as I can see, if you deny the possibility of Strong AI, then there is no plausible explanation for why we evolved the capacity for "real" consciousness.
Well then, perhaps we are not really conscious at all, but only think we are! I'll explore this mad idea in the next post.