Perhaps we are no more conscious than the automatons we imagine in our thought experiments when considering questions about artificial intelligence and philosophy of mind.
In this view, all we have to explain is why we believe we are conscious, without explaining why we actually are. "Real consciousness" is an illusion, and consciousness simply consists of the functions it affords, namely the ability to be self-reflective, to deliberate, to perceive the world, etc., in such a way that the entity believes itself to have a subjective experience or "real consciousness".
If we allow the assumptions of Weak AI, namely that it should be possible in principle to create a computational artificial intelligence which has all the functionality of a human brain, then all of these abilities can be achieved. If we deny Strong AI, then we must suppose that the AI will falsely believe itself to be conscious.
If we understand how artificial intelligences might believe themselves to be conscious, then we can understand by analogy why biological intelligences might share those beliefs. In other words, if we have explained our belief that we are conscious (by analogy to our artificial intelligences), then what, exactly, remains to be explained?
If we must also explain why this belief is true, why we are "really" conscious, then we beg the question by presupposing that our beliefs are more accurate than those of our artificial counterparts.
It seems to me that if we hold in doubt the consciousness of an artificial entity that seems to be conscious, and indeed proclaims its consciousness, then no entity is immune from similar doubts - even oneself! If there is no way for robots or roboticists to distinguish real consciousness from the false belief in consciousness, then the same is true for people and neuroscientists.
While philosophers often claim that the one thing we cannot seriously doubt is our own consciousness, I suggest that it is entirely reasonable to believe that the consciousness we experience is an illusion or belief held by a computational intelligence. If that isn't real consciousness, then perhaps there is no such thing as real consciousness.
In any case, you could use exactly the same argument to try to convince an AI that the consciousness it perceives must be real. As such, I think the argument is just as useful in proving strong AI as it is in refuting it. The insistence that consciousness is a real phenomenon, more than a set of mechanical tricks, must therefore be regarded as suspect.
If you are an empiricist, then I would ask whether you have any more evidence for the reality of your own consciousness than a simulated consciousness would have. Disregard your intuition if you can. If we can show that the computations produced by your brain would result in your belief that you are conscious, then how could you know whether this belief is true?
I suggest that you could not, and in fact the distinction between believing yourself to be conscious and actually being conscious is moot.
To be clear, my position is most certainly not that human beings are unconscious automatons! Instead, my position is that for an entity to be conscious, it must simply believe itself to be conscious in much the same way we do. If it believes it is conscious and behaves as if it is conscious, then it really is conscious, because this is all consciousness is. The illusion is "real" consciousness, that ill-defined notion of something greater than mere computation.