Perhaps we are no more conscious than the automatons we imagine in our thought experiments about artificial intelligence and the philosophy of mind.
In this view, all we have to explain is why we believe we are conscious, not why we actually are. "Real consciousness" is an illusion, and consciousness simply consists of the functions it affords, namely the abilities to be self-reflective, to deliberate, to perceive the world, and so on, in such a way that the entity believes itself to have subjective experience or "real consciousness".
If we grant the assumption of Weak AI, namely that it is possible in principle to create a computational artificial intelligence with all the functional capabilities of a human brain, then all of these abilities can be realized. If we deny Strong AI, then we must suppose that such an AI would falsely believe itself to be conscious.
If we understand how an artificial intelligence might come to believe itself conscious, then we can understand by analogy why biological intelligences might hold the same belief. In other words, once we have explained our belief that we are conscious (by analogy to our artificial intelligences), what, exactly, remains to be explained?