Thursday 13 December 2012

Strong AI: The Story So Far


Let's take Strong AI as the viewpoint that the essential capabilities of the brain are computational, meaning that were we to reproduce these same computational functions in an artificial machine such as a silicon computer, then that machine would have a mind just as real, just as conscious as a human mind.

In my recent posts, I have argued against some common criticisms of this view.

I have shown how the Chinese Room may establish that a piece of computational hardware cannot become conscious, but it does not show that the software running on that hardware is unconscious.

I have shown that the concept of a philosophical zombie does nothing to cast doubt on Strong AI, for if Strong AI is true then philosophical zombies are probably logically incoherent. Philosophical zombies are nothing more than a restatement of the viewpoint that Strong AI is not true.

I have shown that qualia, with specific reference to Mary's Room, do not in themselves disprove physicalism, but are probably simply brain states which act as labels for certain percepts and concepts.

I have agreed with Alvin Plantinga that the mind and body are distinct entities (like hardware and software), but argued that this conclusion is of no consequence: it should surprise no thoughtful person, and it does not imply that the soul can survive the destruction of the body.

I have also explained why I disagree with the assertion that semantics cannot be derived from syntax, and sketched how I think semantics can be viewed as a special case of syntax.

What I have not yet done is give some positive reasons for the suspicion that Strong AI is true. In the posts that follow I will attempt to do so.

If you are a theist or believe in other supernatural phenomena, my arguments are not for you. If you doubt that it is possible even in principle to design a computer program that behaves as if it were conscious, then I'm not going to persuade you either. These arguments are intended for an audience that is committed to a materialistic world view and that allows the possibility of making a machine that at least appears to be intelligent (weak AI).

Let's clearly spell out our foundational assumptions.

In the following discussions, let's take materialism/naturalism/physicalism as a given, meaning that all phenomena we observe are the result of physical interactions according to natural law.

Let's also assume that all physical interactions can in principle be simulated on a computer (even if such a simulation would be unfeasible to carry out in practice).
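As a toy illustration of what simulating a physical interaction involves, here is a minimal sketch (my own, purely illustrative) that integrates Newtonian gravity for two bodies with naive Euler steps. Real physics would of course demand far richer models:

```python
# A toy simulation of a physical interaction: two point masses under
# Newtonian gravity in one dimension, advanced by naive Euler steps.
# Purely illustrative; real physical law would require far richer models.

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def step(a, b, dt):
    """Advance two bodies (dicts of mass m, position x, velocity v) by dt."""
    r = b["x"] - a["x"]
    force = G * a["m"] * b["m"] / r**2            # attractive force magnitude
    direction = 1.0 if r > 0 else -1.0
    a["v"] += direction * (force / a["m"]) * dt   # a is pulled toward b
    b["v"] -= direction * (force / b["m"]) * dt   # b is pulled toward a
    a["x"] += a["v"] * dt
    b["x"] += b["v"] * dt

earth = {"m": 5.97e24, "x": 0.0, "v": 0.0}
rock = {"m": 1000.0, "x": 7.0e6, "v": 0.0}  # 7000 km away, initially at rest
for _ in range(60):
    step(earth, rock, dt=1.0)
print(rock["x"])  # slightly less than 7.0e6: the rock has begun to fall
```

The point is only that physical law, once specified, is the kind of thing a computer can step through; the assumption above is that this holds for physical law in general, not just for toy examples.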

Now, if this is so, it must be the case that we could in principle construct a computer which would perfectly simulate human consciousness. This computer program would be capable of passing the Turing Test, would report that it is conscious, and would demonstrate human qualities such as creativity, ingenuity, humour, etc.

In particular, I want to stress that this simulated intelligence would have beliefs about itself and about the world. It would also be able to perceive the world through senses, have the appearance of emotional states, etc.

None of these attributes or abilities assume that it is actually conscious. It is trivial to write a computer program that has beliefs, for example. In the simplest form, all that is needed for a belief is some internal representation of the state of the world that may or may not be accurate. A mobile phone believes it is 3.54 pm. A flight computer believes it is travelling at 654 kilometres per hour.
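To make this concrete, here is a minimal sketch of a belief in this thin sense: an internal representation that may or may not be accurate and that drives behaviour. The names (FlightComputer, believed_speed_kmh) are mine and purely illustrative:

```python
# A minimal sketch of a "belief" in the thin sense used above: an internal
# representation of the world that may or may not be accurate, and that
# causes the system to behave as if it were true. Names are illustrative.

class FlightComputer:
    def __init__(self):
        self.believed_speed_kmh = 0.0  # internal representation of the world

    def update_from_sensor(self, reading_kmh):
        # The belief tracks the world via a sensor, but imperfectly:
        # a faulty sensor produces a false belief.
        self.believed_speed_kmh = reading_kmh

    def overspeed_warning(self, limit_kmh=800.0):
        # Behaviour depends on the representation, not on the world directly.
        return self.believed_speed_kmh > limit_kmh

fc = FlightComputer()
fc.update_from_sensor(654.0)
print(fc.believed_speed_kmh)   # 654.0: it "believes" it is doing 654 km/h
print(fc.overspeed_warning())  # False
```

Nothing here presupposes consciousness; the representation simply does causal work.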

Let's assume that the way the computer achieves the appearance of consciousness is closely modelled on neuroscience, philosophy of mind, etc., so it's not simply reading from a script or performing any similar trickery. The way it works appears to be directly analogous to the way a human mind works.

If we allow ourselves to be extravagant, we could even suppose that the artificial intelligence is implemented by direct simulation of a specific human brain, whether by modelling individual neurons or even down to the level of molecules and particles if necessary.
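As a gesture at what modelling individual neurons might involve, here is a minimal sketch of a leaky integrate-and-fire neuron, a standard simplified neuron model. A real brain simulation would need vastly richer models and billions of connected units; the parameters here are illustrative only:

```python
# A minimal leaky integrate-and-fire neuron update -- one standard, highly
# simplified way of "modelling individual neurons". Illustrative only.

def lif_step(v, input_current, dt=0.001, tau=0.02, v_rest=-0.07,
             v_threshold=-0.05, v_reset=-0.07, resistance=1e8):
    """Advance the membrane potential v (volts) by one time step dt (seconds)."""
    dv = (-(v - v_rest) + resistance * input_current) / tau * dt
    v += dv
    if v >= v_threshold:        # threshold crossed: the neuron fires
        return v_reset, True    # reset the potential and emit a spike
    return v, False

v, t = -0.07, 0.0
for _ in range(100):
    v, spiked = lif_step(v, input_current=3e-10)
    t += 0.001
    if spiked:
        print(f"spike at t={t:.3f}s")
```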

What the following arguments will attempt to show is that we have reason to believe that such a computer program would be conscious.

12 comments:

  1. Beliefs within computers

    You say:

    “In particular, I want to stress that this simulated intelligence would have beliefs about itself and about the world. It would also be able to perceive the world through senses, have the appearance of emotional states, etc.

    None of these attributes or abilities assume that it is actually conscious. It is trivial to write a computer program that has beliefs, for example. In the simplest form, all that is needed for a belief is some internal representation of the state of the world that may or may not be accurate. A mobile phone believes it is 3.54 pm. A flight computer believes it is travelling at 654 kilometres per hour.”

    That’s only true if the following is true:

    I open my notebook and write in it “John Searle is wrong.” My notebook now believes that John Searle is wrong.

    You are using "believe" to mean that information to that effect is stored within the device. But "believe" and "store information" don't mean the same thing. Phones and flight computers don't believe anything. They are designed to give indications of some information to humans, in a form that humans give meaning to. A mobile phone doesn't know what the time is at all. To take the simplest possible example: a clock doesn't know what the time is either. A clock is just any object that does the same thing again and again in a regular and observable way. It can be made from dripping water or sand, gears and springs, a device that measures atomic decay, or many other technologies. It's multiply realisable. But in every case, there's just a steady, regular change: say, the second hand moves forward on a clock face. Then we, conscious beings with minds, stick on a label at every point the hand reaches after each second. The clock doesn't know what the label means. The "fact" that that small movement of that pointy bit of metal indicates a second to us is just another observer-relative fact that we have attributed to it. Nothing is intrinsically a clock, and anything can be, given the right causal qualities. After all, we can use the sun as a clock, and that doesn't mean that the sun has beliefs about what the time is!

    I think this is a good example of the point about computation being observer-relative, not intrinsic to physics or brute reality. Just as nothing is intrinsically a clock, nothing is intrinsically a computer. We attribute that role to it by attributing meaning to the physical changes it goes through, just as we attribute "the time" to the fall of grains of sand in an hourglass.

    Replies
    1. Hi Inti,

      I would say your notebook only "believes" the sequence of characters "John Searle is wrong". It does not believe that John Searle is wrong because it has no representation of what John Searle is or of what it would mean for him to be wrong. This means, for example, that it cannot respond to the question "Is John Searle right?"

      But if a computer system represents a proposition in a manner useful to it, e.g. representing the state of the board in a chess computer, then I have no problem saying that it believes that the state of the board is so. And I don't even mean it metaphorically. This is all that belief is to me -- a representation of a proposition which causes a system to behave as if it is true. A notebook doesn't have much behaviour to speak of so it cannot have beliefs to speak of.
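      To make that concrete, here is a minimal sketch. The rule and names are mine and deliberately crude; this is not a real chess engine:

      ```python
      # A belief as a representation that causes behaviour: the program holds
      # a proposition about the board and acts as if it were true. Deliberately
      # crude and illustrative -- not a real chess engine.

      board = {"white_king": "e1", "black_rook": "e8"}  # the believed board state

      def believes_king_in_check(board):
          # Crude rule: a rook on the same file as the king gives check
          # (ignoring blocking pieces, since this is only a sketch).
          return board["white_king"][0] == board["black_rook"][0]

      def choose_action(board):
          # Behaviour depends on what the system represents as true,
          # not on the world directly.
          if believes_king_in_check(board):
              return "move king out of check"
          return "develop a piece"

      print(choose_action(board))  # move king out of check
      ```

      If the representation were wrong (say the rook had already been captured), the program would simply hold a false belief and act on it, just as we sometimes do.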

      I also have no problem saying that a clock believes it is a certain time (even the system of the sun and the earth). The dramatic difference between a clock believing it is a certain time and a person believing it is a certain time is due only to the richness of the concepts understood by the person. A person will have all kinds of conceptual associations with time -- it being nine o'clock means that bed time is not so far away. All that a clock understands is a sequence of numbers, a semantic web so sparse as to be barely worth considering. To me, the difference is one of power and complexity, not of kind, though the difference is of such a great degree that I would never normally describe a clock as believing. But then, I would never normally describe an atom as a large object, even though it is just that when compared to an electron. Intentionality is like largeness to me. Simple objects have some degree of intentionality though it is so marginal as to make inappropriate the language of intentionality in ordinary conversation.

      However, even though you do not accept that computer systems believe, I encourage you to interpret what I am saying in a metaphorical sense, with an argument to follow that this metaphorical sense is identical to actual belief.

  2. The Brain Simulator

    “If we allow ourselves to be extravagant, we could even suppose that the artificial intelligence is implemented by direct simulation of a specific human brain, whether by modelling individual neurons or even down to the level of molecules and particles if necessary.”

    That’s pretty much the original brain simulator response to the Chinese Room. I’ve posted the relevant paragraphs below from Searle’s original paper on the Chinese Room.

    First, my own comment, based on Searle’s work. I’m assuming that “simulation” means simulation by computational means in the usual sense; if it doesn’t, then it’s irrelevant to Searle’s arguments, because we are no longer discussing computational theories of mind (see Searle below).

    If you simulate a forest fire, it won’t burn your computer. If you simulate the water cycle, your lab won’t be flooded. And if you simulate a brain working, you may get the right outputs, assuming you know how to simulate well enough. But you won’t get consciousness. Simulation is not the same as reproduction. A simulation is a working description of a phenomenon. A reproduction actually gives you the phenomenon itself, with all its causal powers (burning, wetting, being conscious). For more on why this is true, according to Searle, see my other post.
    This could be a useful way to test theories about how the brain works, provided we currently know enough about the brain to model it well. So it may be a good example of what Searle called “weak AI”: the use of computers to model the brain as a research method. Conceivably, it might even be a good way to create a good functional AI for practical use. As far as strong AI is concerned (the claim, to put it crudely, that minds really are programs), the brain simulator is no different in the important respects from old-fashioned computer-based AI. If syntax is insufficient for semantics, and syntax is not intrinsic to physics, that would invalidate a claim that the brain simulator reproduces a mind as opposed to simulating one.

    Replies
    1. I am familiar with this argument and find it profoundly unconvincing.

      A simulated forest fire will burn a simulated computer. A simulated water cycle will flood a simulated lab.

      A simulation consists, in essence, of a separate reality. What is simulated is real and indeed physical only from a perspective within the simulation. If a conscious mind is simulated, the perspective of that mind will be within the simulation, so it will be real to itself. Since the only perspective that can perceive a mind directly is that mind itself, a simulated mind is just as real as any other.

      Now, of course, this begs the question in that I assume that it is legitimate to place a perspective within a simulation. If Searle is right, then it is not. However, Searle's argument begs the question in exactly the same way -- his argument only makes sense if we assume that a simulation cannot host a perspective. As such it doesn't add anything to the argument at all and merely restates his intuition.

      I've answered this point in more detail and with other arguments in this blog post:

      http://disagreeableme.blogspot.co.uk/2013/08/consciousness-is-not-like-photosynthesis.html

  3. Now, I can see a possible comeback to this argument. Here it is from Searle’s Critique of Cognitive Reason, with his response:
    Many people in the “cognitive science” scientific paradigm will feel that much of my discussion is simply irrelevant, and they will argue against it as follows:

    “There is a difference between the brain and all of the other systems you have been describing, and this difference explains why a computational simulation in the case of the other systems is a mere simulation, whereas in the case of the brain a computational simulation is actually duplicating and not merely modeling the functional properties of the brain. The reason is that the brain, unlike these other systems, is an information-processing system. And this fact about the brain is, in your words, ‘intrinsic.’ It is just a fact about biology that the brain functions to process information, and as we can also process the same information computationally, computational models of brain processes have a different role altogether from computational models of, for example, the weather. So there is a well-defined research question: Are the computational procedures by which the brain processes information the same as the procedures by which computers process the same information?”


    What I just imagined an opponent saying embodies one of the worst mistakes in cognitive science. The mistake is to suppose that in the sense in which computers are used to process information, brains also process information. To see that that is a mistake, contrast what goes on in the computer with what goes on in the brain. In the case of the computer, an outside agent encodes some information in a form that can be processed by the circuitry of the computer. That is, he or she provides a syntactical realization of the information that the computer can implement in, for example, different voltage levels. The computer then goes through a series of electrical stages that the outside agent can interpret both syntactically and semantically even though, of course, the hardware has no intrinsic syntax or semantics: It is all in the eye of the beholder. And the physics does not matter, provided only that you can get it to implement the algorithm. Finally, an output is produced in the form of physical phenomena, for example, a printout, which an observer can interpret as symbols with a syntax and a semantics. But now contrast that with the brain. In the case of the brain, none of the relevant neurobiological processes are observer relative (though of course, like anything, they can be described from an observer-relative point of view), and the specificity of the neurophysiology matters desperately.

    Replies
    1. I really don't see how the genesis of an information system can have any bearing on whether it is conscious. Either a brain or a computer could spontaneously wink into existence as a result of a freakishly implausible quantum fluctuation (Boltzmann brain style). All that matters for whether consciousness is present is how it is constituted and how it processes information.

      Besides, as I have argued, it is not hard to conceive of evolving algorithms in just as hands-off a way as nature evolved brains. This is just a bad argument.

    2. Searle contrasts a computer, which takes in and spits out symbols that make sense only to a human mind, with a brain, which processes symbols that make sense only to itself.

      But we can imagine placing a computer in a robot body and having it navigate a real environment just as a human does. Now it is no longer engaging in mere symbol manipulation but in physical interaction, just as we are. The meanings of its inputs and outputs are no longer strictly observer-relative because they are coupled to objective physical properties and behaviours. There need be no engineer who understands those symbols -- the engineer could be absent, or the system as a whole could have evolved as argued previously.

      Some computationalists seem to think that this is important, that a computer can only be conscious if embodied. This is not my view, but I think it is a good illustration of why Searle's argument doesn't hold water.

  4. [Quote from Searle continued]
    To make this difference clear, let us go through an example. Suppose I see a car coming toward me. A standard computational model of vision will take in information about the visual array on my retina and eventually print out the sentence, “There is a car coming toward me.” But that is not what happens in the actual biology. In the biology a concrete and specific series of electrochemical reactions are set up by the assault of the photons on the photoreceptor cells of my retina, and this entire process eventually results in a concrete visual experience.

    The biological reality is not that of a bunch of words or symbols being produced by the visual system; rather, it is a matter of a concrete specific conscious visual event: this very visual experience. That concrete visual event is as specific and as concrete as a hurricane or the digestion of a meal. We can, with the computer, make an information-processing model of that event or of its production, as we can make an information-processing model of the weather, digestion, or any other phenomenon, but the phenomena themselves are not thereby information-processing systems.

    In short, the sense of information processing that is used in cognitive science is at much too high a level of abstraction to capture the concrete biological reality of intrinsic intentionality. The “information” in the brain is always specific to some modality or other. It is specific to thought, or vision, or hearing, or touch, for example. The level of information processing described in the cognitive science computational models of cognition, on the other hand, is simply a matter of getting a set of symbols as output in response to a set of symbols as input.

    We are blinded to this difference by the fact that the sentence, “I see a car coming toward me,” can be used to record both the visual intentionality and the output of the computational model of vision. But this should not obscure the fact that the visual experience is a concrete conscious event and is produced in the brain by specific electrochemical biological processes. To confuse these events and processes with formal symbol manipulation is to confuse the reality with the model. The upshot of this part of the discussion is that in the sense of “information” used in cognitive science, it is simply false to say that the brain is an information-processing device....

    Replies
    1. "A standard computational model of vision will take in information about the visual array on my retina and eventually print out the sentence,'There is a car coming toward me.'"

      This again shows that Searle's conception of a conscious algorithm is hopelessly naive. No computationalist conceives of such an algorithm simply generating sentences. Rather it will process information in a way analogous to how a brain does.

      "it is a matter of a concrete specific conscious visual event- this very visual experience."

      The idea of a concrete conscious event is oxymoronic to me. Conscious experiences are not concrete. There is no detector that can sense a conscious experience, but only the neural correlates of conscious experience. If conscious experiences exist at all, they exist abstractly. They are certainly nothing like hurricanes or the digestion of a meal.

  5. Here is what Searle wrote about the Brain Simulator in his original paper: Searle, John R. (1980). Minds, brains, and programs. Behavioral and Brain Sciences 3(3): 417-457. Accessed from http://cogprints.org/7150/1/10.1.1.83.5248.pdf on 18/08/2014.

    III. The brain simulator reply (Berkeley and M.I.T.). "Suppose we design a program that doesn't represent information that we have about the world, such as the information in Schank's scripts, but simulates the actual sequence of neuron firings at the synapses of the brain of a native Chinese speaker when he understands stories in Chinese and gives answers to them. The machine takes in Chinese stories and questions about them as input, it simulates the formal structure of actual Chinese brains in processing these stories, and it gives out Chinese answers as outputs. We can even imagine that the machine operates, not with a single serial program, but with a whole set of programs operating in parallel, in the manner that actual human brains presumably operate when they process natural language. Now surely in such a case we would have to say that the machine understood the stories; and if we refuse to say that, wouldn't we also have to deny that native Chinese speakers understood the stories? At the level of the synapses, what would or could be different about the program of the computer and the program of the Chinese brain?"

    Before countering this reply I want to digress to note that it is an odd reply for any partisan of artificial intelligence (or functionalism, etc.) to make: I thought the whole idea of strong AI is that we don't need to know how the brain works to know how the mind works. The basic hypothesis, or so I had supposed, was that there is a level of mental operations consisting of computational processes over formal elements that constitute the essence of the mental and can be realized in all sorts of different brain processes, in the same way that any computer program can be realized in different computer hardwares: on the assumptions of strong AI, the mind is to the brain as the program is to the hardware, and thus we can understand the mind without doing neurophysiology. If we had to know how the brain worked to do AI, we wouldn't bother with AI. However, even getting this close to the operation of the brain is still not sufficient to produce understanding. To see this, imagine that instead of a monolingual man in a room shuffling symbols we have the man operate an elaborate set of water pipes with valves connecting them. When the man receives the Chinese symbols, he looks up in the program, written in English, which valves he has to turn on and off. Each water connection corresponds to a synapse in the Chinese brain, and the whole system is rigged up so that after doing all the right firings, that is after turning on all the right faucets, the Chinese answers pop out at the output end of the series of pipes.

    Now where is the understanding in this system? It takes Chinese as input, it simulates the formal structure of the synapses of the Chinese brain, and it gives Chinese as output. But the man certainly doesn’t understand Chinese, and neither do the water pipes, and if we are tempted to adopt what I think is the absurd view that somehow the conjunction of man and water pipes understands, remember that in principle the man can internalize the formal structure of the water pipes and do all the "neuron firings" in his imagination. The problem with the brain simulator is that it is simulating the wrong things about the brain. As long as it simulates only the formal structure of the sequence of neuron firings at the synapses, it won't have simulated what matters about the brain, namely its causal properties, its ability to produce intentional states. And that the formal properties are not sufficient for the causal properties is shown by the water pipe example: we can have all the formal properties carved off from the relevant neurobiological causal properties.

    Replies
    1. In assuming that it is strange for computationalists to consider brain simulation, he shows again that his interpretation of computationalism is hopelessly naive.

      Firstly, by simulating a brain, we could potentially reproduce consciousness without understanding how it works. It is conceivable that it may be easier to simulate a brain at a fine level of detail than it is for any human mind to truly make sense of the incredible complexity of the information processing going on inside it. Simulation can be an aid to learning and can reproduce complex emergent behaviours that at first defy explanation. Just because we can simulate it doesn't mean we understand it completely.

      Secondly, it is indeed the position of computationalists that we can in principle understand the mind without understanding neurophysiology -- but we do not yet understand the mind, and ultimately it may be easier to understand neurophysiology. That doesn't mean AI is pointless. If we can reproduce virtual neurophysiology in computers, we can tweak the results in ways not feasible in the real world so as to attempt to produce superhuman intelligence. It is also not known how quickly such simulations could be run -- given enough hardware perhaps a year of hard mental effort could be compressed to an hour.

      Thirdly, it is possible that we might reproduce intelligence without directly looking at the brain but by building on pure AI research. I don't know if this research program will succeed, but it is possible that we could reproduce human-level intelligence and consciousness in a manner quite unlike biological brains by the refinement and development of these techniques. I don't know. What I do know is that a simulation of a brain would be conscious, which is why the brain simulator is the most useful thought experiment.

      But for me, the point of computationalism is distinct from AI. I have no great faith that AI research will ever reproduce anything like human intelligence (although it might). My interest is chiefly metaphysical. I have strong views on what consciousness fundamentally is, and these do not depend on what technologies may arise in future.

      On his "water pipes" example, I am unmoved. My view is the absurd one that the conjunction of the man and the pipes understands. If the man internalises the system in his imagination, then it is still the system that understands, and the system is not the same as the man's conscious mind but a separate mind sitting on top of his the way a virtual machine can exist on the substrate of a physical machine in computer science.

      This argument of Searle's, among others, is refuted here:

      http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-chinese-room.html#more
