Tuesday 27 November 2012

In Defence of Strong AI: Semantics from Syntax

Is it possible to get semantics from syntax?

If it were impossible to get semantics (meaning) from syntax (symbols and formal rules for manipulating them) then we probably wouldn't have bothered broadcasting the Arecibo image to the universe in the hope of communicating with aliens.

Without any way to explain the meaning of this image, it seems to consist of syntax alone. Whether it could be deciphered by aliens in practice or not, it at least seems possible in principle, which seems to suggest that it may be possible in some cases to derive semantics from syntax.

The assertion that you can't, that there is no way to get from syntax to semantics, is perhaps the strongest response from John Searle to the systems/virtual mind refutation of his Chinese Room thought experiment (as outlined in my post on the topic).

Searle illustrates his point by saying that if he is in The Chinese Room and has a formal system of rules (syntax) for manipulating Chinese words, he has no way to get from this syntax to the semantics of what is being said. If he has no way to do this, then how can the system do it?

In Searle's view, the systems/virtual mind reply to the Chinese Room is just hand-waving until this is explained.

Let's see if I can attempt to do just that.

You don't learn to catch a ball by consciously applying the equations of physics, applied mathematics and geometry to your visual perceptions in order to know where to position your hand. These calculations are performed effortlessly by your brain with no conscious awareness.
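To give a flavour of what those calculations might look like, here is a toy sketch of my own (ignoring air resistance and assuming idealised projectile motion) of the kind of sum your brain is implicitly doing:

    import math

    def landing_distance(speed, angle_deg, height, g=9.81):
        # Horizontal distance travelled by a ball thrown at 'speed' m/s,
        # 'angle_deg' degrees above horizontal, released 'height' metres up.
        angle = math.radians(angle_deg)
        vx = speed * math.cos(angle)
        vy = speed * math.sin(angle)
        # Time of flight, from solving height + vy*t - 0.5*g*t^2 = 0 for t > 0.
        t = (vy + math.sqrt(vy * vy + 2 * g * height)) / g
        return vx * t

    print(landing_distance(speed=15.0, angle_deg=40.0, height=1.5))

Nobody consciously works through anything like this when catching a ball, and yet something functionally equivalent is being done for you.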

Similarly, Searle should not expect to have an intuitive understanding of Chinese by following the syntax of The Chinese Room. The difficulty of consciously performing the calculations going on in a brain belies the effortlessness with which those calculations are performed by the brain itself. Just as catching a ball is effortless, the construction of semantics by a virtual mind from the formal rules of The Chinese Room may be trivial. In fact, it may be inevitable.

My suspicion is that minds are syntax (in the sense of formal systems of rules), and that the meaning perceived by those minds does in fact come from that syntax alone. In this post, I will attempt to paint a picture of how this is a coherent view, and so to show that Searle's assertion is dubious.

Definitions

Suppose we have a statement, such as
1+1=2
The meaning of this statement is easily understood by anybody with a rudimentary education. The full meaning of the statement depends on understanding concepts such as natural numbers, equality and addition.

The semantics of a statement are the concepts that are being expressed (the meaning).

The syntax refers to the way the meaning is expressed, usually using a set of predefined symbols according to a system of agreed grammatical rules.

The same semantics or meaning could be expressed in lots of different ways according to different syntactic systems or usages of those systems.
One plus one equals two
You get two when you add one to one
Un plus un égale deux 
In every case, the semantics are clear and the same.

However, I think Searle's meaning of syntax is a little broader than the simple grammatical rules of representation according to some language. He would, I believe, also include the formal rules that would allow you to operate on these symbols, e.g. to assess whether the statements were true or not.

Assume you do not have to understand the meaning of the rules in order to put them into operation -- all you have to do is mindlessly follow some operation such as looking up addition tables.

So while 1+1=3 may be syntactically valid according to the grammatical rules of arithmetic expressions, it would not be valid under this broader definition that would include the formal rules of arithmetic.
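As a toy sketch of my own of this kind of mindless rule-following (assuming nothing more than a lookup table of symbol triples), consider:

    # Addition facts stored purely as a table of symbol triples. The checker
    # below never "understands" addition; it only matches symbols against the table.
    ADDITION_TABLE = {("1", "1"): "2", ("1", "2"): "3", ("2", "1"): "3", ("2", "2"): "4"}

    def valid(statement):
        # Accept statements of the form "a+b=c" only if the table lists them.
        left, result = statement.split("=")
        a, b = left.split("+")
        return ADDITION_TABLE.get((a, b)) == result

    print(valid("1+1=2"))  # True
    print(valid("1+1=3"))  # False -- grammatical, but breaks the formal rules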

I will continue to use this sense of syntax to refer to formal systems of potentially complex rules throughout this post.

The syntax refers to the system of rules and symbols that determine how to produce representations of concepts and how to perform operations on those representations.

Searle's view (the intuitive position)

Searle's point of view is that syntax is meaningless until you attach meaning to it.

Consider the statement
J^J_3<
To all intents and purposes, that appears to be a meaningless piece of junk text. It has no clear semantics.

I could give you a system of grammatical rules within which this statement is legal. I could tell you there is a set of symbols that you can put together in certain patterns, and when you follow the rules you have created syntactically valid statements.

Though you now have the ability to generate these statements, and even assess whether these statements are valid or not, you have no way of determining if these statements actually mean anything. Until I give you some sort of semantic mapping, you have no way to infer what concepts are being represented.
J means 1
^ means +
_ means =
3 means 2
< means end of statement
Therefore J^J_3< means 1+1=2, but you had no way of knowing until I told you. There is, after all, no way to get semantics from syntax. Right?

Well, maybe there is...

My view (the unintuitive position)

There is no difference between syntax and semantics.

Perhaps I'm just saying that to get your attention. What I actually believe could be expressed in a number of different ways, but these statements sum it up.
  1. Syntax always implies at least "limited" semantics.
  2. Syntax may imply "real" semantics.
  3. "Real" semantics are just complex syntactical constructions in human minds.
Syntax always implies at least limited semantics.
No matter how simple or abstract your syntactical system, from a certain point of view, that system must always have semantics in some limited sense.

For example, let's say we have a set of symbols A {1,2,3} and a set of symbols B {+,-}, and that our syntax allows us to create valid statements of the form ABA, such as 1+2, 3-3, etc. 5+1 is not valid, and 1+1=2 is not valid.

In fact, to make sure we forget the meaning of these symbols (we're trying to define a formal abstract syntax which is as devoid of meaning as possible), let's replace these symbols with less familiar ones.

Let's say A is {!,",£} and B is {%,^} so our valid statements look like !%".
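To make this concrete, here is a minimal sketch of my own of that grammar as a mechanical checker -- note that nothing in it has any idea what the symbols are for:

    A = {"!", '"', "£"}  # the first set of symbols
    B = {"%", "^"}       # the second set of symbols

    def valid(statement):
        # A statement is valid exactly when it has the form ABA.
        return (len(statement) == 3
                and statement[0] in A
                and statement[1] in B
                and statement[2] in A)

    print(valid('!%"'))  # True
    print(valid('!%%'))  # False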

So now we have a formal syntactical system which has no semantics, right?

Not quite. Because in the statement !%", the % symbol, for example, could be said to have the semantics of "a separator of elements of A".

So, what I'm saying is that syntax always implies at least limited semantics, because in order to have a syntactical system each symbol or group of symbols must have some limited "meaning" (or "role" if you prefer) when interpreted in the context of that syntax. The rules of a language may not directly imply the semantics of concepts such as addition or subtraction, but they do at least imply the semantics of how to use symbols within that language.

This may not be "semantics" as others would use the term, but it is semantics of a sort.
Syntax may imply "real" semantics
But this was a very simple and pretty useless example system. If we had a much more complex set of rules, which distinguished elements of A and B from one another and introduced the equality operator, we might have a more useful system capable of expressing concepts which we use and understand every day.

If our more advanced syntax allows us to assess all true arithmetical statements such as "1+1=2" as valid but recognise as invalid all false statements such as "1+1=3", then my contention is that this system does actually imply the semantics of addition, equality etc.

Now, if these symbols were replaced by more alien ones, and especially if the grammatical rules were changed so as to be utterly different, then the task of recognising these familiar concepts embedded within the syntax would become much more difficult. It might take a very clever person to spot that the inherent semantics of this syntax are the same as our concepts of addition, subtraction etc.

And yet, for simpler examples at least, it might still be possible to do so (again, see the Arecibo message). If it is possible in principle to derive meaning from syntactical systems in some cases, and especially if a concept is completely and perfectly captured in a syntax, then allow me to propose that in these cases the full semantic content of the statements is in fact directly implied by the syntax -- the "true semantics" are identical to the "syntactical semantics". When a mathematician discovers that an alien syntax represents addition, all she is doing is recognising this identity. She is not conferring meaning that was not already there.
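To illustrate what that act of recognition might amount to, here is a toy sketch of my own (assuming a conveniently tiny alien system, far simpler than anything realistic): given only an alien operation table, it hunts for a mapping of symbols onto numbers under which the table turns out to be addition modulo 3.

    from itertools import permutations

    # An 'alien' operation table over three unfamiliar symbols. (It happens to
    # encode addition modulo 3, but the program is never told that.)
    ALIEN_TABLE = {("@", "@"): "@", ("@", "#"): "#", ("@", "&"): "&",
                   ("#", "@"): "#", ("#", "#"): "&", ("#", "&"): "@",
                   ("&", "@"): "&", ("&", "#"): "@", ("&", "&"): "#"}
    SYMBOLS = ["@", "#", "&"]

    def find_interpretation():
        # Try every mapping of alien symbols onto 0, 1, 2 and check whether the
        # alien table then coincides, entry by entry, with addition mod 3.
        for numbers in permutations(range(3)):
            mapping = dict(zip(SYMBOLS, numbers))
            if all((mapping[x] + mapping[y]) % 3 == mapping[z]
                   for (x, y), z in ALIEN_TABLE.items()):
                return mapping
        return None

    print(find_interpretation())  # {'@': 0, '#': 1, '&': 2}

The program is doing nothing but matching structure against structure, and on my view that is all the mathematician is doing too, only on a vastly larger scale.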

For very complex systems (e.g. The Chinese Room), it may be impossible in practice to map the concepts expressed to familiar ones, even if such mappings exist. As such, no mathematician investigating the systems would be capable of recognising the result of syntactical operations as statements which have meaning beyond that syntax (e.g. Searle doesn't understand Chinese). Even so, these systems would have their own internal semantics of syntax as outlined above.

So, in some cases, a mathematician can recognise the semantics of a syntax and in other cases she cannot. What does this ability to recognise have to say about whether the semantics of a syntax are actually meaningful or not?

Nothing!

As I have pointed out, all syntax is inherently meaningful within the context of the rules of that syntax. If the semantics of a syntax happens to be identical to a concept familiar to humans, this is true whether or not humans can appreciate this fact.
"Real" semantics are just complex syntactical constructions in human minds
If syntax can imply semantics unambiguously, then those semantics are inherent in the syntax. This is relatively easy to understand for simple concepts such as arithmetic or the rules of propositional logic. It is harder to see how this could be the case for statements such as "The weather outside is frightful".

In propositional logic, the statement "P v Q" means "P is true or Q is true". It is my view that this meaning is inherently implied by the syntax of propositional logic and the truth tables of the "v" (or) operator.

And yet when we explain propositional logic to students, we usually give examples which confer additional semantics to these examples.
P means "It will be warm tomorrow". Q means "I will wear a jumper tomorrow". P v Q means "Tomorrow will be warm or I will wear a jumper."
Clearly these additional semantics are not implied by the formal syntax of propositional logic, and yet we can use propositional logic to make deductions of a very complex nature with real world implications. We take the output from the formal systems and infer meaning from the results, meaning that does not come from the formal systems themselves. The semantics do not come from the syntax alone in these cases.
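A small sketch of my own of this division of labour: the formal system handles the truth table, and the real-world reading is laid on top by us.

    # The formal side: 'v' (or) is fully specified by its truth table.
    OR_TABLE = {(True, True): True, (True, False): True,
                (False, True): True, (False, False): False}

    def evaluate_or(p, q):
        # The program manipulates truth values; it has no idea what P or Q are about.
        return OR_TABLE[(p, q)]

    # The extra semantics live outside the formal system, in our interpretation:
    #   P: "It will be warm tomorrow"    Q: "I will wear a jumper tomorrow"
    print(evaluate_or(True, False))  # True -- which *we* read as "Tomorrow will be
                                     # warm or I will wear a jumper" holding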

And this is how we use computers. When we program a computer to simulate something or to perform a computation, we add the meaning when we interpret the results. We associate the symbolic output from the computer with concepts we understand, but the computer has no comprehension of what those results mean to us.

It is perhaps this laying on of meaning through interpretation that leads to the intuition that you can't get semantics from syntax alone and the belief that computers cannot truly understand. This belief is mistaken in my view.

I would argue that while the computer may not understand that P means that tomorrow will be warm, this does not mean that the symbols on which the computer operates are inherently meaningless. It does at least operationally understand that P is a symbol which may be in state "true" or state "false". This is a much more anaemic interpretation than the one we would infer, but it is all the computer has available to it. We have not programmed our full understanding of the concepts of "warm", "tomorrow", "wear", "I", "jumper" etc. into the system because we do not need to do so for the system to be useful to us.

In practice, it is pretty much impossible for us to program much of our knowledge and understanding into a computer, because this massive, tangled web of inter-associations is infeasible to capture in electronic form with current technology.

Take language as an example. Each person has an intuitive understanding of the rules of language, and to a superficial extent language seems to follow a grammar and looks like it should be amenable to automated analysis. Even so, the codification of these rules as a formal syntax has so far largely eluded computational linguistics researchers. Despite tremendous advances in the field, there is still no convincing computer system that can reliably interpret arbitrary verbal commands as input, Star Trek notwithstanding.

This is perhaps because of the interconnectedness of everything. In order to perfectly understand language, a computer would need to perform a number of complex operations which come naturally to humans. For example, it would be helpful if it could infer unspecified intentions of the speaker. In order to do this, the computer might need to be capable of some degree of empathy. And in order to do that, it would need to understand human emotions such as feeling cold, happy, warm, bored, angry. And in order to do that, it might also need to understand things about the world that can provoke these states, so it would need to understand rain, entertainment, air conditioning, etc.

And that's just a sample. In order for a computer to fully understand anything, it needs to understand nearly everything. And that's hard to do. So hard to do that it hasn't been done. Yet.

So, to sum up, what we take to be "real semantics" are just the "syntactical semantics" inherent in the unbelievably complex web of symbols represented by the "syntax" of our minds. If we succeeded in creating a computer that was able to operate on these symbols in much the same way that we can, I see no reason to doubt that that system would truly understand just as we do.

57 comments:

  1. Is it possible to get from semantics to syntax, you ask?
    Surely this question is the wrong way round; to refute the Chinese Room you need to get from syntax to semantics, not the reverse.

    I assume from your next paragraph that this was just a slip of the keyboard.


    Your image argument falls down at once; non-abstract visual depictions (pictures!) are not purely syntactic; they have derived intentionality -derived from the intentional states of the mind that created them. That's exactly what distinguishes them from abstract art! They are called, in artistic terms, representational art, and representation implies semantic content. The hope is that the receiving minds will be able to reverse the process and retrieve that semantic content, for instance by recognising that the transmitting species would be likely to put their own image at the centre, and by linking the shape of the Arecibo dome to their knowledge of physics and electrical engineering. So it looks as if you have produced a small argument against your case.
    As for the rest, I think that you are using the terms syntax and semantics in ways that don't resemble those usually used in philosophy of language or linguistics. Of course you can prove almost anything by redefining terms, but that isn't convincing. You are simply confusing the formal property of syntactical operators with the semantic, contentful properties of meaningful contents, in whatever language. Of course the syntactic operators have to have procedural or formal meaning – that's definitional – it's part of what it means to be syntactical. What you are doing is analogous to refuting “all bachelors are unmarried men” by redefining the word bachelor – it works, in its own little world, but is a useless, invalid move outside that world.

    Next, you seem to assume that when I catch a ball, my mind solves equations unconsciously, bringing together inertia, gravity and air resistance to calculate my response. You may be right, but I don't think most neuroscientists agree with you. That's another computationalist prejudice. Rather than solving equations, my nervous system uses experience and extrapolation through repeated trial and improvement to hone a skill in extrapolating paths; no equations involved. As I say, I could be wrong, it's an empirical question. But as far as I know, the balance of evidence and theory supports my interpretation.

    The meaning of semantics is not just that it means something, but that it can be used to make statements about the world, beyond the formal system used to express that meaning. That, too, is definitional.

    Your main argument seems like a really desperate move to sustain the computationalist faith that you assert at the beginning in the face of huge, perhaps insuperable difficulties.

    You use the term rules in some places; again, a dodgy word in this context, and one which, like your mental unconscious equations in the intro, smuggles in question-begging computationalist assumptions (that the mind works via “rules”), but rules are observer dependent things. As Raymond Tallis shows in the last section of “Why the Mind Is Not a Computer”, rules are either a circular, question-begging assumption (because the same objection applies as to algorithm, or procedure etc.: they are observer relative, mind-created things in the first place), or they are misconceived false synonyms for “laws of nature”. As he mentions, Wittgenstein went further and claimed that rules can only exist in societies, because rule-following is an essentially social practice. Rocks don't obey a rule to fall downwards; that's an outdated Aristotelian physics. And there's simply no reason to believe that minds/brains obey rules, or even that rules exist (except in the everyday, commonplace and socially determined sense of school rules, rules of the game etc.). What's more, rules are the sort of thing that we can disobey, or make a mistake about. Not the sort of thing that could underpin minds.

    1. Hi HPW,

      Thanks for your feedback.

      "Is it possible to get from semantics to syntax, you ask?
      Surely this question is the wrong way round; to refute the Chinese Room you need to get from syntax to semantics, not the reverse."

      I can't find where I made this mistake. The title of the post is semantics from syntax, which means the same thing as "from syntax to semantics". The word order is less important than which prepositions are used where.

      "Your image argument falls down at once; non-abstract visual depictions (pictures!) are not purely syntactic; they have derived intentionality -derived from the intentional states of the mind that created them."

      But you miss the point. The receiving mind only gets the raw data (syntax) and has to infer the meaning (semantics). How is this achieved? How is it possible to get semantics from structures such as this? How does it even figure out that the data stream is to be interpreted as an image rather than an audio file or a 3D mesh? The Arecibo image is not after all a physical painting.

      You do give a plausible sketch of how it might work, but this sketch is in my view analogous to how I think the Chinese Room works so it actually suits my purposes.

      The formal symbols in the Chinese Room, like The Arecibo message, really do have meaning. Furthermore, like the Arecibo message, the mapping between the formal structure and the meaning is not entirely arbitrary, because the causal and formal relationships between the symbols in the Chinese Room are isomorphic to those of the objects being mapped.

      In any case, the Arecibo message is only an introduction to the problem so it's not particularly important. Besides, I think this image is bordering on the abstract, at least in places.

      "What you are doing is analogous to refuting “all bachelors are married men” by redefining the word bachelor "

      I understand where you're coming from but I do not agree that this is what I am doing. To you, I appear to be equivocating when I use the term semantics to refer to the formal role of a symbol in how it tends to relate to other symbols. My view is that this is not an equivocation, but that what you call semantics is in fact an expression of very complex syntax. You see a fundamental distinction where I see continuity. I think perhaps a new article explaining my view of how syntax and semantics work in human minds would help to clarify this point.

      It's hard to express this idea clearly, but perhaps another example might help. Imagine you're in the Chinese Room and you see the symbols 你好 on a regular basis and note that they are most commonly responded to with 你好. Even if you know nothing of human communication, you at least know that seeing 你好 at least means that the next output will also be 你好. You may also infer that this is some sort of protocol for beginning a conversation because these communications usually occur at the beginning of a series of exchanges.

      (你好 just means Hello).



    2. So even in a field as complex as human communication, it is possible to infer some limited meaning from just the roles of the symbols alone. This article argues that this undermines Searle's thesis that you can't get semantics from syntax.

      Of course, if you were to perform this inference, you would do so as an observer of the system and draw a little on your own world knowledge to do so. I agree that it is quite another thing to argue that these symbols actually have meaning to the system itself. However, I do see a similarity in the two problems which perhaps you do not. Mainly, I'm just trying to introduce a little doubt into Searle's contention that syntax and semantics are entirely distinct.

      "I could be wrong, it's an empirical question. But as far as I know, the balance of evidence and theory supports my interpretation."

      I think you just have a narrower interpretation of what solving equations entails. Construed broadly, a ball travelling in a parabolic trajectory is itself solving an equation. I'm not suggesting that a brain has a mental pen and paper and that it solves an equation step by step by performing algebraic operations as an equation would be solved manually. Rather it has circuits, more like a calculator, that implement a lot of complicated trigonometry-related functions natively and "automatically". As far as I know, this is compatible with current knowledge -- some savants seem to have tapped into the ability to do this kind of thing consciously. But it's not that important a point. This is just illustrating a difference between how Searle seems to conceive of computationalism and how computationalists themselves conceive of it.

      "The meaning of semantics is not just that it means something, but that it can be used to make statements about the world, beyond the formal system used to express that meaning. That, too, is definitional."

      OK, but I'm not sure how that defeats my argument. In order for a statement to be useful for communicating a meaning, it has to have that meaning in some sense and this is the question I'm exploring.

      In any case, the question of semantics from syntax in the context of the Chinese Room and the like is not really about communication between two conscious observers. It is about whether symbols within an information processing system can have meaning to that system itself. This is the idea I am addressing in this article.

      "but rules are observer dependent things"

      Not necessarily. I would characterise physical laws as rules. To me, rules are just regularities. They may or may not be observer dependent. I would indeed say that the law of gravity is a rule. If you disagree then you're just adopting a more specific interpretation of rule. Which is fine, and I'm happy to draw that distinction also if you deem it helpful, but then our disagreement is not particularly profound.

      However, I would suggest that the rules of an algorithm are not optional or social. The computer doesn't decide to obey, much less disobey. When the program is running they're more like the laws of nature. So, you tell me, is the word "rule" inappropriate? What word would you prefer?

      These rules are usually laid out by a programmer, so I guess this programmer may be the observer you require, but the rules of software can continue executing long after the programmer has ceased to be. Furthermore, there doesn't have to be a programmer in the first place. Rules can evolve. This has been done with experiments in genetic programming and, I would argue, in natural evolution. It is possible that the mind is an algorithm and this does not presuppose the mind of a programmer.

      Also, I would admit that my position is predicated on Platonism. On Platonism, algorithms, as mathematical objects, exist mind-independently. If you don't accept Platonism then that's another point where we part ways.

    3. a) Intrinsic intentionality: “I am now thirsty, really thirsty, because I haven’t had anything to drink all day”. This is an example of a sentence that ascribes a real intentional mental state to oneself.
      b) Ascribed intentionality: In French, “J’ai grand soif” means “I am very thirsty.” In this case, Searle writes, “I ascribe intentionality to the French sentence, that is, the French sentence means literally what I say it does. But the intentionality is not intrinsic to that particular sentence just as a syntactical object. That very sequence might have meant something very different or nothing at all. Speakers of French can use it to express their intentionality, but it is not intrinsic intentionality. It is derived from the intrinsic intentionality of the users of the language.”

      So, I think I can put that together with my description of the way that the aliens might come to understand the Arecibo picture without common syntax or language. The picture is, as you point out, just some data, which can be assembled to look like objects. If none of these objects are familiar enough for the aliens to recognise, the picture will remain meaningless to them, and the communication will fail. However, perhaps they too have built radio dishes, so they may be able to imagine what they are seeing in the picture. If so, the intentionality we ascribed to that pattern will be understood, and so the aliens will form the real intrinsic intentional state of thinking “Ahh, that’s a radio dish, that’s how they sent this message”, which is what we hoped to achieve.

      The important point is, that’s how all communication works; spoken and written language and pictures. In the first two cases, we can communicate much more complex messages, using the combination of symbols and syntax to organise those symbols. Using these means of communication, we can start with the real intrinsic intentionality in the sender/speaker’s mind, encode it in the ascribed intentionality of the particular language, and the receiver/hearer can interpret it to reproduce real intrinsic intentional states, that is, conscious awareness of the intentional states (which means very roughly thoughts or knowledge) in the sender's mind. BUT this ONLY works when there is a prior shared understanding of both the syntax and the meaning of symbols in the language, or in the case of pictures, enough resemblance to play that role in the absence of language.

      In the light of this, I can’t see how you can claim that my sketch of the way the Arecibo message could successfully convey meaning is “analogous to how I think the Chinese Room works so it actually suits my purposes.” This explanation is not only completely different from any computationalist/functionalist argument, but it touches on why those arguments fail; both in my explanation of successfully communicating via pictures and in the use of language, the meaning, the semantics of the message are not intrinsic to the message; they depend either on shared cultural agreement between conscious minds about what the elements of the message mean, or enough shared experience between the minds to allow informed guessing in the pictorial case. This is exactly in line with the Chinese Room, and completely antithetical to the computationalist objections to the CR.

      That took a bit longer than I expected – I’ll try to be briefer in the rest of my reply.

  2. “To you, I appear to be equivocating when I use the term semantics to refer to the formal role of a symbol in how it tends to relate to other symbols. “
    No, you are not equivocating. An equivocation means sliding unacknowledged between different meanings of the same word. But syntax and semantics have never had the meaning you are trying to ascribe to them. Of course, anyone can change the meaning of a word whenever they like.

    “'When I use a word,' Humpty Dumpty said, in rather a scornful tone, 'it means just what I choose it to mean — neither more nor less.'
    'The question is,' said Alice, 'whether you can make words mean so many different things.'
    'The question is,' said Humpty Dumpty, 'which is to be master — that's all.'”


    If you change the meaning like that, you end up speaking a private language (Wittgenstein). Trouble is, nobody else is speaking that language, so communication fails.

    “It's hard to express this idea clearly, but perhaps another example might help. Imagine you're in the Chinese Room and you see the symbols 你好 on a regular basis and note that they are most commonly responded to with 你好. Even if you know nothing of human communication, you at least know that seeing 你好 at least means that the next output will also be 你好. You may also infer that this is some sort of protocol for beginning a conversation because these communications usually occur at the beginning of a series of exchanges.”

    Yes, in this case, Searle in the room is noticing regularities in the sequence of inputs and outputs. Of course, these regularities are already specified by the syntax, the books of rules. But all he sees are combinations of symbols that remain absolutely meaningless to him. So this doesn’t get you any further forward; still only syntax, either written down or guessed by Searle. Still no semantics.

    “So even in a field as complex as human communication, it is possible to infer some limited meaning from just the roles of the symbols alone. This article argues that this undermines Searle's thesis that you can't get semantics from syntax.”
    No, sorry, this doesn’t show that we can get any meaning from the syntax or “the role of the symbols”; it only asserts it again. Searle did not have any more meaning after noticing that this symbol follows that; he already had that info, in the rule books, and they are purely syntactic in themselves.

    On your point about solving equations: I’m not sure whether you are actually agreeing with me here. If the process that leads me to catch the ball “functions natively and "automatically"”, the question is, does it work by following symbolic algorithms, which is what computers and calculators do, or does it do it by means of the balance of electrical/chemical weightings underpinned by the structure of neurons and neurotransmitters which (as far as we can tell at the moment) make up what Searle calls the intrinsic causal powers of the brain. I think the latter is at least the more parsimonious explanation; no need for a structure of symbol-based computation encoding equations over and above that.
    In any case, you seem to beg the question in your original post, by first assuming that the mind “solves equations” in a computationalist sense, then using that as a premise for your argument to prove precisely that point

  3. In response to my comment that "The meaning of semantics is not just that it means something, but that it can be used to make statements about the world, beyond the formal system used to express that meaning. That, too, is definitional."

    You reply

    “OK, but I'm not sure how that defeats my argument. In order for a statement to be useful for communicating a meaning, it has to have that meaning in some sense and this is the question I'm exploring.”
    It defeats your argument by pointing out the qualitative difference between syntactic rules and the semantic contents that they help to convey. Syntax is about how utterances are structured, how the elements of an utterance hang together, are sequenced. In that sense, syntax points inwards to the utterance, pointing out how it’s constructed and helping the receiving mind to interpret it. Semantics, however, points outward, via the socially-determined meaning of the symbols used (words and characters etc.), towards the world outside, and conveys meaning about the wider world. Syntax doesn’t do that; it simply organises and marshals the things that do. And that difference is definitional (like “all bachelors are unmarried men”), it’s derived from the very meaning of the words and concepts “syntax” and “semantics”. You can’t argue about that, you can only redefine the terms individually and arbitrarily. That is like saying “When I say bachelor, I mean men who weren’t married in church!” You haven’t found out something new about the concept, you’ve changed its meaning by fiat.

    On the question of rules it would be a bit repetitive to go too deeply into this, I think. I mentioned that there are differences between rules and laws of nature, for instance you can break or ignore a rule, or just not apply it because you are unaware of it. It seems to me that you are using “rule” to mean something like “program”: a defined set of instructions that defines or describes how you have to carry out a task. If so, you are just repeating the claim that Searle set out to refute.

    I think that genetic algorithms are a red herring here. They are still the product of a human programmer's work, and in fact humans play the role of natural selection; they are a real-life legitimate example of intelligent design, by human, not divine, designers.

    I said rules are observer dependent (i.e. the product of social processes between people in societies, not of brute physics) because I’m taking rule to be roughly the same as algorithm or program, for our purposes. There are two types of things in the world; observer independent ones that are intrinsic to physics, like rocks, planets, stars, atoms; and observer relative things like political positions, money and languages. The first type would still be here if no conscious being existed or had ever existed. The second are real but only exist because humans agree about them, collectively, and ascribe their meaning to them. And, as Searle shows, computing falls into the latter group. If you don’t accept this, I’m happy to argue it further.

    Now we’ve moved beyond the Chinese Room, which shows that syntax is not enough for semantics. Searle went on to argue that syntax is not intrinsic to physics, along the lines I’ve outlined, amongst other telling points (I think) against computationalism.

    I find that a lot of people have only read the CR, and nothing else that Searle’s written. In fact, quite a few people who comment adversely online have clearly only read summaries of Searle’s CR argument, and hostile summaries at that. Still, I suppose that happens to many philosophers. Anyway, I really suggest that anyone interested take a look at his paper “Critique of Cognitive Reason” for a good summary of some of his arguments about AI and the computational theory of mind. It’s only 15 pages long and available at http://www.federaljack.com/ebooks/Consciousness%20Books%20Collection/John%20R.%20Searle%20-%20The%20Critique%20of%20Cognitive%20Reason.pdf

  4. Hi Inti,

    The Arecibo message contains not only pictures of objects that may look familiar to aliens. It also contains representations of information or concepts (e.g. numbers, elements, chemical formulae). These representations are not visual so much as structural, so part of the job of interpreting unfamiliar representations is in identifying structural similarities between representations of concepts and concepts themselves (also illustrated by my example of interpreting "hello" in The Chinese Room). This ties in with my thesis that semantics is partly achieved by such structural or syntactic isomorphisms. I think that what gives mental representations their intentionality is structural similarity between the mental representations and the concepts so represented (e.g. my concept of a car is associated with my concept of wheels; a real car is associated with four real wheels) as well as causal links (e.g. I see a real car and my mental concept of a car is activated, or my mental concept of a car is activated in a certain way and I decide to go for a drive).

    I agree with your sketch of intrinsic and ascribed intentionality. In fact it agrees so well with my thoughts on this area that it is a great pleasure to read someone else capture my thoughts so well.

    "the meaning, the semantics of the message are not intrinsic to the message; they depend either on share cultural agreement between conscious minds about what the elements of the message means, or enough shared experience between the minds to allow informed guessing in the pictorial case"

    Absolutely! I have been in arguments with philosophers before making exactly this point. The thing is, I think The Chinese Room is one of these conscious minds.

    The problem is that Searle is assuming that failure to ascribe intentionality means there is no intrinsic intentionality. At best, it's an appeal to an intuition I simply do not share. I think that intrinsic intentionality is just how a mental representation feels to the host mind. In The Chinese Room, the mind is not Searle's but that of the system, and to that mind I believe the shuffling of symbols will be intrinsically meaningful due to structural similarities with and causal relationships to objects in the real world and concepts shared with other minds -- the same kind of features that give biological mental representations their intentionality.

    I disagree that I am speaking a private language. I am using syntax and semantics in the same sense used by Searle when he insists that you can't get semantics from syntax. In Searle's usage, syntax is not "how utterances are structured". He is using the term in a broader sense to describe all formal systems such as the representations in The Chinese Room. These are not utterances but states of an algorithm with causal relationships to other states. Nor is semantics "the socially-determined meaning of the symbols used". What he means by semantics are real intentional states -- real understanding, beliefs, desires etc and not merely representations of same.

    Of course your definitions of syntax and semantics are perfectly sound and related to Searle's usage, but your interpretation is more applicable to philosophy of language than to philosophy of mind. If you object to my usage here, your beef is with Searle and not with me. What I am discussing is how intentionality can arise from mechanism or computation. Semantics from syntax is just a snappier way of saying this (which is presumably why Searle phrased it so).

    1. "Of course, these regularities are already specified by the syntax, the books of rules."

      No they aren't! This implies a very naive interpretation of what the book of rules would need to be like. If the mind is supposed to behave analogously to a human brain, there will be no simple high-level rules. In fact, the algorithm in the rules will amount to a program which simulates a human brain, neurons, neurotransmitters and all, with inputs being converted into optical or auditory signals before processing. Such a simulation contains no simple rules of the form "Respond to 'Hello' with 'Hello'". If there are such regularities, they will only come about through many layers of emergent, complex, unpredictable interactions.
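      (To make the contrast vivid, here is a deliberately crude toy sketch of my own -- not a serious brain model -- of rules at the level I have in mind. Nothing in it mentions 'Hello', yet its behaviour is entirely fixed by a low-level update rule.)

          import random

          random.seed(0)
          N = 50
          # A toy network of threshold units with random weights: the only "rules"
          # here are low-level update rules, nothing like "answer Hello with Hello".
          weights = [[random.uniform(-1, 1) for _ in range(N)] for _ in range(N)]
          state = [random.choice([0, 1]) for _ in range(N)]

          def step(state):
              # Each unit fires iff its weighted input from the others crosses a threshold.
              return [1 if sum(w * s for w, s in zip(row, state)) > 0 else 0
                      for row in weights]

          for _ in range(10):
              state = step(state)
          print(state)  # any input/output regularities are emergent, not written down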

      To restate the point, Searle can start knowing nothing about the symbols, but just from observing how they are used he can begin to glean some very simple ideas about their functional roles. My thesis is that semantics is simply a very complex suite of functional roles. The examples I have offered (the structural representations in Arecibo and the functional role of "Hello") are of course trivial compared to the complexity of what the brain can represent and process, but to me at least they help to visualise how this can be.

      "does it work by following symbolic algorithms, which is what computers and calculators do, or does it do it by means of the balance of electrical/chemical weightings underpinned by the structure of neurons and neurotransmitters"

      This is a dichotomy I do not recognise. If the universe appears to be governed (at least approximately) by computable mathematical rules, then every process in the universe can be simulated on a computer, including chemical processes. Again, it seems to me you have a very naive idea of what a brain algorithm would look like. Algorithms do not have to be neat and orderly with sensible representations and rules. They can be messy, dynamic, complex, chaotic. I don't know what is required at a minimum to achieve consciousness in a computer, but for the sake of argument we can imagine that it takes the simulation of neurons and neurotransmitters. The actual processing of meaningful mental concepts may occur at a great remove, many levels of description above the actual code itself. This is why one couldn't tell from looking at the code that "Hello" begins a conversation.


    2. "In any case, you seem to beg the question in your original post, by first assuming that the mind 'solves equations' in a computationalist sense, then using that as a premise for your argument to prove precisely that point"

      No, I'm not using it to prove my point. I'm using it as an example to show how Searle's conception of algorithmic mentality is naive. There is much that the brain does that is automatic or unconscious. Searle seems to think that just because he is not aware of this physical, mechanistic information processing, it is not happening at all, or perhaps that the effortless intuitive ease with which we do many routine mental tasks cannot be reconciled with the laborious information processing it seems to be when we understand it in detail. The process of human deliberation does not at all feel like implementing an algorithm, but that does not mean that it does not supervene on an algorithm of some kind.

      I disagree that genetic algorithms are a red herring. For genetic algorithms, all you need to specify is some problem and then the system, over time, will evolve a solution to that problem using the same kind of process as natural selection. Often the solutions to the problems will appear ingenious and surprising to the designers of the system. This is quite unlike manually designing a solution. Sure, they still had to set up the system, but consider for a moment the hypothesis that this universe has a creator. If this were true, would that suddenly mean that we are philosophical zombies or that our minds only have the intentionality put there by the designer? I don't think so. The creator might not even have foreseen that life would evolve at all. I see no difference in principle between this hypothetical situation and that of genetic algorithms.

      I see where you're going now with rules being observer relative. I am familiar with Searle's argument in this regard. Computation, according to Searle, is in the eye of the observer. Pretty much any physical system can be interpreted as computing something, and if your interpretation is byzantine enough you can force nearly any interpretation. It follows, according to Searle, that computation doesn't actually exist without some observer to interpret it so.

      I am a mathematical Platonist. I think all mathematical structures exist independently of mathematicians. I think the human mind is such a structure. I think the computations I'm talking about exist independently of any observer and indeed independently of any physical instantiation -- they are not really located in time or space. If Searle says that computationalism leads to the absurd conclusion that all possible minds exist, then I agree with him with the caveat that I don't find it absurd. The other (in my view) tenable computationalist position is the materialist one, denying that minds really exist at all -- that consciousness is an illusion and all there is is the brain. I think the difference between the two views is relatively trivial -- we just define the term 'existence' differently. My Platonism is really just a preference to use the language of existence to describe mathematical objects such as minds.

      I have not yet read Searle's paper but I have listened to many of his lectures. It's just a format I prefer. I feel I am familiar enough with his arguments to comment, although I will read his paper if you insist that it will clarify some misunderstandings on my part (though I doubt that very much).

  5. Since you use the car example, I’ll repeat a quote from Searle (Critique of Cognitive Reason) that I posted elsewhere – it seems relevant.

    “To make this difference clear, let us go through an example. Suppose I see a car coming toward me. A standard computational model of vision will take in information about the visual array on my retina and eventually print out the sentence, "There is a car coming toward me." But that is not what happens in the actual biology. In the biology a concrete and specific series of electrochemical reactions are set up by the assault of the photons on the photo receptor cells of my retina, and this entire process eventually results in a concrete visual experience.

    The biological reality is not that of a bunch of words or symbols being produced by the visual system; rather, it is a matter of a concrete specific conscious visual event – this very visual experience. That concrete visual event is as specific and as concrete as a hurricane or the digestion of a meal. We can, with the computer, make an information processing model of that event or of its production, as we can make an information processing model of the weather, digestion, or any other phenomenon, but the phenomena themselves are not thereby information processing systems.

    In short, the sense of information processing that is used in cognitive science is at much too high a level of abstraction to capture the concrete biological reality of intrinsic intentionality.”
    You reply to another of my points:
    “ "the meaning, the semantics of the message are not intrinsic to the message; they depend either on share cultural agreement between conscious minds about what the elements of the message means, or enough shared experience between the minds to allow informed guessing in the pictorial case"

    Absolutely! I have been in arguments with philosophers before making exactly this point. The thing is, I think The Chinese Room is one of these conscious minds.
    The problem is that Searle is assuming that failure to ascribe intentionality means there is no intrinsic intentionality. At best, it's an appeal to an intuition I simply do not share.”

    No, he is saying that there has to be a conscious, intentional mind already in order to ascribe intentionality. And that a digital data processing system can’t possibly do that, because it is entirely composed of elements that only have ascribed intentionality. So there must already be a conscious mind present in order to ascribe meaning to the purely symbolic elements. That is not an intuition at all, it’s logically entailed. Of course, you think that the System Reply gets you out of it, so let’s look at that:
    “I believe the shuffling of symbols will be intrinsically meaningful due to structural similarities with and causal relationships to objects in the real world and concepts shared with other minds -- the same kind of features that give biological mental representations their intentionality.”
    There are two features you use to explain how conscious meaning arises: (a) “Structural similarities with (objects in the real world and concepts shared with other minds)” and (b) “causal relationships to objects in the real world and concepts shared with other minds”. I’m not sure if you meant both “objects in the real world” and “concepts shared with other minds” to apply to both explanatory elements? I’ve assumed so.

    1. Hi Inti,

      I have answered that car analogy point elsewhere. Indeed, my original car analogy shows how this argument of Searle's fails -- cognitive representations are not sentences, as in Searle's analogy, but rich, dynamic substructures of a semantic web, as illustrated in my simple sketch of how the car concept is associated with the wheel concept.

      On the problem of ascribed intentionality... information processing systems do not have to have ascribed intentionality. They can learn and evolve much as humans do and can develop their own representations for external objects. There doesn't have to be a mind deciding which representations to use for which real objects.

      Besides, you're very much begging the question when you're assuming that information processing systems are entirely composed of elements that only have ascribed intentionality. The computationalist thesis is that intentionality is nothing more than a network of causal associations. By reducing it to its basic elements, you are ignoring these associations and therefore missing the forest for the trees. Individual neurons have no intentionality either. If there has to be a conscious, intentional mind in order to ascribe intentionality, where do brain structures get their intentionality? It's because the brain, as a whole, comprises a conscious, intentional mind, right? So that's just the point I'm making for an AI system. By making the point that there cannot be intentionality since there is no mind you are begging the question by assuming there is no mind.

  6. (I suspect we are both riding roughshod over some complex debates in philosophy of mind and language, but let’s do our best.)
    Let’s start with (a). I can’t imagine a naturalistic philosophy of language that doesn’t assume that there has to be some relationship between concepts (however they are made up) and objects in the outside world. But it’s not clear to me what exactly “structural similarities” means. What are the structures concerned? Biological structures? Algorithms? Linguistic structures? I feel in need of more detail in order to understand what you mean. I presume you don’t mean to suggest any naïve correspondence of form, and you are not supporting a position that would entail that everyone who believes, say, “Paris is the capital of France” has exactly the same configuration in their brains encoding that?

    Now in the case of (b)
    I have some of the same difficulty, but with an added complication. What is a “causal relationship”? Here’s a source of possible confusion through equivocation. When Searle talks about the “causal powers” of the brain, he means actual physical processes, comparable with those involved in digestion, for example. In this sense, causal is counterposed to a symbol-manipulation system, such as a computer. Obviously, there are physical causal processes at work when a computer runs a program, but the whole point of the computationalist argument is that these are irrelevant; it’s the form of the program that matters, and it’s multiply realisable on any hardware; electronics, cogwheels, mice and cats and cheese.
    So you can’t mean by causal what Searle means, or you’re conceding his point entirely. So what can you mean? That something going on in my brain causes me to act in certain ways? Trivially true, for non-Cartesians. That purely computational structures can have the power to cause consciousness? That’s exactly what’s at issue, so if that is what you mean, you’re just reasserting what you are setting out to prove. But I think it becomes clearer in a moment.

    “I disagree that I am speaking a private language. I am using syntax and semantics in the same sense used by Searle when he insists that you can't get semantics from syntax.”

    If so, I submit that you’ve shifted your meaning from your starting position. There, you claimed that syntactical elements have “some limited meaning” and therefore that we can move from syntax to semantics. I think I showed that this is wrong; that the only meaning that syntax has refers to how elements of an utterance are organised (inward looking) while semantics, to be worthy of the name, must be outward looking. Whether I’ve succeeded in showing that or not, the point is that your use of “syntax” isn’t just “counterintuitive” – it describes a different concept from that usually implied by the word. Hence, private language. If you don’t see that, I will just have to leave it to any other readers of this discussion to judge.

    1. In regards to (a), I mean logical structure. I picture mental representation as a semantic web, formed largely of nodes wired up to other nodes with various kinds of links. The semantic web represented in a person's brain will be similar in logical structure to a concept map of that person's knowledge.

      http://en.wikipedia.org/wiki/Concept_map

      The disconnect between ordinary concept maps and actual meaning is in my view largely a result of their sparsity. In this view, to have a fully accurate concept map of an actual person would be to have a dynamic, active, firing concept map with embedded qualia and sensations and processing functions, with the ability to grow new nodes and form new links, such that it would essentially be a complete and conscious reproduction of a mind.

      I am not at all saying that two people would have the same representations of 'Paris is the capital of France', or indeed of any concepts. Each of these words will have different associations and memories in the mind of each person, and the representation of a single sentence cannot easily be separated out from the rest of a semantic web. It only has meaning in context as part of that web, and each person has a different web because each person has different attitudes and aptitudes, experience and knowledge.

      As for (b), what I mean by causal relationship is something about how mental representations are linked to objects in the world and concepts in other minds. If I see a car, there is a physical chain of cause and effect, beginning at my retina, passing through my optic nerve, into my visual cortex, etc, until eventually the neurons which correspond to nodes in my semantic web which are associated with the concept of 'car' are activated. Causality also works the other way. Part of the decision-making process involved in deciding to go for a drive will involve activating those same neurons. So the semantic link between these neurons and the concept of car does not have to be ascribed and it is not arbitrary, it's there at a physical, causal level in a way that can be reproduced in computers.
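      (A toy sketch of my own, purely illustrative and far sparser than anything a real mind would have, of what I mean by nodes, links and activation:)

          # A tiny semantic web: each node is linked to its associated nodes.
          links = {
              "car": ["wheel", "drive", "road"],
              "wheel": ["car", "round"],
              "retina:car-shape": ["car"],  # crude stand-in for the causal route in from perception
          }

          def activate(node, depth=2, seen=None):
              # Activation spreads outward along links, a little like association.
              seen = set() if seen is None else seen
              if depth < 0 or node in seen:
                  return seen
              seen.add(node)
              for neighbour in links.get(node, []):
                  activate(neighbour, depth - 1, seen)
              return seen

          print(activate("retina:car-shape"))  # seeing a car lights up 'car', then 'wheel', 'drive', ...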

      On my usage of the word syntax, you believe you have shown how my usage of the term is a private language. However you have not really answered my point that I am using it in the same sense Searle is using it, and if not then Searle's allegation that you can't get semantics from syntax may not be wrong but is completely irrelevant to computationalism. Neither of us are discussing how elements of utterances are organised, we are describing how mental or computational representations are organised, and these are not utterances. They can be many-dimensional and they can be dynamic. They can perform information processing tasks in a way that words in a sentence cannot. As such they form structures much more complex than a simple linear sequence of symbols. These structures can be structurally analogous to and causally related to other objects and so it is not hard (for some at least) to see how they can have intrinsic meaning, referring to these objects.

    2. It’s striking that each of us thinks the other is begging the question. Perhaps we are at least clarifying some points of disagreement though.

      Clearly each of us has conflicting and deeply embedded assumptions. This is evident when you say that your semantic networks “have a dynamic, active, firing concept map with embedded qualia and sensations and processing functions”. To me, you are simply restating the computationalist thesis, including its critical failure to understand that all computational representations must be observer relative, no matter how complex and many-layered they are. I believe that you can’t have embedded qualia in a representation, no matter how complex or layered it is, because all representations are symbolic statements TO a conscious representational mind. We have a clash of presuppositions, it seems.

      I know you are trying to show me that your concept of a computationalist mind is much more elaborate than the way Searle describes them. But in principle, and in the way they work, they are the same as the ones that Searle addresses in this vital respect.

      Searle is simplifying to bring the underlying principles to the foreground, not because he believes that the kind of structure computationalists propose is really simple.

      “If I see a car, there is a physical chain of cause and effect, beginning at my retina, passing through my optic nerve, into my visual cortex, etc, until eventually the neurons which correspond to nodes in my semantic web which are associated with the concept of 'car' are activated. Causality also works the other way. Part of the decision-making process involved in deciding to go for a drive will involve activating those same neurons.”

      Great, I agree with almost all of that description, but there is no necessary computational element in your description; it’s a biological, neurological process, through and through. I know you see the semantic web as necessarily a computational element, but why assume that? Why not take it as simply a description of a biological process? I realise that you can say that computational processes underlie your description at a lower level, but I can’t see why they are necessary. That’s what I mean when I say computationalism is overdetermined as a description. (And to repeat, anything can be described in terms of computation; the question is, does it have to be?)

    3. “So the semantic link between these neurons and the concept of car does not have to be ascribed and it is not arbitrary, it's there at a physical, causal level in a way that can be reproduced in computers.”

      To me, these two phrases are contradictory: ‘at a physical, causal level’ and ‘in a way that can be reproduced in computers’. The point is that nothing that is physically causal can ever be reproduced in computers, by any program no matter how sophisticated. Why? Because computing and computational objects only describe, they do not reproduce physical effects. I realise that there must be a physical substrate to any computation, but that is irrelevant to the nature of the computation. That’s why multiple realisability is so important; it was a starting point of computationalism, but it comes back to bite it. To recap, computation can be based on cogwheels or transistors, water pipes and so on. It is the representational structure that makes it computational, and representation is never a physical phenomenon. And to answer your question as to why the explanation via physical causal processes in the brain doesn’t share the same difficulties as computationalism, it’s because the causal processes of the brain are observer independent, while computation is observer relative. The physical causal processes are amongst the brute facts about how the physical world works, while computational processes, as computation, are not. To put it very crudely, causal brain processes actually push matter about, computation doesn’t, ever.
      That’s why this sentence of yours makes no sense to me: “The computationalist thesis is that intentionality is nothing more than a network of causal associations.” Associations, in your sense, are representations (even when cast in complex computational networks), and representations cannot be causal. Only physical, observer-independent processes can be causal. At best, representations can describe or depict processes.

      I’m reminded of the mistaken “luminous room” refutations by Pat and Paul Churchland. They claimed that Searle is like someone who denies that light is electromagnetic radiation because waving a bar magnet doesn’t produce light. But that’s a false analogy, and Searle easily defeats it. The account of light as electromagnetic radiation is a physical one right down to the ground. But the analogy with formal symbols fails because formal symbols possess no physical, causal powers. The only power that symbols have, qua symbols, is the power to cause the next step in the program when the machine is running (says Searle, and I think he gives too much away there; I’d say they don’t even have that power – it’s the underlying machine substrate that has even that power, and the program instruction only denotes that. Any error in this is all mine, not Searle’s, though). The relevant properties of 0’s and 1’s are abstract computational properties. And the Churchlands, like you, claimed that Searle was looking at the wrong layer, or at the wrong scale, and they too missed the point that it is a qualitative difference, not one of scale or complexity, all the way through.

    4. On genetic algorithms.
      By coincidence, I’d been thinking about genetic algorithms and evolved computation before we started this discussion. I asked myself what empirical finding could disprove Searle’s point. As I’ve mentioned, I think the genetic algorithms we have produced are far less independent of human design than you think they are. Setting that aside though, I had imagined a little science fiction story.

      Suppose we encounter an alien species whose brains clearly function very differently from ours. They seem to be conscious; at least, they behave like it (they pass the Turing test perfectly, face to face) and they tell us that they are conscious. The more we learn about those alien brains, the more we see that they reproduce the function of what we commonly call a computer. There are direct analogues of the circuit elements, and messages pass and are controlled in ways which look very like those in our microcircuits. Does this prove the computationalist case, once and for all? Not yet; we also need to know how this state of affairs came about. If on the one hand there is a clear fossil record or equivalent evidence showing that these brains came about by the process of natural selection, that would make it much more likely that Searle is wrong, and computers really can be intentional, conscious beings. If on the other hand historical evidence showed that the aliens were the product of the kind of process some transhumanists advocate, of uploading ourselves into computational devices, that would not be any evidence against Searle. It might be the aftermath of a terrible tragedy, one that could face us if we were misled down that path. We would have turned ourselves into philosophical zombies, which have all the external behaviour of conscious beings, but without consciousness; and there would be no possible empirical way to tell whether this was true just by examining them as individuals, as artificial phenotypes. Only the history of how they came about would provide evidence.

      I see that you think that philosophical zombies are logically impossible, assuming your strong version of computationalism. I think you are right, given your premise, but your premise is wrong.

      You mentioned Boltzmann brains; oddly I’d written a section about Boltzmann computers, naturally occurring computational structures that arose just like Boltzmann brains. However, I cut it out because I didn’t think it added much to the discussion.

      I’ll leave the definition of syntax for now. I can’t see how you are using the word in the same way as Searle, at all.

      I’ll have to respond to your other points later. Thanks for coming back to me. I’ve found this discussion very stimulating.

  7. You say
    “In Searle's usage, syntax is not "how utterances are structured". He is using the term in a broader sense to describe all formal systems such as the representations in The Chinese Room.”
    First, you seem to contradict this when you say “Of course your definitions of syntax and semantics are perfectly sound and related to Searle's usage, but your interpretation is more applicable to philosophy of language than to philosophy of mind. If you object to my usage here, your beef is with Searle and not with me.” By the way, Searle has written, rightly I think, that philosophy of language is a subset of philosophy of mind. That’s true because without mind, there is no meaning, just empty symbols.

    “In Searle's usage, syntax is not "how utterances are structured". He is using the term in a broader sense to describe all formal systems such as the representations in The Chinese Room.”
    There really isn’t a difference there; I meant the term utterance to mean any sequence of symbols, encoded in any medium, and organised according to formal rules.
    “These are not utterances but states of an algorithm with causal relationships to other states.”
    Ah, but a state of an algorithm has no causal relationship to other states. It is a symbolic and abstracted description of one of a series of physical states in the machine that implements it. It’s the physical process of that machine that possesses ALL of the causal powers. The algorithm is just a symbolic description of the series of states that we set out to achieve in the machine. Indeed, if you don’t accept that, you are contradicting a central proposition of the computational theory of mind: that the underlying hardware is completely irrelevant, and that mind is a program that can run on any hardware. The algorithm is like the marks on a clock face – it doesn’t drive the machine forward, it just points things out in an observer relative way, dependent on the existence of mind outside itself for meaning. But it’s the hardware that does it. Remember that some early computers didn’t even possess a programming language, even at the level of machine code. They were programmed by plugging the leads into sockets, something like an old-fashioned telephone exchange. This shows how programs are just the notes we write for ourselves about how to “rewire” the machine, in a more convenient form, certainly, but they are still just another form of human language, just “utterance” in that sense. And as such, they can no more have “causal powers” in themselves than a sentence that I utter.
    And of course we talk about words having power; we say the pen is mightier than the sword and so on. But when you trace carefully what’s happened, the words are just more carriers of ascribed intentionality from one (set of) brains to another. It’s the brains, the minds, that have the causal powers.
    “What I am discussing is how intentionality can arise from mechanism or computation.”
    Of course it can! Searle says loudly and repeatedly that your statement here is true – but it isn’t the point at issue. What type of mechanism can do it? I’m sure we would agree that the brain is a mechanism and it gives rise to intentionality. QED. And you mention later the point that “computation” doesn’t name something in the physical world, but any reasonably complex phenomenon can be described as computational.
    That’s not the question – we are discussing whether syntax, in itself, without semantics, can give us minds. In trying to show that you could move directly from syntax to semantics (a pretty desperate move, I’d say), you tacitly acknowledge the importance of this point.
    “No they aren't! This implies a very naive interpretation of what the book of rules would need to be like…
    …are of course trivial compared to the complexity of what the brain can represent and process, but to me at least they help to visualise how this can be.”

    Replies
    1. I don't see a contradiction in what I said. I note that there is a similarity between your definition of syntax and semantics and that which Searle appears to be using, but I also note that there seems to be a difference, because no computationalist thinks of mental representations as sequences of symbols. As mentioned in my last comment, they are many-dimensional and dynamic. There seems to be a deep-seated confusion here between the idea of computer programs as sequences of formal statements and computer programs as running processes with causal power. Computer code itself, with its own syntax and ascribed semantics, is only a representation of an information processing algorithm and should not be confused with it.

      This confusion is again evident in your account of the causal powers and the roles of the physical system and algorithm. By insisting that it is not the algorithm that has causal powers but the physical system, you seem to me to be saying "It is not the elephant's mass that crushed the mouse, it is the elephant. The elephant's mass is just an observer relative number that could be described as kilograms or pounds". In other words, you are confusing the algorithm itself with the code used to represent the algorithm in some particular computer language.

      The algorithm is most certainly not just a symbolic description of a series of states of the computer process. The algorithm is more like a structural property of the process. The algorithm is a pattern instantiated by the process, and any process that has that pattern will (in my view) have the same intentionality, just as any massive object can crush a mouse. The algorithm is therefore as crucial to consciousness as mass is to crushing.

      Computer code may be a convenient way of capturing notes about how to rewire a computer, but computer code is not an algorithm. Even a microchip implements algorithms. Here, there need be no code, no software, no symbols at all, only raw electronics doing what physics dictates. But there is still an algorithm, because an algorithm is not code but a procedure for doing something. Euclid's algorithm existed before any computer language, and it is the same algorithm whether expressed in ancient Greek, English, Java or C++. This is why talk of utterances is so misguided.
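
      To illustrate the distinction, here is Euclid's algorithm written two ways in Python (just a sketch to make the point). The notation differs, and it would differ again in Java or C++, but the procedure -- the algorithm -- is the same pattern in every case.

      # Euclid's algorithm for the greatest common divisor, expressed twice.
      # Different code, same algorithm: the procedure is what matters.
      def gcd_recursive(a, b):
          return a if b == 0 else gcd_recursive(b, a % b)

      def gcd_iterative(a, b):
          while b != 0:
              a, b = b, a % b
          return a

      assert gcd_recursive(1071, 462) == gcd_iterative(1071, 462) == 21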

  8. I do realise that a real “brain program” would have to be very complex, as I’m sure Searle does. But I’ve never understood the point of the many “too slow, too simple” objections to Searle. However many layers you add, it seems to me that the fundamental conceptual issues remain the same.

    I’m still a bit puzzled about how the point that Searle might infer some simple rules moves your argument forward. If I grant it for the sake of argument, so what? He can consciously infer a few relationships from observing regularities. How does that affect the argument? Searle and I both know that both people and programs can do this. Remember, he first came up with the Chinese room in response to Schank et al’s system that could answer questions about stories, inferring points that were not explicitly stated. Another example is theorem proving software. That was exactly the point of the Chinese Room: that being able to do that kind of inference is no guarantee of intentionality or consciousness.
    “Searle seems to think that just because he is not aware of this physical, mechanistic information processing means that it is not happening at all, or perhaps that the effortless intuitive ease with which we do many routine mental tasks cannot be reconciled with the laborious information processing it seems to be when we understand it in detail.”
    No, that simply isn’t what he says. He understands very well that, however the brain makes mind happen, there must be processes involved which are not accessible to the person’s consciousness. He simply sets out to show that that process can’t be computational, in the commonplace sense, for the reasons I’ve mentioned before. The effortlessness you describe plays no part in his argument. In a sense, computationalists are like the drunk in the joke, looking for his lost keys over here under the street light, even though he dropped them over there, “because the light’s better over here”. The computational theory of mind seemed like a marvellous breakthrough, because it offers a naturalistic solution to the mind-body problem. But not only is it a false friend, leading to a degenerating research programme in Lakatos’ sense, but it isn’t even necessary.

    You often hear computationalists argue that the only way we can solve the problem of other minds is by examining the behaviour of other people, or even animals. Searle says no: as well as behaviour, we know quite a lot about how their nervous systems work, and how they evolved, which gives him reasons as good as we are likely to get to suspect that his neighbour or even his dog is also conscious. What’s more, computationalist explanations are superfluous.
    If a full, naturalistic description of how consciousness arises becomes available, through neuroscience and related disciplines, then a computational description over the top of that is unnecessary and overdetermined. (And if such a naturalistic biological description is never available, we won’t be able to use what we learn from that to validate the claims to consciousness of any supposedly conscious AI.)

    Replies
    1. Searle acknowledges that an AI algorithm would be very complex, but then he goes and uses unrealistically simplistic examples to supposedly show how computers are not like people -- but in my view the differences he illustrates are usually an artefact of the simplicity. They are driving intuitions in the wrong direction. It's like a creationist arguing against evolution by saying that his grandfather was no more a chimpanzee than he is. It demonstrates a failure to imagine how an intuitive understanding of simple, familiar cases may not extrapolate to very extreme ones. And this is the problem, because Searle never manages to demonstrate any fundamental objections to computationalism: ultimately all his arguments bottom out at the fact that computer consciousness is unintuitive.

      The point about simple inferences being possible from purely symbolic processing was to establish the point that there is no insuperable gulf between symbolic processing and meaning. If simple inferences are possible, then sophisticated inferences may be possible, and if this is the case then it seems to me that meaning is possible, and this is perhaps most clearly the case where there are causal links to objects in the external world. Even in cases where the objects are abstract, as in theorem proving software, I would not agree that there is no intentionality. Indeed I would think it reasonable to describe theorem-proving software as being intentional, so it cannot be used as an uncontroversially self-evident example of an unintentional computer system.

      As with simplicity, though Searle acknowledges that there are processes which are not accessible to consciousness, he often seems to forget this when illustrating his intuitions. It doesn't feel to him like he is thinking algorithmically, but this doesn't at all mean that algorithms are not ultimately responsible for his thoughts.

      For me, the appeal of computationalism is not that it offers a naturalistic solution to the mind-body problem, but that it simply has to be correct. Given how I perceive the issues, it is simply unthinkable that it is wrong. Searle's biological naturalism is incoherent in my view.

  9. So why has computationalism been so popular? You mention materialism as another tenable position (of course computationalism is only the most popular subset of materialist positions). Well, computationalism seems to solve problems that materialism poses for itself, within a pleasingly materialistic framework.
    In the first two chapters of The Rediscovery of the Mind, Searle shows why materialism (in this sense) isn’t tenable. He calls the second chapter “The recent history of materialism: the same mistake over and over”. (I must emphasise that Searle is using the term materialism in a strict and limited sense, not as a synonym for naturalism, anti-dualism etc, as it’s sometimes used. I can tell you know that, DM, but sometimes it’s misunderstood.)
    He sets out the propositions which make up the materialist tradition. I won’t try to go over all of these mistakes here, but the critical ones are “The only things that exist are ultimately physical, as the physical is traditionally conceived, that is, as opposed to the mental” and (my paraphrase for brevity) that reality is objective and so we must study the mind only through objective or third person points of view.
    This is wrong because not all of reality is objective; some of it is subjective. There’s a common confusion between epistemic objectivity, which scientists amongst others rightly seek, avoiding the use of personal prejudices, emotional biases, opinions and limited perspectives, for example, and the ontological subjectivity of things which actually exist but are subjective by their nature. The prime example of this is consciousness; the others derive from consciousness through culture and ascribed intentionality; for instance, money is subjective when it comes down to it. And the mental is not opposed to the physical, it is part of it.
    So not only are we entitled to take into account the irreducibly first-person nature of consciousness; if we don’t, we will certainly go astray. And that’s why Searle emphasises that we have to consider the question of consciousness in a way that takes account of its first-person nature, rather than trying to explain it away as materialist explanations do. It’s not that he appeals to intuitions, as so many claim; it’s this hard-headed argument.

    I’ll have to think a bit more about genetic algorithms. From what I know about them, they are not as “hands-off” or independent of human intervention as you suggest, but I’ll have to leave that for another time.
    I’m sorry, I don’t feel I can take on your mathematical Platonism. I don’t know what I think about that view, but I’m pretty sure that you are stretching beyond the breaking point here. I remember Massimo Pigliucci saying to you in the Scientia Salon comments “You do realise that you are practically an idealist here, don’t you?” and I suspect this was what he was talking about. Still, maybe you are happy with that. Anyway, it seems too left-field to be useful here, I’m afraid.

    Replies
    1. Well, I'm not a materialist, so at least we can agree on that.

      Genetic algorithms can be as hands off as you like. We don't need to find examples (although they do exist). We can use thought experiments which while practically infeasible make the point of principle crystal clear. We can for example imagine a physical simulation of a whole universe in which beings such as us happen to evolve. The only intentionality ascribed to such a system is the symbols used to represent quarks, photons, etc, and yet beings with mental representations just as sophisticated (and just as apparently intentional) as our own can evolve. They can represent concepts such as love, loyalty, mathematical abstractions and so on, as well as referring to other objects within the simulation, objects that were not conceived of by the programmers but that emerged as the physical simulation developed.
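
      As a toy illustration of the principle (not the whole-universe thought experiment, obviously), here is a minimal genetic algorithm sketch in Python. The only meaning the programmer ascribes is to the primitives: bits and a fitness score. Whatever bit patterns come to dominate the population were not designed or interpreted by anyone.

      import random

      # Minimal genetic algorithm: random genomes, selection, crossover, mutation.
      # The "environment" is just an arbitrary target pattern for illustration.
      TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

      def fitness(genome):
          return sum(g == t for g, t in zip(genome, TARGET))

      def evolve(pop_size=30, generations=50, mutation_rate=0.05):
          population = [[random.randint(0, 1) for _ in TARGET] for _ in range(pop_size)]
          for _ in range(generations):
              population.sort(key=fitness, reverse=True)
              parents = population[: pop_size // 2]
              children = []
              while len(children) < pop_size:
                  a, b = random.sample(parents, 2)
                  cut = random.randrange(1, len(TARGET))
                  child = a[:cut] + b[cut:]                      # crossover
                  child = [1 - g if random.random() < mutation_rate else g
                           for g in child]                       # mutation
                  children.append(child)
              population = children
          return max(population, key=fitness)

      print(evolve())   # a well-adapted genome nobody hand-crafted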

      With regard to Platonism, I'm not sure where I'm stretching beyond the breaking point. What is it I'm stretching? The definition of Platonism? Computationalism? What?

  10. I know what you mean about preferring lectures; I first heard about the Chinese room on a program on BBC Radio 3, when I was just a simple naive computationalist myself ;-) Later I listened to Searle’s lectures on Philosophy of Mind from The Teaching Company. There are his Reith lectures (quite old now, but a decent abbreviated version, and free on the BBC site). But I find that his books and papers provide a lot more detail, and responses to many objections that won’t fit in a lecture format. I wasn’t nagging you to read them, but I mentioned them for anyone reading this who might not know much more than cut-down versions of the Chinese Room arguments. I would certainly recommend at least reading Critique of Cognitive Reason and Why I am Not a Property Dualist, and if possible the book The Rediscovery of the Mind; perhaps also The Mystery of Consciousness, though some of its contents are available for free online; for instance, the debate with Dennett is on the New York Review of Books site, along with Searle vs. the Churchlands.
    By the way, the Searle position doesn’t preclude there being some processes in the brain that could be best described in computational terms. For example, he did write somewhere, I forget where, about the imaginary case where the function of parts of a person’s neurons, say the dendrites, start to fail and are replaced by a computational device within the neuron. This case wouldn’t be invalidated by his arguments, and he believes that the person would remain conscious; nor would it undermine his position, assuming that it is the interaction of the network of neurons that gives rise to consciousness. The “computerised” element would be wholly contained within the black box of the neuron, and to the rest of the system the inputs and outputs would be the same, so overall the conscious system would remain non-computational. (Of course, we know too little about the brain to be dogmatic about the precise interactions that lead to consciousness.)
    So my position would not preclude some computational elements. It only denies that computation is sufficient for consciousness, not that it plays no part at all.

    Replies
    1. I've listened to the Reith lectures and many others, and I'm always aggravated. I would love to be able to have a discussion with him. Not that I would get anywhere, but the points he makes are often so obviously wrong (from my perspective) that it's hard to endure him with equanimity when I don't have a chance to answer him back.

      Which is why it's nice to have such a chat with a Searle apologist!

      On the neurons and the black box, what if instead of replacing one neuron we replace two neighbouring neurons, keeping the external interface the same and accurately simulating the connections between the two?

      What if we replace three? Or 400, or four million?

      At what point do we lose consciousness? Would we notice? It would seem that we would not, for if all inputs and outputs are kept the same, all external behaviour must be kept the same the whole time. If we could imagine some Star Trek-era teleporter technology gradually teleporting neurons out and teleporting ever-larger black boxes in, it would seem that you could not possibly notice anything was happening, because if you did you would be able to report it, which would constitute a change in outputs. Far from bolstering his case, considering such thought experiments only shows how little sense his position makes.
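
      Here's a little sketch of the point I keep pressing (toy Python; the class names are hypothetical inventions for this example). From the point of view of the rest of the network, only a unit's input/output behaviour exists; nothing about its innards is even visible, and that stays true however many units we group inside one box.

      # Two "neurons" with identical input/output behaviour but different innards.
      # The network-level function cannot depend on which one is plugged in.
      class BiologicalNeuron:
          def fire(self, inputs):
              return sum(inputs) > 1.0     # stand-in for electrochemical behaviour

      class SiliconNeuron:
          def fire(self, inputs):
              return sum(inputs) > 1.0     # same interface, same responses

      def network_step(neurons, inputs):
          # All the rest of the "brain" ever sees: outputs for given inputs.
          return [n.fire(inputs) for n in neurons]

      mixed = [BiologicalNeuron(), SiliconNeuron(), BiologicalNeuron()]
      print(network_step(mixed, [0.6, 0.7]))   # indistinguishable at this level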

    2. The case I’ve set out wouldn’t change even if every neuron was altered internally, in any way you like, because each one remains a black box, and how it functions internally makes no difference to the function of the brain at an inter-neuronal level, provided that all the inputs and outputs remain the same. This is assuming for the sake of argument that the processes that lead to consciousness operate at the level of interaction between assemblages of neurons, and/or of larger assemblies made up of neurons. So it isn’t a process where changes within the neurons have any additive or multiplying effect. After all, how could the rest of the brain, outside the neurons, even “know” that anything had changed within?

      This is really not a defence of Searle’s arguments as such. It simply shows how there could be strongly computational elements (elements where computation is unambiguously the best description) within parts of the brain, without the system as a whole being computational at all. Off the cuff, here’s an analogy. When I go into a shop, a computational device runs the automatic door opener. That doesn’t make the shop in any sense a computational device, because the computation is “sealed off” within one small part of the system. I’m not suggesting that neurons really play such a tiny part in consciousness, of course, just illustrating the principle.

    3. You didn't answer the question I asked, Inti, which is: why must we always draw the borders of the black box around the periphery of single neurons? Could we not also draw the borders around pairs of neurons? It seems to me that we could take any two adjacent neurons and replace the two simultaneously by a single device that simulates any behaviour between the two of them and perfectly emulates their combined interface to the rest of the brain. Now extend that logic outwards and where does it lead you?

    4. This comment has been removed by the author.

    5. I’m going to try to answer this without addressing your main “systems reply” based position, but agreeing to differ on that as far as possible.

      I see your line of thought, though there are two interpretations of it.

      1. If we could replace ALL the neurons with computational substitutes, you might believe that we would have a completely computational device, and if one neuron would act as a black box, then they all would, in which case there is no reason to assume that the brain would lose consciousness. So the brain would be completely composed of computational units, and so would be one itself.

      2. The second variant is a bit like Dennett’s view described from bottom-up rather than top-down; we could keep on replacing larger assemblies of neurons with single unit computational black boxes with the same function, until we have one unit, the whole brain. (I think this is your point).
      QED for computationalism? Or perhaps not.
      Version 1 would miss the point that the two levels, inside and outside the neurons, are sealed off from one another, and how one works has no implication for the other.

      My scenario is based on the “for the sake of argument” assumption that consciousness arises ultimately from the interaction of neurons in a neural net, and that such interaction is a sufficient cause for consciousness. (In fact we don’t know that for sure; the neural net may be necessary but not sufficient, and it’s even conceivable, however unlikely, that Roger Penrose is right and neurons are not the critical components at all.)

      If that is the case, I’d contend for other reasons we’ve discussed previously, that this neural network is not in any strong or fundamental sense a computational device, in the usual sense of the word.

      (Again, it could be described metaphorically in computational terms. It could be simulated computationally, but that leaves open the central question as to whether this would be a true reproduction.)

      We can continue to view this as a physical, neurobiological system that operates in a qualitatively different manner to a computer. And this is perfectly consistent with the possibility that the neurons become internally computational, provided the input and output behaviour does not change. In this case, the only conceivable evidence of change within neurons would be a variation in input or output, and by stipulation that doesn’t occur, so the other parts of the brain cannot “see” into any one or more neurons.

      To put it another way, if neurons are completely multiply realisable, it doesn’t matter how they work internally for the working of the brain at a neural net level. They could be driven by steam for all we know or care. (The same argument applies at a deeper level: imagine that we gain a better and somewhat altered understanding of quantum-level processes in matter. Unless that implied different behaviour of this matter, it would be irrelevant to the behaviour or organisational nature of any system at a more gross level.) So version 1 provides no new support for computationalism.

      And Version 2? This would only work if we presume that computationalism is true and my stipulated biological version is false. Version 2 describes the changes as crossing into and breaking up or altering the process at the network level, which I stipulated was the level which gave rise to consciousness. Obviously you could change that stipulation, in which case we are no longer discussing my original setup, and you have simply reasserted the truth of computationalism, not demonstrated it.

    6. Hi Inti,

      Version 2 is indeed the scenario I am trying to discuss. I may seem to be begging the question, but the same logic which forces you to accept version 1 (if the interface is the same then nothing has changed) would seem also to apply to version 2. If you reject this logic, as you might, then I don't see why you should be so confident that version 1 would be conscious. You're assuming it's physical connections between neurons that give rise to consciousness, but you have no reason I can see for preferring this assumption to the hypothesis that neurons need to be entirely biological.

      I agree that consciousness arises ultimately from interactions between neurons in a neural net. I just think it is very naive to think it matters whether the connections between neurons are implemented in software or in hardware. From a computer science perspective, software and hardware are equivalent, the only difference being in terms of performance. Furthermore, it doesn't matter if there is one computational device or billions. There is no information processing task that many computers working in parallel can do which cannot be performed by a single computer working step by step.
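
      A toy sketch of that last point, with made-up numbers: a synchronous network update computed one unit at a time on a single machine gives exactly the result a truly parallel update would, because each new state is computed from the same snapshot of the old states.

      # Sequentially simulating a "parallel" network update. Illustrative only.
      def step_network(states, weights):
          next_states = []
          for i in range(len(states)):
              total = sum(w * s for w, s in zip(weights[i], states))
              next_states.append(1 if total > 0.5 else 0)
          return next_states              # same answer a parallel update would give

      states = [1, 0, 1]
      weights = [[0.0, 0.4, 0.4],
                 [0.6, 0.0, 0.0],
                 [0.3, 0.3, 0.0]]
      print(step_network(states, weights))  # -> [0, 1, 0]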

      But, you may say, this assumes that consciousness is an information processing task. True. But even if we assume that something beyond information processing happens when we have some large number of interconnected computing devices, we have a problem. It's not always clear whether we should say two devices are distinct or two parts of the same whole. An individual computer has many data buses, transmitting information from one register to another. Does that mean that each register is a separate interconnected device or is it one whole, one computer? Is the brain one device or billions? The computationalist answer is that it doesn't matter -- drawing distinctions between devices is arbitrary and can be of no consequence.

      Anyway, you didn't fully answer my initial question. Your position would appear to be that a wholesale replacement of your neurons by a black box would render you a philosophical zombie. Fine. So what if we just replace one hemisphere? Or one lobe? What if the process is very gradual, taking over your biological brain bit by bit? What would that feel like?

      This example to me illustrates the absurdity of Searle's position. By the terms of the thought experiment, you could not possibly notice anything, because noticing would involve a change in behaviour which we stipulated does not happen. This means that it is possible for you to gradually change into a philosophical zombie over the course of an hour or so without noticing that anything is happening. It also means we have to consider entities that are somehow half conscious though they appear normal and report feeling normal. If that doesn't give you pause I don't know what will, for both ideas seem profoundly absurd to me and prompt me to reject the distinction between the conscious states of a normal person and a presumed philosophical zombie.

    7. This comment has been removed by the author.

    8. Starting with your question in paragraph four: “What would it feel like?” Searle seems to assume a slow decline, but not to believe that much depends on the question. I would say we can’t tell; we don’t know enough. It might be gradual, like falling asleep or being anaesthetised. It might be a sudden lights out when some critical brain area was altered. It might involve loss of sight or sound, or the capacity to move, before the loss of consciousness. We can’t say, at present. (See my post below for an interesting sidelight on this, the “on-off switch” article from New Scientist.)
      “Your position would appear to be that a wholesale replacement of your neurons by a black box would render you a philosophical zombie”. Not exactly. I think it’s just possible that we could replace every neuron by a computational black box substitute, without affecting consciousness. Doesn’t that imply computationalism? Not at all, provided that it is the noncomputational interaction of the neurons that gives rise to consciousness. If so, how things happen inside the black box neurons is irrelevant; it doesn’t alter the noncomputational processes in any way!

      But suppose we expand the boundaries of the elements that are replaced with computational substitutes? Well, perhaps on a very small scale the effect might be negligible. And it might be that there are some other elements that we could treat as black boxes, too. However, you can’t propose to extend the black box to encompass major parts of the brain, because if you do, you are removing (knocking out) the areas where the biologically causal non-computational interactions are taking place, and substituting non-causal computational elements instead. You were not doing that in the case of single neurons, because the critical causal interactions aren’t taking place within the neurons, but outside in the interactions amongst them. So “siliconising” outside the neurons would break the critical non-computational network. And you can’t just claim that the extra-neuron changes would still work in place of that network; this is precisely what is at issue; that claim would simply reassert computationalism’s basic claim in different words.
      In the first paragraph you say; “You're assuming it's physical connections between neurons that give rise to consciousness, but you have no reason I can see for preferring this assumption to the hypothesis that neurons need to be entirely biological.”
      To a large extent that’s an unresolved empirical question about how the brain works. I just made that assumption for the sake of argument. It does seem quite likely at first sight; we have good reasons to believe that electrochemical signalling between neurons in a complex network is at least a major part of the answer.
      Agreed, we can’t be sure that there aren’t other hidden interactions involved, between different neurons and/or between neurons and other brain elements. If that was the case, the neurons wouldn’t be black boxes after all, the premises I’ve stipulated for the scenario would change, and my argument would no longer apply.
      But then, I only claimed that it would apply IF the black box conditions held. Of course, we might go on to find that there are other sub-elements of the brain that really are black boxes, which would reinstate the same situation in a slightly different form. Or perhaps not, perhaps all elements are so intertwined that no black box is possible. In which case, that simply implies that there could not be any computational substitution anywhere without breaking the conscious system.

    9. In case this is still unclear, let me put it a different way. My basic point is that (a) if you can enclose a certain part of the brain that is not critical to the physical neurobiological processes that produce consciousness, then you can make any changes you like within that element, provided (b) they maintain the same pattern of interactions with the outside of the enclosed part. On the other hand, this will not work under two conditions:

      1. The altered black box does not actually maintain the same interactions, so it doesn’t meet the condition (b) – that seems to be what you are suggesting at the end of your first paragraph (quoted above);
      2. If the enclosed part of the brain grows so large that it encroaches into the parts where the core complex of physical neurobiological processes is operating, in which case condition (a) above doesn’t apply.
      Conclusion: if conditions (a) and (b) do apply then it is possible to have islands of computational functioning within a larger non-computational brain. That’s all I set out to show.
      It doesn’t strengthen or weaken computationalism at all, and so my assuming that computationalism is false in this black-box argument is not question-begging, since computationalism’s truth isn’t at issue here.
      All it might show is that if, hypothetically, we found some indisputably, intrinsically computational (whatever that would mean!) sub-component in the brain, we would still need to prove that its function impinged on the core processes that produce consciousness, and that it was not either a secondary element (a “peripheral”?) or a unit sealed within a black box.

      And once more, if we violate (a) then we are just describing a fundamentally computational conscious brain, and thus returning to the main point at issue, leaving this black box discussion behind.

      I know I haven’t addressed some core issues in your previous posts, but I’m going to try to pull them together in one reply – when time permits.

    10. Here’s an edited version of an interesting report in New Scientist

      http://www.newscientist.com/article/mg22329762.700-consciousness-onoff-switch-discovered-deep-in-brain.html?full=true (subscription needed for full text, but I can send it to you if you like.)

      Consciousness on-off switch discovered deep in brain

       02 July 2014 by Helen Thomson, Magazine issue 2976

      ...most agree that consciousness has to involve the integration of activity from several brain networks, allowing us to perceive our surroundings as one single unifying experience rather than isolated sensory perceptions.

      One proponent of this idea was Francis Crick, a pioneering neuroscientist who earlier in his career had identified the structure of DNA. Just days before he died in July 2004, Crick was working on a paper that suggested our consciousness needs something akin to an orchestra conductor to bind all of our different external and internal perceptions together.

      With his colleague Christof Koch, at the Allen Institute for Brain Science in Seattle, he hypothesised that this conductor would need to rapidly integrate information across distinct regions of the brain and bind together information arriving at different times

      ...

      The pair suggested that the claustrum – a thin, sheet-like structure that lies hidden deep inside the brain – is perfectly suited to this job (Philosophical Transactions of The Royal Society B, doi.org/djjw5m).

      ...

      Anil Seth, who studies consciousness at the University of Sussex, UK, warns that we have to be cautious when interpreting behaviour from a single case study. The woman was missing part of her hippocampus, which was removed to treat her epilepsy, so she doesn't represent a "normal" brain, he says.

      However, he points out that the interesting thing about this study is that the person was still awake. "Normally when we look at conscious states we are looking at awake versus sleep, or coma versus vegetative state, or anaesthesia." Most of these involve changes of wakefulness as well as consciousness but not this time, says Seth. "So even though it's a single case study, it's potentially quite informative about what's happening when you selectively modulate consciousness alone."

      This seems to be an example of a thought experiment becoming a real experiment. In Rediscovery of the Mind, Ch. 3, pp. 74-5, Searle considered the possibility that neurophysiology might find a precise, neurophysiologically isolable cause of consciousness in human beings, such that the presence of the relevant neurophysiological phenomenon was both necessary and sufficient for consciousness. He went on to discuss how this could be used to settle questions about the presence of consciousness in other animals at different phylogenetic levels.

      I’d point to this as an example of a more promising way forward in understanding consciousness.

    11. A couple more relevant quotations from Searle:

      “Given a choice between the facts as we know them—consciousness exists, it is caused by neuronal processes, it exists in the brain, and it has causal functions in the life of the organism—and various philosophical theories, I will take the facts any time. Furthermore, I am confident that in the long run, the facts will prevail over the theories which will come to seem more and more obsolete. It is worth pointing out that practicing neurobiologists of my acquaintance, such as Francis Crick, Gerald Edelman and Christof Koch, implicitly or explicitly accept a version of what I have been calling biological naturalism. They look to the operations of the brain to find an explanation of consciousness. It will probably take a long time before Biological Naturalism is generally accepted by the academic profession because we follow a long tradition of teaching our students the mistaken view that this is a philosophical problem of impossible difficulty.”
      J Searle, Biological Naturalism, 2004,
      http://socrates.berkeley.edu/~jsearle/BiologicalNaturalismOct04.doc

      If nothing else, this shows that Searle’s view is not a way-out peripheral approach, or one that is vague, ill-defined or sterile.

    12. Now from Faigenbaum, Gustavo (2005-07-09). Conversations with John Searle (sorry about the odd fragmented references, Kindle won’t let me copy across page breaks)

      G. F.: How do you see the field of cognitive science today? Where is it going? What’s your assessment?

      John Searle:

      As you know, originally, cognitive science was founded on a mistake. It was founded on the mistake of supposing that the brain is a digital computer and the mind is a computer program. I’ve never believed that. The view that mental states could be defined in terms of their causal relations to input-stimuli, to other mental states, and to output-behavior, was an extension of functionalism in the philosophy of mind. I think that now the computational model of the mind in cognitive science is gradually fading out, and is being replaced by something I welcome, namely cognitive neuroscience. I think we’re now beginning, with the invention of fMRI (functional magnetic resonance imaging) techniques and other techniques, to get an understanding of the brain and how it functions in cognition, which we’ve never been able to understand before. So, I think cognitive science is more exciting than ever, but that’s because it failed. The original project failed. The project of a computational science of cognition was bound to fail. But we are making progress with brain science. (Kindle Locations 774-785)

      G. F.: Is there any positive content left to be rescued from that failure?

      John Searle:

      Sure. First of all, cognitive science overcame “behaviorism,” which had been the orthodoxy in psychology. “And the science of psychology on the behaviorist model was you were going to correlate these stimulus inputs with the behavioral outputs....the other view I call weak A.I. That is the view that you can use the computer to study the mind as you can use it to study anything. And we have a lot of useful computational studies of cognitive processes. You can do useful computational simulations and models of cognitive processes as you can do computational... (Kindle Locations 786-804).

    13. G. F.: If you are right about the importance of these breakthroughs in neurophysiology, and we eventually come to know how the brain really works, does that mean that scholars who work on the philosophy of mind will become unemployed?

      John Searle:

      In fact , changes that are now going on in cognitive science make the philosophy of mind more interesting. Cognitive science was founded on the mistake that the brain is a digital computer and the mind is a computer program. It’s not the first time in history that a science was founded on a mistake. Chemistry was founded on alchemy, and cultural anthropology was founded on the false idea that people are infinitely malleable and variable. I don’t think that this mistake is fatal, but I do think that we are now moving into a new paradigm, away from computational cognitive science, to cognitive neuroscience. And I welcome that. Now we are investigating how the brain functions. And as we find out more about how the brain functions, we’re going to have a much more rigorous cognitive science. That seems to me not to put an end to philosophical discussion, but to create all sorts of philosophical problems that we didn’t have before. For example, I’ve just published an article in the Annual Review of Neuroscience, where I claim that a lot of the neuroscientific research is based on a false premise. It’s based on the premise that in order to understand how the brain causes consciousness, we should try to find out how it causes a single building-block of consciousness, such as the experience of red or a certain kind of sound. And I argue that the subject can only experience red if he or she is already conscious. He can only hear a sound if he’s already conscious. So I oppose what I call a building-block theory to the unified field theory. And I make a claim that the unified field of consciousness is what we should be investigating. And to investigate that, we have to investigate what’s the difference between the conscious brain and the unconscious brain. (Kindle Locations 809-823).

  11. In the light of that, I was interested to read the Introduction to Jerry Fodor’s The Mind Doesn’t Work That Way: The Scope and Limits of Computational Psychology, MIT Press, 2001. Fodor is generally seen as a paragon of computationalism, but he writes:
    “Over the years, I've written a number of books in praise of the Computational Theory of Mind (CTM often hereinafter). It is, in my view, by far the best theory of cognition that we've got; indeed, the only one we've got that's worth the bother of a serious discussion. There are facts about the mind that it accounts for and that we would be utterly at a loss to explain without it; and its central idea-that intentional processes are syntactic operations defined on mental representations-is strikingly elegant. There is, in short, every reason to suppose that the Computational Theory is part of the truth about cognition.' But it hadn't occurred to me that anyone could think that it's a very large part of the truth; still less that it's within miles of being the whole story about how the mind works. (Practitioners of artificial intelligence have sometimes said things that suggest they harbor such convictions. But, even by its own account, AI was generally supposed to be about engineering, not about science; and certainly not about philosophy.) So, then, when I wrote books about what a fine thing CTM is, I generally made it a point to include a section saying that I don't suppose that it could comprise more than a fragment of a full and satisfactory cognitive psychology; and that the most interesting-certainly the hardest-problems about thinking are unlikely to be much illuminated by any kind of computational theory we are now able to imagine. I guess I sort of took it for granted that even us ardent admirers of computational psychology were more or less agreed on that.”
    I’ll be interested to read his argument in full when I have time.

    Now to look at your latest comment under “Strong AI: The Story So Far”. I’ll be back…

    Replies
    1. I can sympathise with this to an extent. If I convince you that computationalism is correct (of course I won't -- nobody ever changes their minds in discussions like these!), then I won't have solved the problem of mind for you. Computationalism is just an attitude about the fundamental metaphysics of mind. It doesn't tell you how exactly concepts are represented and it doesn't tell you exactly how the brain processes information. Indeed, generic computationalism doesn't even approximately answer these questions (though subcategories of computationalism such as connectionism may be more specific). There are a huge number of empirical questions to be worked out, but those are largely the domain of psychology, neurology and computer science. The whole story of how the mind works will involve a greater understanding in all these areas, but the strictly philosophical problem seems to me to be largely answered by computationalism.

  12. Hi Inti.

    "It’s striking that each of us thinks the other is begging the question."

    I think that's a very common situation when discussing this particular issue. I agree with you that we are operating with very different intuitions and presuppositions.

    On the particular charge that I'm begging the question when I assume that qualia can be embedded in a concept map, I have to agree with you. I got ahead of myself. That is of course not at all persuasive to a non-computationalist. I was just describing how I see it without justifying my description. I can still make my point however using a less controversial statement. The issue was whether mental representations are like utterances, and I can still show that they may be quite unlike utterances by analogy to other forms of representation, such as relational databases etc, where we don't only have sequences of symbols but we have causal associations between them. The representation is a dynamic structure, and the shape of this structure imposes a kind of order absent in mere utterances. There is more than syntax at play. It is more like a mechanism than like words on a page. That said, please don't take from this that I think of mental representation as a relational database. It is much more complex, dynamic and sophisticated than this, while also being less robust.

    "including its critical failure to understand that all computational representations must be observer relative ... all representations are symbolic statements TO a conscious representational mind."

    So certain neural firing patterns in your brain are symbolic statements TO the conscious representational mind that is you and certain patterns of execution in an electronic artificial intelligence are symbolic statements TO a conscious representational mind that is an electronic artificial intelligence. I'm not failing to understand this at all. The problem is I think an AI can be conscious and you think it cannot. Both views are consistent as far as your statement goes.

    "Great, I agree with almost all of that description, but there is no necessary computational element in your description; it’s a biological, neurological process, through and through."

    In my view that's missing the point. Firstly, almost any process can be construed as computation, which Searle often gleefully points out. So any process at all is computational from a certain point of view, including this one. What is in question is whether it is useful to see it as computation, which is usually determined by whether the phenomenon will be reproduced if we implement it on another substrate. Photosynthesis is not, because simulated photosynthesis does not produce glucose. But what this example was discussing is the chain of cause and effect that links my mental representation of a car with an actual car, and the kind of thing I described can be reproduced in computers and robots, lending weight to the view that computer representations can actually have intrinsic meaning in the same way that neurological representations do. Where you say there is no need for computation, I say there is no need for biology.

    Replies
    1. "The point is that nothing that is physically causal can ever be reproduced in computers, by any program no matter how sophisticated."

      Put it this way. If I see a car, I can say "I see a car". My seeing of the car causes me to say "I see a car" via a circuitous physical chain of cause and effect. But a computer can do just the same thing. A computer, connected to a camera and a speaker, can see a car and say "I see a car" via a circuitous chain of cause and effect. Assuming this computer has some sophisticated image processing and identification software to match images to classes of object, that computer has a representation of some sort of the concept "car". My point is, since this representation is causally connected to physical cars in just the same way as a mental representation, it is plausible to say that it intrinsically means "car". Furthermore, that representation was not necessarily put there by a human programmer. The system could have been trained to represent cars as a novel concept. It could easily be that no human would be able to look at its internals and figure out which bits represent which concepts (depending of course on how it is designed).
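
      A rough sketch of the kind of causal chain I mean (the classifier below is a made-up stand-in for a trained model, not any real library): the word "car" ends up causally hooked to actual cars via the sensor and the training history, not via a programmer ascribing meaning to it at run time.

      # Light -> camera frame -> classifier -> utterance. Toy stand-ins throughout.
      def classify_frame(pixels):
          # Hypothetical placeholder for a trained image classifier.
          return "car" if sum(pixels) > 100 else "unknown"

      def perceive_and_report(frame):
          label = classify_frame(frame)
          if label == "car":
              return "I see a car"
          return "I see something I don't recognise"

      print(perceive_and_report([3] * 40))   # toy "frame" standing in for camera input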

      "It is the representational structure that makes it computational, and representation is never a physical phenomenon."

      There is cause and effect at both hardware and software levels. These are two complementary ways of describing the same thing. The hardware level describes the physical chain of cause and effect that you're talking about, but the software describes the pattern of this sequence. Substrate independence or multiple realisability means that any physical system with such a pattern of cause and effect (including cogwheels or whatever) will be conscious in the same way. I don't see this as coming back to bite computationalism. Far from it.

      "To put it very crudely, causal brain processes actually push matter about, computation doesn’t, ever."

      Of course it does! Computers push electrons about. Cogwheels turn. Pipes push water about. Computation can be seen as a level of description above the physical. Every part of the computation has both an algorithmic and a physical cause, and there is no conflict here because they are perfectly complementary, like describing the decision-making of a person as a product of atoms bumping, neurons firing, or a desire to eat cheesecake. The question is whether consciousness depends on the particular substances pushing matter about or on a higher level account of the structure of the chain of cause and effect. The claim that computation doesn't push matter about is strange because most computationalists do indeed imagine that matter must be pushed about to achieve consciousness.

      "Associations, in your sense, are representations"

      Well, no, not really. They are more than this. They are causal links.

      "At best, representations can describe or depict processes."

      If I have a light and a button, where pressing the button toggles the state of the light, then the two are associated and I can even describe this association as an algorithm. This association is different from that between a switch and a light, where the state of the switch is associated with a state of the light.

      But the association between a toggle button and a fan is much the same as the association between a toggle button and a light, so it is substrate independent. We can abstract away the structure of this association from a particular physical realisation and say that "toggle-control" is instantiated wherever such a causal association is implemented. Computationalism is just the view that consciousness is like "toggle-control" in this respect. It is instantiated wherever certain patterns of cause and effect are implemented, be they in brains or computers.
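
      Here is a minimal sketch of "toggle-control" as a substrate-independent pattern. The classes are purely illustrative: the same causal structure (each press flips the state of whatever is controlled) is instantiated whether the device is a light or a fan.

```python
class Light:
    def __init__(self):
        self.on = False
    def set_state(self, on):
        self.on = on

class Fan:
    def __init__(self):
        self.on = False
    def set_state(self, on):
        self.on = on

class ToggleButton:
    """The abstract pattern: each press inverts the state of the controlled device."""
    def __init__(self, device):
        self.device = device
        self.state = False
    def press(self):
        self.state = not self.state
        self.device.set_state(self.state)

# Two different "substrates", one pattern of cause and effect.
for device in (Light(), Fan()):
    button = ToggleButton(device)
    button.press()
    print(type(device).__name__, device.on)  # both print True
```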

    2. "It’s the underlying machine substrate that has even that power, and the program instruction only denotes that. Any error in this is all mine, not Searle’s, though"

      OK, well I won't blame Searle for that then. You're thinking of computer programs in terms of symbols and instructions, but I think of them as patterns of cause and effect. The symbols and instructions of computer code are just a means to the end of instantiating that pattern of cause and effect.

      "And the Churchlands, like you, claimed that Searle was looking at the wrong layer, or at the wrong scale, and they too missed the point that it is a qualitative difference, not one of scale or complexity, all the way through."

      This would be a fair comment if Searle had conclusively demonstrated a qualitative difference, but he hasn't. The virtual minds reply dismantles the Chinese room argument completely. All that is left to him are intuitions based on failing to imagine the quantitative differences between the simple systems he is capable of imagining and those that we would actually call conscious.

      "Only the history of how they came about would provide evidence."

      I strongly disagree with this. The genesis of a physical object cannot, except in a practical sense, determine what it is capable of. All that matters is how it is constituted. This idea that computers can be conscious if they evolved but not if they were programmed is very alien to my way of thinking. I understand your argument and Searle's that the intentionality we see in computers is only there because we perceive it to be, but I cannot understand the view that ascribing intentionality somehow prevents the manifestation of intrinsic intentionality.

      "I see that you think that philosophical zombies are logically impossible, assuming your strong version of computationalism. I think your are write, given your premise, but your premise is wrong."

      That's perfectly fair. It is my view that philosophical zombies are useful as a way of articulating the difference in intuitions. I have often said to Massimo Pigliucci on Scientia Salon and previously on Rationally Speaking that I regard his rejection of both computationalism and the philosophical zombie argument as inconsistent. David Chalmers seems to me to be the natural ally of the biological naturalists, though they often don't feel the same way.

      Cheers for the discussion, Inti. If all the typing and back and forth is too much I'd be up for a Skype conversation some time. You can contact me via the email form if you like.

    3. "Only the history of how they came about would provide evidence."

      I strongly disagree with this … the manifestation of intrinsic intentionality.


      I’m going to reply to just two points. I want to give the others more time. So I may have to go quiet for a while, but “I’ll be back”.

      Maybe Skype later, but at the moment asynchronous discussion is easier, thanks.


      Obviously I wasn’t clear in my SF story. I’m not perpetrating the genetic fallacy. I think it’s quite reasonable to say that there is a crucial difference between a system, of any type, that came about without any intelligent design, on the one hand, and one which produces the appearance of consciousness precisely because it was designed to produce that appearance, on the other. Even if a chatbot could perform faultlessly in the Turing test, I’d still be justified in assuming that it was using tricks that allow it to manipulate text rather than understand the content of the interchange. In that case, I’d say it’s legitimate to hold that knowledge of how the system came to exist is perfectly relevant.

      So I’m not proposing that “computers can be conscious if they evolved but not if they were programmed”. I’m saying that if I knew that it had evolved, I would know that there was no underhand trickery of the Eliza-but-much-more-sophisticated kind. I’d know that nobody was rigging the game. In that case, I would have empirical evidence that Searle is wrong, which was what I was discussing.

      I do realise that the kind of AI system you are defending is much more, in your view, than a super-Eliza chatbot.


      On ‘Intuitions’: This is one of those irregular verbs, isn’t it? I have rational insights, you have irrational intuitions!

      It’s popular with Searle’s opponents to say that he is only invoking intuitions. I don’t believe that’s true, but for now I’ll just say that the reverse seems to be true for people who think like me; it is computationalism that is underpinned by intuitions and assumptions. I remember when the popular media often called computers “electronic brains”, because they were the first devices that could reproduce the powers of calculating and processing data, the exclusive preserve of human brains up to that point. How natural to assume that this could be extrapolated to every power of the brain. This dovetailed neatly with the behaviourist errors that dominated philosophy of mind and psychology at the time. Later, computationalists often presented themselves as paragons of hard-headed materialist reason as opposed to “touchy-feely nonsense” about the intrinsic importance of the first person viewpoint.

      I’m not claiming that this is the only basis of computationalism, of course; just that it is one cultural bias that forms part of the intuition that makes the computationalist perspective seem credible.

      Here’s one example of a statement, by Jack Copeland in this case, that I’ve often seen or heard in computationalist attacks on Searle:

      “Few accept Searle's objection, but there is little agreement as to exactly what is wrong with it.”

      That ‘few accept’ is a biased perspective from within computationalist circles, of course, but I wouldn’t deny that for many years that perspective dominated, rightly or wrongly.

      But doesn’t that sound like a classic case of intuition-based, motivated reasoning: “we know Searle is wrong, now let’s find some arguments that show what we already know”?

      http://www.alanturing.net/turing_archive/pages/reference%20articles/what_is_AI/What%20is%20AI14.html

      Incidentally, when I first heard about the Turing Test, I did feel intuitively that it was wrong, despite being basically a computationalist at that point. I couldn’t pin it down rationally, though. I was attracted to Searle’s Chinese Room argument precisely because it goes beyond intuition.

    4. I'm not going to deny that computationalists have intuitions too. Indeed there is an essay you may want to read coming out on Scientia Salon next Monday to that effect.

  13. Hi Inti,

    Once again, and please don't lose sight of this, I'm talking about replacing ever-larger numbers of neurons with a single black box device which simulates all replaced neurons while preserving behaviour at the boundary of the device. I am not talking about replacing individual neurons with individual devices. From the premise of the Chinese Room, we can assume it is possible to simulate the behaviour of an entire brain, and so presumably smaller areas of a brain too. The question is not whether this is possible but whether doing so achieves consciousness.
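
    To pin down the stipulation, here is a minimal schematic sketch; the interface and numbers are hypothetical. The point is only that the black box is judged by the signals it exchanges at its boundary, however many neurons it has replaced internally.

```python
from typing import Protocol, Sequence

class BrainRegion(Protocol):
    def step(self, afferent: Sequence[float]) -> Sequence[float]:
        """Map incoming boundary signals to outgoing boundary signals."""
        ...

class BiologicalRegion:
    """Stand-in for the original neural tissue."""
    def step(self, afferent):
        return [x * 0.5 for x in afferent]

class BlackBoxRegion:
    """Simulates every neuron it has replaced, preserving boundary behaviour by stipulation."""
    def step(self, afferent):
        return [x * 0.5 for x in afferent]

signals = [1.0, 0.25, -0.5]
assert BiologicalRegion().step(signals) == BlackBoxRegion().step(signals)
```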

    "Searle seems to assume a slow decline"

    If that is so, then Searle hasn't understood the question. If we stipulate that the interactions of the black box are identical to those of the replaced brain matter, then it is not possible that there could be any difference in reported experience.

    "on a very small scale the effect might be negligible"
    "You can’t propose to extend the black box to encompass major parts of the brain, because if you do, you are removing –knocking out -the areas where the biologically causal non-computational interactions are taking place, and substituting non-causal computational elements instead"

    Right. So if only 2 neurons are replaced, then we can assume the impact on consciousness is negligible. If a whole hemisphere is replaced, we can assume that the impact on consciousness is severe. Perhaps we have created a philosophical zombie? It would at least behave normally on the outside -- we have stipulated that all behaviour at the interface of the black box is the same. But we would have replaced major parts of the brain responsible for consciousness. So does this mean we have created a philosophical zombie or a half-zombie? Is there a point of sharp transition, where we switch from being fully conscious to fully unconscious as the black box grows outwards -- a straw that breaks the camel's back?

    I imagine you don't know, and that is to be expected. But this is not a simple empirical question. The larger point is that this transition appears to be meaningless. There is no observer who could see the difference between a sharp transition and a gradual decline, not even the subject of the experiment. It is therefore clear to me that this is a distinction without a difference.

    "And you can’t just claim that the extra-neuron changes would still work in place of that network; this is precisely what is at issue; that claim would simply reassert computationalism’s basic claim in different words."

    Remember that in the Chinese Room argument, Searle concedes the point that we might be able to make a computer system which gives the same inputs and outputs as a biological brain. If that is true for the brain, then I think it is not too controversial to propose that it is true for a portion of the brain. So to argue that it is not possible to make a black box which gives the same inputs and outputs as a continuous region of brain matter is to make an entirely different argument against not only strong AI but weak AI, and one that would seem to need some argument to back it up.

    I really think this neuron thought experiment is a vital one, and I do really want to know how you answer it, because I want to know how Searle would answer it and you two seem to think alike. Unfortunately, you seem so far to have missed the point of it somewhat. Again, I'd suggest Skype at some point, as a conversation might be more productive if we continue to talk past each other.

    1. It seems neither of us is making sense of the other’s point.
      Mea culpa. I think I see why that is and what’s leading you astray, and it’s at least partly my fault, in two ways. First, I hoped we could temporarily set aside some important disagreements, like the systems reply and the simulation vs reproduction argument, and still discuss this topic.
      Maybe I made a related mistake in thinking I could use computationally loaded terms like input and output innocently, because I’d hoped I’d already explained why, in the brain, you have to keep the terms “input”, “output” and “information” inside scare quotes, indicating a useful fiction, like saying that evolution “wants” to do this or that, or “has goals”. Those are loose shorthand, and if we took them literally, we’d be led astray. That’s what does happen with information-processing talk in the context of mind and brain. I now see that I invited this misunderstanding.
      I hope I’ve made myself clearer and brought out the misunderstandings here. You may disagree, but please go beyond restating that Searle is wrong and computationalism right; you need to show why my train of thought here is inconsistent. This doesn’t prove Searle’s perspective, it merely sets it out, but it does show why, within that framework, the neuron argument has no implications for Strong AI or Searle’s position.
      1. I’ve discussed the neuron as a black box. The point was to show that how it works internally has no implications for or against the computational or the biological naturalist theories. Many different realisations of a neuron might fit into either of these theories.

      2. The reason for this is that the way changes in the neuron appear to elements outside it remains the same whatever the internal implementation. I’ve stopped using the terms input and output, because I realise they carry computationalist assumptions that confuse the issue – see below.


      3. We already know from neurobiology that some parts of the brain appear to be implicated in consciousness. Let’s call those parts the “brain box”; that’s just a conceptual, schematic name.

      4. You argue that we could expand the black box arbitrarily to enclose any portion of the brain. In itself, that’s probably true. So you mistake my point when you say: “So to argue that it is not possible to make a black box which gives the same inputs and outputs as a continuous region of brain matter is to make an entirely different argument against not only strong AI but weak AI, and one that would seem to need some argument to back it up.” That’s not what I’ve said.

  14. 5. Now let’s focus on what happens inside the brain box. Here, biological processes combine to give rise to consciousness. We do not know that because we examine inputs and outputs, but because we are aware of it in our own minds: we have conscious thoughts, emotions, pains, tickles and so on. We are conscious, and it is a first person, subjective experience. We can’t coherently deny that it exists, though we may be mistaken about the details; we feel as if we have a continuous visual field, like a movie screen, though we don’t; we may believe that we are in love, then realise that we weren’t. The one thing that we can’t deny or reduce away is the fact that we are conscious.


    6. I do not need any “inputs” or “outputs” from the brain box to know that I am conscious. I’m directly aware of it precisely because it is “me”; there is no gap over which communication needs to take place between my “brain box” and me; they are the same thing under different aspects.

    7. I will of course need some processes that could be called “inputs and outputs” to allow my senses to work and to move my body. Conceptually, these are interactions with other systems at the boundaries of the brain box. In the precise detail, it’s likely to be less sharply divided, but that’s not the point.


    8. Notice the qualitative difference between this brain box setup and that of the neuron’s black box. As far as we know at present, it is the interactions with other neurons that are central to the neuron’s role. In the case of the brain box, it is not interactions with other elements that define it, but the single process with two aspects, first-person and biological machinery, that goes on within it. That’s why expanding the black box doesn’t have the implications you claim.

    9. I’m puzzled by your claim that “The larger point is that this transition appears to be meaningless. There is no observer who could see the difference between a sharp transition and a gradual decline, not even the subject of the experiment. It is therefore clear to me that this is a distinction without a difference.” Of course there is an observer: the conscious individual concerned, at least until that consciousness is extinguished. I don’t understand how you can deny that. What we are discussing in this part of the conversation is precisely the nature of the first person experience.


    10. If we were to replace the working elements of the brain box with computational devices, we would kill its consciousness. We can’t be sure how that would seem to the victim, but it follows from biological naturalism. You can of course claim that the processes within the brain box are really computational; that they are made up of computational elements, representations, inputs and outputs, information. All these are loaded terms, and if Searle is right, they all describe observer-relative features, and so can’t be claimed to give rise to consciousness without a vicious homunculus fallacy. Of course he might be wrong, but that’s not proved by asserting computationalism. They are at best convenient metaphors, like saying that evolution “wants” to do something; useful as long as we remember that they are only metaphors. Many neurobiologists use seemingly computational language in that way, as a handy shorthand, while frankly admitting that that’s all they think it is. Cognitive scientists mistake the metaphor for a description of what actually happens.

    1. 11. Computationalism isn’t a very useful way to explain the mind. At first, it seemed to solve all sorts of philosophical difficulties; but Searle has shown that it is not needed to solve them. Biological naturalism does the same task, more parsimoniously and with more connection to the results of neurobiology. Computer functionalism/computationalism/cognitivism is overdetermined and redundant.
      Your error about the nature of “input” and “output” is the same error that Searle complains of repeatedly, and the reason that, in his experience, computationalists just don’t get it. They think “there must be some information in there!” No; information is a metaphor for the physical patterns themselves. So the biological naturalist approach doesn’t rely on input-output processing. As I said above, in the brain the terms “input”, “output” and “information” have to stay inside scare quotes, as useful fictions; taken literally, they lead us astray.
      I notice that in Consciousness is not like Photosynthesis you make this point, which is half wrong, half right:
      “The reason is simple. The brain is an information processing machine, [Nope! See below] and so the only product the brain needs to make reliably is output. Output is not a physical product, it is information. Like a novel, which can be paperback or hardback, audio book or ebook, as long as the content is the same then nothing essential has changed.”
      This makes exactly the point I made before: output is not physical, it’s computational. I know that computations work on a physical substrate, as you replied when I made this point myself, but the substrate isn’t the information; it’s just a contingent means of presenting it. In the same way, the information in a book is not the ink on a page, or the bits, or whatever. It is a pattern with purely observer relative meaning. That’s a basic proposition of cognitive science, from multiple realisability. So yes, RIGHT: “Output is not a physical product, it is information”. And so it is observer dependent, so it can’t be the basis of consciousness without a homunculus to observe it!
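
      As a trivial illustration of the multiple-realisability point (the encodings are arbitrary examples): the same content can be carried by quite different byte patterns, and nothing about the bytes themselves tells you which pattern is “the” information.

```python
text = "Output is not a physical product, it is information."
utf8 = text.encode("utf-8")
utf16 = text.encode("utf-16")

print(utf8 == utf16)                                   # False: different physical patterns
print(utf8.decode("utf-8") == utf16.decode("utf-16"))  # True: same content
```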

      By the way, there are several reasons why I prefer to discuss in written form, for now at least: asynchronous discussion is easier to maintain and fit round other commitments; I like to be able to think through and express my arguments as clearly and explicitly as possible, which can be more difficult in conversation; and I kind of hope that someone else might find these discussions useful, someday! I do agree that the Blogspot format isn’t ideal; it breaks up the flow when an argument won’t fit into a single comment.
      --
      An example of a discussion I wanted to postpone was the debate over the point that, just as simulated fire won’t burn, a simulated mind won’t be conscious. If that’s right, it undermines computationalist assumptions. In reply you made the quite popular point that, while fire and computer models of fire are made of very different stuff, the mind and a model of the mind are made of the same stuff, such as computational structures or information processing. I now see that these contrary positions have come back to bite us: the meanings we are each assuming for some of our key concepts are radically different. I thought I’d already posted the first extract, but I can’t see it or find it by search, so I’ve posted it after this comment, with another quotation from Searle that I think is to the point.

    2. This is from Critique of Cognitive Reason, which is chapter 9 of Rediscovery of the Mind, but is also online in an earlier form at http://www.federaljack.com/ebooks/Consciousness%20Books%20Collection/John%20R.%20Searle%20-%20The%20Critique%20of%20Cognitive%20Reason.pdf
      Further Difficulty: The Brain Does Not Do Information Processing. In this section I turn finally to what I think is, in some ways, the central issue in all of this, the issue of information processing. Many people in the “cognitive science” scientific paradigm will feel that much of my discussion is simply irrelevant, and they will argue against it as follows:
      “There is a difference between the brain and all of the other systems you have been describing, and this difference explains why a computational simulation in the case of the other systems is a mere simulation, whereas in the case of the brain a computational simulation is actually duplicating and not merely modeling the functional properties of the brain. The reason is that the brain, unlike these other systems, is an information processing system. And this fact about the brain is, in your words, intrinsic. It is just a fact about biology that the brain functions to process information, and as we can also process the same information computationally, computational models of brain processes have a different role altogether from computational models of, for example, the weather.
      So there is a well-defined research question: Are the computational procedures by which the brain processes information the same as the procedures by which computers process the same information?”
      What I just imagined an opponent saying embodies one of the worst mistakes in cognitive science. The mistake is to suppose that in the sense in which computers are used to process information, brains also process information.
      To see that that is a mistake, contrast what goes on in the computer with what goes on in the brain. In the case of the computer, an outside agent encodes some information in a form that can be processed by the circuitry of the computer. That is, he or she provides a syntactical realization of the information that the computer can implement in, for example, different voltage levels. The computer then goes through a series of electrical stages that the outside agent can interpret both syntactically and semantically even though, of course, the hardware has no intrinsic syntax or semantics: It is all in the eye of the beholder. And the physics does not matter, provided only that you can get it to implement the algorithm. Finally, an output is produced in the form of physical phenomena, for example, a printout, which an observer can interpret as symbols with a syntax and a semantics. But now contrast that with the brain. In the case of the brain, none of the relevant neurobiological processes are observer relative (though of course, like anything, they can be described from an observer-relative point of view), and the specificity of the neurophysiology matters desperately.

  15. (continuing Searle extract) To make this difference clear, let us go through an example. Suppose I see a car coming toward me. A standard computational model of vision will take in information about the visual array on my retina and eventually print out the sentence, “There is a car coming toward me.”
    [DM, I know you’ve said that it is not about sentences, though computationalists like Fodor claim that it is; still, the principle is the same in ANY computational system – it’s a purely symbolic response, so Searle’s point still applies. Inti]
    But that is not what happens in the actual biology. In the biology a concrete and specific series of electrochemical reactions are set up by the assault of the photons on the photoreceptor cells of my retina, and this entire process eventually results in a concrete visual experience. The biological reality is not that of a bunch of words or symbols being produced by the visual system; rather, it is a matter of a concrete specific conscious visual event – this very visual experience. That concrete visual event is as specific and as concrete as a hurricane or the digestion of a meal. We can, with the computer, make an information processing model of that event or of its production, as we can make an information processing model of the weather, digestion, or any other phenomenon, but the phenomena themselves are not thereby information processing systems.
    In short, the sense of information processing that is used in cognitive science is at much too high a level of abstraction to capture the concrete biological reality of intrinsic intentionality. The “information” in the brain is always specific to some modality or other. It is specific to thought, or vision, or hearing, or touch, for example. The level of information processing described in the cognitive science computational models of cognition, on the other hand, is simply a matter of getting a set of symbols as output in response to a set of symbols as input. We are blinded to this difference by the fact that the sentence, “I see a car coming toward me,” can be used to record both the visual intentionality and the output of the computational model of vision. But this should not obscure the fact that the visual experience is a concrete conscious event and is produced in the brain by specific electrochemical biological processes. To confuse these events and processes with formal symbol manipulation is to confuse the reality with the model. The upshot of this part of the discussion is that in the sense of “information” used in cognitive science, it is simply false to say that the brain is an information processing device. …

    1. And from Faigenbaum, Gustavo (2005-07-09). Conversations with John Searle (Kindle Locations 949-966).
      John Searle: There is something, though, that I’m not completely satisfied about my own views. We need to think very seriously about the notion of information, and information processing. I argue in various places that it’s a mistake to think that the unconscious level of information processing that goes on in the brain is literally information processing. That’s a metaphorical attribution to the steps the brain goes through, and whereby it produces intrinsic information in the form of thoughts and perceptions in the agent. But in the stages in between, though they impact in such a way as to cause for example a conscious perception, there’s literally no information. There is just a sequence of causes and effects. I think that’s right, but I think we need to say some more about it. I mean, why is it that the information processing terminology is so useful to us? Why does it seem to be so functional and so important when you’re describing how the brain is able to produce a visual experience, for example? Now, the visual experience, in my view, has intrinsic information because it’s got intentionality. It has intentional content. But the actual sequence of neurological events that lead from the retina to the visual experience doesn’t have any information. Those events are just blind processes that we can describe as if they had some mental content or intentional content. But they’re not literally that way. However, I think we need to know more. The fact is that the information processing terminology is very useful.
      G. F.: In developmental cognitive psychology, information processing is a very basic assumption. Yet it is not clear for me whether cognitive psychologists are referring to a neurophysiological, hard-wired process, or to an intermediate level of information processing, between the level of neurophysiology and the level of conscious, intentional experience.

    2. Faigenbaum, Gustavo (2005-07-09). Conversations with John Searle (Kindle Locations 929-947). LibrosEnRed. Kindle Edition.
      John Searle: Well, originally cognitive science was based on the assumption that there was an intermediate level between the level of common sense intentionality and the level of neurobiology. And that was the level of the computer program. Now that’s false. I think I refuted that view. There is no such level. However, the fact remains that it is useful to describe a lot of the processes that go on at the neurobiological level as if they were thinking, as if they were doing information processing. Now I say that you need to distinguish between those features of the world that exist only relative to us, the observer-relative, and the observer-independent features. Now if the book is on the table, and I see the book is on the table, that’s observer independent. It doesn’t matter what other people think. But when people say that information about the book was already present in the lateral geniculate nucleus, that’s observer relative. They’re saying there’s some neuron firing in the lateral geniculate nucleus, which later on produces such and such information in the experience. I think that’s a harmless way of speaking provided you don’t think that there’s literally some intentionality going on at this lower level. There isn’t. Most people in cognitive science just don’t get it. They don’t understand what I’m talking about. They think it’s obvious there is information there. Don’t you see? The guy gets photons on a retina and later on he sees a book. There must be some information in between. It doesn’t follow, of course, but it’s very hard to get people to see that it doesn’t.
      G. F.: Therefore, your argument basically consists in a conceptual distinction.
      John Searle: Yes. If you’re going to do science, you have to understand the distinction between the features of reality that are observer independent and those that are observer relative. You can’t begin to do science without that.
      (My emphases)

      I really will have to give more time to other projects for a while, so I’ll have to try to resist the temptation to come back in the next couple of weeks. I suspect from a quick skim that I might want to devote what spare time I have to replying to the latest post on Scientia Salon, too. See you here later, though.

  16. Hi Inti,

    OK, I understand that we're just going to assume that what happens within a neuron is not important for consciousness but what happens between neurons is. On this assumption, combined with the rest of Searle's view, I agree that replacing all the neurons individually does not have implications for consciousness but replacing the whole lot does.

    So I can accept points 1-8 without too much complaint. Point 9 is where I think Searle's view falls apart.

    "Of course there is an observer; the conscious individual concerned, at least until that consciousness is extinguished. I don’t understand how you can deny that."

    Let me try to explain.

    We're stipulating that all the behaviour of the black box region is perfectly like a real brain. So, as the black box begins to encroach into the centres of consciousness (the brain box, as you call it), Searle's view suggests that consciousness begins to diminish in some way. The problem is that if this were happening to you, you could not possibly know it, because if you could know it you could report the change, and if you could report the change then your brain is behaving differently, which we stipulated is not the case.

    So, on Searle's view, you can transition smoothly from 100% conscious to 100% unconscious without ever noticing that anything is happening to you. If you can't notice, neither can anybody else. There is no observer who can detect the difference.

    I'm going to leave all the other points alone, for now, but I will just reassert my view that the brain is doing information processing, that it does accept input and produce output, and that computation is not observer relative on mathematical Platonism. I can see how some of Searle's arguments may be problematic for some computationalists, but not for my own idiosyncratic version which I feel is correct.

    1. Hi DM

      I think I’m getting the point, but I’m still not sure. Are you still thinking about the case when you expand beyond the black box neuron and start to encroach on the brain box level? In that case, I think you have missed part of my point. We have definitely not stipulated that the behaviour of the brain would remain unchanged. That stipulation only applied to the neuron.

      If I understand you, you are thinking that you could expand the black box to encroach on ever more of the brain box, and the stipulation about “inputs” and “outputs” would still apply to the expanded boundary, at whatever scale. But the point of my steps 5, 6 and 8 was to show why that’s not the case.
      The key point is this: for the neuronal black box, the critical factor is its interaction with other neurons, all of which extend outwards from the BB. For the brain box, the crucial factor for consciousness lies entirely within it. As you expand the black box’s boundary (BB for short) into the brain box, you are cutting off elements of the brain box’s internal process and hiding them, within the BB, from the remainder of the brain box; they become inaccessible to the brain box, sealed off within the BB.

      I can anticipate you saying “Why should that be the case? How could the remaining parts of the brain box know that anything has changed?” The answer is: because we are now removing the biological level of interaction at the level where it is crucial to consciousness. You are replacing it with computational processes, which are only symbolic descriptions of what previously happened on a physical level. To allow the BB to extend to the level of the brain box without effect would be to grant computationalism as a premise.

      I suppose the most concise way to make the same point is; in the case of the black box, consciousness is on the outside looking in (or failing to); in the case of the brain box, consciousness is on the inside looking out.

      I think that’s right. However, I want to remind you that this is only a side issue. Nothing in the idea that neurons could be treated as black boxes is vital to biological naturalism. You could persuade me that it doesn’t work, in which case I’d say fine, then it has to be biological causation inside the neuron, as well. You could disagree, obviously, but then we are just returning to the core argument.


    2. I have a genuine question to ask you. First, to clarify why I’m asking: I appreciate that you have an idiosyncratic twist on computationalism. However, leaving that aside for a moment, I had planned to ask you about your sympathy for connectionism on Scientia Salon.

      I have a feeling that there is a sharp divide down the middle of connectionism. On one side, it is still a variant of computationalism: it doesn’t matter how you arrange the computational elements as long as they are still computation, i.e. algorithmic symbol processing, and so it is not immune to Searle’s arguments.

      On the other side of the divide, the strongly “hardware-based” version, it may be that it stops being computational at all, so that Searle’s response to the Many Mansions reply applies: fine, that might give consciousness, because it doesn’t assume that computation is a sufficient cause in itself. It might prove to have the same physical causal powers as biological systems.

      You suggest that Searle’s concept of computation is too narrow. I disagree: of course you could redefine the term to include the hypothetical “hard” second form of connectionism, but that would be changing the subject, rather like the move you called Coel on, in Scientia Salon. Short of that, I think that Searle’s various arguments about syntax, symbols and the observer-relative still apply to any form of digital data processing, which includes computationally simulated connectionist machines.

      What I wanted to ask you was: which particular connectionist projects do you feel are promising? Can you give me some references for them? I’d like to look at them in more detail and see for myself whether they really do escape Searle’s criticism.

      I know there is another difference between us that I still need to investigate in more detail, at a later moment: you don’t accept the claimed distinction between the causal, physical nature of processes in the brain, and the non-physical, symbolic nature of computation. You think that must be wrong, if only because any kind of computer must have a physical substrate. I think that’s not the point, precisely because computationalism depends on multiple realisability and a functional, not a physical, view of what computation is. If you are right, the argument that “simulation does not equal reproduction” probably fails in the case of the computer/mind. If I’m right, I have a feeling that the implications extend out into many of the attempted replies to the Chinese Room, and further.

    3. Hi Inti,

      I can't tell you how much I value this conversation. I've been frustrated for a long time trying to discuss Searle's view with people who don't really get it, including people who call themselves biological naturalists.

      "Are you still thinking about the case when you expand beyond the black box neuron and start to encroach on the brain box level?"

      Yes. I'm talking about where the black box, a single device, begins to emulate and replace a greater and greater number of neurons, eventually taking over the whole brain. I am aware that this is not the same thought experiment Searle was discussing. It is a different one, and one much more challenging to biological naturalism.

      If there is a discrete part of the brain responsible for consciousness (and I'm far from certain of this), the black box will eventually encroach on this area and consciousness will start to fade (on your view at least). Even if there isn't a brain box, we're still looking at a smooth transition between consciousness and unconsciousness.

      "We have definitely not stipulated that the behaviour of the brain would remain unchanged."

      I am stipulating it.

      If you object to this stipulation, then you have a problem not only with strong AI but even with weak AI. If the behaviour of the brain is thought of as either the signals it emits in response to sensory signals, or as the macroscopic behaviour of an organism, weak AI claims that it should be possible to build a computational device which behaves exactly like the brain. Strong AI claims it is conscious.

      So I'm imagining biological neurons wired into an ever-growing black box. Everything within the black box is simulated perfectly so that from the point of view of any of the biological neurons nothing has changed. By the time the black box has engulfed the whole brain, you're left with a person that has a computer in their head but is outwardly normal. They have become in effect a philosophical zombie (on your view).

      "To allow the BB to extend to the level of the brain box without effect would be to grant computationalism as a premise."

      No. It would be to grant weak AI as a premise. Computationalism assumes that the black box would be conscious, but I'm only assuming that it behaves like a brain from the perspective of an external observer. So I'm making claims only about behaviour, not about consciousness. That's why I'm going along with your view that consciousness would be diminished.

      But it turns out that this view is absurd, and I think it is fatal for biological naturalism.

    4. I imagine the divide you perceive in connectionism is only on the biological naturalist side. I doubt there are any serious AI researchers who think it makes any difference besides cost and efficiency whether neurons are implemented in software or hardware.

      I am not an expert on connectionist projects. I don't think there are any promising projects right now (i.e. there are no current research projects that look like they are going to produce a conscious computer). But there are lots of practical applications of connectionist approaches to performing tasks that brains are stereotypically good at but hand-designed algorithms are not.

      But I'm not an expert in the field, and I'm certainly not current on bleeding edge stuff.

  17. I've had many similar discussions on syntax/semantics and Searle's Chinese Room! You can find more on inferring semantics from syntax by reading up on grammar inference/induction and semantic inference/induction.
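
    For a flavour of what grammar inference looks like in practice, here is a minimal sketch of one classic starting point: a prefix-tree acceptor built from positive example strings. Real grammar-induction algorithms (RPNI, for example) go on to merge states; this toy version only accepts exactly the strings it has seen.

```python
def build_prefix_tree(examples):
    """Build a prefix-tree acceptor (a trie) from positive example strings."""
    root = {}
    for word in examples:
        node = root
        for symbol in word:
            node = node.setdefault(symbol, {})
        node["<accept>"] = True
    return root

def accepts(tree, word):
    """Check whether the acceptor recognises a string."""
    node = tree
    for symbol in word:
        if symbol not in node:
            return False
        node = node[symbol]
    return node.get("<accept>", False)

tree = build_prefix_tree(["ab", "abb", "abbb"])
print(accepts(tree, "abb"))  # True
print(accepts(tree, "ba"))   # False
```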
