I wanted to come out of blog dormancy to write up my thoughts on what I feel is a very important argument against computationalism. The argument advances the view that there is no objective fact of the matter about which computations a physical system is computing, and if this is the case it would certainly seem to problematise computationalism (the view that what it is to be a conscious mind is just to perform the right kind of computation).
In this post I will explain the argument and some of the common responses to it. I'll reserve my own response (which is quite different from that of most computationalists) for a future post.
The basic idea is not so new, going back at least to the late 80's and early 90's with arguments from Putnam[1] and Searle[2]. Searle neatly captures it as follows:
On the standard textbook definition of computation, it is hard to see how to avoid the following results:
- For any object there is some description of that object such that under that description the object is a digital computer.
- For any program and for any sufficiently complex object, there is some description of the object under which it is implementing the program.
Thus for example the wall behind my back is right now implementing the Wordstar program, because there is some pattern of molecule movements that is isomorphic with the formal structure of Wordstar. But if the wall is implementing Wordstar, then if it is a big enough wall it is implementing any program, including any program implemented in the brain. (pp. 208-9)
Searle's treatment of the problem is left rather vague. Perhaps many computationalists would be happy to dismiss such an idea as preposterous or absurd. However, Putnam justified in detail a very similar but more precise claim: that any open physical system can be interpreted as implementing any finite state automaton (FSA), where FSA is presumably more or less Searle's textbook definition of computation.
Because Putnam's treatment is so rigorous, let's focus on that for now and begin with an introduction to Finite State Automata (feel free to skip ahead if you are already familiar with the concept).
Finite State Automata
Firstly, I think it's important to point out that the FSA model is strictly less powerful than the Turing machine, but only because it has a finite set of possible states while the Turing machine has an infinite amount of tape to work with. As such, it ought to be possible to model any realizable digital computation with an FSA, and indeed the FSA could be said to be a more realistic model than the Turing machine precisely because its set of states is finite.

An FSA is always in precisely one state at a time, which means that right from the outset it diverges quite significantly from how programmers tend to think of algorithms and computation, where state is really a complex vector composed of variables and data structures. Conversely, the state of an FSA doesn't really have any content: there are only simple rules governing when it transitions to the next state and what output it produces in each state.
Let's make this a little more concrete, with an unrealistic toy example of a shop system where if the value of the float in the till is less than £20 and the till takes cash, then we need to note that we are short of cash in that till in order to pay out change. A programmer might say something like "IF $till.float < 20 AND $till.type = 'CASH' THEN SET $till.shortOfCash = true", where the dollar signs denote variables being read from and written to. However, in an FSA there are no such variables. Something like this logic would instead be "state A goes to state B", where state A corresponds to the state where the till has less than £20 and accepts cash but we have not yet noted that the till is short of cash, while state B is the state where the till has less than £20 and accepts cash but we have recorded that the till is short of cash.
Suppose that when a till is short of cash a warning light flashes. In the programmer's model, this light would be associated with a variable such as $till.shortOfCash. In the FSA model we would just associate this output with certain states (such as B) and not others (such as A).
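To make the contrast concrete, here is a minimal sketch in Python (the state labels and till fields are my own illustrative inventions, not anything from the literature): the programmer's version manipulates variables, while the FSA version is nothing but a table of labelled states, transitions and outputs.

```python
# Programmer's view: state lives in named variables.
def update_till(till: dict) -> dict:
    if till["float"] < 20 and till["type"] == "CASH":
        till["short_of_cash"] = True  # turns on the warning light
    return till

# FSA view: state is an opaque label, and the tables are the whole program.
# A: float < 20, accepts cash, shortage not yet noted (light off)
# B: float < 20, accepts cash, shortage noted         (light on)
transitions = {"A": "B", "B": "B"}          # "state A goes to state B"
outputs     = {"A": "light off", "B": "light on"}

state = "A"
for _ in range(3):
    print(state, "->", outputs[state])
    state = transitions[state]
```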
The number of variables programmers use in any reasonably complex piece of software is quite large, and the number of possible values each variable could hold is frankly enormous. When you consider the combinatorial possibilities for all these variables, the number of possible states quickly expands to ridiculous proportions: a machine with just one kilobyte of memory already has 2^8192 possible states. So in most computations, the number of states you would need to account for in your FSA state transition tables is vast (to put it mildly). This is why, even though the FSA is useful as a mathematical abstraction of computation, it is worthless as a programming paradigm.
All the same, it is not hard to see that any computation can be described as an FSA. At any given time, a physical computer is in a particular state: all of its registers and memory addresses have particular values, and this ensemble of values constitutes an identifiable state we can label. Forgetting input for a moment, the next state the computer visits and whatever output it might produce is completely determined by the current state. An FSA model would just list all possible states by their labels, along with the successor state and output for each. This ought to be enough to capture anything a computer could ever do. Even input can be incorporated if we simply treat it as part of the state. As such, if any algorithm can pass the Turing Test and do all the information processing tasks a human brain can do, then so could an FSA.
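As a toy illustration of this point (my own sketch, not anything from the literature), we can bundle the entire state of a miniature deterministic machine into one tuple and mechanically enumerate the FSA it induces:

```python
# Toy deterministic "computer": its entire state is the tuple (pc, acc),
# standing in for the full ensemble of registers and memory.
def step(state):
    pc, acc = state
    return ((pc + 1) % 4, (acc + pc) % 8)  # a trivial hardwired program

# Enumerate the induced FSA: each distinct tuple is an opaque state
# label, and step() supplies the transition table.
table, state = {}, (0, 0)
while state not in table:      # the state space is finite, so this halts
    table[state] = step(state)
    state = table[state]

for s, nxt in table.items():
    print(f"state {s} -> state {nxt}")
```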
(Aside: Searle is often ridiculed for implying that his Chinese Room thought experiment could be implemented with something like a big lookup table matching questions with responses, but consideration of the FSA model implies that in principle it could be. With the right FSA table, every interaction with the room would put the FSA in a novel state, and so he could mimic the ability of a conventional algorithm to learn and change and give different responses to the same input at different times. Right away we start to see that conventional computationalist attitudes, such as faith in the Turing Test as a detector of consciousness, are in trouble, because it is very hard to see how something as simple as a big lookup table could produce consciousness. All Searle would need to do to answer a question is to look it up in a big dictionary for his current state (he would have one such dictionary for each state) mapping input Chinese to output Chinese and identifying the next state. It is often assumed that Searle would need to take a long time simulating neurons and so on, but if he has these dictionaries then that is not so. There is a tradeoff of time versus memory, though, because the number of dictionaries he would need and the size of each dictionary are frankly absurd.)
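A sketch of the bookkeeping involved might look like the following (the dictionary contents are hypothetical placeholders of my own, of course; a real table would be astronomically large). Note how the same input gets different replies as the state advances:

```python
# One dictionary per FSA state, mapping input -> (reply, next state).
dictionaries = {
    "S0": {"你好": ("你好！", "S1")},
    "S1": {"你好": ("我们又见面了。", "S2")},    # same input, different reply
    "S2": {"你好": ("你好，第三次了！", "S2")},
}

state = "S0"
for question in ["你好", "你好", "你好"]:
    reply, state = dictionaries[state][question]
    print(reply)
```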
Putnam's Rock
I've discussed how we can interpret a computer as implementing an FSA. The problem for computationalists is that we can perform much the same kind of interpretation on any open physical system (e.g. a rock) and map this to any FSA we like! Like a computer, the state of any physical system is defined as the ensemble of microstates we can identify in the system (instead of registers, we might use the dispositions and charges and momenta of all the atoms in the system). And like a computer, each state causally depends on preceding states. If we call the state at time 0 A, and the state at time 1 B, then we have shown that the physical system implements the FSA "A goes to B", and so it can be said to perform the example till-related computation we had above. All that's missing is the output. However, if we had the ability to scan the microstate of the physical system, then producing the output corresponding to the current logical FSA state would not be difficult.

Besides, I'm not convinced that either input or output need be of much concern. We can imagine a computer simulation of a rich virtual environment hosting putatively conscious AIs but which has no input or output. If we can map the states of this computation to the states of a physical system such as a rock, how can we justify our intuition that the computer really is running the computation (and so hosting conscious entities) but the rock is not?
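Here is my own toy rendering of Putnam's trick: record the successive microstates of any system that never exactly repeats a state, then simply declare a mapping from those microstates to the states of whatever FSA run you fancy. Nothing constrains the choice:

```python
import random

# Stand-in for an open physical system: each tick yields a fresh,
# non-repeating "microstate" (in reality: positions, charges, momenta).
random.seed(0)
microstates = [tuple(random.random() for _ in range(3)) for _ in range(4)]

# The FSA run we want to "discover" in the rock: A -> B -> C -> D.
desired_run = ["A", "B", "C", "D"]

# Putnam-style mapping: simply declare that each observed microstate
# *is* the FSA state we need it to be.
mapping = dict(zip(microstates, desired_run))

for m in microstates:
    print(f"physical state {m} |-> logical state {mapping[m]}")
```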
There are a number of angles one could use to question Putnam's result, some more successful than others. I'll get to those a little later, but for now I'll just note that, superficially at least, Putnam would appear to be correct. If to instantiate a computation is just to be describable as the right kind of FSA, then it would appear that every physical system is performing any computation you could care to mention.
Bishop's Pixies
John Mark Bishop has published a number of papers ([3],[4],[5],[6]) which expound on the implications of Putnam's result and related arguments, also answering some challenges that have been raised. They are well worth a read. If you don't have the time, the video above may be of interest.
Bishop interprets Putnam's result to mean that computationalism demands that every physical system is host to a multitude of conscious minds (which he humorously refers to as 'pixies'), by simple virtue of their natural evolution through a succession of distinct states. Since a computationalist believes that to be a conscious mind is just to implement the right kind of computation, and since any physical system is implementing any and all computations simultaneously (depending only on how you interpret it), then all possible conscious minds must be instantiated simultaneously in every rock. For Bishop, this is the most absurd kind of panpsychist position imaginable and so demonstrates that computationalism must be false.
I'm not sure that Bishop really adds much to Putnam's original argument, but he has done a great job of explaining it and advocating it in recent times, as well as taking an interest in addressing objections. For these contributions he is to be commended.
Bishop concludes with Searle and Putnam that computation is very much in the eye of the beholder, that there is no objective fact of the matter about which computations a system may or may not be implementing. To illustrate this point, Bishop raises the example of a truth table in digital logic such as the following.
| A | B | Output |
|---|---|--------|
| 0 | 0 | 0 |
| 0 | 1 | 0 |
| 1 | 0 | 0 |
| 1 | 1 | 1 |
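The point, as I understand Bishop, is that nothing in the physics fixes which function such a device computes. Read under the convention that a high voltage means 1, the table above describes an AND gate; read under the opposite convention (high voltage means 0), the very same physical device computes OR. A minimal sketch of the flip, with booleans standing in for voltages:

```python
# The same physical device under two labelling conventions.
def gate(in1_high: bool, in2_high: bool) -> bool:
    """Physical behaviour: output voltage is high iff both inputs are high."""
    return in1_high and in2_high

def read_positive(v: bool) -> int:  # convention 1: high voltage means 1
    return 1 if v else 0

def read_negative(v: bool) -> int:  # convention 2: high voltage means 0
    return 0 if v else 1

for v1 in (False, True):
    for v2 in (False, True):
        out = gate(v1, v2)
        # Convention 1 reads the device as an AND gate...
        print("AND:", read_positive(v1), read_positive(v2), "->", read_positive(out))
        # ...convention 2 reads the very same run as an OR gate.
        print("OR: ", read_negative(v1), read_negative(v2), "->", read_negative(out))
```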
However, though computation seems to be in the mind of the observer, it is almost universally agreed that there is an objective fact of the matter about whether certain physical systems (such as human beings) are conscious or not.
I should emphasise here that we are not talking about subtle gradations between categories. Many of us are comfortable with the idea that there is a grey area in the spectrum from trivial unconscious information processing in bacteria to the complex conscious information processing in humans. The point is whether we can look at a system and objectively place it somewhere along the spectrum. For example, it is usually assumed that a healthy alert adult human is definitively at the conscious end of that spectrum.
An objective fact cannot be explained by a subjective interpretation. It is not plausible, for instance, that a virus could evolve which would kill only beautiful people and leave ugly people alone. Neither is it plausible that it would take less energy to broadcast good TV than bad TV. The very idea is a kind of category error, because people may legitimately disagree on who is beautiful or ugly, what TV is good or bad, but who lives or dies or how much energy is required for broadcasting are objective facts about which well-informed people should not disagree. In the same way, it is not plausible that we can explain the objective fact of consciousness by appeal to subjective interpretations of systems as computers.
Computationalist Responses
A number of responses from computationalists have attempted to resolve the problem, some more successfully than others, in my view. I will mention some of those I find interesting. The first three of these come from conversation with a computationalist friend of mine (Mike Smith over at SelfAwarePatterns). To be honest, I don't think they represent tenable philosophical answers to the problem, but they are probably representative of attitudes common among the lay community of computationalists, and they perhaps explain why there are still so many computationalists around despite Bishop's "Dancing with Pixies" (DwP) argument.
Accepting the pixies
Mike at one point suggested that the best thing to do might be to accept the existence of the pixies. After all, there is little reason to suspect that our intuition in such matters is likely to be very useful. If it seems absurd, so what? Nature is under no obligation to be sensible. It certainly doesn't seem to be very sensible when it comes to fields such as quantum mechanics, so why should this be any different?

Bishop for his part is happy to let that speak for itself. If you are really willing to defend a view so bizarre, good luck to you! But most computationalists (and I with them) do not think that this is acceptable. If the DwP argument is correct, not only are we surrounded by pixies, but the vast majority of conscious experience is realised in pixies, and so we ourselves are almost certainly pixies. Furthermore, though there must be a real world, we can say almost nothing about it. There is no reason to believe it bears any resemblance at all to our apparent environment; indeed, it may be something as trivial as a pair of particles separating from each other forever. Any system which has identifiable non-repeating states is a candidate.
Though we can't absolutely rule this out as a possibility, it is in my view so absurd as to outweigh any possible reasons for clinging to computationalism.
Rejecting the pixies
Mike was also happy to suggest that it might be best to reject the pixies out of hand, on the basis that the interpretation of a natural system as implementing a particular algorithm is too absurd. On this view, it is really the interpretation that is doing all the work and so it is not too worrying to suppose that we can force such interpretations if we wish -- the act of forcing the unbelievably complex interpretation would be what realises the computation and brings the pixies to life.
I don't think this is a satisfactory answer because the absurdity of the interpretation shouldn't matter if we never have to instantiate it for the pixies to exist, and according to the logic of the DwP argument they should exist even without such an interpretation. Actually building the interpretation itself is just to build a very complex input/output apparatus, but the computation should be taking place regardless.
If you want to say otherwise, then you should be able to draw a sharp distinction between what a rock is doing and what a computer without input/output is doing. The point of the DwP argument is that no such sharp distinctions are possible. There are only degrees of absurdity/complexity versus naturalness/simplicity in our interpretations.
Without an objective way to quantify or measure absurdity of interpretation, and without a natural threshold to mark the border between actual computations and Bishop's absurd phantom computations, it seems we're in trouble. Again, if we take the existence of conscious minds to be objective fact, and if we rule out the idea that a given conscious mind (one like yours or mine) could half-exist, then it's hard to see how something like a computation which exists only to a certain degree (according to the naturalness of an interpretation or the usefulness of regarding it as a computation) can account for the absolute existence of a mind.
The subjective existence of mentality
Another approach mooted by Mike was to adopt the view that since a conscious mind only really exists from its own point of view, it is wrong to assume there is an objective fact of the matter about whether a conscious mind exists in a given physical system. If the existence of a mind were subjective, then it might not be such an issue that the existence of a computation is observer-relative.
I don't think many computationalists will be satisfied with this approach. David Chalmers (perhaps echoing Descartes and "I think therefore I am") likes to point out that the one thing any of us can really be confident in is the fact of our own conscious experience. For this reason, it is not plausible to suggest that we don't really objectively exist or that our consciousness is an illusion. Conversely, if all it is for a mind to exist is for it to exist from its own perspective, then we must accept the first-class existence of fictional minds such as those of Han Solo or Mickey Mouse, something few computationalists would be willing to do.
But perhaps we should interpret "subjective" in this sense to mean only private. There is only one perspective that can ultimately observe a conscious mind to exist, and that is that conscious mind itself. But there is still a (hidden, presumably) objective fact of the matter that the mind exists, even if this fact is not public. But now we're back where we started, with a private objective fact depending for its existence on a public subjective interpretation, an impossible scenario which simply doesn't work.
Douglas Hofstadter - A posteriori cheating
Moving on to what other academics have said on these issues, Douglas Hofstadter (a man I admire quite a bit) has suggested [7] that the kind of approach Putnam takes to mapping FSAs onto a physical system is cheating, because it requires a priori knowledge of the evolution of the algorithm, something which in fact can only be known a posteriori, after running the algorithm. This is not a real mapping of physical states to logical states, according to Hofstadter. A real mapping would be one we could produce a priori, as we do for computers, without having to run the algorithm. This points to a potential difference between Putnam's mappings and the kind of mapping we naturally adopt for computers, a difference that could perhaps account for consciousness.

Or at least that's what I feel Hofstadter is trying to say. I don't think Bishop interprets him quite right, because Bishop's response is to argue that there is nothing stopping us from running an algorithm twice on the same input. The first time, we don't know what the algorithm will do, and so we might suppose that a particular algorithm produces consciousness. The second time we run it, we do know exactly what it will do, but of course it must still be conscious -- the mere fact that we know what it will do cannot change this, surely. If knowing what it will do in advance doesn't rob the computer simulation of consciousness, why should it rob Searle's wall of the same?
But of course, running the algorithm twice is only admitting Hofstadter's problem. We can determine mappings for computers without having to do that, but not for walls or other natural objects. On the other hand, it would seem that the logic of the DwP only depends on the logical necessity of the existence of such a mapping, so whether we are in a position to tell what it is in advance may not be important. Even so, if this epistemological difference could be made precise, it might constitute an objective difference in the two kinds of mapping, and that's all we need as grounds for distinguishing "real" computations from "pixie" computations.
Even though this feels to me like it might be the start of a legitimate objection, perhaps Bishop and I are giving Hofstadter too much credit, as his argument is really not very clear. Indeed, the particular passage this criticism comes from doesn't give much in the way of actual argument at all. What we find instead is rhetoric, little more than a list of incredulous comparisons to reading works of literature or classical music in the random structure of natural objects around us, intended to ridicule the very idea of interpreting natural systems as instantiating arbitrary computations. (In fairness to Hofstadter, he is responding to Searle's vague rhetorical argument rather than Putnam's more precise formal argument, so this kind of response is not unreasonable). In this respect, I think Hofstadter's criticism misses the mark because no objective metaphysical fact hinges on whether "the holes in pieces of Swiss cheese code for the entire history of the United States". Yes, a mapping does exist to make this true, and yes, this mapping is absurd and arbitrary, but nobody thinks the instantiating of a representation of the history of the United States objectively brings something qualitatively new (such as a mind) into existence, so this is not a fair comparison. We are not troubled by our inability to say definitively when and where an account of the history of the United States is instantiated, but we ought to be troubled by our inability to say definitively when and where conscious algorithms are instantiated.
However Hofstadter does point the way to a more substantive objection when he says "minds worth calling minds exist only where sophisticated representational systems exist, and no describable mapping that remains constant in time will reveal a self-updating representational system in a car engine or a liver".
This idea of a stable mapping that remains constant in time is perhaps a plausible angle to investigate. I think what he's getting at here is that we should not admit mappings that are so arbitrary as to only map particular runs of an algorithm on particular inputs to a particular period in the evolution of a physical system. Legitimate mappings should be general and extend to any possible inputs and for an indefinite period of time in the evolution of a physical system (and not just a bracketed temporal window). This kind of objection is taken up in more detail by David Chalmers.
David Chalmers - counterfactuals and CSAs
David Chalmers wrote a very detailed and thoughtful response to Putnam in his paper Does a Rock Implement Every Finite-State Automaton? [8]. The paper covers a lot of ground and is highly recommended, but for our purposes the most interesting idea is that a Putnam-style mapping fails, for one reason or another, to be a genuine computation.

One important reason is that an actual digital computer is adaptive and dynamic. It is capable of performing computations not only on the input it actually receives: it would also have performed sensible computations on counterfactual input that it didn't receive. Putnam's mappings, on the other hand, are brittle. They cater for only one particular series of inputs, one particular run of a program, and have undefined behaviour in other circumstances.
Another side of this coin is that Putnam's mappings are brittle with respect to the physical circumstances of the system. Putnam mappings can only be made retrospectively on evolutions of systems that have already happened and been recorded. Counterfactually, had the system's state diverged even a little from that specified in Putnam's mapping, then Putnam's mapping would fall apart. The mappings we have for actual computers are robust in that we can say things like "had the voltage in this register been such and such, then that would have corresponded to such and such a logical state".
In other words, the relationship between physical states and logical states for computers is lawful and robust, in that it caters for a very wide range of logical and physical circumstances (as long as the computer remains intact, at least), while Putnam mappings are brittle and depend entirely on happenstance. This being the case, it would seem we have an objective difference between the two, and so perhaps Putnam's phantom computations should not be regarded as genuine.
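The contrast can be put starkly in code (my own illustration, with made-up numbers): the computer's mapping is a total, lawful function of the physical state, while a Putnam mapping is just a finite record of one observed trace, undefined everywhere else:

```python
# Lawful mapping: a total function, defined for any voltage the register
# might have had, so counterfactuals are covered.
def logical_bit(voltage: float) -> int:
    return 1 if voltage > 2.5 else 0

# Putnam-style mapping: a bare record of one observed evolution of the
# system (made-up microstates), undefined everywhere else.
putnam_map = {(0.132, 0.871): "A", (0.904, 0.442): "B"}

print(logical_bit(3.3))            # 1 -- an actual reading
print(logical_bit(0.2))            # 0 -- a counterfactual reading, still defined
print(putnam_map[(0.132, 0.871)])  # "A" -- but any unrecorded microstate
                                   # would raise a KeyError
```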
To answer this objection, Bishop cleverly makes use of a version of Chalmers' own Fading Qualia Argument (FQA). The original FQA was deployed to illustrate the absurdity of biological chauvinism by postulating that brain cells might be replaced iteratively by electronic functional analogues. If we assume that electronics cannot be conscious, we start with a conscious being and we end with an unconscious being that behaves in precisely the same way. It would seem that during this process, qualia somehow fade out gradually, so that at the halfway point the being is only half conscious, only half-perceiving sensory qualia and so on, while being unaware that anything untoward is happening. Chalmers (correctly, in my view) takes this to be absurd, and concludes that consciousness must be a functional phenomenon.
Bishop's version of the FQA considers a different transition, that of a robot or simulated entity transitioning from full dynamic implementation of an algorithm to one scripted to proceed deterministically and inevitably from one state to the next (just like a Putnam FSA in a rock). At each step in the transition, we simply replace a single conditional branch with a hardcoded state change, so that the transition is perfectly smooth. Again, the behaviour of the algorithm run on the same input is unchanged, and again we are apparently left with the absurd conclusion that the half-way point has a being that is half conscious, half perceiving qualia and so on.
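Concretely, one step of Bishop's transition might look something like this (a sketch of my own; Bishop states the argument abstractly):

```python
# Before: a genuine conditional transition, sensitive to its input.
def next_state_conditional(state: str, sensor: float) -> str:
    if sensor > 0.5:   # a real branch
        return "B"
    return "C"

# After: the branch is replaced by whatever transition the recorded run
# actually took. On the same input the behaviour is identical, but the
# capacity to have done otherwise is gone.
def next_state_hardcoded(state: str, sensor: float) -> str:
    return "B"         # the recorded run happened to go to B

print(next_state_conditional("A", 0.7))  # B
print(next_state_hardcoded("A", 0.7))    # B -- indistinguishable on this run
```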
For what it's worth, I don't think this version of the FQA is quite as absurd as Chalmers' original. I can imagine that the halfway point might consist of an entity that flickers between consciousness and zombiehood as it alternates between executing conditional and hardcoded state transitions. It's still strange, but not quite as weird as imagining what it would be like to half-perceive qualia while being fully functionally aware of them.
Chalmers (and also Chrisley [10]) points out that this need not be so surprising. Deleting these conditionals will inevitably correspond to a physical change in the system, and this difference might make the difference in considering whether consciousness is brought forth.
To this point, Bishop answers that we can instead imagine leaving all the conditional statements in place and instead simply delete (or replace with null operations) the code that will not execute. Now, Bishop argues, the code that actually executes is the same and so we can no longer appeal to a physical change in the system to explain why consciousness might fade.
I'm not sure this argument succeeds for a couple of reasons. Firstly, I don't think it is possible to delete code without having a physical difference, and as long as there is a physical difference in the system then it is possible to point to that as accounting for the difference between conscious and unconscious systems. Secondly, it's not clear to me that a system with conditionals but with deleted code really corresponds very well to a Putnam style FSA which has no conditionals at all. On the other hand, however, it's hard to credit that the physical presence of code never executed is crucial for consciousness.
All in all, I'm left with the impression that this whole line of argument is inconclusive. Despite the loose ends, my sympathy actually lies with Bishop but I think reasonable people might with some justification disagree on whether he has proven his case.
However, Chalmers also presents a related argument which to the best of my knowledge Bishop has not yet addressed. This is the argument that the FSA is just one model of a computation and not necessarily the best one for our purposes. That is, to implement an FSA may not really be all that is required to perform a genuine computation (despite assumptions to the contrary apparently dating back to Turing). To justify this, Chalmers appeals to that difference I noted earlier between the abstract FSA and how computer programs are implemented in practice -- that is that the state of an actual computer has fine-grained content, typically divided into variables and stack pointers and the like, and that there exist meaningful, lawful and causal relations between these sub-states. This is not the case for an FSA where state is associated only with a label and transitions to other states. Chalmers points out that it is possible to build an abstract model of computation that respects not only state transitions but also the content of particular states, and labels this model the Combinatorial State Automaton (CSA). In contrast with FSAs, it is not clear at this time that it is possible to build a mapping between natural physical systems and arbitrary CSAs. If it is not possible, then the computationalist can claim that what it is to be a conscious entity is to implement the right kind of CSA, and that brains and the right kind of AI might do so while rocks would not.
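Roughly, the difference is this (my own schematic, not Chalmers' formalism): an FSA state is an opaque label, whereas a CSA state is a vector of components, with each component's successor lawfully determined by the values of the others:

```python
# FSA: the state is an opaque label; transitions relate whole labels.
fsa_transitions = {"S1": "S2", "S2": "S3", "S3": "S1"}

# CSA: the state is a structured vector, and each component's successor
# is lawfully determined by the current component values.
def csa_step(state):
    a, b, c = state
    return (a + 1, b ^ (a & 1), c or a > b)  # toy component rules

state = (0, 0, False)
for _ in range(3):
    print(state)
    state = csa_step(state)
```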
Chrisley [9] has made similar points, arguing that Putnam's account of causality is too weak, and that unlike a real computer, the state transitions in his projected FSAs are not strongly causal, that is, the physical system being in logical state A does not really cause its transition into logical state B. Again, the gist seems to be that simply implementing (or being interpretable as) an FSA is not enough to be a genuine computation.
I think these kinds of argument are plausible, but, as noted, they depend on the impossibility of mapping natural systems to arbitrary CSAs, or on better accounting for causality in such mappings. In my quote from Searle, he doesn't assume an FSA mapping, and he doesn't assume that counterfactuals are ignored. He only assumes (albeit without justification) that there is some mapping between the state of the system and the operation of an algorithm. It's entirely possible that he is right, no matter what model of computation we adopt or how much stock we place in the mapping of counterfactuals. Searle and his followers take it for granted that his assumption is correct, and most computationalists would seem to assume that it is false. Again, my sympathies on this one probably lie with Searle, but it's far from a knock-down argument against computationalism.
Conclusion
On balance, I think Putnam, Searle and Bishop have a point. I think there is a problem with computationalism as usually conceived. And yet I still call myself a computationalist! There is a way to accept all these arguments and reconcile them with computationalism without (quite!) accepting the existence of pixies in rocks and walls. I'll explain that in my next post, whenever I get time to write it!
[That post is now here]
References
[1] Putnam, Hilary (1987). Representation and Reality. MIT Press.
[2] Searle, John R. (1992). The Rediscovery of the Mind. MIT Press.
[3] Bishop, John Mark (2003). Dancing with pixies: Strong artificial intelligence and panpsychism. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press.
[4] Bishop, John Mark (2002). Counterfactuals cannot count: A rejoinder to David Chalmers. Consciousness and Cognition 11 (4):642-52.
[5] Bishop, John Mark (2009). Why computers can't feel pain. Minds and Machines 19 (4):507-516.
[6] Bishop, John Mark (2009). A cognitive computation fallacy? Cognition, computations and panpsychism. Cognitive Computation 1 (3):221-33.
[7] Hofstadter, D. R. & Dennett, D. C. (eds.) (1981). The Mind's I: Fantasies and Reflections on Self and Soul. New York: Basic Books (Chapter 22).
[8] Chalmers, David J. (1996). Does a rock implement every finite-state automaton? Synthese 108 (3):309-33.
[9] Chrisley, Ronald (1995). Why everything doesn't realize every computation. Minds and Machines 4:403-20.
[10] Chrisley, Ronald (2006). Counterfactual computational vehicles of consciousness. Toward a Science of Consciousness, April 4-8 2006, Tucson Convention Center, Tucson, AZ.
Hi DM,
I think this is an excellent explanation of Putnam's, Bishop's, and Chalmers's views. A far easier read than most of the source material.
I'm honored that you highlighted my views, but I think presenting them as a series of isolated and exclusive responses isn't quite fair. (Although to be honest, I would have struggled to articulate my overall position until the later stages of our discussion, which allowed me to sharpen my thinking on it.)
Just to clarify:
1. From what I can see, the pixies can only come into existence through the existence of an enormously complex description. That description is far more complex than the relatively trivial ones we apply to engineered computing platforms, such as a binary 1=true and 0=false, etc. The description (we called it an “interpretation” in our discussion) is so complex that it would, in all practicality, require its own computing platform.
In essence, the state machine we're interpreting to exist in the rock isn't implementing the pixies (or Wordstar or whatever). It’s the states of the rock plus the implementation of the description that actually implements the pixies. So, the pixies don't come into existence until the description is developed. That's why I said in our discussion that the description amounts to an implementation of an AI that we then blame on the rock, wall, or whatever physical object is under discussion.
2. I’m puzzled by the type coupling you insist must exist between computationalism and the objective existence of a mind. I fully understand many people might find disturbing the idea of that existence being open to same level of interpretation that a putative implementation of Wordstar, but it seems like an inherent and inescapable aspect of the computational theory of mind, indeed of any physical / functionalist understanding of the mind that obeys the normal laws of physics.
(Of course, if the mind obeys its own special laws of physics, per Penrose et al, then you get out of it. But there is currently no evidence for it.)
3. Given 1 and 2, I have no trouble accepting the existence of the pixies, although it’s probably more accurate to say I’m accepting their potential existence. I don’t consider what I’m accepting to be particularly meaningful or troubling.
I do agree that, once we accept the interpretations / descriptions as meaningful, the challenges that Chalmers and others make, fail.
All that said, it’s worth noting again that my computationalism is pragmatic and completely dependent on its usefulness in interpreting neuroscience and psychology. Bizarre unfalsifiable consequences of that theory are interesting, but I’m not inclined to abandon it until it loses its pragmatic usefulness, at which point I would drop it like a hot rock.
Hi Mike,
No need to be honoured! It's not like I'm some world-renowned intellectual or anything. I'm a nobody, some might say a crank.
> but I think presenting them as a series of isolated and exclusive responses isn't quite fair.
I'm sorry you think I haven't been fair. But if I've misrepresented your views in any way, I'm not really seeing it. I guess accepting and rejecting the pixies seem like contradictory positions (and indeed they are) but I didn't mean to suggest that you are committed to either of them. As long as you are non-committal, then I don't see the harm in entertaining either horn of that dilemma.
> 1. From what I can see, the pixies can only come into existence through the existence of an enormously complex description
That's not quite right (as I've tried to suggest on your blog). The possibility of such a description implies the pixies exist. The description does not have to be realised physically. So it's hard to see how it can do any work in bringing the pixies into existence.
Most computationalists would say that a computer is performing a computation because it is useful or natural to interpret it as doing so. But even if a computation can bring a consciousness into existence, most computationalists would not be happy with any suggestion that that consciousness only exists if there is somebody interpreting the computer as performing a computation. So it is just the possibility of interpreting it so that implies a consciousness exists. But, per Putnam, it is possible to interpret any system so. The only difference between a computer and a rock is how natural that interpretation is, and that is a subjective matter.
> I’m puzzled by the type coupling you insist must exist between computationalism and the objective existence of a mind.
Do you mean tight coupling? I'm a little confused.
> I fully understand many people might find disturbing the idea of that existence being open to same level of interpretation that a putative implementation of Wordstar, but it seems like an inherent and inescapable aspect of the computational theory of mind
If the existence of minds is open to interpretation, and this is an inescapable aspect of the computational theory of mind, then that would seem to disprove the computational theory of mind, because it is hard for me to make sense of any claim that the existence of my mind (for instance) is open to interpretation. I'm pretty sure I exist, in some sense anyway (i.e. perhaps not physically). Furthermore, I'm pretty sure I fully exist. I am not existing at a level of 97.5%, for instance.
> I do agree that, once we accept the interpretations / descriptions as meaningful, the challenges that Chalmers and others make, fail.
That's not quite my view, so I don't know that you're agreeing with me. These objections seek to show that these interpretations are *not* meaningful or legitimate. The objections of Chalmers, Chrisley and Hofstadter are among the most plausible I have seen from physicalist computationalists. I'm not sure they fail, I just don't see that they succeed. I'm on the fence, leaning towards Bishop.
> All that said, it’s worth noting again that my computationalism is pragmatic and completely dependent on its usefulness in interpreting neuroscience and psychology.
It could retain its pragmatic usefulness even if Bishop is right. The question is not whether a computational view is a useful model of human psychology but whether it captures *every* aspect of human psychology (i.e. phenomenal consciousness). Even if it fails in that respect it could still be useful for everything else.
Hi DM,
Delete"As long as you are non-committal, then I don't see the harm in entertaining either horn of that dilemma."
I guess what I'm saying is I reject the dichotomy you just posited. I accept their potential existence, but as a conception far less problematic than Bishop's. If you're going to view this as an excluded middle sort of thing, then I guess I do reject them strictly as posited by Bishop.
“The possibility of such a description implies the pixies exist.”
Why? The possibility of every computer program exists, but we both have made a living because it is necessary to actually bring them into existence. What is different about pixies?
“Do you mean tight coupling?”
I did. Sorry, no idea why I typed “type”.
“If the existence of minds is open to interpretation,”
This may be a difference in outlook between us. I think that, ultimately, much of everything above fermions and bosons could be considered an interpretation, patterns that we assign labels and functional roles to, and it’s very possible that even the bosons and fermions may be. We consider macroscopic patterns like trees and rocks to exist as trees and rocks because that consideration is useful, similar to our classification of intergalactic superclusters (which some astronomers insist don't actually exist).
“ I'm pretty sure I exist, in some sense anyway (i.e. perhaps not physically). Furthermore, I'm pretty sure I fully exist. I am not existing at a level of 97.5%, for instance.”
I would say that you are a definite system, a pattern that holds a sub-pattern, a state, data, that we interpret as a conclusion of the overall pattern as existing. Despite being arbitrary in the overall scheme of things, that interpretation is evolutionarily useful, so it’s one that has been promulgated and shared.
“These objections seek to show that these interpretations are *not* meaningful or legitimate. “
“Meaningful” and “legitimate” leave some room for interpretation. My understanding was that they were arguing that the interpretations weren’t logically valid. (But I'll freely admit I might have misunderstood.) I think, strictly speaking, that the interpretations are logically valid, but given the absurd premises required, also largely irrelevant except perhaps as confirmation that there may always be potential cases in nature that challenge our categorizations.
“The question is not whether a computational view is a useful model of human psychology but whether it captures *every* aspect of human psychology (i.e. phenomenal consciousness). Even if it fails in that respect it could still be useful for everything else.”
The problem with these kinds of metaphysical assertions is, how do we ever determine whether they are true or false? No matter how much success we have with neuroscience, AIs, or even mind uploading, it will always be possible for someone to say that we don’t understand *every* aspect of human psychology, just enough to accomplish what we’re accomplishing.
Hi Mike,
ReplyDelete> The possibility of every computer program exists, but we have both have made a living because it is necessary to actually bring them into existence.
What we do for a living is rearrange electrons so that a particular desired computation is realised according to a pre-existing interpretation. The idea with pixies is not that the possibility of that computation exists (we're not talking about Platonic existence) but that that computation concretely exists according to a possible (not necessarily realised) interpretation.
There is nothing in physics to say that the value in a bit register is a one or a zero. We only agree by convention that it is a one if the voltage is such and such and a zero otherwise. The same goes for everything that happens in a computation. It's all only by convention that it can be viewed as a computation at all. It is useful to regard it as a computation because that convention is pretty sensible and straightforward. But one can imagine conventions that are slightly less straightforward. It's a one if the voltage is such and such, as long as today is not Wednesday. And so on. With a little effort, it is perfectly possible to make use of systems with more and more awkward conventions, even if all you're doing is manually inspecting the internal state and not relying on convenient input/output. Eventually you get to conventions as absurd as Putnam's mappings. At what point in this continuum does consciousness disappear, and why?
The fact that we are interpreting a system as a computation can have no bearing on whether it is conscious. My consciousness does not depend on you believing that I am conscious. The same would go for a pixie in a rock. The interpretation doesn't have to be realised in order for the pixie to exist, at least if the argument is sound. Chalmers in particular has offered good reasons to question this, and if you want to defend computationalism I would urge you to go down that road rather than the one you have been following, which seems to me to be a dead end.
> I would say that you are a definite system, a pattern that holds a sub-pattern
I think this makes sense from a third person perspective, a consideration of me as a physical object. But I don't think it works when asking if this physical object hosts a mind or not. Again, from my perspective (and remember, the question is whether I even have a perspective), my existence (according to some notion of existence anyway) is not really subject to doubt.
If you really want to be blasé about whether people really have minds or not then it's very hard to say why you should care about them. But if you do think people have minds, and you don't think rocks have minds in the same way, then you need to be able to say why people have minds and rocks do not. Pragmatic considerations (you can't interact with pixies) don't really do enough to combat the absurdity as I argue in the post.
> My understanding was that they were arguing that the interpretations weren’t logically valid.
To say something isn't logically valid is usually to say that there's some syllogism where the premises don't entail the conclusion: "All men have two legs; Donald Duck has two legs; therefore Donald Duck is a man". I don't see anything like that. Rather, what Chalmers in particular is saying is to concede the point that you can find a mapping to show that rocks are implementing particular runs of FSAs, but to argue for various reasons that this is insufficient to show that they are really performing computations.
> The problem with these kinds of metaphysical assertions is, how do we ever determine whether they are true or false?
We can't! This is why this is philosophy rather than science.
Hi DM,
“And at what point in this continuum does consciousness disappear, and why?”
I think a better question to ask is, at what point does consciousness appear? I don’t think the rock, just sitting there on its own, is conscious. Consciousness would only exist when the interpretation adds enough of a framework to where the rock and interpretation together make up a conscious system. Remember that you’d need a computing platform to even formulate the interpretation (or a being with enough excess computing capacity to hold it).
“The fact that we are interpreting a system as a computation can have no bearing on whether it is conscious.”
When the interpretation is effectively completing the implementation, I think it can. Of course, calling it an “interpretation” at that point is perverse, which is why I see these interpretations as absurd, an abuse of what “interpretation” is commonly understood to mean.
Can I point to the exact line when an “interpretation” becomes part of the implementation? No. And as I said before, I fear any such line would be arbitrary. It’s like trying to say when a zygote / embryo / fetus / neonate becomes conscious. Or at what level of sophistication animals become conscious. Any answer is likely to just be an arbitrary line that someone decides to draw.
I did bat around the idea that maybe an interpretation that requires its own state machine is an objective standard. Then I remembered that we are ourselves state machines and realized it was probably hopeless.
“My consciousness does not depend on you believing that I am conscious. The same would go for a pixie in a rock.”
For the reasons I note above, I don’t think the cases are the same. You are a complete implementation of a mind. A pixie isn’t until the absurd interpretation is in place. If you suffered brain damage, your implementation would become incomplete. Suffer too much damage and, even if your body continues, consciousness might be gone in any meaningful way.
Not the same thing as an interpretation? Suppose I apply an aggressive enough “interpretation” to someone in a vegetative state. With enough of such an interpretation, I could develop a new conscious being. In reality, all I’ve done is implemented an AI with an implementation tied to a damaged organic brain, and pretended like it was a person. I’ve provided a substitute for their original “interpretation” that was destroyed from brain damage and replaced it with a technological one.
“I think this makes sense from a third person perspective, a consideration of me as a physical object.”
I think this gets to the “hard problem” divide. We can understand the system objectively, but that objective understanding will never add up to the first person experience. No matter how much we understand a bat, the information processing of its brain, etc, it will never bring us to the experience of being the bat.
“If you really want to be blasé about whether people really have minds or not then it's very hard to say why you should care about them.”
I’m not blasé about people’s minds and I don’t think anything I’ve said obliges me to be. But I will admit that I care more about beings I can observe than hypothetical ones whose existence I can’t observe.
Maybe another way to look at this interpretation issue is to ask where the boundary is between the mind and its environment. A while back when that survey on what philosophers believe came out, I was puzzled that 51% of philosophers were externalists when it came to mental content, that the mind arose not just from the human nervous system, but through its interactions with the environment. When I read about the internalist / externalist debate, I came away thinking that in reality, the boundary was far more blurry than anyone was comfortable admitting. I see this interpretation issue as just another way of stumbling over the same thing.
Hi Mike,
> Consciousness would only exist when the interpretation adds enough of a framework
But I'm not talking about building an interpretation. The interpretation is in the mind of the beholder. Computers are interpreted as performing computation by a (relatively) simple interpretation. Rocks are interpreted as performing a computation by a complex interpretation. But the interpretation is not part of the system in either case. So what is it that allows a computer to realise a consciousness but not a rock?
> You are a complete implementation of a mind. A pixie isn’t until the absurd interpretation is in place.
But how? If you say my brain is performing a computation, you are interpreting it to be doing so. You don't have objective criteria for determining that it really is performing a computation. Without those objective criteria, physicalist computationalism just doesn't work.
Everyone on both sides of the divide seems to agree that this is the case. Which is why academic computationalist philosophers have been focusing on trying to find those objective criteria.
Hi DM,
Delete"The interpretation is in the mind of the beholder."
The point I've tried to make is that, in the case of the pixies, that isn't true. Their interpretation is too complex to be in anyone's mind (at least any human's). It requires its own implementation. Saying that implementation isn't part of the pixie system is just drawing an arbitrary line around the rock, giving it the benefits of the implemented interpretation, and declaring that it does everything the rock+interpretation does. If I tried that in accounting for a business, I would stand a good chance of being indicted for fraud.
"If you say my brain is performing a computation, you are interpreting it to be doing so. "
I'm interpreting the pattern that is you to be performing a certain function. As you pointed out to me a while back, the difference between a double slit experiment and a quantum processor is a matter of interpretation. This is an issue with any interpretation of any system performing any function.
The only way I can see that you get out of it for mental functionality is to fall back to some form of substance dualism or posit the mind following its own special laws of physics. The problem is that there is no evidence for either, and in the case of substance dualism, there is contradictory evidence. Consciousness does not appear to be a fundamental aspect of the objective universe, no matter how much we might want it to be.
Hi Mike,
> Their interpretation is too complex to be in anyone's mind (at least any human's). It requires its own implementation.
> Saying that implementation isn't part of the pixie system is just drawing an arbitrary line around the rock
I'm sorry, I don't think you're really getting the argument. You should probably be suspicious that this is the case, because it does seem to be the consensus that computationalism depends on being able to objectively say whether a computation is taking place, meaning that we need objective criteria.
The reason I say you're not getting it is that you keep insisting that it's the implementation doing the work, despite the fact that the implementation doesn't have to be physically instantiated.
A computer implementation of conscious algorithm X is supposedly conscious whether or not we interpret it to be implementing conscious algorithm X. Now suppose somebody tinkers with the computer so that it's only slightly harder (but still possible, once you have the right key or insight) to interpret it as implementing conscious algorithm X. Is it still producing consciousness even if there is no observer performing the interpretation? I would say it must be.
But we can keep making it harder and harder to see that it is implementing the computation. Call it some kind of code obfuscation if you like. At each step, it's just slightly harder to interpret, but even if nobody is actually interpreting it a computationalist should expect it to be producing the same consciousness.
Ultimately, we end up with a system that is practically impossible to interpret as implementing the computation. We would need an interpretation so ridiculously complex that we basically have a Putnam mapping. Is it conscious?
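As a toy illustration of this continuum (the state machine and the XOR mask here are invented purely for the sketch):

def step(state):
    # some simple computation on an 8-bit state
    return (state * 5 + 3) % 256

KEY = 0b10110101  # an arbitrary obfuscation key

def step_obfuscated(masked_state):
    # the same computation, but the stored state is only meaningful under
    # the interpretation "real state = stored state XOR KEY"
    return step(masked_state ^ KEY) ^ KEY

state, masked = 7, 7 ^ KEY
for _ in range(10):
    state, masked = step(state), step_obfuscated(masked)
assert masked ^ KEY == state  # the same run, under the right interpretation

Layer on enough masks like this and the "right" interpretation becomes arbitrarily hard to recover, yet nothing about the physical run has changed.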
So the point here, and the point you don't seem to grasp, is that we're not talking about building difficult and complex systems for interpreting the computation. There is no interpretation. We're just asking whether the *uninterpreted* computation is itself conscious.
Hi DM,
Maybe I don’t understand the argument.
Or maybe I just don’t buy the extreme interpretations / descriptions / mappings. Why exactly? I think they’re invalid because I perceive that they are completing the implementation. OTOH, I can buy that the potential existence of an interpretation that does not participate in the implementation can mean that a system is conscious.
Now, it’s a fair question to ask what it means to say that a mapping completes the implementation. I suspect trying to answer this question with consciousness is a lost cause because we simply don’t know enough yet. We don’t know what is required to implement consciousness. (We have ideas, but they’re all controversial.)
But what about Wordstar? What does it mean to say an object is implementing Wordstar? I think it means that somewhere in the object there is a pattern that we’d be able to draw a logical border around and say, “Here is Wordstar.” To be credible, I think this delineated pattern would have to accept the same inputs Wordstar can, and produce the same outputs. These inputs may come from other parts of the object and the outputs may go to other parts of the object, but they’d still be I/O similar to that of a conventional Wordstar implementation. Otherwise I think calling the embedded pattern Wordstar is meaningless.
If an interpretation of this system is not actually participating in the logic and information flow of that system, then I think that is an interpretation whose mere potential existence is significant in the way you suggest. But if the interpretation has to bring in data or logic to make the implementation real, then it has become part of that implementation, and I think it’s wrong to say the implementation exists just because of the potential existence of the interpretation. Indeed, I don’t think the label “interpretation” remains valid for what it has become.
Is there an objective standard for when an interpretation has crossed the line into implementation? I don’t know. I fear even that point is itself subject to interpretation, that every interpretation brings in some logic and data. Still, if it's logic and data from the workings of the conventional implementation, maybe we should be suspicious.
Hi Mike,
Sorry if I seemed a little snotty before when I said you didn't get it. It just feels like we're making the same points again and again. Subjectively, it feels like I've explained why your position doesn't work a few times now without a credible defence from you. But that's obviously just how it feels from this side of the debate. I recognise it may feel the same from yours.
> I think they’re invalid because I perceive that they are completing the implementation.
Yeah, but, again, they don't have to exist.
Your position, that the interpretations complete the implementation, seems reasonable on the face of it, but it seems to me to demand that a computation be interpreted as a computation before it can realise consciousness. So a computer with no inputs and no outputs and nobody looking at its internal state would not be capable of realising a conscious mind. That doesn't seem right, does it? To think otherwise, you'd have to explain why your interpretation of the computer as performing a computation is not completing the implementation, but Putnam interpreting a rock as performing a computation is completing the implementation. What are the criteria to determine which is the case for an arbitrary system?
Just follow the logic from my last comment. You can keep gradually obfuscating interpretations until you get from a system like a computer to a system like a rock. Computationalists would say that our interpretation of a computer as a computer does not influence whether it realises consciousness, so the fact that the interpretations are not realised shouldn't matter, neither should it matter that they are complex (because they are not realised). At some point, the system fails to realise a conscious mind. When and how and why?
> We don’t know what is required to implement consciousness.
I think this is a cop out because if you really are a computationalist, then you believe that all that is required is for the right computation to be implemented. The only thing we don't know is what the properties of this computation need to be. But that is irrelevant to Bishop's argument, which is that we can never really say when any computation is implemented (so it's not really specific to consciousness). So we don't need to learn anything about consciousness per se to address this argument. Rather, we need objective criteria for defining when a computation is implemented. If those cannot be found, then computationalism is not viable. Now, you can say "we don't know yet", and that's OK, but from my perspective you at least need to concede that (physicalist) computationalism depends on those criteria being there, whatever they are.
> These inputs may come from other parts of the object and the outputs may go to other parts of the object, but they’d still be I/O similar to that of a conventional Wordstar implementation.
This is all possible with a Putnam mapping.
> But if the interpretation has to bring in data or logic to make the implementation real
It doesn't. The interpretation just allows us to see that it is implementing WordStar. It is implementing WordStar regardless. That is, if to implement a computation is just to be mappable as an FSA, which Chalmers would dispute.
> Is there an objective standard for when an interpretation has crossed the line into implementation?
There would have to be for physicalist computationalism to be viable.
Hi DM,
I agree that it does seem like we're circling around the same points. We've each rebutted each other's points repeatedly, but are unconvinced by the other's rebuttals. Rather than look for new ways to restate my own rebuttals, I think I'll just take a break, and see what you have to say in your next post.
In truth, I continued this far longer than I normally do because I found it interesting. It does seem to get into the question of where the boundary lies between a mind and its environment and to what extent we can consider it apart from that environment. But that question may eventually need its own separate thread.
Hi DM,
I know I said I was taking a break, but just thought I should tell you this. You were right that I was mistaken about the challenges of Chalmers and others. Revisiting the Stanford article ( http://plato.stanford.edu/entries/computation-physicalsystems/ ), I do think their objections have merit. Indeed, I see the causality / dispositional one as compatible with and supporting my own objection to the mappings.
(There are other objections related to semantic and syntactic accounts discussed in that article that I'm far less sure about, although I could see the semantic account possibly having implications more narrowly for consciousness itself.)
I was initially cold to what's called the mechanistic account in the Stanford article, but the more I think about it, the more merit it might have. It seems to go in the same broad direction as the insufficiency of FSA argument you discussed. It does seem like the time, energy, and resources we put into creating computing devices should make us want to account for the differences between those devices and rocks, walls, etc.
Anyway, just wanted to let you know I realize I was wrong about the other arguments. My bad. Although perhaps less welcome is that they fortify my conviction that the pixies, and unlimited pancomputationalism in general, aren’t a problem.
Hi Mike,
> Anyway, just wanted to let you know I realize I was wrong about the other arguments.
One of the things I appreciate most about you is your open-mindedness and willingness to reconsider. A rare virtue in online arguments!
> Although perhaps less welcome is that they fortify my conviction that the pixies, and unlimited pancomputationalism in general, aren’t a problem.
No, that's fair enough. I think those arguments are quite defensible, so it's OK if you don't think pixies are a problem. It would be interesting to see Bishop take them on again, as I'm not sure how well his earlier published attempts succeed.
Thanks DM. I appreciate you generating this conversation. I have to admit that, until a few months ago, I had no idea that the definition of computation was this complicated. Always more to learn.
Still looking forward to your next post!
Hi DM. Thanks for drawing my attention to this post.
Searle's argument seems to rely on his belief that we can put any computational interpretation we like on any state of a system. As you know, I've explained at some length why this isn't so, in the appendices of my response to Searle's argument from syntax and semantics (https://barbedsextant.wordpress.com/2015/10/14/searles-argument-from-syntax-and-semantics/).
Your own argument depends on your claim that "any computation can be described as an FSA". It's not clear what you mean by that. The question we should be considering is whether every algorithm has an equivalent FSA. The computationalist claim which is being challenged is that any system that implements the right algorithm will be conscious. In challenging that claim, it does no good to establish some result about all FSAs unless you can show that all algorithms (or at least all the algorithms that might be claimed to be sufficient for consciousness) are equivalent to FSAs. And by "equivalent" I mean that they are the same algorithm in every way that matters, not that they produce the same results. You seem to focus on whether two systems produce the same output, or go through the same states. That's not the right question.
Let me illustrate by means of an example. Let device A be a simple computer with a single ordered memory space, which we'll call RAM. The processor has no memory, and any data which would normally be stored in a register (such as the program counter) is stored in RAM instead. RAM consists of N binary flip-flops, so we can characterise the state of RAM (and therefore the state of the computer) by a sequence of N binary digits, which we can also interpret as a number (m) from 0 to (2^N)-1. There are potentially 2^N states that the computer can be in. (For a particular program the number may be less, since some states may be unreachable.) We will also assume that the computer has no further inputs once the RAM has been set to an initial state and execution has begun. Let's say the computer has been programmed to play both sides in a game of chess, i.e. to play chess against itself. If the computer is run repeatedly, starting each time from the same initial state, it will repeatedly play exactly the same game (or sequence of games). If we wanted it to play different games, we could include a random number seed in the program, and vary that seed with each run, but that would mean that the system was starting in a slightly different state each time. Let A be initialised with one such program. Now I ask you, how can we construct an FSA that would be equivalent to this program?
DM: An FSA model would just list all possible states by their labels and what successor states and output they produce. This ought to be enough to capture anything a computer could ever do.
It seems you think the following type of FSA would do. We could have an FSA with 2^N states, one for each potential state of device A. And for each of these states we could specify the state that comes next during program execution. According to Putnam, any FSA can be realized by any open physical system. But I'm going to describe a device (B) that is uncontroversially a realization of the FSA, and which can be conveniently compared with A. To construct B we take A and replace its processor with a simple integrated circuit and a read-only-memory holding a list with 2^N entries. Each entry has N digits, corresponding to a state of A. Entry number m contains A's state at step m. So the first entry in the table is A's initial state. The second entry is A's state after executing one instruction. And so on. At each step, the integrated circuit uses the current state number (m) as an index into the list, reads off the next state, and puts the RAM into that state.
<...continued>
B's RAM goes through exactly the same series of states as A's RAM (at the level of binary flip-flops). But are they realizing the same algorithm? No, because they're using a different algorithm for deciding which state comes next. B's integrated circuit has to read the entirety of RAM at each step. A's processor just has to read the program counter, the corresponding instruction, and maybe a few variables. Both devices have a series of chess positions represented in RAM over the course of a run, collectively representing the progress of a chess game. But A is doing the work of deciding what moves to make, while B is just reading off a pre-calculated list of chess positions (plus other stuff). We would need to run A (or something like it) to produce B's look-up list.
More generally, looking something up in a table is not the same algorithm as calculating it, even if both get you to the same state. But sometimes we may not care, because we're only interested in a higher level of abstraction. Suppose you write a computer program that calls a library function to calculate sine values. After running the program for a while you install a new function library which uses a different algorithm for calculating sine (perhaps even looking it up in a table), but which still gives the same results. Are you running a different program now? A different algorithm? At a low enough level of abstraction, you are. But for most purposes we wouldn't care, and we would normally say we're still running the same program, and probably that we're still using the same algorithm. We need to consider which level of abstraction is relevant in a particular context.
DM: As such, if any algorithm can pass the Turing Test and do all the information processing tasks a human brain can do, then so could an FSA.
An FSA (like Ned Block's "Block Head") could in principle pass any Turing test. In practice it wouldn't be able to pass a strong enough test, because of practical limits on the size of the look-up table. In any case, it wouldn't be using the same algorithm as an AI program (or full-brain simulation), so the fact that the algorithm of Block Head is insufficient for consciousness does not refute the claim that those other algorithms are sufficient for consciousness.
P.S. I may have been misusing the term FSA. I've been using it to refer to an algorithm or model of a computation which employs a look-up table (or equivalent) to decide which state to move to. But now I think the term usually refers to a model that omits any information about how the system decides which state to move to. In that case, it may be correct to say that any computation can be described by an FSA. But such a description may omit crucial information about the computation, as in my example. The computations of devices A and B can both be described by the same FSA, but they are very different computations. And a system that "realizes the FSA" need only realize one of those computations. Therefore we could accept Putnam's conclusion that any open system realizes every possible FSA, without accepting that it realizes every possible computation.
ReplyDeleteI've just realized I made a mistake in my example. I wrote (about the look-up table of device B):
> Entry number m contains A's state at step m. So the first entry in the table is A's initial state. The second entry is A's state after executing one instruction. And so on. At each step, the integrated circuit uses the current state number (m) as an index into the list, reads off the next state, and puts the RAM into that state.
The last sentence was correct, but the previous sentences are inconsistent with it. I should have written:
> Think of the list as a one-dimensional array, L. For i=0 to (2^N)-1, L[i] contains the state that A would go into next if its current state is i. B's RAM is initially in the same state as A's. Thereafter, at each step, the integrated circuit reads the current state of RAM, m, looks up L[m], and copies that number into RAM.
Note that in this example there's no need to store any labels (as you suggested), because the contents of RAM can themselves be treated as an ID number, which can be used directly as an index into the list. However, as an alternative, each entry in the list could be a pair: (current state number, next state number). In that case the entries could be stored in any order, and the integrated circuit would have to search the list to find the entry with the current state number.
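In code, a minimal sketch of the corrected scheme might look like this (step_A is a stand-in for A's processor, and N is kept tiny for illustration):

N = 8  # toy RAM size in bits; the real example has a much larger N

def step_A(state):
    # stand-in for device A's processor executing one instruction;
    # any deterministic function of the whole RAM state will do here
    return (state * 31 + 7) % (2 ** N)

# Build B's look-up list: L[i] is the state A would go into next from state i.
# Note that constructing L requires running A (or something like it).
L = [step_A(i) for i in range(2 ** N)]

def step_B(state):
    # device B's integrated circuit: read current state m, copy L[m] into RAM
    return L[state]

# The two devices march through the same series of RAM states...
a = b = 5
for _ in range(20):
    a, b = step_A(a), step_B(b)
assert a == b
# ...but B decides the next state by table look-up, not by computing it.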
I hope that's clearer now.
Hi Richard,
Thanks for stopping by!
> Your own argument depends on your claim that "any computation can be described as an FSA".
Firstly, I just want to point out that this is not my argument and I'm on the fence about whether it works. I'm just trying to explain an existing argument.
But as to the substance of your comments, it seems to me that this is more or less the point David Chalmers is making. You're both saying that simply implementing an FSA is not enough to really instantiate a computation.
The example you give of two ways of implementing a system playing chess shows that you do have a good grasp of the issues, but Putnam, Searle and Bishop would say that your system B is implementing the same computation even though you think it is not, because to them to implement a computation is just to implement an FSA. You correctly point out that these systems seem to employ different algorithms, but they would say there is no algorithm in a physical system, only particles bumping against particles. The FSA model is (to them) how we map from the physical to the logical.
In order to really tell what systems are implementing what algorithms in the way you need, we need to come up with some objective criteria, and that's not so easy. Chalmers and others have made some suggestions, with his CSA model and talk of counterfactuals. I think these are a good start.
However, ultimately, I think Searle's broad point (if not Putnam's specific argument) is likely to be correct. I don't actually think there is a fact of the matter about what computations a system is implementing. However you define criteria to derive algorithms from the operation of physical systems, as long as you don't include arbitrary cutoffs/thresholds, it is likely (it seems to me) that there exists some way to interpret any physical system as implementing any algorithm.
Hi DM,
> Firstly, I just want to point out that this is not my argument and I'm on the fence about whether it works. I'm just trying to explain an existing argument.
OK. But you seem to be leaning towards agreeing with it, and I'd like to persuade you to come off the fence and reject it. So I'll continue to direct my responses to you.
> But as to the substance of your comments, it seems to me that this is more or less the point David Chalmers is making. You're both saying that simply implementing an FSA is not enough to really instantiate a computation.
To put it more carefully, I'm saying that there's a many-to-one mapping of computations to FSAs, so when you instantiate an FSA, there is more than one possible computation that you could be instantiating.
> You correctly point out that these systems seem to employ different algorithms, but they would say there is no algorithm in a physical system, only particles bumping against particles.
So how do they/you explain the fact that the systems seem to employ different algorithms (or perform different computations)? What grounds do you have for denying what seems obvious?
Surely you will agree that in ordinary speech it makes sense to say of a computer that it's running one program/algorithm/computation and not another. It makes sense for me to say that my PC is running Windows and not Linux. This isn't arbitrary. It's not just a matter of opinion. Even if Putnam were right that there's some alternative way of looking at my PC in which we could say it's running Linux, there's a good reason why we normally speak about computers the way we do, and why we are right to say that my PC is running Windows and not Linux. Well, in my example I'm speaking in that ordinary way. Speaking that way, we should say that systems A and B are performing different computations.
Hi Richard,
I'm probably leaning towards rejecting the Putnam argument which is specific to FSAs, but I'm probably leaning towards accepting the broader idea that what computation is happening is subjective.
> So how do they/you explain the fact that the systems seem to employ different algorithms (or perform different computations)?
One cloud may seem to look like a face, and another may seem to look like a car. Some patterns are more easily projected onto physical objects than others, but different observers may disagree and we should not insist there is a fact of the matter regarding which interpretation is correct.
> It makes sense for me to say that my PC is running Windows and not Linux.
Yes, according to a predetermined interpretation or mapping that drives all our input/output. In ordinary speech, this is assumed and does not need to be spelled out. But in this case it is an important point.
So, computers are easy to interpret because the interpretation is agreed in advance and because they take input and produce output in line with that interpretation.
But, presented with an alien artifact which has no input but electricity, no output but heat, and no architecture that you recognise, how can you tell objectively whether it is performing a computation which simulates a virtual world with conscious inhabitants or whether it is just a kind of electric heater?
Surely there are many ways to map its processes to computation. You may try paying attention to various different kinds of physical quantities and processes going on in its interior, and eventually you may hit on an interesting mapping with relatively clear correspondence between physical states and logical states. But how could you ever know you had found "THE" mapping? Even taking something as simple as Bishop's example of the logic gate -- there is no way to go from observing its physical operation to identifying what logical function it is supposed to perform (although actually I have a partial rebuttal to that coming up in the next post).
I don't think you can ever be sure there are not other mappings out there which would show the system to be implementing an entirely different algorithm. In particular I don't think you can find good principled reasons to rule out arbitrarily contrived mappings which would show it to be implementing whatever algorithm you like.
Hi DM,
The position you're describing makes the assumption that a computation is fixed by a mapping of physical states onto "logical states". I reject that assumption. I say the computation depends on the specific causal processes going on in the system. (Searle accuses computationalists of ignoring causal powers, but on this subject he's the one who is ignoring them.)
I explained in my Appendix B why we can't interpret the states of a computational system however we like. For example, the op-code for a JUMP instruction cannot be interpreted as an ADD instruction, because it causes the processor to JUMP and not to ADD. JUMP and ADD are very different operations, and we cannot interpret one operation as the other.
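As a toy illustration (a two-instruction machine invented just for this point, not the example from my appendix):

# The op-codes have different causal powers, which is what blocks
# reinterpreting one as the other.
def run(program, steps=10):
    acc, pc = 0, 0
    for _ in range(steps):
        op, arg = program[pc]
        if op == 'ADD':
            acc += arg  # ADD changes the accumulator...
            pc += 1
        elif op == 'JUMP':
            pc = arg    # ...JUMP changes the flow of control
    return acc

assert run([('ADD', 2), ('JUMP', 0)]) == 10  # 5 ADDs in 10 steps

Relabelling the 'JUMP' entries as 'ADD' would not give you a machine that adds; it would give you a description that no longer matches what the machine causally does.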
That said, my explanation presupposed that there is an objective fact of the matter about what the processor is doing at a physical level. For example, it presupposed that there is an objective fact of the matter about what parts of the processor constitute flip-flops, and about what physical state those flip-flops are in. Do you want to question that presupposition? Putting this in the context of Bishop's logic gate example, do you want to suggest that we are not limited to interpreting that system as an AND gate or an OR gate? That we could equally well interpret it as an XOR gate, or a NOT gate, or as having the physical architecture of a whole Mac?
Hi Richard,
You can see causal processes in Putnam mappings too, though. The physical states he's mapping are caused by earlier physical states, just as the logical states are caused by earlier logical states. Had the earlier physical states been different, then the later states would have been different too.
> it presupposed that there is an objective fact of the matter how about what parts of the processor constitute flip-flops, and about what physical state those flip-flops are in. Do you want to question that presupposition?
Yeah, I think we can question what constitute flip-flops and what logical state those flip flops are in (not so much the physical state).
We could potentially interpret Bishop's logic gate as any number of things if we allow ourselves a sufficiently arbitrary interpretation. For instance, we can contrive a mapping where it's a NOT gate for input A as long as the mapping of true/false to 1/0 can be different for inputs and outputs and can change depending on the voltages going into A and B.
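Spelled out as code, one such contrivance might look like this toy sketch, where 'h' and 'l' stand for high and low voltages:

def and_gate(va, vb):
    # the physical gate: output voltage is high only when both inputs are high
    return 'h' if (va, vb) == ('h', 'h') else 'l'

def read_input(v):
    # conventional mapping for inputs: high voltage means True
    return v == 'h'

def read_output(vout, va, vb):
    # contrived, context-dependent mapping for the output: what an output
    # voltage "means" depends on the voltages going into A and B
    if va == 'h':
        return False    # in this context, any output voltage reads as False
    return vout == 'l'  # in this context, low voltage reads as True

# Under this interpretation, the physical AND gate "is" a NOT gate on A:
for va in 'hl':
    for vb in 'hl':
        assert read_output(and_gate(va, vb), va, vb) == (not read_input(va))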
It seems to me that anything is possible as long as we allow our interpretations to get bizarrely, perversely complex. Searle said only that there is some pattern of movement that is isomorphic to the operation of WordStar. That seems right to me. This jiggle corresponds to that jump instruction. That collision corresponds to that bit flipping. If there are no limits on how ridiculous our interpretations get, then I don't see how he can be wrong.
So it would seem the physicalist computationalist must believe that there are such limits. But defining them clearly is not so easy, much less defending these criteria from first principles. Anything you come up with is probably going to look like an ad hoc solution to rule out walls and rule in computers.
> If there are no limits on how ridiculous our interpretations get, then I don't see how he can be wrong.
I can make no sense of this. Why do you think that ridiculous interpretations are relevant? I somehow doubt that Searle would agree that his Wordstar interpretation is ridiculous.
I'll get back to the other issues when we've settled this, because, on the face of it, it seems pointless to engage in a discussion where we take ridiculous interpretations seriously.
Hi Richard,
Well, I don't know that Searle would agree that his Wordstar interpretation is ridiculous, since he uses it in a serious argument. But he would agree that it might be arbitrary and complex. When I call such interpretations "ridiculous", I'm just saying they are extremely arbitrary and complex, the kind of interpretation you would find ridiculous. And Searle would agree with that.
I think such ridiculous interpretations are relevant because whether an interpretation is ridiculous or not is, it seems to me, entirely subjective, and a subjective judgement cannot be the basis of an objective fact.
A law of gravitation that says "Massive objects distort space" isn't good enough. Suppose Searle were to say that "according to some interpretations of 'massive', an atom is massive. Therefore every atom in the universe is distorting space". This is, as far as we know, quite right. It's not fair to dismiss this conclusion by saying that interpreting an atom as "massive" is ridiculous. It is and it isn't. It's certainly not big by normal standards, but it's not as if it's completely massless. There can't really be a law of nature that says only massive objects distort space until we have criteria to define what exactly we mean by "massive" or a way to relate the degree of massiveness to the degree of space distortion (which is what we have). Since it doesn't seem to be plausible to relate "degree of ridiculousness" to "degree of consciousness", we're left looking for clear, objective criteria instead.
So, a statement such as "This object A is reasonably interpreted as implementing psychogenic algorithm B therefore A has a mind" just doesn't work as far as I'm concerned, at least until we can specify exactly what we mean by "reasonably interpreted".
I'm all for attempts to rule out those ridiculous interpretations, but those attempts need to do so by finding objective criteria by which they can be ruled out -- this is what Chalmers and co are trying to do. Dismissing them out of incredulity isn't really good enough.
Hi again DM,
> When I call such interpretations "ridiculous", I'm just saying they are extremely arbitrary and complex, the kind of interpretation you would find ridiculous.
I'm still having a hard time understanding this. I suspect that when you say "ridiculous" you mean "seemingly ridiculous". Anyway, my goal is to show you that some such interpretations are objectively wrong.
But I'll briefly digress from that goal, because I want to clarify my position and to point out something interesting Searle wrote. I've just been re-reading part of Searle's paper which you quoted from above. Searle himself describes a response to the argument you quoted:
"I think it is probably possible to block the result of universal realizability by tightening up our definition of computation. Certainly we ought to respect the fact that programmers and engineers regard it as a quirk of Turing's original definitions and not as a real feature of computation. Unpublished works by Brian Smith, Vinod Goel, and John Batali all suggest that a more realistic definition of computation will emphasize such features as the causal relations among program states, programmability and controllability of the mechanism, and situatedness in the real world."
This is broadly my response, where I emphasize the first item in that list: causal relations among program states. Your last comment mentioned Putnam's discussion of causes. My point is that it's not sufficient for state S[i] to be a cause of state S[i+1]. It has to cause it in the right sort of way. That was the point of my example of systems A and B. Both proceed through the same series of RAM states, and there is a causal dependency, since you wouldn't get state S[i+1] if the previous state hadn't been S[i]. But state S[i] isn't the sole cause of state S[i+1]. State S[i+1] is also a result of the physical causal nature of the processor or IC, which is relevantly different in each system. The causal processes are different in each system. That's why it looks like one is calculating the chess positions while the other is just reading them off the list. It looks that way because it is that way. I don't think you've given an alternative explanation of why it looks that way.
(Incidentally, it's only at the end of each instruction cycle that the two systems are in the same state, and even then not strictly the same, since the processor of system A is not the same as the IC of system B, though each of those chips returns to its own initial state by the end of each cycle. During the course of a cycle the two chips are going through very different states.)
Since Searle says that "it is probably possible to block" that argument in some such way, and doesn't attempt to address such responses further (unless he does it elsewhere), he seems in effect to have largely abandoned this argument. He goes on to make a different argument, about "syntax".
Returning to my previous line of thought...
I wrote:
>> it presupposed that there is an objective fact of the matter about
>> what parts of the processor constitute flip-flops, and about what
>> physical state those flip-flops are in. Do you want to question that
>> presupposition?
You replied:
> Yeah, I think we can question what constitute flip-flops and what logical
> state those flip flops are in (not so much the physical state).
But I was asking about the physical states. I don't think I made myself clear. Let me go down to the level of transistors. Do you think we can interpret anything we like as a transistor? Or would you agree that there's an objective fact of the matter about how many transistors a processor has, and how they're connected up? Do you agree that there's an objective account to be given about the physical states these transistors go through in operation, and about the causal processes involved? Or can we equally well give any account we like, and no account can be objectively wrong?
Sorry, DM, I've decided not to comment any further. Much as I enjoy these discussions, I get a bit obsessed and end up spending far too much time on them, when I should be doing other things. I'll just sum up by saying that I think my view is a better explanation, one that doesn't require us to accept any apparent absurdities or appeal to Platonism to avoid those absurdities.
Best wishes,
Richard.
Hi Richard,
Regarding "why it looks that way", it's because certain interpretations are much more natural than others. It looks like it's reading it off a list not because this is objectively the case but because it is much easier and more natural to interpret it thus.
There is a fact of the matter regarding how many transistors the processor has and how they're connected up because these are physical objects and states of affairs. You can certainly make an interpretation of what is going on at the level of transistors and the signals between them. But there are many other ways to interpret the system, e.g. by looking at the movement of particular electrons and imposing an arbitrary post hoc interpretation on your observations.
Anyway, thanks for sharing your thoughts Richard.
Hi Mark,
This was a good read, thanks for writing it. This is a topic that I've never really thought about much. I look forward to the followup post.
I have a hard time accepting the idea that there is nothing objective about what a computation is, and thus that a rock or a wall is doing every possible computation -- obviously I need to think about this.
I'm also finding myself not assenting to full-blown computationalism. I conceive of consciousness as some sort of process, but I can also conceive of an equivalent computation, say as emulated by a Turing machine, having a sufficiently different process as to not be conscious.
Unfortunately I can't say much more as I don't feel that I have any convincing account of consciousness to offer in place of computationalism. (And the above may just be defects of my intuition.) I've always considered it way too hard a topic!
Cheers, Coel.
Thanks for reading, Coel.
Hi DM,
I'm trying to think through computationalism and whether I agree with it. Assuming that your previous blog posts make the case for it, can you point me at what you think are the best arguments for computationalism?
Cheers, Coel.
Hi Coel,
Most of my posts are probably more about defending it against certain criticisms than making a positive case for it.
But you could check out
http://disagreeableme.blogspot.co.uk/2012/12/strong-ai-story-so-far.html
http://disagreeableme.blogspot.co.uk/2012/12/strong-ai-evolutionary-parsimony.html
http://disagreeableme.blogspot.co.uk/2012/12/strong-ai-illusion-of-real-consciousness.html
http://disagreeableme.blogspot.co.uk/2013/01/strong-ai-naturalism-and-ai.html
And especially
https://scientiasalon.wordpress.com/2014/09/01/the-intuitional-problem-of-consciousness/
Hi DM,
I'm grossly late to the party, but I wanted to say that this is one of the best expositions of the problems of computationalism that I have yet come across---well written and accessible, while still sufficiently detailed so as not to weaken the arguments.
Regarding the counter-arguments, however, I think there's a strategy that can be used so as to make the arguments immune to (at least most of) them. The basic idea of Chalmers' counterfactuals, semantic or syntactic mappings, and the like, is to attack the 'simple mapping account' in one way or another---i.e. to rebut the idea that all it takes for a physical process to instantiate (or implement) a computation is that there must exist a mapping between its states and the logical states of the computation.
But the argument against computationalism can in fact be formulated without recourse to any account of implementation in particular: basically, the idea is to show that if we accept that a system S implements computation C1, then we ought to accept, on the same grounds (whatever they be), that it implements computation C2. This sidesteps the issue of what, exactly, it means for a system to implement a computation, and leaves us with the conclusion that either the system does not implement any computation at all, or we can equally well associate (at least) two different computations to it.
The argument runs roughly as follows. Let's take a somewhat simpler process than instantiating a mind, e.g. the addition of single-digit binary numbers with carry. That's our computation C1. The formal structure of that is that we have two (single-digit) binary variables, x1 and x2, and two (likewise binary single-digit) outputs, y1 and y2.
Now, the outputs are generated from the inputs as follows: x1 XOR x2 = y1, and x1 AND x2 = y2. Hence, y1 is the binary sum of both bits, and whenever y2 = 1, we know that a carry has occurred.
Now we implement this computation with a physical system S. This system needs two inputs I1 and I2, and two outputs O1 and O2. How we instantiate them is immaterial; we could simply have a system that we can initialize in one of four distinguishable states, then 'switch on' its evolution, which then eventually settles into one of three distinguishable 'output' states (since only the cases y1y2 = 00, 01, and 10 can occur as a result of the computation). We may also imagine a device such that we can apply either high or low voltage to two wires designated I1 and I2, and receive either high (h) or low (l) voltage at wires designated O1 and O2.
Let's go with the latter implementation for concreteness. Then, we wire things up such that whenever exactly one of I1 and I2 is at high voltage, O1 is at h, while if both inputs are at h or both are at l, O1 is at l. O2 is at h if and only if both I1 and I2 are at h.
Now, clearly, if we choose high voltage to mean the binary digit 1, and low voltage to consequently mean 0, then I can use this system to perform addition. Moreover, if we chain many such devices, we can perform addition of arbitrary binary numbers---thus including also nontrivial computations where we don't necessarily know the answer beforehand.
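As a minimal sketch of that (in Python, with 'h' and 'l' standing for high and low voltage):

def device_S(i1, i2):
    # the wiring described above: O1 is at h iff exactly one input is at h;
    # O2 is at h iff both inputs are at h
    o1 = 'h' if (i1 == 'h') != (i2 == 'h') else 'l'
    o2 = 'h' if (i1 == 'h') and (i2 == 'h') else 'l'
    return o1, o2

# Interpretation C1: high voltage means the binary digit 1, low means 0.
to_volt = {1: 'h', 0: 'l'}
to_bit = {'h': 1, 'l': 0}

def add_single_digits(x1, x2):
    o1, o2 = device_S(to_volt[x1], to_volt[x2])
    return to_bit[o1], to_bit[o2]  # (y1, y2) = (sum bit, carry bit)

assert add_single_digits(1, 1) == (0, 1)  # 1 + 1 = binary 10: sum 0, carry 1
assert add_single_digits(1, 0) == (1, 0)
assert add_single_digits(0, 0) == (0, 0)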
(continuing the above)
So my stipulation is that under every suitable formalization of implementation, this device computes the sum of binary digits---otherwise, we have simply chosen a bad formalization, since we clearly would want to say that S performs addition.
However, it's just as clear that there is nothing necessary about interpreting high voltage as 1, and low voltage as 0. We could on equivalent grounds choose the opposite: low voltage represents 1, high voltage 0. The system then still computes a perfectly well-defined function, and does so on all of its inputs---so we have no issue with omitted counterfactuals and the like. Call the computation it performs C2. Clearly, this computation is not addition: the input x1=1, x2=1 will produce the output y1=1, y2=1, instead of y1=0, y2=1 as in the case of the adder.
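Continuing the sketch above, the very same wiring under the inverted reading:

def device_S(i1, i2):
    # exactly the same physical wiring as before
    o1 = 'h' if (i1 == 'h') != (i2 == 'h') else 'l'
    o2 = 'h' if (i1 == 'h') and (i2 == 'h') else 'l'
    return o1, o2

# Interpretation C2: low voltage now means 1, high voltage means 0.
to_volt = {1: 'l', 0: 'h'}
to_bit = {'h': 0, 'l': 1}

def c2(x1, x2):
    o1, o2 = device_S(to_volt[x1], to_volt[x2])
    return to_bit[o1], to_bit[o2]

# The same hardware, read under C2, computes a different function:
assert c2(1, 1) == (1, 1)  # an adder would give (0, 1)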
However, if we are justified in considering S to perform addition, then we are equally as justified in considering it to perform C2---consequently, whether the system is an adder is not an objective fact pertaining to that system.
This conclusion is, in fact, general: the same trick can be performed for every physical system instantiating a computation taking some binary string to another binary string, and every (finite) computation can be framed in those terms. Consequently, we can do the same thing with any system we would consider to computationally implement a mind: if that computation is now C1, then there exists a C2, such that we have equivalent justification in asserting that the system performs this computation. Hence, whether a system implements a mind is not an objective fact pertaining to that system---which would seem to be fatal to computationalism, but is independent of whatever notion of 'implementation' we use, provided that this implementation entails that the system S can be understood to perform addition.
Hi Jochen,
Your praise is particularly appreciated as I'm getting little but hostility on Plato's Footnote at the moment in a debate on platonism.
It seems to me that the point you raise here is exactly the same one Bishop raised with his example of the AND/OR gate.
I feel I've answered this somewhere but it's probably quicker to type it out again than try to find it now.
If you interpret your mappings differently, you have a different function, sure, but the "contours" of that function are the same.
Like, if I have a white square drawn on a black background, I can invert the colours and I have a black square on a white background. It's a completely different picture from one perspective, but from another perspective it's still just a picture of a square. The contours are the same.
I feel like flipping the interpretations of the voltages is similar. It may be that the "contours" of the computation are what is important, i.e. the chain of cause and effect, not how we represent values as true or false. No matter how we map physical representations to truth or falsity, the same logical chain of cause and effect is taking place. The different mappings are entirely isomorphic, which to me means the two logical structures you get out of it are only superficially different with regards to how they are represented in notation but are identical in a deeper sense.
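A quick way to see the isomorphism is De Morgan duality; a minimal check:

from itertools import product

# Flipping the interpretation of every signal swaps AND with OR
# while preserving the circuit's structure:
for a, b in product([False, True], repeat=2):
    assert (not (a and b)) == ((not a) or (not b))
    assert (not (a or b)) == ((not a) and (not b))

A circuit read under the inverted convention is, node for node, the dual circuit: the same "contours", with the labels AND and OR exchanged.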
Hi DM,
it seems to me that the claim that it suffices for the 'contours' of the function to be the same to implement the same mind is a bit of a departure from standard computationalism. (Also, whether the difference between the black-on-white and white-on-black squares doesn't play a role is not obvious to me---you could use the two to transmit information, say white-on-black means 'don't launch the nukes' and black-on-white means 'launch the nukes'; this seems to be a difference we should take into consideration. And would Malewitsch have been as famous if he'd painted a white square?)
Usually, one would hold that the mind corresponds to the computation carried out; so that when that computation changes, so does the mind (and indeed, whether a mind is produced in the first place). That is, the mind is the user experience (supplying its own user) of a given program---but using the trick above, that user experience will differ (if I first was watching the latest Game of Thrones episode on my laptop, after the re-mapping, it's vanishingly unlikely that I'll be watching anything sensible at all).
Besides, the more complex the system, the more arbitrary its computational interpretation. Basically, I can interpret any system with 2^n input states and 2^m output states as implementing any binary function from n to m bits whatsoever, on equally justified grounds. If all of these were to instantiate the same mind, then that would mean that the mind is uniquely specified by the cardinality of its inputs and outputs, without any attention paid to the actual computation taking place. This seems wildly implausible to me.
Hi Jochen,
It may be a departure from standard computationalism. I don't know as I'm not really a standard computationalist. But simply inverting the interpreted logical values of all the signals in the system yields a mathematically isomorphic structure, so if you view a run of an algorithm as a mathematical structure, it doesn't matter how the inputs and outputs are mapped to true or false -- it's the same structure either way. My view is that this structure is what is conscious. I think even a more physicalist computationalist would be likely to draw similar conclusions.
> you could use the two to transmit information, say white-on-black means 'don't launch the nukes' and black-on-white means 'launch the nukes';
Yes, but you could equally have the opposite convention. The "content" of the white and black (or in a circuit, the ones and the zeroes) doesn't matter. Computationalism is a flavour of functionalism, agreed? But on functionalism, all that matters is the functional role it plays in the system, and this will be unchanged no matter which mapping we use. It occurs to me that this is remarkably like Dennett's position on qualia which I share.
> Usually, one would hold that the mind corresponds to the computation carried out; so that when that computation changes, so does the mind
Not necessarily. I don't think trivial non-functional changes to a computation would have any effect on a mind. I maintain that just flipping the interpretation of all the bits does not meaningfully change the computation because you are left with an isomorphic structure.
> but using the trick above, that user experience will differ
I don't think it will, because the change in the computation is trivial and superficial. I regard isomorphic structures to be the same structure.
> Basically, I can interpret any system with 2^n input states and 2^m output states as implementing any binary function from n to m bits whatsoever
Possibly you can, using something like the pixies argument. But not using the argument from arbitrary interpretation of bits. As long as you interpret bits consistently throughout the system, then this trick only lets you flip between two negatives with the same contours, as it were. And the computationalist can plausibly insist on such a consistent interpretation.
Hi DM,
maybe we should try and agree on some definitions. To me, a computation is any realization of a computable function. Using binary logic operations, we are limited to some subset of the computable functions; but if we (plausibly) stipulate that computational resources are finite, then we can represent any (finitely) computable function under this paradigm. So for me, a computation is any process that implements some function f: {0,1}^n --> {0,1}^m, where the notation means the set of n- (respectively m-) bit strings.
Clearly, addition and whatever you want to call what happens when we exchange the bits are different functions on this view; hence, their implementations correspond to different computations.
You say that there is an isomorphism between the two computations. The most straightforward way I can see to make that notion precise (but please, correct me if you had something different in mind) is to understand the domains and codomains of each function f as an element of an n- (respectively m-) dimensional vector space over the binary field, and then to view each function f as an element of the Cartesian product of the vector spaces (i.e. as a list of tuples (x,y), with x being an n-dimensional vector with binary entries, and likewise for y with m dimensions).
Then, the mapping of bits to their inverse clearly yields an isomorphism; but the structure has many more isomorphisms, namely, all bijections into itself. But this would then mean again that all functions f (with identical domain and codomain) are isomorphic to one another (well, at least as long as we restrict ourselves to total functions, I guess).
So I don't see how to define a notion of isomorphism that singles out the bit-flipping, but doesn't collapse to identifying all other functions, as well.
Also, the way I understand functionalism is that it abstracts away from the implementation: that is, I could realize my adder with vacuum tubes or transistors, and which one I use doesn't matter; but the example I gave does the opposite: it shows how to implement a different function using the same physical system. I think you'll end up trivializing the notion of functionalism if you require that all functions implemented using a single system be identified, while also allowing the variation of physical realization---one end of the association needs to be held fixed, and usually one stipulates that the same function is implemented (which it isn't under the re-interpretation of the voltages as logical values).
So to me, under the first interpretation of the voltage values, the system performs one function---addition: if you input logical values, the output will be their sum. But if you change the interpretation, you change the function that is being implemented; and if you want to 'chunk' these two functions as being 'the same' in some sense, then I don't see any way how to consistently do this in a way that doesn't trivialize the whole notion.
Perhaps another way to see the difference between the two functions: if you feed the output of the system into a different one, that needs the sum of the two inputs in order to perform its function (say, a counter), then, upon changing the interpretation, the output arriving at the second system will no longer enable it to perform its operation correctly; while a function-preserving change (say, replacing transistors with vacuum tubes) will not be noticeable from the second system's point of view.
Hi Jochen,
A computation is not simply a realisation of a function. A computation is a realisation of an algorithm running on some input.
Bubblesort is not quicksort is not mergesort, but they all realise the same function.
This probably seems to contradict what I said in my last comment, where I said computationalism is a species of functionalism, and all that matters in functionalism is the functional roles played. Here, I don't think "function" necessarily means function in a mathematical sense, which is simply a mapping of inputs to outputs. It's more about function in a practical sense, e.g. the function of a pump is to move water. Different bits of logic within an algorithm play different functional roles even if the algorithm implements a particular mathematical function. Quicksort functions differently from mergesort in this practical sense. The logical flow of information through the algorithm is different in each case.
> hence, their implementations correspond to different computations.
I don't think so really, as the flow of information through the system is the same. It's the same causal network represented differently.
> is to understand the domains and codomains of each function f as an element of an n- (respectively m-) dimensional vector space over the binary field, and then to view each function f as an element of the Cartesian product of the vector spaces
No, that's not what I mean. What I mean is that if you were to draw a graph or a flowchart depicting how information flows through the system, those graphs would have the same structure if you flipped all the bits. You would just be changing some of the labels. True would become false. AND would become OR.
> then, upon changing the interpretation, the output arriving at the second system will no longer enable it to perform its operation correctly;
I don't see how simply changing your interpretation will have any effect. The same physical state of affairs will obtain. Everything should continue to function as before, no?
Hi DM,
I think the usual way to view computations would be the opposite of yours: an algorithm is a particular realization of a given computation, but different algorithms may exist performing the same computation. That's certainly the way these terms get used in everyday language: I may calculate, say, the determinant of a 3x3-matrix using Sarrus' rule, or by the method of principal minors---but in the end, I'll have computed the same thing. So different sorting algorithms are just different ways to perform the same function---after all, any process (such as a human user) that cares only about the outputs of the process won't be able to tell the difference (provided there are no time-critical elements).
Likewise, whether I play Counter-Strike on a PC or a Mac won't make any difference regarding the user-experience, but the internal machination may be quite different. If you want your account of mentality to depend on these internal machinations, it seems we're left with a zombie problem: systems performing the same function may or may not be conscious, depending on how that function is realized.
But anyway, my point is merely that the two functions that are performed under the re-mapping---addition and 'schmaddition', let's say---are appreciably different. If I went and did the same re-labeling on a computer performing a quicksort, then in the end, I won't be left with a sorted version of the original input; if I did it on one running Counter-Strike, then the output would no longer be what I'd expect from that program, and indeed, would probably not be anything sensible at all anymore.
And usually, the mind is supposed to be analogous to that output---i.e. to the user-experience of playing Counter-Strike, say. At least, that's how I'd understand computationalism.
Also, I don't think you can say that the flow of information in the system is the same under the re-labeling: the adder, if fed the values x1=1 and x2=1, will output y1=0, while the schmadder outputs y1=1---so there's a flip of the value in the first case, but not in the second. So if we were to graph this as a rail diagram, we would have the two ingoing bits at 1, and the first outgoing bit as 0, the second as 1, in the addition case---i.e. the value of the first bit is flipped; while in the schmadder case, both values stay the same. So the schmadder, in that particular case, outputs the same information as is input, while the adder doesn't.
I suppose you'd oppose the idea of having a different mapping for the different inputs---say, for I1, h means 1, l means 0, while for I2, h means 0, and l means 1. I don't really think there's fundamental grounds for this opposition, since under this mapping, I can use the system to perform computations perfectly well---but I'll accept it for now.
But what if the information-carrying systems on both inputs are physically different? Take for instance a hybrid electro-optical circuit, where I1 is realized by a wire where a voltage is applied, but I2 is realized by an optical fibre, into which you either shine horizontally or vertically polarized light.
Then it seems to me that any permutation would be as good as any other---I could use h and l to mean 1 and 0 at the wire, and hor and vert to mean 1 and 0 at the fibre; but I could just as well simply invert the mapping at the fibre, using hor as 0, and vert as 1. Doing something similar at the outputs---say, O1 is represented by a light that can be either red or green, while O2 is a speaker that produces one of two tones---I can implement any function from two bits to two bits. Would you still hold that all computations are the same in this case? Or is this somehow an 'illegal' implementation?
Hi Jochen,
> I think the usual way to view computations would be the opposite of yours:
I don't think so, not among computationalists at least. You're describing behaviourism. Most computationalists would think that the logical structure of the implementation is indeed very important. For example, in principle, we might go about implementing a Turing Test-passing AI in two completely different ways (a) by simulating the neural network of a person or (b) by using an enormous lookup table, where the keys are the text of the conversation so far (terminating with the most recent input from the human interlocutor), and the values are the next response as would be given by a particular human. Most computationalists would regard the implementation in (a) as conscious and the implementation in (b) as unconscious, even though they might behave identically.
> but the internal machination may be quite different.
While I agree with the general point here (users don't care about implementations), I disagree with the specific point. If you play Counter-Strike on PC or Mac, at some level of description pretty much the same algorithm is instantiated in each case. That level of description is the one I would regard as being the essence of the Counter-Strike algorithm.
> it seems we're left with a zombie problem: systems performing the same function may or may not be conscious, depending on how that function is realized.
I don't see it as a problem. Most computationalists would agree with this. The implementation of the function is important -- not the physical details such as whether we use logic gates or vacuum tubes, but the logical implementation regarding how information flows.
> that the two functions that are performed under the re-mapping---addition and 'schmaddition', let's say---are appreciably different.
Again, while I agree with your general point, your specific example doesn't work. It turns out addition and schmaddition are equivalent. If you flip all the ones and zeroes in the truth table of a full adder, you end up with the same truth table (just with the rows in a different order). So maybe stick with Bishop's example of the logic gate which can be either AND or OR.
> I won't be left with a sorted version of the original input
Actually, I suspect you would, but the order of the input would be reversed. Instead of sorting in increasing order, you'd have sorted in decreasing order. A trivial difference.
> if I did it on one running Counter-Strike, then the output would no longer be what I'd expect from that program,
Ah, but it would. Because the input to Counter-Strike is not ones and zeroes but key presses and mouse movements. As long as the remapping were consistent throughout the system, then the user would not notice any difference. The difference would be internal and hidden -- but now the AK-47 is represented by the bit string 01110001 instead of the bit string 10001110.
> would probably not be anything sensible at all anymore.
You seem to have this intuition that flipping all the bits consistently throughout a computation yields nonsense. I don't share that intuition. You will end up with a very similar, only superficially different function. AND yields OR. Addition yields addition. Again, think of inverting an image. You don't end up with random noise if you invert a clean image. You end up with an equally clean image depicting the same kinds of things. Similarly, if you invert the amplitudes of a sine wave you end up with the same sine wave shifted by half a period, and if you listen to the two as audio waveforms then you literally cannot tell the difference.
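If the audio claim seems doubtful, here's a quick way to check it -- a throwaway Python sketch (assuming numpy is available; nothing here is canonical). A waveform and its inversion have identical magnitude spectra, which is roughly what the ear responds to:

import numpy as np

t = np.linspace(0, 1, 1000, endpoint=False)
wave = np.sin(2 * np.pi * 5 * t)   # a 5 Hz sine over one second
inverted = -wave                   # flip every amplitude

# Identical magnitude spectra, so the two sound the same.
print(np.allclose(np.abs(np.fft.rfft(wave)), np.abs(np.fft.rfft(inverted))))  # True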
So I see flipping as an entirely trivial operation that doesn't really change the character of the computation taking place.
> the adder, if fed the values x1=1 and x2=1, will output y1=0, while the schmadder outputs y1=1
So, since you don't specify the carry in, I guess you're talking about a half-adder. So you're right that addition in the half-adder sense isn't symmetrical. We turn the computation of an XOR and an AND into an XNOR and an OR. But I'm saying that if you flip the interpretation then you also need to flip the inputs. If you flip the interpretation without also flipping the inputs, then you're effectively just sending different inputs into the same algorithm. If you do flip the inputs, then you end up with the same structure. x1=0 and x2=0 will output y1=1 -- there has been a flip in the value as desired.
Flipping the inputs is justified because if we're looking at a running physical computation and trying to interpret what logical computation it was carrying out, our decision of whether to map truth to low voltage or high voltage will determine how we see the inputs. Our interpretation will change whether we see ones or zeroes, ANDs or ORs, but it won't change the relationship of the inputs to the outputs, e.g. whether the outputs are the same as or flipped from the inputs.
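To make this concrete, here's a rough Python sketch (the voltage labels 'h' and 'l' and all the names are just mine for illustration). The physics of the device is held fixed; only the reading of voltages as bits changes:

# A physical half-adder: two input voltages, producing carry and sum
# voltages. This behaviour is fixed by the physics.
def physical_half_adder(v1, v2):
    carry = 'h' if (v1, v2) == ('h', 'h') else 'l'
    total = 'h' if (v1 == 'h') != (v2 == 'h') else 'l'
    return carry, total

std = {'h': 1, 'l': 0}    # the usual reading of voltages as bits
flip = {'h': 0, 'l': 1}   # the inverted reading

def read_as_bits(mapping):
    # The logical truth table we see, given a voltage-to-bit mapping.
    volt = {bit: v for v, bit in mapping.items()}
    return {(x1, x2): tuple(mapping[o] for o in physical_half_adder(volt[x1], volt[x2]))
            for x1 in (0, 1) for x2 in (0, 1)}

adder = read_as_bits(std)       # (1,1) -> (1,0): computes x1 + x2
schmadder = read_as_bits(flip)  # (1,1) -> (1,1): computes x1 + x2 + 1

# Flipping inputs and outputs together carries one table exactly onto the other:
assert all(schmadder[(1 - x1, 1 - x2)] == (1 - c, 1 - s)
           for (x1, x2), (c, s) in adder.items())

That last assertion is the isomorphism I mean: the two readings are mirror images of a single underlying structure.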
Hi DM,
Regarding the lookup table, you'll get a perfectly sensible verdict out of the Turing test: the entity you're conversing with was indeed sentient, it's just that you're not conversing with the computer, but with whoever drew up the table. The computer is just a proxy, really. So the functional verdict that anything that acts like a conscious entity ought to be regarded as such still holds true.
Regarding Counter-Strike, it might be true in the specific implementation that both systems implement the same algorithm on some level (but then, that's true of any two systems performing the same computation, on some level), but one could draw up genuinely different algorithms leading to the same user experience---see bubblesort, quicksort and so on.
> It turns out addition and schmaddition are equivalent.
Again, I don't see a notion of 'equivalence' under which the two are equivalent, but any other functions of two bits aren't. (See also my last post on systems with differently realized inputs and outputs, which I suspect you may have missed.)
The adder, if we order the output bits as y2y1, yields the sum of its input values (sorry for the confusing labeling), i.e. A(x1,x2)=x1+x2=y2y1. The schmadder implements the function S(x1,x2)=x1+x2+1=y2y1. These are, to me, different computations. The computational task the adder solves is 'compute the sum of the inputs', while the computational task solved by the schmadder is 'compute the sum plus 1'.
Now if, by analogy, the task of the system under the original interpretation is to compute a mind, then I don't see any reason to suppose that after the re-interpretation, it still does---after all, after the re-interpretation, the system no longer computes the sum of its inputs. It solves a different task.
And this is a perfectly legitimate task, I might add: you might be interested in the sum+1 of two inputs, and if so, you can use the schmadder to compute it. No 'flipping of inputs' needed.
In general, flipping the interpretation won't yield anything sensible---a program outputting 'Hello World!' in binary notation would, after the flip, yield something not all of whose entries are even valid codewords under the standard ASCII-encoding. The same goes for Counter-Strike: only if you assume that the input/output devices compensate for the flip will you get the same user experience.
This would again be the additional flipping of inputs and outputs, but the point is exactly that you get a different computation out if you only change the interpretation---i.e. computing the sum+1 instead of the sum. I mean, you certainly can't forbid anybody who wants to compute the sum+1 from using my example system in that way!
Hi Jochen,
As it happens, the response you gave to the Turing Test is more or less the same as the one I give myself, but it is not the response most computationalists would give. Most computationalists would say there is no consciousness in the lookup table example.
In any case, I think the computationalists are right that the implementing algorithm matters for consciousness. There is a difference between a conscious person with locked-in syndrome and an unconscious person in a persistent vegetative state, even though both of them might behave in precisely the same way. Computationalism is not behaviourism.
> Again, I don't see a notion of 'equivalence' under which the two are equivalent
Have a look at the truth table for a full adder. Swap all the ones and zeroes. It so happens that in this specific example, you will end up with the same truth table. Some functions are symmetrical under the bit-swap.
Like this simple example:
A B O
0 0 0
0 1 0
1 0 1
1 1 1
If you flip all the bits, you get
A B O
1 1 1
0 1 0
1 0 1
0 0 0
It's the same truth table, just out of order. So this function is symmetrical under bit flipping. Addition (using the full-adder truth table) is the same.
I'm only saying that this particular example is not the best for making your point. I don't deny that there are plenty of functions which are not so symmetrical. The half-adder truth table is not, for instance, and that's what you're actually talking about so never mind me. It was just an interesting observation, I thought.
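For anyone who wants to check the symmetry claims, here's a throwaway Python verification (my own code, nothing canonical):

from itertools import product

def full_adder(a, b, cin):
    total = a + b + cin
    return total // 2, total % 2   # (carry out, sum)

def half_adder(a, b):
    return a & b, a ^ b            # (carry, sum)

def is_self_dual(table):
    # True if flipping every bit of every row gives back the same set of rows.
    return {tuple(1 - bit for bit in row) for row in table} == table

full_table = {(a, b, c) + full_adder(a, b, c) for a, b, c in product((0, 1), repeat=3)}
half_table = {(a, b) + half_adder(a, b) for a, b in product((0, 1), repeat=2)}

print(is_self_dual(full_table))  # True: the full adder survives the flip
print(is_self_dual(half_table))  # False: the half adder does not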
On your example with Hello World and Counter-Strike -- you're flipping the bits within one part of the system and not flipping the bits in another. That's the only reason it won't work, because the two parts of the system can't talk to each other. This is entirely equivalent to inverting all the inputs and outputs to a computer running Counter-Strike. It's still running Counter-Strike but now it's getting nonsense as input and the outputs it's sending are garbled. I don't know what would happen if you actually negated all the signals on a monitor cable. I suspect you would be more likely to see something like an inverted image than nonsense.
Hi DM,
I don't really think my conception of computationalism collapses to behaviorism---after all, if I just cover the monitor of a computer, or in another way mask the output, the computation (according to somebody who believes that computation is inherent in the system) would still be performed. So there is room to handle the case of a locked-in syndrome patient.
Furthermore, while I don't know what most computationalists would say on the matter, it's clear that the retort that a lookup table's responses would suffice to indicate consciousness in the process that yields the generation of those responses is at least open to them---in that case, we're really just using a huge amount of 'precomputation', and we're testing the system that did the original computation.
Or, in other words, the lookup table and the computer reading off of it simply isn't the complete system, just as a walkie-talkie isn't the system being tested if you interview it. You might be deceived in attributing consciousness to the walkie-talkie itself, especially if it's installed into a complex-looking robot, but in that case, you're deceived merely about the location of consciousness, not about its existence.
>Have a look at the truth table for a full adder. Swap all the ones and zeroes. It so happens that in this specific example, you will end up with the same truth table.
Yes, there are specific examples where the re-interpretation does not change anything. But this is a special case, and the point I'm making is a general one.
To me, it seems a very reasonable definition to say that two computations differ if there exists at least one input on which they yield different results. In fact, I suspect that you'd have readily agreed with this definition if I'd proposed it at the start of this discussion...
Regarding Hello World, well, the idea is that you're looking at the binary output directly---just as in the case of the adder. I think the difference between the functions x1+x2 and x1+x2+1 is sufficient to at least call into question whether, if one corresponds to the computation of a conscious mind, the other does too; and it does show that which function is implemented computationally is not an objective fact pertaining to the system. And if that's the case, then to me, the most sensible conclusion is that there is no computation at all being performed unless it is so interpreted by a conscious (more accurately, intentional) interpreter---you could claim that it then just performs both computations, but this would necessitate the introduction of some novel metaphysical principle (e.g. that anything that's consistent with the structure of the system is also realized) which really is just an ad-hoc patch. It seems absurd to me that each sign should mean anything it can conceivably mean---if a cat walks over my keyboard, the resulting garbage is just that, and not a sublime poem, even though an interpretation might exist such that it is mapped to one.
But if we need interpretation as a basis to having a certain system compute, then computation obviously can't be used as a primitive in explaining consciousness (intentionality).
Hi Jochen,
> the computation (according to somebody who believes that computation is inherent in the system) would still be performed
Exactly. So the computation is whatever goes on behind some interface, not whatever happens at the interface. The function implemented by sorting algorithms is just the interface. The computation is the algorithm itself, so quicksort and mergesort are different computations.
> But this is a special case, and the point I'm making is a general one.
I understand. I was just saying we might as well stick with Bishop's example as it may be a better one. But the half-adder works too.
> it seems a very reasonable definition to say that two computations differ if there exists at least one input on which they yield different results.
I think that's necessary but not sufficient. Implementation details matter also.
> well, the idea is that you're looking at the binary output directly
If all the outputs are flipped, and you interpret them as if they have not been flipped, then it is not surprising that you won't read "Hello World". That does not mean that it is a different computation. That just means you don't know how to interpret it as saying "Hello World".
> and it does show that which function is implemented computationally is not an objective fact pertaining to the system.
I agree with you that there is not one correct interpretation of a system as carrying out a specific computation. There are many. We're discussing this on the other thread.
My point on this thread is that the flipping-bits argument is not convincing even to a physicalist computationalist because you end up with a mirror image of the original computation. Much is preserved in such a flipping operation (e.g. Counter-Strike is still Counter-Strike), and computationalism implies that what is preserved is what is important. Everything else is just substrate details, which the computationalist rejects as unimportant.
> you could claim that it then just performs both computations
That's not my claim. My claim is that they are the same computation. The two are isomorphic to each other. There is literally no logical difference between them -- the differences lie only in how we label them or implement them on a substrate.
> It seems absurd to me that each sign should mean anything it can conceivably mean
OK, I agree with you in light of what we're discussing on the other thread, but I'll answer from the point of view of a fictional physicalist computationalist we'll call Ed...
Ed: Nobody is suggesting that we need to interpret any sign as anything it could conceivably mean. I'm just saying that the bit-flipping operation yields precisely one additional rendering of the computation that is happening, and this rendering is equivalent. Under bit-flipping, AND can become OR, but it cannot become XOR, NAND, or whatever else you might want. It has precisely one mirror image and the two are symmetric equivalents.
So, in isolation, AND is fundamentally the same computation as OR, in that the two are isomorphic. In order to meaningfully distinguish between them, you need to use them in concert with a consistent mapping of substrate state to truth value. Any time you flip the bits of an entire computational system consistently, you end up with an equivalent system. You only find problems if you do it inconsistently.
Hi DM,
>The computation is the algorithm itself, so quicksort and mergesort are different computations.
Again, it depends on what you take computationalism to mean. Usually, I would say, the mind is supposed to emerge from the computation in the same way that, say, a simulation does---i.e. it's basically the output of the computation. Thus, computations that yield the same output yield the same minds.
Note that I don't mean, by output, things like speech acts and behavior. Rather, you could consider the 'virtual machine' as Dennett uses the metaphor---the subjective experience is the result of the computation, in the same sense as the display of Counter-Strike on some screen is.
>I think that's necessary but not sufficient.
I'd say it's the other way around: it's sufficient, but not necessary, at least if I keep an open mind regarding things like mergesort and bubblesort being different computations (which nevertheless produce identical outputs).
But clearly, if you were to try and find out whether two computations differ from one another, one thing you'd do is feed them all possible inputs and observe the outputs---and if you observed a difference in the output, you'd conclude it's a different computation. It implements a different function, so it's not the same program; it would necessarily be represented using different Turing machines, or different Boolean gates, and so on.
I mean, operationally, how else would you define a difference between two computational systems, other than seeing what they produce given test inputs? If their behavior is the same, then whenever you can use the one, you can use the other, as well. But if their behavior is different, then that's no longer true---not, at least, without introducing some additional machinery, physical or mental, to 'translate' between inputs and outputs. And such machinery you can add to translate between any two computational systems (subject to the usual cardinality constraints regarding input- and output-sets).
>If all the outputs are flipped, and you interpret them as if they have not been flipped,
I don't 'flip' the outputs (and inputs), not really: I'm just pointing out that there are (at least) two different ways to interpret them as corresponding to logical values within a computation, with either being just as valid as the other. (And of course, as in the example with differently realized inputs and outputs, there are, in fact, a great many more such ambiguities.)
So neither of the ways to interpret them as logical values is the 'right' one; consequently, neither is the 'flipped' one. Using one interpretation, I can perform addition; somebody else, using the other, performs schmaddition, with just the same justification as I do.
In the Hello World example, this then means that I, using one particular interpretation, observe the output corresponding to a binary encoding of 'Hello World'; somebody else, using an equally as valid interpretation, observes a completely different output. Likewise with Counter-Strike.
The system computes, on either interpretation, and either computation has as much claim as being 'what the system computes' as the other. There is no a priori way to say that *this* is the 'flipped' interpretation, and hence, you need to interpret the output as having been flipped.
>you end up with a mirror image of the original computation.
I don't see why 'being a mirror image' shouldn't count as a concrete difference. Certainly, if I were replaced by a mirror image of myself, my cardiologist would be quite surprised; and if my mirror image committed a murder, then my fingerprints would not be at the scene, and I wouldn't go to jail, since I wouldn't be guilty.
(As an aside, chirality is actually a well-worn problem for structuralist accounts: the fingers of your left and right hand instantiate the same structure, yet your left and right hands are distinct.)
Furthermore, looking at a negative is different from looking at the positive: I couldn't reliably select an outfit with matching colors if I was presented only with negative-color images of clothes (although, if you believe my wife, I can't do that even when presented with the positives, so the point may be moot).
I'll agree that it may be the case that two computations that are flips of one another may implement the same mind. But I don't agree that they have to, and I'd even consider it surprising if they did---after all, one can clearly distinguish the behavior of the adder from that of the schmadder, so I see no reason to believe that one shouldn't be able to do that if both of these computations instantiate a mind.
Hi Jochen,
> There is no a priori way to say that *this* is the 'flipped' interpretation
But that's what I'm saying. I'm saying the two interpretations correspond to the same computation. I'm not saying one interpretation is more factually accurate than the other.
> and hence, you need to interpret the output as having been flipped.
You need to interpret the output only if you're in the business of interpreting the output. To interpret it as saying "Hello World", you need to use the mapping that yields "Hello World".
There is no fact of the matter about what mapping will yield the correct character sequence. That's an I/O issue. From the point of view of the computation, all that matters is the causal relations that obtain between the bits in various parts of the computation.
> Certainly, if I were replaced by a mirror image of myself, my cardiologist would be quite surprised;
Because you've flipped one part of a system but not flipped the systems it interfaces with. Another I/O interface error. But suppose there were a mirror earth where everything is precisely identical but for left and right being flipped. Your cardiologist would not be alarmed in that case. In my view, your conscious experience would be identical. You would call "left" right and "right" left.
> the fingers of your left and right hand instantiate the same structure, yet your left and right hands are distinct.
They are also located in different places, so I don't see how chirality specifically is a problem. Two identical spheres are also distinct even if they are perfectly symmetrical.
> Furthermore, looking at a negative is different from looking at the positive
Another interface mismatch.
> one can clearly distinguish the behavior of the adder from that of the schmadder
Only really if one has preconceptions about which interpretation to use. You can distinguish them side by side. But presented with a circuit built using unfamiliar or alien technology (where, say, bits are represented by north and south magnetic poles), you would be equally likely to figure out that it was an adder whether it was implemented with one convention or with another. Whether north or south represented truth, you would come away with the same impression of the behaviour of the system.
Hi DM,
I'm sorry, but I don't think I get the argument you're trying to make. In particular, these two assertions:
>I'm not saying one interpretation is more factually accurate than the other.
>you've flipped one part of a system but not flipped the systems it interfaces with
seem to be flatly contradictory to me. Either both interpretations are actually equally viable---then I can compute the sum of the inputs using one interpretation, and the sum+1 using the other. You can equally well say that the sum needs to have its inputs flipped and is hence an I/O error as you can say that the sum+1 does.
Or, one of these interpretations is the right one---then there is a fact of the matter regarding having to flip inputs and outputs; then, one of them is an I/O-error, while the other isn't.
Both taken together yield a contradiction, however: you're both saying that they are on equal grounds, and that one is an I/O-error, which implies that the other is correct, which means that they're not on equal grounds.
Similarly, regarding me and my flipped twin, there is no fact of the matter whether one belongs to 'mirror Earth' while the other is from here. There are people with situs inversus whose internal composition is a mirror image of that of the main population; but still, they're perfectly normal humans, just differing from the rest of the population in that respect.
You'd be right if there were this Earth, where the heart is on the left in all humans, and a mirror Earth where it's on the right; then, if we'd encountered a mirror human, we could judge that they belong to mirror Earth. Then, the situation of me on this Earth and mirror me on mirror Earth would be exactly equivalent.
But that's not the case: both ordinary and mirror humans are perfectly normal people. So all you can say upon encountering mirror me, provided that you know that I'm not a situs inversus case, is that he's not me.
So ultimately, it boils down to this: if I hand you the device, and one of the interpretations, could you tell whether it's right, or whether you need to 'flip' the inputs and outputs? If I only handed you the device, could you figure out which interpretation is the non-flipped one, the one where there's no I/O-error? Which function the system is meant to compute (because certainly, you'll at least agree that sum and sum+1 are different functions)?
And of course, that doesn't even go into the fact that for every input/output pairing a device produces, you could build suitable peripherals 'translating' it into whatever you fancy.
Finally, regarding chirality, the point is that if you'd specified a set of relations and a spacetime point at which to instantiate them, you'd still not know whether it'd be a right or left hand at that point---the structure itself underdetermines that question.
Hi Jochen,
> I'm sorry, but I don't think I get the argument you're trying to make.
I'm going to stick with the AND gate example, because that's simpler.
Say I have a standard physical AND gate. It can be interpreted as an AND gate or as an OR gate, depending on how one maps voltages to truth and falsity. Treating it as an AND or treating it as an OR is a convention of interpretation and neither of them is factually accurate. I believe that AND and OR are at a deep level the same computation, and the difference between an actual AND gate and an OR gate is only that this computation is realised differently -- a substrate difference that can be ignored by a computationalist. However, this substrate-level difference means that we can use this computation in different roles when embedded in a larger computation and when working with other gates.
If you need and expect the gate to compute AND, either because you're manually looking at the inputs or outputs or because you're embedding it in a system with other components that use the usual mapping and where an AND is required, then you had better use the mapping that treats the gate as AND. Otherwise you will not get the behaviour you expect.
> Or, one of these interpretations is the right one
So neither of the interpretations is intrinsically right when it comes to the subcomponent we're talking about (the AND gate or the half-adder) in isolation. But you need to use an interpretation consistently across a system. If your design means you need to compute a sum rather than a sum+1, then you will need the convention used by the rest of your system to be the one corresponding to the sum interpretation. If you flip the interpretation of the rest of the system, then your design will require the half-adder to compute a sum+1 instead, so on that interpretation of the larger design the sum+1 is the correct behaviour. You only get into trouble when you have inconsistent interpretations.
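Here's a toy Python sketch of what I mean by consistency (the gate and all the labels are made up for illustration). One physical circuit, two globally consistent readings; the voltages never change:

# One physical gate: outputs 'h' only when both inputs are 'h'.
def gate(v1, v2):
    return 'h' if (v1, v2) == ('h', 'h') else 'l'

def circuit(v1, v2, v3):
    # The same fixed physics throughout the system.
    return gate(gate(v1, v2), v3)

std = {'h': 1, 'l': 0}
flip = {'h': 0, 'l': 1}

def read(mapping, x1, x2, x3):
    volt = {bit: v for v, bit in mapping.items()}
    return mapping[circuit(volt[x1], volt[x2], volt[x3])]

bits = (0, 1)
# Under std the circuit computes x1 AND x2 AND x3; under flip, by De
# Morgan duality, the very same circuit computes x1 OR x2 OR x3.
assert all(read(std, a, b, c) == (a & b & c) for a in bits for b in bits for c in bits)
assert all(read(flip, a, b, c) == (a | b | c) for a in bits for b in bits for c in bits)

Either reading is internally consistent and perfectly usable; trouble only arises if one part of the system is read one way and another part the other way.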
> If I only handed you the device
If you only handed me a device, and that device did its own I/O, then I have no need to interpret any of its physical states as corresponding to truth or falsity. If I did want to make such an interpretation, e.g. for the sake of reverse engineering, either way would work just as well. It's just that I might find more recognisable components and ways of organising logic with one interpretation than another. A human is more likely to make a half-adder than an inverted half-adder computing sum+1, even if equally simple overall designs are possible either way. So, though neither interpretation is right, one interpretation may be more natural for a human, and this will be a giveaway as to the designer's intent.
> because certainly, you'll at least agree that sum and sum+1 are different functions
Sum and sum+1 are different functions in the mathematical sense. But you have shown that the two computations as implemented by a half-adder are isomorphic. They are the same computation, and which function you interpret that computation as carrying out is a matter of interpretation.
> you could build suitable peripherals 'translating' it into whatever you fancy.
So, as long as you do that, then you are resolving any inconsistency in interpretation in the system and everything will work.
> the structure itself underdetermines that question
OK, I see.
So, my view here is that the idea that there is something underdetermined here is an illusion. I believe for instance that the idea of a mirror universe where everything is precisely the same as this one but chirally flipped is nonsense -- I think there is no difference between that universe and ours (this reminds me of inverted qualia thought experiments, to which I have a similar reaction). I think these relations are all there are. There is only a difference when one can have a relation between two things of differing chiralities, the way I can bring my hands together. I don't think chiral differences have meaning when the two entities are completely separate in their own distinct coordinate spaces and never communicate. The same goes for other symmetries. I don't for instance think that it makes sense to posit a completely separate universe just like ours but where the arrow of time goes in the opposite direction. In order to make sense of the idea that things have been flipped, there needs to be some sort of interface where the two come in contact. So I can make sense of the idea that there is a mirror-earth a very great distance away in our space, or the idea that there was a universe on the other side of the Big Bang where the arrow of time was flipped. But not the idea that these symmetries are flipped in another completely disconnected universe.
This echoes my view on the bit-flipping. From a purely abstract computational perspective, there is no difference between the mirrored computations. We only have issues when we bring differing interpretations into contact with each other and expect them to work together.
This comment has been removed by the author.
Hi DM,
I think we're stuck. To me, the issue is perfectly clear: if I have two devices, and I enter the same commands into both, and they react differently, then they don't instantiate the same computation. It might be the case that two devices computing the same function count as, in some sense, different computations, but it is surely the case that two devices computing different functions are different computations.
(Actually, the more I think about it, the less I believe normal computationalism would even accommodate the difference between two different instantiations of the same function: an argument for computationalism is that you could replace neurons by a silicon chip showing the same behavior as the original neuron; on your conception, it would matter how this behavior is arrived at, i.e. whether it does quicksort or bubblesort, which leads to the whole fading qualia issue---i.e. functionalism is usually thought to entail a certain sort of modularity that seems absent in your version.)
And, not that it matters much, but the script for the Quantum Computation class I'm TA-ing characterized computations exactly as the functions from n- to m-bit binary strings; and thus, as the two functions performed by the adder and schmadder differ, they're different computations.
Perhaps somewhat more weightily, Kleene defines algorithms in terms of general recursion (see the wikipedia article), seemingly saying that different recursive functions are different algorithms. In that case, too, addition and schmaddition differ; so I can at least say that using Kleene's definition, both computations differ.
In fact, I haven't found the idea that two devices could produce different output on the same inputs, and yet still implement the same computation, anywhere---but that, of course, is just an argument from authority.
What's more persuasive to me is that the sum of two inputs simply is not the sum+1 of two inputs: they're different mathematical objects, different Platonic realities, if you will. Likewise, a negative is different from its positive, and my left hand different from my right. Certainly, I could change the rest of the universe in such a way as to eliminate the difference---but I would have to change it! Usually, one does not consider identity to depend on the state of the rest of the universe.
So I don't think your arguments to identify the two are really conclusive; certainly, it's at least possible for two minds to differ, even if they are related by inversion (since after all, the two functions, related by inversion, do differ, and thus, some things related by inversion differ).
You're saying that because they instantiate the same structure, there's no difference between my left and right hands; I'm saying, because they clearly differ, structure apparently doesn't suffice to capture all of their properties. Perhaps we just have to leave it at that.
Hi Jochen,
I'd be reluctant to leave it as long as we don't seem to understand each other, and there are a few points where you seem to be misinterpreting me in a manner I hope I can clarify. I'm pretty confident that what I am trying to say makes sense, but I am less confident that I am expressing it well. So I'm going to go deep on these points in the hopes of being as precise as I can.
Firstly, let me admit that AND and OR are different functions. Clearly. If I'm saying they are the same, in some sense, then I am either crazy or ignorant or attempting to make a subtle point which is not usually relevant. I think it's the latter.
Suppose I define a binary function f where the domain and the codomain are the set {A,B}, where f(A,A)=A, f(A,B)=B, f(B,A)=B and f(B,B)=B. This can describe the operation of either an AND gate or an OR gate. If we strip away all our baggage, our notions of truth and falsity, then this is what we are left with in both cases. I think it is this structure that is important to a computationalist -- at least it is what is important to me. Does this make sense to you?
We can distinguish AND from OR only when we bring this truth baggage or adopt a convention, as we do when we use an AND and an OR in the same circuit. But even then we can abstract the structure of the circuit as a whole and make the same kind of argument. When you strip out baggage associated with truth, or with ones and zeroes, such that you are left with purely abstract symbols such as A and B, then the logical structure of the circuit is immune to flipping operations. It is this abstract structure that I care about and not which values are true or false, one or zero.
> if I have two devices, and I enter the same commands into both, and they react differently,
What do you mean by having them react differently? Suppose one uses a white on black display, and the other uses a black on white display. Would you now regard them as performing different computations?
These are superficial differences, and to me, which convention we adopt to read "truth" and "falsity" is of this nature. I no more care whether a bit is true or false than I care whether the calculator uses black text or white. The way I see it, the device is not reacting differently at all; it just has a different I/O convention or substrate implementation, akin to implementing the device with transistors or with vacuum tubes. The two ways of realising this computation might be two different physical realisations, but the abstract algorithm or function they are realising is the same. An AND gate realises the function f by mapping A to TRUE and B to FALSE. An OR gate realises the function f by mapping A to FALSE and B to TRUE. These are simply different physical realisations of the same function f, and it's f that I care about, not AND or OR.
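In code, the point might look like this (a sketch; A and B are deliberately meaningless tokens, and all the names are mine):

# The bare structure: a function over two uninterpreted symbols.
f = {('A', 'A'): 'A', ('A', 'B'): 'B', ('B', 'A'): 'B', ('B', 'B'): 'B'}

def realise(mapping):
    # Dress the structure up with a choice of which symbol means which bit.
    return {(mapping[x], mapping[y]): mapping[out] for (x, y), out in f.items()}

AND = realise({'A': 1, 'B': 0})  # {(1,1): 1, (1,0): 0, (0,1): 0, (0,0): 0}
OR = realise({'A': 0, 'B': 1})   # {(0,0): 0, (0,1): 1, (1,0): 1, (1,1): 1}
# One and the same f; "AND" and "OR" differ only in the dressing.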
This comment has been removed by the author.
Continuing...
> normal computationalism would [not] accommodate the difference between two different instantiations of the same function
That's another good, subtle argument, Jochen, to the point that I deleted my first attempt to answer it as I don't think it was adequate. My hat's off to you. It's a treat to discuss this stuff with someone who understands it as well as you do.
Normal computationalism denies that making very low substrate level changes matters. It does not deny the significance for consciousness of making high level structural changes in the logical flow of information throughout a conscious system. It's only when you can draw a boundary around a system and keep the changes contained that this kind of interchangeability of implementation is possible. I would agree, for example, that if there is a discrete part of the brain that only performs sorting operations and no information leaks about how it does it, then it doesn't matter whether it uses bubblesort or quicksort. Similarly, it wouldn't matter to the world at large if the algorithm of the brain as a whole changed as long as the brain behaved in the same way, but making such a change could potentially alter or extinguish the consciousness of that person, rendering them a zombie. So indifference to such changes depends on whether you are on the inside or the outside of the boundary. Since the conscious part of my mind can discuss its consciousness publicly, it is not contained within a unit that only does some low level function such as sorting. However I would accept the logical possibility of the implausible idea that there are parts of my brain that have their own semi-consciousness of which I am unaware. If those parts were replaced and the same function implemented entirely differently, then those parts might become zombies without any impact on the consciousness of my public self.
This brings to mind the China Brain thought experiment, where people are used to simulate neurons. I believe the overall system is conscious. We could replace the conscious people with simple unconscious machines and the overall system would still be conscious.
The reason I deleted my original comment was that the above does not address the fading qualia argument. It seems we could draw arbitrary boundaries around whatever parts of the system we like, no matter how small or how large, and change the implementation without changing the interface. For instance, we could replace one whole hemisphere or the other, or both. I think the key here is that it is plausible that one could drastically change the implementation of components where the interface and implementation are loosely coupled, as with sorting. But I think most computationalists would deny that a whole hemisphere could be implemented without basically simulating all the neural connections and so on. Unlike sorting, it is not plausible that there is a radically different way of implementing that function, because the interface for a part of the brain is a great deal more complex, consisting of all the synapses and nerves that cross that boundary. This is not quite so for the brain as a whole, whose external interface is far narrower. Wherever there is a narrow, simplified interface, there is potential to change implementation radically. But there are few such interfaces in the brain.
But anyway, this point perhaps isn't that relevant if we're debating only whether we can flip the bits.
> but the script for the Quantum Computation class I'm TA-ing exactly characterized computations as the functions from n- to m-bit binary strings
That's a good characterization for the purposes of quantum computation. I don't think it works for philosophy of mind. Different definitions of computation will meet different needs in different contexts. I don't think there is one right way to think about it.
> In that case, too, addition and schmaddition differ
Only when interpreted according to a convention where the different states correspond to binary digits. If you abstract that baggage away from them as I did with AND and OR, then you end up with the same function.
> In fact, I haven't found the idea that two devices could produce different output on the same inputs, and yet still implement the same computation, anywhere
Nor would I expect you to, because that mischaracterises my position. My position is that schmaddition and addition are effectively realisations of the same function on different substrates. So, at the abstracted level I care about, they give the same outputs for the same inputs.
When the output of one computation trivially maps onto the output of the other, then they are the same computation. For instance, one technology might represent true in one way (a particle being spin up, say), while another might represent true in a completely different way (with an electric charge). Since computationalism holds that computation is substrate independent, it is trivially and obviously true that computationalism implies that however truth is represented in the system is immaterial. The fact that one system produces the output "spin up" and the other system produces the output "5V" does not show that they perform different computations. That's substrate level stuff. Nor does it matter whether that output represents true or false; all that matters is the relation it bears to the inputs to the system. You're focusing on the truth baggage and ignoring that relation.
> What's more persuasive to me is that the sum of two inputs simply is not the sum+1 of two inputs
It's the same function if you abstract away the interpretation that the bits represent ones and zeroes and focus on the relation the outputs bear to the inputs, as I did with AND and OR. It's only when you interpret it as addition of binary digits that they are different.
> You're saying that because they instantiate the same structure, there's no difference between my left and right hands
There is clearly a difference between your right and left hands, and of course I must agree with that. So I have to account for this difference -- and I have done so. We can make sense of the difference because the two are connected by the fact that they are embedded in the same coordinate system. We can model their differences and distinguish between them, and so can a computer system. I'm only saying these differences disappear when they are supposed to obtain between completely disconnected entities. Give me an example where there is a clear difference of symmetry between such completely disconnected entities -- the way two mathematical structures may be completely separate, and you may have a point.
Hi DM,
I'm certainly game for continuing this, if you feel I'm not inordinately polluting your comment section. I should also say that I feel a bit bad for quibbling so much: I still think the piece you wrote above is excellent, and I wouldn't want to generate the impression that any of my disagreements with you detract from it.
That said, I believe we do have a substantive disagreement, which isn't just rooted in me not understanding what you mean (or vice versa). But let's see if we can't tease this out somewhat better.
>This can describe the operation of either an AND gate or an OR gate.
Well, as you've defined it, it's simply not a binary function: it's a function over the alphabet {A,B}. You can then map this function to a binary function (over the alphabet {0,1}) in different ways---which is in fact an instance of the recurrence of the pixie problem within Platonic structures I've noted in the other thread.
You can also implement that function physically: say, high voltages stand for A, low voltages for B. Similarly, you can use a re-mapping to instantiate another function over the same alphabet, by interpreting high voltage as B, and low voltage as A.
The crux is really the association between physical states, and logical states---those are not the same, and mustn't be confused. So, without such an association, you simply don't have any computation at all; and whenever you make that association, you have to make a choice regarding which of the (at least) two possible interpretations to take, and hence, which of the two functions you instantiate.
>When you strip out baggage associated with truth, or with ones and zeroes, such that you are left with purely abstract symbols such as A and B
Truth and falsity, 1 and 0, and A and B are all equally abstract symbols. In a sense, using 1 versus 0, true versus false, or A versus B is just saying that a distinction can be made between the two (recall Bateson's old dictum: information is a difference that makes a difference).
From this, it then also follows that one mustn't confuse the two, and to me, claiming that AND and OR are the same computation is exactly this confusion: they are, in the logical space where they are defined, completely distinct operations---however, due to the fact that the physical is not identical with the logical space, there exists more than one way to physically implement them. It's this inherent ambiguity that's really all I want to point out.
So consider the abstract space of functions over the binary field. Each such function can be considered as a pairing of inputs and outputs---a particular subset of the Cartesian product of the set of 2-bit strings with the set of single bits.
The AND function is then the set {(00,0), (01,0), (10,0), (11,1)}, while the OR function is the set {(00,0), (01,1), (10,1), (11,1)}. A mapping that identifies the two loses structure: it's not an isomorphism. That is, the mapping isn't invertible: knowing a given element in its codomain doesn't single out a unique element in its domain.
>Would you now regard them as performing different computations?
That depends on whether the question of black-on-white or white-on-black is mapped to distinct logical states of the computation: if, for a certain decision problem, say, a black dot on a white screen denotes acceptance, and a white dot on a black screen rejection, then yes, there is obviously a difference. If, however, the distinction in the physical states is not mapped to distinct logical states---if I either get a black-on-white rendering of 'Hello World' or a white-on-black one---then there is no distinction.
But in the implementations I gave, the difference between high and low voltage is mapped to distinct logical states; hence, there is a distinction in the computation that is being performed if this mapping is changed.
>I no more care whether a bit is true or false than I care whether the calculator uses black text or white.
You can't really say this in general. Again, if the difference between black or white text is mapped to a difference in logical states of the computation, then you need to care about it; otherwise, you just perform a different computation, one obtained from the original by merging those logical states corresponding to different renderings of the text. I agree that this isn't usually the case in calculators, but that's just a contingent fact about that specific device for implementing computations, and hence, not something upon which you can base general conclusions.
>it just has a different I/O convention or substrate implementation
You need to be more careful about what you mean by substrate here. A difference in substrate is irrelevant only if the substrate has sufficiently many distinguishable physical states to represent all the distinct logical states of the computation. I can't represent a four-state FSA on a system that has only two states, but I can represent it on any system that has (at least) four states (which, depending on the mapping account you favour, may need to be causally connected in the right way).
If it has more states, then I can subsume physically distinct states---such as white-on-black versus black-on-white renderings---and map them to the same logical state; in a sense, you then have 'excess structure' that you can just forget about. Only then do you have the freedom to disregard certain differences in the substrate.
>Unlike sorting, it is not plausible that there is a radically different way of implementing that function.
Well, I wouldn't be so sure about that: since neural network computation is universal, and other paradigms---algorithms, Turing machines, register machines and the like---are as well, there exists an equivalent implementation, within any of these other paradigms, of any function that a set of neurons can perform. But a Turing machine making symbol substitutions on a tape isn't in any fundamental way similar to a neural network adjusting its weights, say.
>If you abstract that baggage away from them as I did with AND and OR, then you end up with the same function.
And well, I don't think it's legitimate to abstract away that 'baggage': this baggage is essential to the definition of the computation, i.e. to the sequence traced out in logical space by following through its steps. If you throw away the distinction between 1 and 0, you end up with a different computation.
Consider again the adder and schmadder, and let's formalize them in a slightly different way. Both can be considered as finite state machines, starting in some blank state (A0 and S0, respectively), and eventually ending up in one of four different output states, {00, 01, 10, 11}.
If the adder, in state A0, receives input 0, it will transition into state A1; if it receives input 1, it transitions into state A2. Likewise, the schmadder transitions into states S1 and S2, respectively.
Now, state A1 of the adder is characterized as follows: upon receiving input 0, it transitions into output state 00, while if it is 1, it transitions into output state 01. In state A2, receiving 0, it transitions into output state 01, while if it receives 1, it transitions into output state 10.
Thus, states A1 and A2 can be characterized by their transition functions: A1(0) = 00, A1(1) = 01, A2(0) = 01, A2(1) = 10.
Likewise for the schmadder: in state S1, it transitions into output state 01 upon receiving a 0, and into 10 upon receiving 1; while in state S2, it transitions into 10 if the input is 0, and into 11 if it is 1.
Consequently, the transition functions for the states of the schmadder are: S1(0) = 01, S1(1) = 10, S2(0) = 10, S2(1) = 11.
Now, from this, we see that the states A2 and S1 are logically the same: they have the same transition functions. Inputting 0 or 1 into either yields the same result. If that were all there is to it, I'd agree that the schmadder and the adder are the same computation.
But that's not the case for A1 and S2: on input 0, A1 will transition to 00, while S2 will transition to 10; on input 1, A1 will transition to 01, and S2 will transition to 11. You can distinguish both states by test inputs---in fact, you can distinguish them on either input.
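A quick Python rendering of these transition tables (a sketch, using the state names above) makes the asymmetry explicit:

adder = {
    'A0': {0: 'A1', 1: 'A2'},
    'A1': {0: '00', 1: '01'},
    'A2': {0: '01', 1: '10'},
}
schmadder = {
    'S0': {0: 'S1', 1: 'S2'},
    'S1': {0: '01', 1: '10'},
    'S2': {0: '10', 1: '11'},
}

assert adder['A2'] == schmadder['S1']   # identical transition functions
assert adder['A1'] != schmadder['S2']   # distinct: either test input tells them apart
assert adder['A1'][0] == '00' and schmadder['S2'][0] == '10'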
Consequently, states A1 and S2 are logically distinct. However, under the inversion mapping, both become identical: you lose the logical distinction between these states. Simply put, if you forget about which symbol means 0, and which means 1, then you can no longer distinguish between those states---but that doesn't mean they're identical, it just means you don't have a very good memory.
Again, when I say they're distinct, I mean in logical space, or whatever you want to call the abstract space where the computation is defined. They're different as mathematical functions of binary digits. Of course, that difference goes away if you employ a mapping that's insensitive to it, i.e. if you map both to a function f(A)=AA, f(B)=AB---but that just means that you've forgotten about what makes them different, not that they're the same. And that two things start to look the same if one forgets about what differentiates them is no great insight!
So, to me, the distinction between adder and schmadder lies in the fact that they contain a logical intermediate state that differs between both, in the sense that it implements a mathematically different transition function. It's at least not clear to me whether two minds, possessing a similar difference, are the same---in fact, I would be skeptical.
>My position is that schmaddition and addition are effectively realisations of the same function on different substrates.
This seems to me to get the notion of substrate independence backwards: I can instantiate a computation using different substrates because these different substrates support a mapping to the logical states of the computation---i.e. substrate independence maps many different physical realizations to the same computation.
You're mapping two sequences of logical states to the same physical realizations, and arguing that because this is possible, they're the same. But that's not the case: it's just that the mapping you're using forgets about the differences.
This doesn't really have anything to do with truth, either: fundamentally, you simply have sequences of logical states, which may be identified with truth values, or not. What matters is whether these states are distinguishable within the logical space where the computation is defined; if they are, and your mapping does not respect this distinction, then you haven't found out anything about their identity, you've simply tossed the structure differentiating them overboard.
>It's only when you interpret it as addition of binary digits that they are different.
Yes, but it's also only when I interpret it as addition of binary digits that there's any actual computation happening. If I just look at the physical system producing different voltage levels, then that's not by itself a computation---there needs to be some association between the physical system and the formal structure that is the computation.
>Give me an example where there is a clear difference of symmetry between such completely disconnected entities -- the way two mathematical structures may be completely separate, and you may have a point.
Well, for any given structure, there are multiple ways of instantiating it; consequently, the structure does not determine its instantiation. This cuts both ways: for any given physical system, there are multiple structures it supports. So for any set of relations you care to specify, I can point to two different systems embodying those same relations.
Specifying structure only is a bit like handing me a paint-by-numbers picture, without a specification of how the colors are to be painted in: any choice of coloring yields a different painting.
Furthermore, any given painting can be cut up into areas of different colors in multiple ways---that's essentially how one can use one physical system to model another. At the extreme, you can cut each painting down to tiny pixels, specifying a number for each; and with the right assignment of numbers to colors, every other painting (of the same size) can then be 'filled in'.
In the same way, any set of relations can be realized by any set of physical objects, as long as there are sufficiently many distinguishable ones; and likewise, any set of physical objects can be imbued with any structure that doesn't need more than the objects we have. So all that structure really tells you is a single number, namely the number of entities you need to realize that structure. I don't believe that this is plausibly enough to capture the variety of our experience in the world.
Hi Jochen,
I think the paint-by-numbers analogy is excellent (even if I do say so myself -- it's more or less the same as the "contours" analogy I initially offered), so let's stick with that.
In terms of this analogy, my claim is that computationalism deems the contours to be important but that it doesn't care about the colours. Colours, to me, are a substrate-level detail, or the baggage that human reverse-engineers bring when trying to interpret what a computational system is doing. So I'm saying there is no canonical colour legend -- the legend is up to the painter.
When I say AND and OR are isomorphic, this is like saying that there is no structural difference between two paint-by-numbers paintings which are colour negatives of each other, say a painting of a black cat or a painting of a white cat. As long as we leave the interpretation of the colours out of it, one could use the same template for either painting. But I can only make that claim as long as this is a picture only of a black cat or only of a white cat -- what I'm saying only makes sense if they are considered in isolation. There actually is an important difference between the black cat and the white cat in a painting which depicts both of them at the same time. A paint-by-numbers template for a black cat and a white cat sitting together will make it clear that they ought to have different colours.
AND and OR are different in the way the templates for the pair of cats sitting together are different -- when used alongside each other, in the same logical space (as you say), such that the 0 in one is supposed to correspond to the 0 in the other. But they are the same as each other in the way the templates for the two solitary cats are the same. The equivalence I'm drawing only applies when considering computational systems as isolated wholes. But when we do consider them in isolation, bit-flipping doesn't change anything structural. And it is that structure that is important to computationalism.
To you, whether a value is true or false or a gate AND or OR is part of the definition of a computation. But that is how you think of computation, not how I think of computation (and I think most computationalists would agree with me if presented with these arguments). To me, these choices of interpretation yield different ways of describing the same structure. It's like the difference between 0.5 and 1/2. The notation we use (AND/OR, TRUE/FALSE) is not the structure we are describing.
> Specifying structure only is a bit like handing me a paint-by-numbers picture, without a specification of how the colors are to be painted in: any choice of coloring yields a different painting.
So, yes, specifying structure only is exactly like that. Any choice of colouring yields a different painting, just as AND gates are distinct physical artifacts from OR gates. But structurally, considered in isolation, they are isomorphic. They are the same structure. And that's what computationalism is about (I claim).
> At the extreme, you can cut each painting down to tiny pixels
That's a separate argument from the bit-flipping argument I regard as particularly weak, so I'm not going to get into that. It may be that physicalist computationalism would have a problem here, but platonist computationalism doesn't.
You can reject my claim in a number of ways. You can regard it as ad hoc. You can claim that it is dubious that this is what a computationalist believes. You can claim that my account of computation is idiosyncratic.
But I think the tack you have been taking, which is more or less to dispute that I'm making any kind of sensible claim at all, does not seem justified to me. Your own example of paint-by-numbers shows that it is possible to consider the structure of something as distinct from "values" or "content" such as whether something is red or blue, true or false.
To be clear, I can broaden my claims to go beyond matters of interpretation.
Suppose for the sake of argument that we have a conscious AI realised with logic gates and circuits. My view, and I believe the physicalist computationalist view, is that we could build an equivalent AI by systematically replacing all AND gates with OR gates and all low voltage signals with high voltage signals such that we have an isomorphic relation between inputs and outputs. These two AIs would behave exactly equivalently, but for the fact that where one sends a low voltage signal the other sends a high voltage signal. Assuming we correct for this on the I/O layer, the two AIs will behave in precisely the same way and the computationalist view is that they will have the same conscious experience.
> You need to be more careful about what you mean by substrate here.
Sorry, but you seem to be engaging in some unnecessary quibbling here. I take the point you make to go without saying. My point stands: it doesn't matter what those physical states are as long as the right causal pattern is realised.
> But a Turing machine making symbol substitutions on a tape isn't in any fundamental way similar to a neural network adjusting its weights, say.
A Turing machine can realise (by simulating) a neural network. From a computationalist point of view, there is no fundamental difference. Where you have messy interfaces, as you would have if you sliced out an arbitrary chunk of brain and tried to replace it with a functional equivalent, it is not plausible to me that you could implement the same interface without instantiating essentially the same causal pattern as you are replacing, e.g. by simulating neurons. On some level, you're going to have to implement the same algorithm. Conversely, where there are clean, compact interfaces which give no clues as to what goes on within the black box, it is often plausible that the same interface could be implemented with a completely different causal pattern (as in quicksort v bubblesort).
Hi DM,
>In terms of this analogy, my claim is that computationalism deems the contours to be important but that it doesn't care about the colours.
That may be so (and indeed, I think it's basically the reason why computationalism fails---you only ever have the contours, which yield a picture only upon interpretation, i.e. a given way to assign colors---and so it obviously fails as an explanation of how the human mind does this interpretation), but the point with the AND and OR (or adder and schmadder) is that the distinction between 1 and 0 is a part of those contours---it's information, a difference that makes a difference, and the contours encode exactly those differences.
>Colours, to me, are a substrate-level detail
Again, that's only the case if the difference between states of the realization doesn't map to a difference in the logical space of the computation---i.e. if both black-on-white and white-on-black map to the same logical state.
There's no general way to claim that something is just a substrate-level property: all properties of the substrate may be mapped to logical properties of the computation. Meaning, every distinction on the substrate level may, or may not, be mapped to a distinction in the computational space.
>So I'm saying there is no canonical colour legend -- the legend is up to the painter.
But *some* legend is needed---otherwise, you don't have a painting.
You're right that a painting of a black cat and a white cat are the same, if you forget about the difference between black and white. But then, you've thrown structure away---the structure isn't *only* the contours, but also the numbers within them. Those are necessarily different, meaning that under every possible legend, the two cat paintings are different---it may be a painting of a white cat and a black cat on one, a painting of a black cat and a white cat on another, or a painting of a blue cat and a red cat under yet a third one. But it will always be paintings of cats of a different color, and hence, of different cats.
It's the same with AND and OR: yes, you can turn one into the other by a change in interpretation, but you can't turn both into the same, not without throwing away information, that is.
So, basically, you have 16 possible functions taking two bits to one. If you map all of the functions that are 'inversions' of one another into one, then you're left with only twelve distinct functions (the six inverted pairs collapse into one class each, while four functions are their own inversions); and from one of the collapsed classes, you can't reconstruct which of the original 16 you had---this is the definition of losing information, an irreversible transformation.
You can isomorphically map the 16 functions into themselves by inversion: then, AND becomes OR, and OR becomes AND. But you'll still have 16 different functions---OR and AND are still distinct. Take a square, which you can map into itself by mirroring it through a diagonal. Then, say, the lower left vertex is mapped onto the upper right, and vice versa---but that doesn't mean that, in any sense, the lower left corner is 'the same' as the upper right one! They're still mathematically distinct entities.
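One can check all of this by brute force---an illustrative Python sketch enumerating the full space:

from itertools import product

inputs = list(product((0, 1), repeat=2))
functions = [dict(zip(inputs, outs)) for outs in product((0, 1), repeat=4)]

def invert(f):
    # Flip every bit in inputs and output.
    return {(1 - a, 1 - b): 1 - v for (a, b), v in f.items()}

def canon(f):
    return tuple(sorted(f.items()))

# Inversion permutes the 16 functions---a bijection of the space onto itself.
assert {canon(invert(f)) for f in functions} == {canon(f) for f in functions}

orbits = {frozenset({canon(f), canon(invert(f))}) for f in functions}
print(len(orbits))  # 12: six swapped pairs plus four self-inverse functions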
Or take two devices, one of which yields high voltage iff both its inputs are high, the other low voltage iff both its inputs are low. Now, under re-mapping, you can make each of them implement AND or OR: but they'll always remain distinguishable, always perform a different function. Failing to preserve this distinction simply is losing information, throwing away structure: the set of 16 functions collapses to twelve, the number within the contour of one cat becomes the same as the number within the contour of the other, and you can no longer tell the difference.
>To you, whether a value is true or false or a gate AND or OR is part of the definition of a computation.
No, that's not right. To me, that there is a difference between true and false is important---while what you choose to call either is immaterial. Your equivalence only holds if you take both AND---i.e. {(00,0), (01,0), (10,0), (11,1)}---and OR---{(00,0), (01,1), (10,1), (11,1)}---and map them to {(AA,A), (AB,A), (BA,A), (BB,B)}---which you can obviously do, but since that object doesn't uniquely map back to either AND or OR---the mapping you use isn't a bijection, and hence, not an isomorphism---that means you've lost information. The image of the space of binary functions of two bits doesn't have the same structure as that space itself.
Structure can be understood as given by the differences between the objects within a given domain. So take, for instance, a set of red balls: there's no information in that, because they are all identical. However, with a ball that could be either red or blue, I can represent one bit of information. If you're now saying that since I can use both blue or red to mean 1 or 0, both the blue and the red ball are really the same, then you're undoing this: you're effectively saying that red maps to (1 or 0), and blue maps to (1 or 0), which means that I can no longer use those colors to represent a bit of information.
Or, suppose I want to establish communications channels with two parties, A and B. With A, I agree on the convention that 'red ball' means 'meet me at 8', while 'blue ball' means 'meet me at 9'; with B, I agree on 'red ball' meaning 'meet me at 9', while 'blue ball' means 'meet me at 8'. I send a red ball to A, and a red ball to B, and consequently, meet up with A at 8, and with B at 9. Your interpretation would mean that both messages must be the same; but A and B understood them differently. And no matter what code I use, A and B will always understand them differently---except if I erase their difference by giving them the same code.
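As a trivial sketch (the two codebooks are of course just my agreed conventions):

code_A = {'red': 'meet at 8', 'blue': 'meet at 9'}
code_B = {'red': 'meet at 9', 'blue': 'meet at 8'}

token = 'red'                           # the same physical ball goes to both
assert code_A[token] != code_B[token]   # yet A and B receive different messages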
>The notation we use (AND/OR, TRUE/FALSE) is not the structure we are describing.
I agree on that, but the structure we have is only preserved if we keep track of the distinction between true and false, and consequently, AND and OR. Because ultimately, structure just is those distinctions.
> But structurally, considered in isolation, they are isomorphic.
The structure is specified by the contours of the painting *and* the numbers coding for different colors. You can make this more precise using Shannon information: each painting---each set of contours and numbers---corresponds to a message that the recipient knows to 'decode' by associating the right colors to the numbers, i.e. filling in the painting. The information content of each such message is defined by the full set of possible messages---i.e. of possible assignments of numbers to contours.
If we assume an equal distribution over possible color assignments---i.e. of numbers filled into contours, leaving the contours fixed---then, if there are n possible colors and m patches, there are n^m possible messages, each carrying an information equal to the base-2 logarithm of n^m.
If you now identify all messages that can be gotten from each other via color inversion, then we essentially only have n/2 distinct colors, and consequently, each message only carries an information equal to the base-2 log of (n/2)^m---that is, you've thrown information, and hence, structure, away.
To put it differently, messages obtained from each other by color inversion are different, no matter what code you use to map numbers to colors.
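In numbers, with arbitrarily chosen n and m (an illustrative sketch of the counting above, on this way of merging inverse colors patch by patch):

from math import log2

n, m = 4, 10                # n colors, m patches---illustrative values only
full   = m * log2(n)        # log2(n^m): bits per message with the full palette
merged = m * log2(n / 2)    # log2((n/2)^m): bits left after merging inverse colors
print(full, merged)         # 20.0 vs 10.0---half the structure has been discarded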
>we could build an equivalent AI by systematically replacing all AND gates with OR gates and all low voltage signals with high voltage signals
Of course: if you exchange both, then you get out an isomorphic program. But if you only exchange the meaning of the interpretation, i.e. change the logical states that the physical states map to, then the program will be different.
> the two AIs will behave in precisely the same way and the computationalist view is that they will have the same conscious experience.
It's only a tangential point, but that seems to me to be a departure from your earlier view that AIs being differently realized---while nevertheless showing the same I/O-behavior---would, in general, differ, viz. your example of a lookup table having no conscious experience. Have you changed your mind on that, or am I misinterpreting you?
>Sorry, but I feel that you seem to be engaging in some unnecessary quibbling here
I'm not, though. It's essential to get clear that substrate independence only holds if you only change things that don't map to changes in the logical structure of the computation---that is, you can validly say that a calculator representing its outputs in black-on-white versus one representing them in white-on-black possesses no functional difference, while a system where a black-on-white display is mapped to a different logical (e.g., output) state than a white-on-black display does possess such a difference.
In the former case, you're right, there's only a substrate-level difference that isn't reflected in any logical difference at the computational level; but in the latter, the two different states mirror different computational states, and that difference has to be respected---of course, it doesn't have to be reflected in those exact physical terms, but any substrate capable of implementing the same computation needs to possess a distinction capable of representing the logical difference.
>A Turing machine can realise (by simulating) a neural network. From a computationalist point of view, there is no fundamental difference.
Well, you can't really claim that bubblesort and quicksort are different computations, but that a TM and a NN computing the same function are the same computation---the difference between the latter two (in the flow of operation, not just the physical realization) is much greater than in the former case. (An algorithm simulating the TM, and one simulating the NN, would surely be as different as bubble- and quicksort.)
>Where you have messy interfaces, as you would have if you sliced out an arbitrary chunk of brain and tried to replace it with a functional equivalent, it is not plausible to me that you could implement the same interface without instantiating essentially the same causal pattern as you are replacing, e.g. by simulating neurons.
You can always implement that same interface with a large enough lookup table, which doesn't have any of the causal structure of neurons. So it's trivially possible.
Hi Jochen,
> you only ever have the contours, which yield a picture only upon interpretation
The contours are a structure in their own right, though. You can identify that it's a template for a cat even if it hasn't been painted.
> is that the distinction between 1 and 0 is a part of those contours
Yes. But you can only draw that distinction if you have two cats with different colours in the same template. If you have two blank separate templates each with one cat, then whether the cats are the same or different colours is meaningless -- they can't be compared unless they're part of the same template (or realised by colouring them in). Whenever you try to make the point that there is a meaningful difference between AND and OR, you always do so by putting them in the same "logical space", analogous to putting two cats in the same template, and ignore the point that I'm only saying they are equivalent in isolation. I really want to emphasise this because I've said it several times now and you never seem to really absorb it. AND and OR are different when considered side by side. They are equivalent if considered in isolation. Do you understand what I mean by this? The paint-by-numbers examples ought to make it pretty clear, I feel.
> You're right that a painting of a black cat and a white cat are the same, if you forget about the difference between black and white.
So, this is not the point I am making at all. The two templates are the same if they are two separate templates. The very same template, with the very same numbers in each space, could be used to paint a black cat or a white cat as long as the painter is free to choose her own legend. But the two cats are different and must have different numbers within their spaces if they are in the same template.
It seems to me that every single argument you make about the difference between AND and OR, e.g. 16 functions reducing to 8, the corners of a square being distinct, the blue and red balls, etc., all fail because you haven't appreciated this point: what I'm saying only applies if considered in isolation and not when considered together. It may be that you don't understand what I mean, but if so there's no point in continuing this until you do. Ignoring this point won't get us anywhere.
> There's no general way to claim that something is just a substrate-level property
True. But I can identify which level of description I think yields the structure that is conscious. Everything below that is substrate. We agree that there is no one way to interpret a system as performing a computation, that's part of the "pixies" argument I address with platonism and the MUH.
> To me, that there is a difference between true and false is important
That there is a difference is preserved when you flip the bits. But you can't say there is a difference between a value in one interpretation and the corresponding value in the bit-flipped interpretation, because you're not comparing like with like. They exist in different logical spaces and cannot be compared. You can only compare values within an interpretation, e.g. whether this input is the same value as this output.
> while what you choose to call either is immaterial
Perfect. So when we flip the bits, we are simply changing the label "true" to "false" and the label "AND" to "OR". This change is immaterial. Differences and equivalences within the interpretation are preserved, and that's all that matters.
> the structure we have is only preserved if we keep track of the distinction between true and false
I am keeping track of that distinction. Within a given interpretation. Not across interpretations.
> Have you changed your mind on that, or am I misinterpreting you?
My view is that flipping the bits or replacing all signals and gates with their inverted versions produces essentially the same computation. So I haven't changed my view.
> Well, you can't really claim that bubblesort and quicksort are different computations, but that a TM and a NN computing the same function are the same computation
I do claim this, at the right level of description.
Quicksort and Bubblesort are completely different algorithms. The source code of a NN algorithm and a virtual TM implementing that NN algorithm will be very different, but you can find the NN algorithm in each, just at different levels of analysis.
> You can always implement that same interface with a large enough lookup table, which doesn't have any of the causal structure of neurons.
OK. But to produce such a lookup table you would have to simulate the neurons. It's the same point on passing the Turing test using a lookup table -- there's still consciousness, it's just offset in time.
Hi DM,
>If you have two blank separate templates each with one cat, then whether the cats are the same or different colours is meaningless
But for every choice of coloring scheme, one cat will turn out one color, and the other a different one, if the numbers differ. They don't map one-to-one to specific pictures, that much is true; but that doesn't mean they're the same.
>you always do so by putting them in the same "logical space", analogous to putting two cats in the same template, and ignore the point that I'm only saying they are equivalent in isolation.
I do realize that, and in fact, that's what I've been trying to break you free from these last few posts---because it's erroneous: an object's information content---its structure---can only be defined with respect to the set of objects it's drawn from.
Take a blue ball: if I send you one, you wouldn't learn anything from that. But if I could have equivalently sent you a red ball, then the blue ball carries one bit of information; if it could have been blue, red, green or yellow, it carries two bits; and so on. So, whether or not there is any information contained in the ball depends on the set of balls---the alphabet---it's drawn from.
Or take a set of identical red sports cars. As such, it carries exactly as much structure as a set of identical red balls, i.e. none. Only if I start introducing distinctions does any structure at all enter---say, I offer the cars with two different engines, different wheels, different bodykits, and so on. Only then does it make sense to specify which car you want; beforehand, they were all the same, and there's no meaningful choice---since there's no meaningful difference---between them.
Structure is carried by relations between objects; consequently, I must specify the set of objects to specify the structure.
If we want to identify something by means of structure, we have to define the set of objects to which it belongs. We can't identify a given sportscar, or a ball, out of a set of identical ones; if we color one half blue, the other half red, we can only identify them as belonging to either half; with each further distinction we introduce, the number of individuals we can identify increases. In the same way, I can identify AND as AND, and not as OR, within the set of binary functions of two arguments.
If, however, I only had 16 identical 'copies' of the function AND, then I could not identify one of them---and indeed, in this case, there doesn't even exist any mathematical machinery for doing so.
Likewise, you're essentially postulating a set reduced by the relation that exchanges one and zero; that in this set, where you've taken a distinction away, I can suddenly no longer distinguish between two functions is no great surprise. But then, you're also not pointing to the set, the logical space, within which AND and OR are usually defined---you've drawn up a different one, related to the usual one by a two-to-one mapping.
It's the same thing as if you said that because you can exchange red for blue in sportscar-coloring, and still have the same set of relations obtain on the set of sportscars, you can't tell the difference between a red and a blue sportscar that only differ in color anymore: it's true, but only because you've chosen to ignore the thing that differentiates them in the structure you've drawn up.
A different way to see the same thing is that without making any assignment of logical values to voltage levels, I don't have any computation. After I make some such assignment, then I do have one---and then, what I have is always either an OR or an AND. On the other hand, some device with the voltage levels reversed immediately becomes the other function.
In analogy, I only have a picture of a cat if I color it *some* way; and under any coloring, a cat with color no. 1 and a cat with color no. 0 will come out differently colored, even though a coloring exists such that cat 1 is black and 0 is white, just as much as one exists where 0 is black and 1 is white. Claiming differently throws away structure on the set of cat pics, and hence, erases their difference.
>My view is that flipping the bits or replacing all signals and gates with their inverted versions produces essentially the same computation. So I haven't changed my view.
That's not what I meant. Earlier on, you had claimed that two differently implemented functions, with identical behaviors, may instantiate different minds (even no mind at all). Now, you're saying that two AIs are the same if they show the same behavior.
>OK. But to produce such a lookup table you would have to simulate the neurons.
You earlier argued that implementing the same functional input/output behavior doesn't suffice for being considered 'the same' on computationalism. Now, you're saying that it suffices to have the same behavior to conclude that the same computation must take place. You can't really have both: either, you accept the claim that the same behavior entails the same computation, or you'll have to accept that the same outputs for some part of the brain could be produced by a radically different computation.
Besides, it's also possible to show the same input/output behavior by mere random chance---so one could replace a part of somebody's brain by a device just randomly generating output, and there's a nonzero probability that it will produce exactly those outputs that a functioning brain part would have produced. And in this case, it's clear that there's not some isomorphic computation being carried out anywhere.
Or perhaps let's try a different example. Say you have an abstract set of four balls---ball universals, ball archetypes, whatever---A, B, C, and D. They're all different, and say they differ along two properties---size and color.
Now, any set of four balls such that they have two different sizes coming in two different colors realizes that structure. How they realize it, however, is up to you---one possibility is that 'ball A' is large and red, 'ball B' is small and red, 'ball C' is large and blue, and 'ball D' is small and blue.
Given a single, small, blue ball, you don't know whether it's A, B, C, or D---under the appropriate mapping, it could be either. Which ball archetype is realized is not an objective fact about the ball---that's the analogue to the fact that which computation is realized is not an objective fact about a physical device. But that doesn't mean that there's no difference between the balls.
Now, my argument is essentially that even if you know that you have a large ball, and only A and C are large, you still don't know which of these it is, if you don't know which color either maps to. This corresponds to having a physical system which can either realize the truth table of AND or OR, but, without a mapping of voltage levels to binary values, you don't know which of the two it is.
Your argument then is that because I don't know that, A and C must be the same: given a certain color scheme, my ball could be either A or C. But the latter (which is true) doesn't imply the former: A and C are different, by virtue of the structure of the set of balls. It's only if you implement that structure incompletely that both seem the same---i.e. it's only if you only distinguish between sizes, turning the set into balls that are either large or small, that this holds true.
That's why the full set of objects matters: it's on this level that the objects are individuated; if you 'forget about' the differences on this level, then you've effectively exchanged the set of objects for a different one, in which this difference isn't present---i.e. a set of balls L and S, rather than the set of balls A, B, C, and D.
The analogy works even better with pairs of balls: if I have a large ball and a small one, then I know that I have neither two large balls nor two small ones, and only the color degree of freedom persists. But then, still, the set of balls (A,B) is different from (C,D), except when I forget about the color, i.e. about one of the differentiating relationships within the set of all balls.
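The ball example can be spelled out in a few lines (an illustrative sketch; the property labels are mine):

archetypes = {'A': ('large', 'red'), 'B': ('small', 'red'),
              'C': ('large', 'blue'), 'D': ('small', 'blue')}

# Size is fixed by inspection, but without a color legend the color label stays open:
candidates = [name for name, (size, color) in archetypes.items() if size == 'large']
print(candidates)  # ['A', 'C']: underdetermined, yet A and C remain distinct archetypes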
Hi Jochen,
> But for every choice of coloring scheme, one cat will turn out one color, and the other a different one, if the numbers differ.
I suggested that the same template could be used to paint different cats, so the numbers don't differ in my example.
But anyway, in this analogy, I am saying the colour scheme is not part of the structure, so even if different numbers were used they might end up painted the same colour. The structure is just the contours of the paint-by-numbers template and the relationship between the different colours in terms of whether different regions are the same colour or different colours from each other.
So, given two copies of the same template, one artist could paint a black cat while another artist could paint a white cat. There is no structural difference between the two paintings, if structure is just the contours and patterns of similarities and differences. They are two different physical paintings, just as an AND gate is a different physical object than an OR gate, but there are no structural differences in the features that I am choosing to highlight as structural. Now, you might point out that the choice of which features to highlight as structural is arbitrary, but this is more of a pixies argument than a bit-flipping argument so I won't contest that here.
> an object's information content---its structure---can only be defined with respect to the set of objects it's drawn from.
Agreed on the information content. Disagreed on the structure. I don't think the two are the same.
An object can have structure that is internal to it irrespective of the set of objects it's drawn from.
I might get a red ball and a blue ball, and you might say I have two bits of information. But what if there are 8 possible colours? Then I have six bits of information. If there are 256 possible colours, sixteen bits, and so on. You can always bring in stuff from outside the system, expanding that set, and so inflating the amount of information in the system you're looking at.
And yet, presented with a red ball and a blue ball, I can see structure. I can see that I have two entities which are different from each other. The information content is undefined, but that is not to say there is no structure.
I guess what I'm saying is that when considering a system in isolation, we can assume that is all there is when describing its structure and disregard potential alternatives, and that is what I am doing. We can assume for instance that there are only two kinds of balls, and this is sufficient to allow us to describe the structure we see when presented with a red ball and a blue ball.
> As such, it carries exactly as much structure as a set of identical red balls, i.e. none.
I wouldn't agree with that. Even if all the entities are identical, there is still structure. A set of five identical entities is a structure. It is different from the empty set or a set of six identical entities.
> Structure is carried by relations between objects; consequently, I must specify the set of objects to specify the structure.
Agreed. And the relations of the inputs to the outputs in an AND gate are the same as the relations of the inputs to the outputs in an OR gate. There is only a difference when you start comparing the inputs of the AND gate to the inputs of the OR gate, when you start identifying the label "1" in the OR gate truth table as corresponding to the label "1" in the AND gate truth table. I'm only concerned with the structure within a system, but you're bringing in relations to things outside the system. This is clearly wrong to me. Whatever is outside the system can have no bearing on whether it is conscious.
> Claiming differently throws away structure on the set of cat pics, and hence, erases their difference.
Yes, but by analogy, what we have is not a set of possible cat pics, but one solitary cat pic. I'm not throwing away any structure internal to that picture, just structure that would relate it to hypothetical things outside itself.
> Now, you're saying that two AIs are the same if they show the same behavior.
I didn't say that. I said it would have the same behaviour and the same conscious state. That is not the same as saying that it has the same conscious state because it has the same behaviour.
> Besides, it's also possible to show the same input/output behavior by mere random chance
Yes, that kind of argument might work against a physicalist computationalist. I'm not going to defend it here as it is not a bit-flipping argument and I am not a physicalist computationalist.
> Given a single, small, blue ball, you don't know whether it's A, B, C, or D
The structure of a single, small, blue ball is best described as the set of one element. There is no reason at all to regard it as being one of a set of alternative elements which are not present.
> But that doesn't mean that there's no difference between the balls.
Of course not. But the structure of a system which consists of one small blue ball is the same as the structure of a system which consists of one large red ball. Both are sets of one element. I'm not saying one ball is the same as another. But I am saying that [the computationalist position is that] structure is all that matters for consciousness, so the differences between the balls only matter if we have them as part of the same system.
> but, without a mapping of voltage levels to binary values, you don't know which of the two it is.
Either/or! It makes no difference. This is just a matter of notation.
Hi DM,
>An object can have structure that is internal to it irrespective of the set of objects it's drawn from.
OK. And the structure of the AND-function is given by its truth table, which is different from that of the OR-function. (Equivalently, you can formalize the structure of the AND-function by a construction analogous to the one I gave earlier of the adder; and as in the case of the adder and the schmadder, you will find that there exists a logically different element within both, and only upon eliminating that difference do both become the same.)
So I can individuate the AND-function in two ways: one, by giving its truth table; two, by noting its differences to all the other two-bit functions. These notions of identity agree, unless you throw away information.
On your argument, the truth table of AND and OR become indistinguishable---but only because you use an information-lossy mapping.
>You can always bring in stuff from outside the system, expanding that set, and so inflating the amount of information in the system you're looking at.
You can't add any more functions of two bits---there only are 16 of them.
>I guess what I'm saying is that when considering a system in isolation, we can assume that is all there is when describing its structure and disregard potential alternatives, and that is what I am doing.
Truly considering the AND-function in isolation would mean to consider it as a function of bits---i.e. taking the input 11 to 1, and all others to 0. This is different from an OR considered in isolation---which takes the input 00 to 0, and all others to 1.
What you're doing is looking at a certain system that may implement either AND or OR, and considering that in isolation---but this doesn't tell you anything about those functions, and their putative identity; it tells you merely that which of those functions is implemented is undecidable if you have access only to that system in isolation.
Let's simplify this down as much as possible. 1 and 0, considered in isolation, as binary digits, are different; nevertheless, your blue ball can represent either. This entails that what we consider to represent 0 or 1 is arbitrary; it doesn't mean that 0 and 1 are the same. But if they're not the same, then neither are AND and OR.
>A set of five identical entities is a structure. It is different from the empty set or a set of six identical entities.
How do you know if you have five identical things, as opposed to six? How do you count identical things? You can't point to one and say it's 'this one', since whatever property you use to identify it, all other elements of the set will share it.
I'm also not sure if I'd say that a singleton set has structure: the only relation you could define on it is the unary relation consisting of itself, which strikes me as rather trivial---normally, I would only consider two- or more-place relations as actually structural.
>Either/or! It makes no difference. This is just a matter of notation.
Again, it makes all the difference where AND and OR actually are defined---in the space of functions of two bits. Yes, we can represent them differently in the physical world; but just because you can use the same set of symbols for two entities, doesn't make the entities the same. 'Host' can refer to somebody who entertains guests, a computer in a network, or an army of angels, but that doesn't mean those things are identical.
Hi Jochen,
> And the structure of the AND-function is given by its truth table
I would normally agree with you, but in this context I think that is subtly incorrect. The truth table describes the structure of the AND-function, but the truth table is just a kind of notation. It is not identical to the AND-function it describes.
Substituting 0s for 1s is just a change in this notation, akin to changing the label used for a variable in a quadratic equation. 2a^2 + 2a + 9 = 0 is the same structure, the same parabola as 2b^2 + 2b + 9 = 0, as long as we don't take the choice of variable name to have some meaning as it relates to something outside the system. Substituting 0s for 1s therefore yields the same structure, but now the truth table looks like that for an OR table.
> taking the input 11 to 1, and all others to zero. This is different from an OR considered in isolation---which takes the input 00 to 0, and all others to 1.
If you are truly considering them in isolation, this is no difference at all. As with 2a^2 + 2a + 9 = 0 vs 2b^2 + 2b + 9 = 0, it's the same pattern, using different notation. If you regard 1 or 0 to have meaning beyond notation, then you are bringing in relations to things outside the system, which I don't think is legitimate, especially if you claim to be considering them in isolation.
> it tells you merely that which of those functions is implemented is undecidable
It's undecidable because it makes no difference. In isolation, AND and OR are the same function. They are only distinguishable when part of the same system. I agree that I am throwing away information, but the information I am throwing away has no bearing on the system as it only pertains to relations between things inside the system and hypothetical things outside the system.
> 1 and 0, considered in isolation, as binary digits, are different
Considered as numbers, they are different. 0 is even, 1 is odd. 1 is the identity of multiplication, 0 is the identity of addition. I can divide by 1, I can't divide by 0.
But considered purely as symbols, as is appropriate here, they are as interchangeable as a and b in a quadratic equation. We may need to distinguish between them within a system, but no structural aspect of the system relies on an absolute difference such that a uniform replacement of all such symbols throughout the system would not yield the same structure.
> How do you know if you have five identical things, as opposed to six? How do you count identical things?
In the real world, it is because nothing is identical to anything else. Supposedly identical objects which are in fact distinct are at least in separate locations.
In the abstract world, it is because we don't need to count. We just define the set as having 5 indistinguishable objects -- indistinguishable only because the structure does not describe any properties of those objects by which they can be distinguished.
> Again, it makes all the difference where AND and OR actually are defined---in the space of functions of two bits.
AND qua AND and OR qua OR are defined in the space of functions of two bits and can be distinguished from each other when they are in this logical space.
But AND qua an f-function and OR qua an f-function (as defined upthread) are not defined in terms of bits but in terms of abstract interchangeable values. My claim is that how a computationalist interprets an AND gate in isolation is not as AND gate qua AND gate but as an f-function, just as she would interpret an OR gate in isolation, so that the two gates are interpreted just the same and so correspond to the same computation.
Hi DM,
>But considered purely as symbols, as is appropriate here, they are as interchangeable as a and b in a quadratic equation.
I think this is the problem: to me, this is a confusion between the symbol and its referent. 1 and 0, as they occur in the truth table of the functions AND and OR, are the elements of the two-element Boolean algebra B2, or the finite field with two elements. The symbolic level is that of the physical realization---i.e. the blue ball.
Of course, you can label these elements arbitrarily---1 and 0, a and b, or even 0 and 1. But that doesn't change their identity: 1 is the maximum element of B2, 0 is the minimum. In contrast, as it's usually used, "1" is the name of---the symbol referring to---the maximum element, and "0" the name of the minimum element. You can exchange these names, or give them different names, but this doesn't change their identity.
Otherwise, you'd essentially be arguing that just because I can refer to two things by the same name, they must be the same thing---which clearly isn't right, see the 'host' example. So it's not the case that because you can use the blue ball to refer to the greatest element of B2, and equivalently to refer to the least element of B2, that the greatest and least elements must be the same.
Likewise, if we write down the truth table of AND, we can eschew the shorthand "1" and "0", and write instead "greatest element of B2" and "least element of B2". So then, AND is uniquely that function that takes two 'copies' of the greatest element of B2 to the greatest element of B2, and every other input to the least element, while OR is uniquely that function that takes two copies of the least element to the least element, and every other input to the greatest. Whether you use high voltage or low voltage to 'name' either of these doesn't play a role---ambiguity in names does not imply identity.
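Put as a two-line check (illustrative Python, with 0 and 1 standing for the least and greatest elements of B2):

for x in (0, 1):
    for y in (0, 1):
        assert (x and y) == min(x, y)   # AND is the meet: greatest element only from (1,1)
        assert (x or y) == max(x, y)    # OR is the join: least element only from (0,0)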
Or, to take a different tack, consider again my example of a system with different physical realizations of the inputs and outputs: say I1 and O1 are realized electronically, while I2 and O2 are fibres carrying polarized light.
So we have the possible states (h,l) for the electronic I/O channels, and (hor,vert) for the photonic ones. Clearly, now, whether I map hor or vert to 1 or 0 doesn't depend on whether I map h or l to 1 or 0. So under your argument, not only would the adder and schmadder be identical, but they'd also both be identical with the functions I get when I flip I2 and O2 simultaneously; and, with four differently realized channels, sixteen different functions from two bits to two bits would all be 'identical'.
But this really only tells you that you're using a bad definition of identity---one that effectively means that two things are identical whenever I can call them by the same name.
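To see how far this proliferates, here is an illustrative sketch: take the adder's truth table (compressing the two-step machine above into a single two-in, two-out table) and relabel each of the four channels independently:

from itertools import product

adder = {(0, 0): (0, 0), (0, 1): (0, 1), (1, 0): (0, 1), (1, 1): (1, 0)}

def relabel(f, s1, s2, t1, t2):
    # Flip each channel independently: inputs by s1, s2; output bits by t1, t2.
    return {(a ^ s1, b ^ s2): (u ^ t1, v ^ t2) for (a, b), (u, v) in f.items()}

variants = {tuple(sorted(relabel(adder, *flips).items()))
            for flips in product((0, 1), repeat=4)}
print(len(variants))  # 16 distinct truth tables (the schmadder among them), all merged into one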
Hi Jochen,
> I think this is the problem: to me, this is a confusion between the symbol and its referent
There is no confusion. But we each conceive of the relevant structure differently.
For you, the semantics of 1 and 0 are important. I said something earlier in this long correspondence (can't find it now) along the lines that you care about what 1 and 0 mean, whether they are true or false, and I don't. Whether these bits are the minimum or maximum element is just another instance of that. For me, this is irrelevant. That's part of the information I am discarding as extraneous and unimportant to the structure I regard as conscious.
As you have said yourself, there are many ways of slicing things up into different kinds of structures, depending on what details you want to consider structurally important. That is a plausible argument against physicalist computationalism. But you can't have your cake and eat it too! If what level of detail one includes in one's structural model is arbitrary, then there is no one right way to do it, and so I can't be wrong in ignoring a detail that you consider to be important.
So again: my claim is that what a computation is to a computationalist is the value-agnostic bit-flipping-independent causal pattern realised by an implementation of some function in logic. That there is such a pattern seems to me to be beyond doubt -- the only difference between us is that this is just one level more abstract than the pattern you are focused on and it is precisely the pattern I am focused on.
> Otherwise, you'd essentially be arguing that just because I can refer to two things by the same name, they must be the same thing
If I were actually arguing this, and I saw all bits as equivalent, then I would see no difference between AND and XOR, because they would each be a mapping of two bits to one bit. But I do see a difference between AND and XOR. They have quite different patterns of correspondence between inputs and outputs. I'm only saying that values are interchangeable if exchanged uniformly across a system such that causal relations are preserved.
The arguments you are making are not sensitive to the fact that the kind of equivalence I am drawing for AND and OR does not hold for all pairs of functions of two bits input to one bit output, but only for certain pairs: AND/OR, XOR/XNOR, NAND/NOR, FALSE/TRUE, A_AND_NOT_B/A_OR_NOT_B, NOT_A_AND_B/NOT_A_OR_B. Some functions are their own mirror images and so have no equivalents by substitution of true and false -- A, B, NOT_A, NOT_B.
Incidentally, by another kind of symmetry (which input is designated as A and which is designated B) I could also lump together A_AND_NOT_B/A_OR_NOT_B with NOT_A_AND_B/NOT_A_OR_B, A with B and NOT_A with NOT_B.
But I cannot lump together any other functions. There is no symmetry which makes AND equivalent to XOR, NOT_A, NOT_B, FALSE, TRUE, A, B, A_AND_NOT_B, A_OR_NOT_B, NOT_A_OR_B, NOT_A_AND_B or XNOR. So, whatever I'm doing, I'm clearly not simply treating bits as equivalent.
> So we have the possible states (h,l) for the electronic I/O channels, and (hor,vert) for the photonic ones.
That's an excellent argument, and I would like to see how a physicalist computationalist would answer it. I have been arguing from the point of view that there is some underlying physical consistency between how signals are implemented. If you're throwing out that assumption then you're no longer in the original simple bit-flipping territory I regard as a poor argument against physicalist computationalism.
On the introduction of photonic signals, I said...
> That's an excellent argument,
Well, I'm not so sure.
I still don't really want to argue against it because it's not the original obviously (to me) weak argument, but I'm not sure it works.
A single logic gate is very trivial, but even so, the kind of scheme you propose doesn't seem like it makes all functions equivalent. For example, in the function FALSE, it doesn't matter what the inputs are, the outputs are always false. No matter how we interpret signals, any physical object which switches outputs depending on the value of the inputs is not reasonably interpreted as instantiating FALSE.
For more complex circuits involving many gates, then it seems to me we could draw meaningful abstracted patterns such that we still have structure independent of how photonic or electrical signals are interpreted. Complexity imposes constraints on how free consistent interpretations can be. It seems plausible that what structure remains could still be identified with what is supposed to be conscious.
But I don't want to argue the point because it's straying too close to what might be a plausible argument against physicalist computationalism and on this thread I'm focused only on refuting what I see as an implausible argument.
Hi DM,
I forgot to address this earlier:
>We just define the set as having 5 indistinguishable objects
Two sets have the same number of elements if and only if they can be brought into a one-to-one correspondence with one another. Consequently, any set has five objects if it can be brought into one-to-one correspondence with a set that has five objects, such as {1,2,3,4,5}.
Now, call the indistinguishable objects of your putative five-element set e. Any function from the set {1,2,3,4,5} to your set takes every element of the former to e. Consequently, there can be no bijection between these sets: there is no inverse function.
In fact, for any two sets A and B where there exists an injection from A to B, but none from B to A, |A| < |B|, if |x| denotes cardinality. Since for every set with more than one element, there exists no injection to your set, we see that the cardinality of your set is one---it has only one element.
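To see this mechanically, consider Python's built-in sets (a purely illustrative aside): a set written down with five tokens of one indistinguishable element collapses to a single element, leaving nothing for {1,2,3,4,5} to biject onto.

    s = {'e', 'e', 'e', 'e', 'e'}  # five tokens of the same element
    print(len(s))                  # prints 1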
>For you, the semantics of 1 and 0 are important.
If by semantics, you mean their identity as greatest and least elements of B2, then yes---it is important, since that's where AND and OR are defined. If you mean whether I call either of them true or false, 1 or 0, a or b, then no, that's not important.
The crucial point is that AND is a function defined as taking elements from the two-element Boolean algebra; anything else, and you're not talking about the logical AND. In particular, if you think about 1 and 0 as symbols, akin to variables, then you're not talking about AND, but about some function defined on that set of symbols.
The domain and codomain of a function are part of its definition---there's no way around that.
In fact, I like your example regarding the quadratic function. I agree that for x^2, it's immaterial if I call it x^2, y^2, a^2, or whatever else. But that's not what you're doing.
The function x^2 can equivalently be defined by an (infinite) table of values. Some of these values are, e.g., (1,1), (2,4), (0.5, 0.25), and so on. That's the form in which we usually give functions like AND and OR, simply because it's easy to exhaustively list all values.
Now, you could apply some mapping to the table of values of the function x^2---this is what you're doing regarding the mapping of 1 to 0. Say, for instance, you map all of the function values to their value +1, yielding, for the examples above, (1,2), (2,5), (0.5, 1.25). This is then the value table of a different function---x^2+1.
So what you're doing is not the analogue of changing a variable name; you're changing the values in the definition of the function.
By analogy, we can express the functions AND and OR in an algebraic way. AND is x*y, OR is x+y+x*y (where all operations are understood modulo 2). You can obviously rename these all you want---you can write AND as a*b, or y*x, or even Karl*Lenny. Same goes for OR. But you can't write AND as x+y+x*y---that's a different function (over the two-element Boolean algebra).
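These algebraic forms are easy to verify against the truth tables; a quick, purely illustrative Python check, with all arithmetic taken modulo 2:

    for x in (0, 1):
        for y in (0, 1):
            assert (x * y) % 2 == (x and y)         # AND(x,y) = x*y
            assert (x + y + x * y) % 2 == (x or y)  # OR(x,y) = x+y+x*y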
>Whether these bits are the minimum or maximum element is just another instance of that.
So no, it's not. The function AND is defined as taking two instances of the maximal element to one instance; it doesn't take one instance of the maximum and one of the minimum to the maximum. 'Maximum element' is not a variable, it denotes a unique mathematical entity; you can't change that without changing the function, any more than you can change the entries of a table of values for a function over the reals and still have the same function.
>If what level of detail one includes in one's structural model is arbitrary, then there is no one right way to do it, and so I can't be wrong in ignoring a detail that you consider to be important.
That doesn't follow: just because there's no unique right way to do it, doesn't mean there are no wrong ways.
>I often seem to say precisely the opposite of what I intend. But what I say is equivalent under bit-flipping!
And yet, you have to correct yourself, since it's something different you end up saying. QED! ;)
Hi Jochen,
> Now, call the indistinguishable objects of your putative five-element set e
That is not how I conceive of indistinguishable objects.
What I mean is more like {A,B,C,D,E}. Their labels are not properties of the objects, they are just the names by which we refer to them. I could represent the same structure as {F,G,H,I,J}. What it means for them to be indistinguishable is that they are truly interchangeable. I can switch the labels of any two of these objects and I will have the same structure. I can swap any two of these objects and I will have the same structure.
But the set of numbers {1,2,3} is not the same structure as the set of numbers {4,5,6} because numbers have properties which distinguish them from each other.
> If by semantics, you mean their identity as greatest and least elements of B2, then yes
OK, well for the structure I care about, this is not important. We are just looking at two different levels of analysis, as I said.
> The crucial point is that AND is a function defined as taking elements from the two-element Boolean algebra;
I accept that. But what I'm saying is that I don't care if a computational system implements AND or OR. I'm not the one who insists on interpreting it as either AND or OR, you are, so it is perfectly reasonable for me to disregard things that are crucial to the definition of AND. If I interpret something as implementing AND, it is only because this is a convenient way to describe the structure I care about. But I don't care that it is actually AND. I only care that it is a function of the form f, which is defined not in terms of 0 and 1 but A and B, where A and B have no intrinsic properties but for the roles they play in function f. A is not the minimum value and B is not the maximum value.
>then you're not talking about AND, but about some function defined on that set of symbols.
Yes, yes, a thousand times yes! By jove, I think you've got it! This is what I have been saying. I'm not talking about AND. I'm talking about function f, which is the function defined by the truth table of AND when interpreted as a function of abstract, meaningless, interchangeable symbols.
> I agree that for x^2, it's immaterial if I call it x^2, y^2, a^2, or whatever else. But that's not what you're doing.
That is exactly what I'm doing. It is not what you are doing. We are doing different things. We are conceiving of the computation differently. For me, it is a manipulation of meaningless binary (in the sense that there are two of them) symbols. For you, it is a manipulation of binary digits.
> That doesn't follow: just because there's no unique right way to do it, doesn't mean there are no wrong ways.
OK, but you have been insisting on your interpretation over mine. That certainly looks like you think there is a best way to look at it. So if you instead think there are many ways to interpret, and that yours just happens to be a correct way, while mine happens to be an incorrect way, then you had better justify that.
Hi DM,
>That is not how I conceive of indistinguishable objects.
>What I mean is more like {A,B,C,D,E}.
Well, you can't really say that some things are indistinguishable---incapable of being distinguished---and then proceed to distinguish them by different labels. Either they are indistinguishable, or not---can't have your cake and eat it.
However, even if that were a legitimate move, it wouldn't get you anywhere---since A=B=C=D=E, I can use, say, A wherever I can use B, thus for any putative bijection f, if say f(1)=A and f(2)=B, then f(2)=A, and we're back at square one.
>But the set of numbers {1,2,3} is not the same structure as the set of numbers {4,5,6}
It's slightly ironic that you should claim that, given that you say the set {(00,0), (01,0), (10,0), (11,1)} is the same as {(00,0), (01,1), (10,1), (11,1)}.
>OK, well for the structure I care about, this is not important.
Then, you're not caring about the structure of the function AND, since it's necessary for its definition.
>I'm not the one who insists on interpreting it as either AND or OR, you are
Yes---because the question is whether a system computes AND or OR. You're basically saying, it computes one of those---but this is the answer to a different question.
>Yes, yes, a thousand times yes! By jove, I think you've got it! This is what I have been saying. I'm not talking about AND.
OK, but then, why do you seem to think it has any relevance to whether a system computes AND?
>That is exactly what I'm doing.
Not according to the example you gave with the quadratic function. You're mapping different tables of values---defining different functions---to one another; you're not renaming variables.
>OK, but you have been insisting on your interpretation over mine.
Well yes, of course---because the question was whether the system computes AND or OR. And neither AND nor OR is described by the structures you've been considering. AND can be described by, e.g., its table of values, the function x*y, the set {(00,0), (01,0), (10,0), (11,1)}---all of those are valid structural descriptions of the logical conjunction. What you're proposing is not; the set {(AA,A), (AB,A), (BA,A), (BB,B)}, where A and B can either be the least or greatest element of B2, does not describe logical conjunction; it does not describe any particular computation at all, just as f(x)=x^2+b, where b can take arbitrary values, does not describe any unique function, but rather, a one-parameter family of functions.
Just as the fact that two functions fall within the same family does not mean that these two functions are the same, the fact that both AND and OR fall within the one-parameter family (since B=1-A) of functions you propose doesn't establish that they're the same.
Moreover, AND and OR can clearly be implemented, to the exclusion of the other, in a physical system---if I interpret h and l to be 1 and 0, e.g., I can implement the logical AND---but *not* the logical OR.
If you don't make any such interpretation, then yes, all I can say is that my system implements some function of the form {(AA,A), (AB,A), (BA,A), (BB,B)}---but that's only because I haven't completed the assignment of logical states to physical ones necessary to implement a particular computation.
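The situation can be tabulated directly. A small Python sketch (again purely illustrative): one fixed 'physical' device, and the two assignments of logical states to physical states just described.

    # One fixed physical behaviour: output h exactly when both inputs are h.
    def gate(p, q):
        return 'h' if (p, q) == ('h', 'h') else 'l'

    read_one = {'h': 1, 'l': 0}  # h=1, l=0: the device implements AND
    read_two = {'h': 0, 'l': 1}  # h=0, l=1: the same device implements OR

    for p in ('h', 'l'):
        for q in ('h', 'l'):
            out = gate(p, q)
            assert read_one[out] == (read_one[p] and read_one[q])
            assert read_two[out] == (read_two[p] or read_two[q])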
Hi Jochen,
> and then proceed to distinguish them by different labels
I explained that the labels are not part of the objects. It's a notational device. I can't distinguish any of the objects from any other by its own properties. If you gave me one of these objects, without telling me which one it is, I would have no way of knowing which one it is. Anyway, this is a digression, so never mind.
> since A=B=C=D=E
No, I am trying to say they are indistinguishable, not the same object. I may have said identical at one point, but I have since tried to stay away from that, as identical can mean that they are literally the same object.
> It's slightly ironic that you should claim that, given that you say the set {(00,0), (01,0), (10,0), (11,1)} is the same as {(00,0), (01,1), (10,1), (11,1)}.
How many times do I have to explain that I'm regarding these as meaningless symbols and not as numbers? Regarded as numbers, these are different sets. Regarded as meaningless symbols, they are the same -- provided of course that the two sets are considered in isolation and not regarded as part of some larger structure such that the symbols in one are intended to correspond to the symbols in the other.
> Then, you're not caring about the structure of the function AND, since it's necessary for its definition.
Yes! That is what I'm saying! I don't care about AND qua AND. I care about AND qua a function of form f.
> the question is whether a system computes AND or OR.
Who says? That's the question you're interested in, for some reason. But my claim is that it is not the question a computationalist is interested in. My claim is that a computationalist cares what logical structure a system implements in terms of the manipulation of abstract, meaningless symbols, and not what logical structure a system implements in terms of the manipulation of binary digits.
> OK, but then, why do you seem to think it has any relevance to whether a system computes AND?
But I'm agreeing with you that there is no fact of the matter regarding whether a system is computing AND -- it is and it isn't, depending on your interpretation. But I'm saying that the pertinent function to a computationalist is not AND but f -- AND or OR interpreted as a manipulation of meaningless symbols.
> You're mapping different tables of values---defining different functions---to one another; you're not renaming variables.
That's because you interpret 0 and 1 as a value and I don't. I interpret it as a meaningless symbol, like the name of a variable. The only meaning 1 or 0 have to me when it comes to computationalism is that 1 is not 0 and 0 is not 1. I cannot say 1 is greater than 0. I cannot say 0 is less than 1. I cannot say 1 corresponds to truth or that 0 corresponds to falsity. They are as abstract as A and B to me.
> because the question was whether the system computes AND or OR.
And my answer is that this is not the question a computationalist cares about.
> does not describe logical conjunction; it does not describe any particular computation at all
That depends on what you mean by computation. Information is being processed. That is what computation means to me. Not that a function is being fed binary digits as input and spitting out binary digits as output.
> but rather, a one-parameter family of functions.
A one-parameter family of functions can be viewed as a function in its own right. That parameter can be an input.
> Just as the fact that two functions fall within the same family does not mean that these two functions are the same
So, this is just a new thought that I'm exploring and was not how I originally conceived of it. But, following from what I said earlier, this means that you can regard them as implementing the same function but being fed different parameters.
Let's look at the original function f again. Don't confuse a,b (input labels) with A,B (values).
a | b | O
A | A | A
A | B | B
B | A | B
B | B | B
So we could extend this to add a further two inputs, giving values for A and B. Function f is now described as taking four parameters a, b, A, B, where a and b are the two inputs in the above table, and A and B indicate what corresponds to A and B in the table. We can pass anything at all into the function, and the function is defined so long as a and b each match one of A and B, and so long as A is distinct from B. f(Chalmers, Dennett, Chalmers, Dennett) therefore yields Dennett. f(1,1,2,1) yields 1, and so on. f(1,2,3,4) is undefined, and f(1,1,2,2) is undefined.
On this schema, f(_,_,0,1) corresponds to OR while f(_,_,1,0) corresponds to AND. f(1,0,0,1) is the same as OR(1,0) while f(1,0,1,0) corresponds to AND(1,0).
If presented with a device which seems to compute AND when low voltage is 0 and high voltage is 1, and OR with the reverse mapping, then its interpretation as an AND gate or OR gate is open to interpretation, but that it implements function f(_,_,h,l) is not as questionable, i.e. it is reasonably interpreted as something that implements function f with the third and fourth inputs fixed to high and low, respectively.
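A minimal sketch of this four-argument f, in Python purely by way of illustration:

    def f(a, b, A, B):
        # A and B say which two values play the roles of 'A' and 'B' in the
        # table above; a and b are the inputs proper.
        if A == B or a not in (A, B) or b not in (A, B):
            raise ValueError('undefined')
        return B if B in (a, b) else A

    assert f('Chalmers', 'Dennett', 'Chalmers', 'Dennett') == 'Dennett'
    assert f(1, 1, 2, 1) == 1
    assert f(1, 0, 0, 1) == 1  # f(_,_,0,1) behaves as OR: OR(1,0) = 1
    assert f(1, 0, 1, 0) == 0  # f(_,_,1,0) behaves as AND: AND(1,0) = 0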
So my view is that computationalism cares not that it implements AND or OR, but that it implements function f. This is all by way of going to extreme lengths to formalise and explain something I find perfectly intuitive and understandable. It seems to me absurd to go to such lengths, but perhaps it is necessary if you don't yet understand what I can mean by abstracting out the semantic values of 1 and 0 and regarding them purely as symbols. It's the same point I've been making all along when I say I care about the contours of the function and not the values of the digits.
Hi DM,
>If you gave me one of these objects, without telling me which one it is, I would have no way of knowing which one it is.
Which is perfectly sufficient for the argument I'm making: you don't know if the object you have is f(1) or f(2); consequently, no bijection exists.
>No, I am trying to say they are indistinguishable, not the same object.
Well, by the identity of indiscernibles, that's really the same, though. And furthermore, even if it weren't, if A, B, C, D and E are totally indistinguishable, then in every place I use A, I can use any of the others---in every sentence, every proposition, every formula. Again, that's enough for my argument (and besides, I wouldn't know what else equality could mean besides this substitutivity).
>How many times do I have to explain that I'm regarding these as meaningless symbols and not as numbers?
Exactly as many times as I have to explain to you that in doing so, you're committing an error, and throwing away structure intrinsic to the functions AND and OR.
>My claim is that a computationalist cares what logical structure a system implements in terms of the manipulation of abstract, meaningless symbols
I realize that this is your claim, but so far, the only justification for it you've produced is that AND and OR somehow have the same structure, and it's that structure a computationalist cares about---hence me trying to point out to you, in those umpteen last posts, that AND and OR do have different structure, and that you're simply forgetting about the structure that differentiates them in order to claim they have the same structure.
>That's because you interpret 0 and 1 as a value and I don't. I interpret it as a meaningless symbol, like the name of a variable.
And that's where you stop talking about AND and OR, or their structure, and instead focus on the structure of an entirely irrelevant function.
So, to again try and make this as clear as possible: a function is defined by its domain, its codomain, and some sort of rule to associate elements of the former with elements of the latter. This is a structure: it's two sets and a relation between them. Throwing away either element of this definition yields a different structure; and you're throwing away the definitions of domain and codomain.
So, AND and OR don't have the same structure, and consequently, if computationalists care about structure, they ought to care about whether it's AND or OR that's being implemented.
>A one-parameter family of functions can be viewed as a function in its own right. That parameter can be an input.
Right, yes! And that function is necessarily different in structure from any function within the family, since those contain no free parameter anymore.
That, I think, is borne out very clearly by your formalization of the 'structure' you think AND and OR have in common. I don't quite agree with it, however: you need to take into account that, to yield only AND or OR, A=~B; otherwise, I could just say A=B=0, and obtain something that's neither AND nor OR.
So one possibility for this would be the function f(x,y,a)=x*y+a*(x+y). For a=1, this yields the OR of x and y, and for a=0, it yields the AND. One could write down a truth table for this function, consisting of triplets of values for a, x, y yielding a single output value.
You could do the same thing for your function, listing all possible combinations of values for a, b, A, B, in a table with five columns, and however many rows you need to write in all possible values for the variables.
In either case, however, what's perfectly clear is that either of these functions has a table of values that's completely different from those for AND or OR---consequently, they're different structures.
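This is easy to check mechanically; a quick, purely illustrative Python version of f(x,y,a), whose full table of values has eight rows where those of AND and OR have four:

    def f(x, y, a):
        return (x * y + a * (x + y)) % 2

    for x in (0, 1):
        for y in (0, 1):
            assert f(x, y, 0) == (x and y)  # a=0 recovers AND
            assert f(x, y, 1) == (x or y)   # a=1 recovers OR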
>that it implements function f(_,_,h,l) is not as questionable
There really is no such function. That's a confusion between physical states (h and l) and logical states---a physical state is not a logical state, although each physical state can be represented by some logical state.
>So my view is that computationalism cares not that it implements AND or OR, but that it implements function f.
But what's your justification for that view? Earlier on, you said that it's somehow their common structure, but clearly, f has a very different structure from AND and OR.
Hi Jochen,
I don't have time to answer at length, so I'll stick to the most important point.
>Exactly as many times as I have to explain to you that in doing so, you're committing an error, and throwing away structure intrinsic to the functions AND and OR.
The logical structure of the AND function is an abstracted description of what an AND gate is doing, I assume you agree. You obviously also agree with me that there are many ways this structure could be abstracted, many levels of detail and ways we could describe what the AND gate is doing.
Where we may disagree is that I think there are different ways to describe and abstract what a function is doing, that we can make choices about what to deem important about a function and what to deem as irrelevant. For instance, since you haven't brought it up, I take it that you take it to be irrelevant in which order we represent the inputs to a truth table -- this may not matter for AND or OR but it would matter for an IMPLIES (a.k.a. NOT_A_OR_B) gate.
For instance, for me, the arithmetic operators PLUS and MULTIPLIED_BY share some structure, in that they each take two inputs and produce an output from the broadest of the domains of the inputs (i.e., if the two inputs are integers, the output will be an integer; if one is a real, then the output will be a real). That's not to say they are the same function, but viewed at a certain level of abstraction, they can be said to have the same structure.
This is what I mean when I say AND and OR share structure. I am not saying that they are the same function, but I am saying that what they have in common -- that they are both f-functions -- is what is important for a computationalist, and what divides them is irrelevant.
Hi DM,
>The logical structure of the AND function is an abstracted description of what an AND gate is doing, I assume you agree.
Completely. And that description includes domain, codomain, and association rule---otherwise, you simply haven't taken into account the full abstract structure.
> For instance, since you haven't brought it up, I take it that you take it to be irrelevant in which order we represent the inputs to a truth table
I take it to be irrelevant wherever it is irrelevant---i.e. for functions like AND and OR, where it doesn't make a difference, but not for functions like logical implication, etc. You need to take as much structure into account as suffices for the description of a function---no less.
>the arithmetic operators PLUS and MULTIPLIED_BY share some structure
Agreed. They do share some structure, but there is further structure that differentiates one from the other. If you just 'forget about' that structure, then yes, they will look the same---but then, you've merely used an incomplete definition of those operators.
Basically, at some level of coarse-graining, virtually everything blurs into one; but that doesn't tell you anything about the identity of objects---merely that you're not using a sufficiently fine-grained structure to differentiate them.
I mean, you can clearly see this at work with the functions we both proposed above. At the most fine-grained level, we have AND and OR, functions defined on B2 via their behavior when taking its elements as input---and perfectly different functions, at that level.
Then, you can 'delete' a bit of structure---you forget, essentially, whether 1 or 0 is the top element of B2. You can then recover either of the original functions AND or OR by supplying that information as an extra input, as in my function f(x,y,a)---the value of the 'a'-bit essentially identifies whether 1 or 0 is the top element: if a=1, then the function implements the OR of the inputs x and y, meaning that 0 is the top element, while if a=0, we get the AND, and 1 is the top.
You can delete further structure, and obtain an ever-wider class of functions where you have to supply ever more information to re-obtain AND or OR. For instance, as you did, you can forget that A=~B, or that a and b are elements of B2, and so on---and then, again supply the appropriate information to the now generalized function, to re-obtain the original ones.
Of course, this game can be played ad infinitum. For instance, we can take the truth table of my function f(x,y,a), and again forget whether 1 or 0 is the top element---treat the entries in its truth table, as you want to do, as abstract symbols---and add in a fourth input, yielding f(x,y,a,b), which produces different functions of x and y for each choice of a and b.
The problem is that there is no obvious point at which to stop. If, as you say, computationalists somehow don't care about the structure lost in going from AND and OR to the function f(x,y,a), then why should they care about the structure lost in the next step, or the following?
I think the only consistent option is to care about the full structure of a given computation---that is, care about whether the function being implemented is AND or OR. Everything else just leads to absurdity. Besides, so far, I don't see any reason why computationalists ought to tolerate the loss of any structure at all---certainly, the natural position would be to take the full structure into account, and not some arbitrarily pruned one.
Hi Jochen,
> You need to take as much structure into account as suffices for the description of a function---no less.
OK. But, say, if I had an "implies" gate, where each input is treated differently, that gate could be interpreted as a NOT_A_OR_B gate or as an A_OR_NOT_B gate. Those are two different functions, depending on which input we designate as A and which input we designate as B. Does this strike you as a meaningful difference in interpretation, of the same kind you perceive in AND/OR where the voltages can be interpreted as true or false, or does this seem to be a more trivial difference that ought not trouble a computationalist? It seems really trivial to me but I'd be interested to see what you think.
> The problem is that there is no obvious point at which to stop.
Neither is there an obvious point at which to start. You can get even less abstract than AND/OR if you want to. You can regard the absolute arithmetic relationship of the voltage values as being important if you want to -- perhaps an input signal is a little stronger than an output signal because of energy wasted by the AND gate.
So I don't think that's a problem. We agree that there are lots of ways to characterise what an AND gate or an OR gate is doing. You could go pointlessly fine-grained or you could go pointlessly coarse-grained. We are very, very close on what kind of grain interests us; I'm just a hair coarser than you.
I am claiming that to a computationalist, the important structure is the one we get when we delete a bit of structure - when "you forget, essentially, whether 1 or 0 is the top element of B2". As you say, you could go farther, deleting more structure, but I don't think a computationalist would want to go much farther (they might want to delete the information about the ordering of inputs and outputs, so that an A_OR_NOT_B computation is equivalent to a NOT_A_OR_B computation).
The computationalist view is that there is *some* mapping of what a conscious physical computation is doing that captures the abstract essence of what is important for consciousness. A computationalist will want to discard all structure that seems arbitrary and unimportant -- such as whether a signal is interpreted as true or false. A computationalist will want to preserve all structure that is required to explain the pattern of how information is being processed and propagating through the system. So there is a sweet spot, an appropriate level of description, and I think you're not quite there if you think the binary value of the signals needs to be interpreted as numbers, because their value as numbers plays no role in the computation -- the computation can be understood just as well if described in terms of A's and B's.
Hi DM,
first of all, I'm happy to see that we seem to finally be making some progress---it seems you now accept that AND and OR are bona fide distinct structures, even though you're still arguing that it doesn't matter if we forget about what differentiates them, where earlier on, you seemed to argue that they just flat out have the same structure. So let's see if we can build on that.
>It seems really trivial to me but I'd be interested to see what you think.
So as I said earlier, you need to take into account the full structure of a given computation in order to implement it physically. This, for functions not symmetric in their arguments---i.e. where f(x,y)=/=f(y,x)---includes the identity of the inputs: whether a implies b, or b implies a, are simply different questions. Thus, forgetting about which input is which yields, again, a different computation---if you wanted to know whether a implies b, and you get an answer that 'either a implies b or b implies a', you haven't managed to compute an answer to your question, because you haven't instantiated the right computation.
>Neither is there an obvious point at which to start.
There is, though. There is always a minimum amount of structure you need to take into account for each computation---so, if you have an abstract five-state FSA, you need minimally five distinguishable physical states to implement it. You can do this with a system that has a great many more different physical states, but those differences then simply don't map to distinctions in the logical space of the computation---so effectively, if you have six physical states, then at least two map to the same logical state, and so on.
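In sketch form (Python, purely illustrative): an implementation map sends physical states to logical states, and with only four physical states no such map can cover all five logical states.

    from itertools import product

    logical = ['s1', 's2', 's3', 's4', 's5']  # the five FSA states
    physical = ['p1', 'p2', 'p3', 'p4']       # only four physical states

    # Enumerate every assignment of a logical state to each physical state:
    # none is surjective, so the five-state FSA cannot be implemented.
    assert not any(set(assign) == set(logical)
                   for assign in product(logical, repeat=len(physical)))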
This harkens back to the argument I made regarding whether it's important to have the representation of some computation's output be represented as white text on black, or the other way around---it is important if, and only if, that distinction maps to a distinction in the logical space, i.e. if it 'means' something different for the output to be in white on black, or black on white.
However, what you can't do is represent the five-state FSA with four physical states---then, you necessarily lose structure of the computation, and effectively instantiate a different one, which can be generated from the original by identifying two states of the five-state FSA. This is what you're proposing to do, if you say that we can forget about the difference between 1 and 0.
Now, it may be, in some special case, that the four-state FSA implements the same function as the five-state one; and thus, that functional properties remain invariant under this change. In this case, it might likewise be that performing this change on a computation instantiating a conscious mind does not yield any appreciable difference. But certainly, in the general case, this is false, and forgetting about structure yields a computation that differs from the one you set out to instantiate---and in particular, may no longer correspond to the same mind, or any mind at all.
So there really isn't a possibility to go 'pointlessly fine-grained': there exists a sort of minimum resolution at which you can ensure that the system implements the computation you care about.
>but I don't think a computationalist would want to go much farther
The problem is that even this is simply too arbitrary. I could also go and delete some other structure. For instance, I could take AND and OR to be defined as AND(x,y)=x*y, and OR(x,y)=x*y+x+y, and then change the set from which x and y are drawn. (Whereas what you're doing is essentially changing the definition of '*' and '+', which are, to get the full structure, taken to be the operations on the binary field, i.e. modulo 2.)
So, for instance, I can allow elements to range over the set {0,1,2}---relaxing the structural constraint that says that they must be taken from {0,1}. Or I could leave them to range over the natural numbers. And so on.
Or, I could make any other adjustment in structure. I could relax the lattice structure of B2. I could change just the codomain. And so on...
The problem now is that we have no way, a priori, to say which of these changes---if any---leave whatever we should care about invariant (or at least, I don't believe we have such a way). But the same holds for your proposed change of structure---we clearly get different functions upon your transformation, and what mind is implemented may depend on what function is implemented.
>A computationalist will want to discard all structure that seems arbitrary and unimportant
The problem is, then, that there really is no such arbitrary and unimportant structure---at least, none that can be singled out a priori. AND and OR correspond to different computations, and in general, one should care about which computation is implemented; thus, forgetting about structure that eliminates the difference between computations is potentially disastrous, identifying things that are eo ipso different. At the extreme, we may end up identifying mind and no-mind, just as we identify AND and 'no-AND', i.e. OR.
So in the end, there is one unique and guaranteed to be safe way to implement a computation---which is, to exactly mirror its structure physically. Everything else is predicated on pre-theoretical prejudice, such as 'since AND and OR look so similar, forgetting about their difference should be OK'. Such intuitions are notoriously unreliable---and since one can make do without them, it's not hard to see that that's what one should do.
Hi Jochen,
Whether two things have the same structure depends on what level of analysis you are applying. I still think AND and OR have the same structure, given what I deem to be relevant about AND and OR. But I am happy to accept that they have distinct structures given what you deem to be relevant about AND and OR. So I don't think I am changing my position so much as we are both understanding each other better. But that is also a kind of progress.
I'll set aside the issue of which input we designate as A and which input we designate as B. I'll just reassert that this seems really trivial to me and it would not seem plausible to me to attack computationalism on the grounds that an interpretation might assume a certain ordering of inputs.
> There is always a minimum amount of structure you need to take into account for each computation
I think you miss my point. I was saying there was no *maximum* amount of structure. You said there is no obvious point at which to stop (deleting detail, getting coarser-grained). I said there was no point at which to start (deleting detail, getting coarser-grained). As soon as you interpret a physical AND gate as implementing the AND function, you are throwing away a massive amount of detail, such as the values of the voltages, the materials it is made of, etc. You quite reasonably deem all this to be irrelevant to the function you care about, and you throw all this detail away. I am just throwing away some extra detail, although, interestingly, not detail that was there in the physical AND gate, but detail from the model you constructed to represent the operation of the physical AND gate. The detail I am throwing away (whether values correspond to the number 1 or the number 0) wasn't even there to begin with; it was introduced by you. All the more reason to dispose of it.
> it is important if, and only if, that distinction maps to a distinction in the logical space
Right, so whether or not it matters is up to you as you interpret it. A computationalist would not typically regard it as important either way, but you could go too fine-grained in your interpretation and include this in your model of what the system is doing if you were inclined to. I think that's what you're doing when you consider the values 1 and 0 to be intrinsic to the computation.
> if you have an abstract five-state FSA, you need minimally five distinguishable physical states to implement it
I agree with this but I don't see how it does anything to defend you from the accusation that you are too fine-grained. For a start, it seems to be a disanalogy, because we're not discussing the problem of trying to design a physical system to implement a given abstract structure; we are discussing the opposite problem of trying to reverse engineer a physical system to discover what computation it was intended to perform. Going too fine-grained would therefore be like being presented with the system intended to implement the five-state FSA and interpreting it as a billion-state FSA, as you would do if you started caring about irrelevant details like what the temperature of the device was or what the precise values of the voltages were or what have you. Going too coarse-grained is also something you could do if you interpreted it as a one-state machine, as you would if you decided not to care about any of the differences intended to distinguish between the five states.
You can care about too much detail or you can care about too little detail. Or you could do what you are doing and care about detail that isn't even in the original physical system.
> The problem is that even this is simply too arbitrary.
We are agreed that there is no canonical way to interpret a system as performing a computation. That's the pixies argument. We're focusing on the bit-flipping argument here, and the crux of that is whether a computationalist should care whether a value is mapped to a 1 or a 0. I really don't see any reason at all why they should, because (a) this information is not part of the physical system we are interpreting, and (b) this information serves no real purpose in interpreting the system. We can interpret it just as well with meaningless symbols.
The only advantage we have with 1 or 0 (actually, not 1 and 0 at all, but TRUE and FALSE) is that we can map it to ordinary folk concepts such as "and" and "or", which makes our interpretations more intuitive but no more powerful than if they were defined in terms of the like of f-functions.
> So, for instance, I can allow elements to range over the set {0,1,2}
Sure you could, but why would you? If there is no physical state we would associate with 2, then 2 can play no role in the production of consciousness, and so we can discard it. If computationalists prefer to discard irrelevant detail, they would surely discard this. They would say that because this is an irrelevant expansion of the interpretation, this is not a correct interpretation. Again, we may agree that there is no canonical interpretation, but this is the pixies argument again.
> The problem now is that we have no way, a priori, to say which of these changes---if any---leave whatever we should care about invariant
I'll say again -- the computationalist will want to discard any information that does not seem to be useful in explaining or modelling the behaviour of the system and no more. Whether values map to 1 or 0 is discardable because any abstract symbols will do -- we don't need numbers.
That's not quite as ironclad as the computationalist would need to defeat the pixies argument, but neither is it entirely arbitrary. There are good, defensible reasons for discarding some information while keeping other information.
> The problem is, then, that there really is no such arbitrary and unimportant structure---at least, none that can be singled out a priori.
But of course there is! The problem is you're starting from the wrong place. You're starting from the AND function, defined in terms of binary digits, and saying there's no way to know what to discard. But that's not the starting point. The starting point is a physical AND gate, and as soon as you start interpreting it as computing the AND function you've thrown away a massive amount of structure.
Hi DM,
>Whether two things have the same structure depends on what level of analysis you are applying.
That's true if you're talking about the abstract structure of some concrete physical system, but we're talking about a specific structure, which has the name 'AND'. AND is a function, and functions can be rewritten as relations on the Cartesian product of their domain and codomain; and of course, relations are structures.
So AND is the relation {(00,0), (01,0), (10,0), (11,1)}; any other structure isn't AND. You can then talk about certain structures you derive from AND, e.g. under deletion of the difference between 0 and 1, and so on, but fundamentally, AND simply is the name of a certain structure, defined on a set consisting of elements of B2.
The same goes for the case of switching input labels for the implication: relations are defined as ordered tuples, so changing the ordering changes the relation, i.e. the structure.
>I was saying there was no *maximum* amount of structure.
Maybe I expressed myself badly. I meant to say that there is a minimum of structure that a physical system needs to possess in order to implement AND, i.e. the particular relation above. For any other structure, that extends that of AND in various ways, you'd need a higher minimum structure in the physical system---structure which, for implementing AND, is irrelevant.
So, again, there is a unique amount of structure that needs to be taken into consideration to implement AND---to wit, you'd need four different input states, and two different output states. Anything extending that is simply irrelevant; anything reducing that implements a different structure, obtained by modding out some relation (such as the one distinguishing 1 and 0).
So there is a unique point at which to stop, in your words, adding structure: when you've got enough to implement AND. In the other direction, no such cut-off exists (except, of course, when you run out of structure). So there's no 'going too fine-grained'.
>not detail that was there in the physical AND gate
No, but detail that's there in the abstract definition of AND---after all, showing that the physical system doesn't uniquely pin down the structure of the computation is the whole point of this ordeal.
>I agree with this but I don't see how it does anything to defend you from the accusation that you are too fine-grained.
As I said, there's no 'too fine-grained'---other than, perhaps, in the case where there's a simpler logical structure implementing the same function; and even then, I think it's questionable. But there is a 'too coarse-grained', and basically, you're too coarse-grained if relations present in the abstract model of the computation you want to implement are not mirrored in the physical system.
>we are discussing the opposite problem of trying to reverse engineer a physical system to discover what computation it was intended to perform
Right, and the question here is if it is capable of supporting enough structure to mirror that of the computation. A four-state physical system isn't capable of instantiating a five-state FSA, so we know it can't instantiate that automaton.
>(a) this information is not part of the physical system we are interpreting
DeleteExactly---that's the point: it is part of the abstract structure of the computation, but not in any unique way part of the physical system. So whether we implement AND or OR---what computation we perform---depends on the distinction between 1 and 0, but there's nothing intrinsic to the system that mirrors this distinction.
>(b) this information serves no real purpose in interpreting the system
On the contrary, it's necessary information: without this distinction, which computation is instantiated is underdetermined.
>Whether values map to 1 or 0 is discardable because any abstract symbols will do -- we don't need numbers.
No, but we need the abstract structure of the computation---for instance, encapsulated in the relation (or set of relations) 'For all x in B2, 0<=x<=1'. Without that structure, we're not talking about AND anymore; and since mind, on a computationalist view, is basically equivalent to what is computed, if the question of what is being computed has no unique answer (i.e. if we can't say whether it's AND or OR), then computationalism is in trouble.
>You're starting from the AND function, defined in terms of binary digits, and saying there's no way to know what to discard.
Yes, I'm indeed starting from the question of which computation is being performed---since again, on the computationalist view, that computation is equivalent to a mind.
>The starting point is a physical AND gate
This gets things backwards: trying to find an abstract model of a physical system, rather than trying to instantiate an abstract structure within a physical system. The former is basically finding a computational model of a brain; while the latter is the question of determining what the brain computes, i.e. on computationalism, what mind it gives rise to (if any).
Hi Jochen,
Really interesting stuff. It seems we are still worlds apart. We're coming at this from completely different angles and I can scarcely begin to understand where you're coming from.
For some reason, you conceive of the issue as trying to build a physical system to implement the function AND, whereas I conceive of the issue as trying to figure out what function a physical system is computing. From your point of view, AND is the fixed point, but from my point of view, the physical system is the fixed point. As such, from your point of view, the values of the binary digits are an intrinsic part of the problem (being intrinsic to the definition of AND), whereas from my point of view, the values of the binary digits are irrelevant, because there are no binary digits in a physical system. As such, I end up with an interpretation that dispenses with the values of the binary digits, and this seems acceptable to me, but you regard this as a failure because we're no longer talking about implementing the function AND.
What I don't understand is why you suppose that computationalists care whether we implement the function AND?
My detailed response follows.
> That's true if you're talking about the abstract structure of some concrete physical system, but we're talking about a specific structure, which has the name 'AND'.
No we're not. We're talking about the abstract structure of a physical system. Remember that this began as a criticism of computationalism based on the idea that we can't give any justification for interpreting an AND gate (a physical system) as implementing the function AND (an abstract structure) rather than as implementing the function OR. Your argument that the AND function is the AND function has no bearing on this original criticism. What matters instead is whether it is possible to describe what the physical system is doing using *some* abstract structure that is relatively justifiable.
Besides, I think it is perfectly legitimate to regard two distinct abstract objects as having some structure in common. That's all I'm doing when I say they *have* the same structure (at a certain level of analysis). I'm no longer saying they are precisely the same structure. The structure they have in common is the structure I care about. I don't care about how their structure relates to numbers such as one or zero.
This is not because I have changed my position but because I am now playing your language game -- I have adopted the view that AND is defined in terms of operations over the binary digits. I originally conceived of AND as defined in terms of operations over meaningless symbols.
There is not one right way to conceive of AND. The original concept of AND was not as a function over the binary digits at all but as a function over the set {TRUE,FALSE}. Neither of these is greater than the other, so you can't say for instance that TRUE is the maximum of the set. TRUE =/= 1 and FALSE =/= 0. TRUE is not a number or a digit but a concept which is related to the folk concept of truth and the philosophical concept developed in frameworks such as the correspondence theory of truth. TRUE and 1 are different concepts that we equate for the sake of convenience. The fact that you think AND can be defined in terms of {1,0} sometimes and {TRUE,FALSE} at other times shows that even you are flexible on these interpretations, so you should not be surprised that my own original interpretation (in terms of purely abstract meaningless symbols) is different yet again.
> Maybe I expressed myself badly. I meant to say that there is a minimum of structure that a physical system needs to possess in order to implement AND
Right, but this has no bearing on the argument I was making, which is that there is no reason to assume that the AND function is privileged as your starting point when interpreting what a gate is doing. You can get pointlessly fine-grained by choosing a more detailed model as your starting point. I agree with the point you are making here but I don't see its relevance to our disagreement or how it helps your case at all.
> So there is a unique point at which to stop, in your words, adding structure: when you've got enough to implement AND
You've got this backwards. We're reverse-engineering. What we're free to add structure to or remove it from is not the physical system, which is fixed, but our abstract model of the physical system. We can indeed get too fine-grained, because we can choose to model irrelevant aspects of the physical system. You seem to be coming from the other point of view, where AND is the fixed point and we're trying to find a matching physical system. But the whole point I'm making is that your conception of AND is not an appropriate model for what an AND gate is doing -- a better and more parsimonious model would be one which discards any meaning associated with the symbols 1 or 0.
> No, but detail that's there in the abstract definition of AND
So why do you choose to interpret it as AND rather than as a structure that doesn't introduce any extraneous detail?
> and basically, you're too coarse-grained if relations present in the abstract model of the computation you want to implement
Again, backwards. I don't want to implement AND. I want to parsimoniously interpret an AND gate as performing some abstract computation or function. I don't interpret it as computing AND but as computing f -- my version of AND that doesn't care about the meaning of 1 or 0.
> Right, and the question here is if it is capable of supporting enough structure to mirror that of the computation
What is "the computation"? It computes whatever structure we reasonably interpret it to compute. Why are you assuming it is computing your AND function? Your whole argument is based on the point that it is not reasonably or uniquely interpreted to compute the AND function, so why interpret it so when you can throw away a bit of irrelevant detail and interpret it as computing f instead?
> On the contrary, it's necessary information: without this distinction, which computation is instantiated is underdetermined.
Only if you assume computation must be interpreted as an operation over binary digits. I see no basis for this assumption. It can be interpreted as an operation over meaningless symbols.
> Yes, I'm indeed starting from the question of which computation is being performed---
But who says a computation is a manipulation of binary digits? Certainly there are no binary digits in a human brain. A computation is rather a manipulation of information, not conceived of as strings of binary digits each of which has meaning. All that matters is that there are patterns in terms of differences and similarities. It doesn't matter what the value of an isolated bit is.
> The former is basically finding a computational model of a brain; while the latter is the question of determining what the brain computes,
I don't understand where you're coming from on this. Finding a computational model of the brain is the same as determining what the brain computes. Both of these are what we do when we try to find an abstract model of a physical system. The latter problem, trying to instantiate an abstract structure within a physical system, is the problem of building a new realisation of a mind (especially on a new substrate) once you have determined what it is that a brain computes.
Hi DM,
>For some reason, you conceive of the issue as trying to build a physical system to implement the function AND, whereas I conceive of the issue as trying to figure out what function a physical system is computing.
That's not quite how I'd put it. Rather, my aim is more akin to trying to decide whether a given physical system implements a given computation. So, say there's a set, S, of computations (which, to me, is the same as the set of computable---partial recursive---functions; though I feel you may differ here).
Within S, there are computations performing different functions (this is horribly put, but I'll trust you get the gist---computations computing different things, or something like that). Some, say, compute the determinant of a matrix. Some sort their inputs. Some add two binary inputs. Some compute the AND of two binary inputs.
And, if computationalism is right, some compute minds. So let's say C_M is a computation yielding a mind. The question is now: for a given physical system P, does P implement C_M?
Now, clearly, even to the most optimistic computationalist, this question is unanswerable: we don't know C_M. However, there's a question that is logically prior, and that, I believe, we can answer: for any computation C in S, can we decide whether P implements C?
If there is a negative answer to this question, then we can't decide, likewise, whether P implements C_M. So, my angle of attack is to take two eo ipso different computations---in my original example, the adder and the schmadder---and show that we have equal justification to claim that P instantiates one, as we do for the other.
So that's why I start with the computation: it's in a sense always possible to find *some* computation that is instantiated by a physical system, but that question misses the point---in order to answer the question of computationalism, we have to be able to decide whether a given computation---a formal structure---is implemented by P.
In this sense, both AND and OR are computations in S---call them C_& and C_v for short. These are different, distinguishable elements of S---one computes the AND of its inputs, one computes the OR. If there is no fact of the matter regarding which of these computations is instantiated by P, then there is, in general, no fact of the matter which computation is instantiated by P.
Nevertheless, we can use P to compute the AND or the OR; consequently, if it's not an objective fact of the matter which of these P instantiates, then it's in fact the user which supplies, in a sense, the missing structure. But if that's the case---if we need an intentional user to 'pin down' the reference relation between physical states of P and logical states of C---then computation will not serve as an underpinning for mind, as we depend on the mind's intentional faculties to render the question of what is being computed decidable: we end in circularity.
Consequently, the difference between AND and OR---as elements of S---is meaningful, and eliding it really only means considering a different element C_? of S, with the property that both C_& and C_v can be mapped to C_? upon deleting some structure---namely, the difference between top and bottom element. This mapping is many-to-one, and consequently, loses information---knowing only C_?, neither C_& nor C_v are uniquely determined: which is the point I am making.
So, from my point of view, starting out with the physical system makes no sense: you can always find a structure instantiated by that system; but that doesn't mean you know what the system computes. Knowing that structure doesn't yield any new information, since it's inherent in the physical system from the start---so producing a computation simulating the system doesn't answer what the system computes; the question gets kicked up another rung, to the question of what the simulated system computes. (If you have a model of the system producing a high-voltage output if and only if both its inputs are on high voltage, then you still don't know if it computes AND or OR.)
>because there are no binary digits in a physical system
There are, of course, no formal entities at all in a physical system. What a system does if it only transforms high-voltage inputs into a high-voltage output is not a computation; only if you map this to some formal object, some element of S, does it become any computation at all.
>What I don't understand is why you suppose that computationalists care whether we implement the function AND?
So, I think the answer to this question should now be clear: because they clearly need to care about what computation is instantiated at all by a physical system, if they want to even possibly be able to answer the question of whether it's C_M.
>We're talking about the abstract structure of a physical system.
Again, to me, this is getting it backwards: the question is whether we can justifiably say that P implements C_&, not whether we can find some abstract structure C_P that P supports. The latter is vacuously true; the former asks a genuine question of how computation is to be understood.
>Besides, I think it is perfectly legitimate to regard two distinct abstract objects as having some structure in common.
Of course: this is borne out by the fact that we can map both C_& and C_v to C_?. But in doing so, we lose the identifying features of C_& and C_v. Think about John and Jim, who are identical twins, except John always wears a red shirt, and Jim always wears a green one. On a black-and-white photograph, they'll look the same; but that simply means that there are no structural differences between them except the color of their shirts, which we've just erased. To talk about John or Jim alone, we need to talk about the color of their shirts; otherwise, we're talking about some new abstract object, [John or Jim], which is different from both John and Jim.
>The original concept of AND was not as a function over the binary digits at all but as a function over the set {TRUE,FALSE}.
That's not a difference, though, other than in notation, i.e. in symbols used to refer to the abstract entities denoted by 'top element of B2' and 'bottom element of B2'---i.e. it's not a difference in the same sense as referring to Jim as 'Jimbo' and referring to John as 'John-Boy'. The reason for this is that the elements of GF(2), i.e. the two-element field, fulfill the same relations as the logical constants TRUE and FALSE, i.e. whenever a formula evaluates logically as true, the corresponding algebraic expression evaluates to 1, and likewise for 0 and false. So, no structure is lost in substituting one for the other.
>I agree with the point you are making here but I don't see its relevance to our disagreement or how it helps your case at all.
Again, I start with the question of whether P implements C_&; any structure extraneous to that question is simply irrelevant. The point is that even at the level of structure where one would claim that P implements C_&, an equally as justified claim can be made that P implements C_v.
>So why do you choose to interpret it as AND rather than as a structure that doesn't introduce any extraneous detail?
Because of use: given P, I can indeed compute the AND. However, the point is that this is not unique: I can equally well compute the OR. This point persists, even if you take into account additional layers of structure: then, at whatever level you choose to claim that P implements some C_P, I am equally justified, by a similar manipulation as that which substitutes C_v for C_& as 'what P computes', to conclude that it in fact computes C_P'. So if you (or some hypothetical computationalist) comes to claim P computes C_M (which relates to a mind the same way C_& relates to AND), I can point out that one can equally well hold that P computes C_M' (which may or may not yield a mind, or a different mind).
>I want to parsimoniously interpret an AND gate as performing some abstract computation or function.
You can do this; but it doesn't answer the relevant question, which is, indeed, whether P computes C_& (respectively, whether P computes C_M).
>Why are you assuming it is computing your AND function?
I'm not assuming this; I'm questioning whether we can interpret P as reasonably implementing any computation at all, and using AND (or the adder) as an example where it's as reasonable to interpret it as implementing one computation as it is to interpret it as implementing another, different one.
>Only if you assume computation must be interpreted as an operation over binary digits.
Not *any* computation, no; but the computation of AND, certainly. For other computations, the same trick works, since each computation has some alphabet on which it operates (in a book I'm reading at the moment, it's called the 'universe', or (in German) 'Individuenbereich', which literally means 'domain of individuals' and I guess is best translated as 'universe of discourse'---which is B2 in the case of AND). 'Meaningless symbols' then simply would be a different universe of discourse, and consequently, a different computation.
>All that matters is that there are patterns in terms of differences and similarities.
Note, however, that any pattern of differences and similarities at all can always be translated into a binary pattern---that's what makes binary so powerful.
>Finding a computational model of the brain is the same as determining what the brain computes.
No. One tells you, e.g., what brain state you will obtain if the brain is initialized in such-and-such a state and left to run for such-and-such a time; the other tells you what those brain states mean---what is being computed.
In case it's not clear, I meant: "it's not a difference in the same sense as referring to Jim as 'Jimbo' and referring to John as 'John-Boy' *is not a difference*", above.
Hi Jochen,
I follow the general type of argument you are trying make and I agree that it is a legitimate approach. I agree that if you can show that there is no best way to interpret what computation P is performing, then computationalism clearly doesn't work (unless, I guess, it turns out that all these equivalent interpretations produce the same mind, which I think might be a plausible response, but let's not get into that).
Where I think you go wrong is that instead you're basically arguing that there are two equally justifiable unreasonable ways to interpret what computation P is performing. That's not enough to prove your case. You need to show that the best, most parsimonious interpretation is non-unique.
As soon as you start insisting that AND is as fine-grained as you can go, you lose me. You seem to be starting from the assumption that the system is supposed to implement AND or OR rather than keeping an open mind. By modelling the temperature of the system, as I have suggested, you can get more fine-grained than AND. Or you could get more fine-grained in another sense by interpreting the system as computing a relation between members of the Beatles, where high voltage represents John Lennon and low voltage represents Paul McCartney. In addition to having different values, you would be bringing in structure outside the system, such as the fact that Paul McCartney is alive and John Lennon is dead (which to me is analogous to what you do when you assume it is important that 1 is greater than 0).
I think the point where we depart is that you at times appear to be implicitly assuming that we ought to interpret P as computing some operation defined on binary digits, where it is important that 1 is greater than 0 (and at other times you do not appear to be making that assumption -- I'll get to that later). I don't make that assumption. For me, a computation is a standalone mathematical object that imports nothing from outside the system. It doesn't matter that the symbols might also commonly represent values that have numerical relationships to each other, because those numerical relationships play no role in the system.
On my conception of what a computation is, where symbols are valueless and meaningless apart from that they differ from each other, there is no difference between the computation an OR gate is performing and the computation an AND gate is performing.
> Nevertheless, we can use P to compute the AND or the OR; consequently, if it's not an objective fact of the matter which of these P instantiates
Where a system can be interpreted equally reasonably to implement either of two abstract structures, I agree that it doesn't make sense to say that it is objectively computing one of these structures.
But that doesn't by itself rule out that there is a best interpretation. If it is possible to discard the differences between such structures and yet retain some structure they share, then it may be that the system is best interpreted as computing that structure instead, particularly if what is discarded does not actually correspond to anything in P. In the case of AND and OR, this means we are left with something like your computation C_? (or just how I think of AND/OR, i.e. as a function on meaningless symbols). So the interpretation of the AND gate as computing C_? is more reasonable than the interpretation of the AND gate as computing C_v or C_&.
> -is meaningful, and eliding it really only means considering a different element C_?
It's clear that you're not assuming, as I had originally supposed, that S consists of operations on binary digits. In that case I just go back to my argument that the best interpretation of what a physical system is computing is the most parsimonious one. It is more parsimonious to interpret a system as computing C_? than C_v or C_&, since C_? is simpler than C_v but any system that can be interpreted as computing C_v can trivially be interpreted as computing C_? and vice versa. The structure you add when you deem the value of the signals as binary digits to be important is unneeded to explain the system and so should not form any part of a parsimonious interpretation. It's like interpreting the signals as corresponding to members of the Beatles. A computationalist can plausibly hold that the unique computation a physical system is instantiating is just the most parsimonious one -- the simplest one that yet manages to capture how it processes information and how it behaves. The computationalist would therefore reject that the system is computing C_v or C_& and instead regard the system as computing C_?.
Not that I particularly endorse this view. It seems too subjective. But it does deal with the bit-flipping argument in my mind. This specific way of interpreting a system to be performing more than one computation is less impressive to me than the general criticism that one can go out of one's way coming up with convoluted unparsimonious interpretations. If you find the bit-flipping argument especially persuasive, then you are (in my mind) implicitly assuming that parsimony matters, but then the bit-flipping argument falls apart because any interpretation that relies on a certain mapping of signals to logical values is not the most parsimonious interpretation one could make.
> That's not a difference, though, other than in notation
That seems to me to be a case of wanting to have your cake and eat it too. Of course I agree with you here, but if you want to say that it is important that 1 > 0, then you must deem a function over {TRUE,FALSE} to be different from a function over {0,1}, because it is not the case that TRUE > FALSE. Neither is it the case that TRUE is odd or FALSE is even. These are properties of numbers, and TRUE and FALSE are not numbers. If we can use 1 as a stand-in for TRUE and 0 as a stand-in for FALSE, then as long as you don't care about the roles of 1 and 0 as numbers it is also fair to use 0 as a stand-in for TRUE and 1 as a stand-in for FALSE.
Now, of course, you can at this point say that AND is defined not in terms of 1 and 0 but in terms of TRUE and FALSE, and 1 and 0 just represent TRUE and FALSE. And then we would have the same problem. AND would not be OR, because TRUE has meaning in addition to the fact that it is not the same value as FALSE. It for instance has connotations of correspondence and consistency, whereas FALSE has the opposite connotations. But you can't say that TRUE is the maximum value of B2, and this is what you singled out as what was important to you. My point is that you obviously have a flexible interpretation about what the values for AND and OR are in terms of what differences between them are salient, so you should not be surprised that my interpretation is different from yours.
> The reason for this is that the elements of GF(2), i.e. the two-element field, fulfill the same relations as the logical constants TRUE and FALSE, i.e. whenever a formula evaluates logically as true, the corresponding algebraic expression evaluates to 1, and likewise for 0 and false.
But the same holds in the meaningless values used in C_?. You choose what you want to correspond to what, and you can get the corresponding evaluations.
> Again, I start with the question of whether P implements C_&;
Again, this is not a reasonable starting point. You should start by asking what is the structure that best captures P's behaviour. This is not C_& but C_?. For your argument to work, you would have to show that there is more than one structure which best captures P's behaviour. C_& is as good as C_v, but neither are as good as C_? and I would suggest that nothing is.
> So if you (or some hypothetical computationalist) comes to claim P computes C_M
C_M will be like C_? in that it will not care about the value of the signals -- it will be the most parsimonious interpretation of what P is doing. C_M' will therefore be the same as C_M. This is because (a) there are no binary digits in a human brain and (b) you can implement the same high level functions (e.g. storing and retrieving information) with opposite signals and gates. On functionalism, the difference couldn't possibly matter. All that matters is that the same functions are implemented.
Hi DM,
first, regarding something I said earlier:
>'Meaningless symbols' then simply would be a different universe of discourse, and consequently, a different computation.
The more I think about it, the less certain I am of this. I mean, in what sense is 'computation over meaningless symbols' really computation at all? When you enter meaningless symbols into a machine, and you get meaningless symbols out, then what has been computed? You'd be just as well off with the original meaningless symbols.
And certainly, the way computation was originally conceived, it wasn't an operation on meaningless symbols---rather, it was meant to emulate a mathematician doing calculations, proving theorems, and the like. So while all manipulation occurs according to syntactic rules, the symbols will typically have a well-defined meaning; otherwise, there's really no 'calculation' or 'theorem-proving' going on at all. But anyway.
>Where I think you go wrong is that instead you're basically arguing that there are two equally justifiable unreasonable ways
Unreasonable in what sense? I mean, both the adder and the AND-gate are built with performing a specific computation in mind---adding or taking the AND. To claim it's then unreasonable to interpret them as adding or taking the AND seems a bit strange to me.
Indeed, my aim was to be as charitable as possible: I let a hypothetical computationalist specify a physical system, together with the computation they take it to perform---which basically grants a plethora of challenging interpretational tasks up front: finding the right level of description, deciding which differences are meaningful and which aren't, and so on. All of that, I'll give the computationalist for free, because it still turns out to be the case that given this best possible interpretation, the question of which computation is being performed doesn't have a unique answer. The AND, respectively adder, is simply an example to show how the general argument works.
>As soon as you start insisting that AND is as fine-grained as you can go
I'm not saying that: I'm saying that if we want to decide whether a system implements the AND, then there's a maximum depth of fine-graining we need to take into account. Such a maximum depth will exist no matter the computation, but AND (or my example, the adder) provides a simple enough example to block familiar tricks, such as the addition of enough complexity to render matters opaque.
Again, we don't start by considering a physical system, and then try and find some computation it performs; rather, we want to decide, given a computation, whether the system implements it. Because that's the task we'll eventually be faced with regarding computationalism: C_M is the computation producing a mind; does P implement it? Showing that, in general, no such question of this form can be answered shows that this one in particular can't be answered. Consequently, there's no objective fact of the matter regarding P implementing C_M.
>For me, a computation is a standalone mathematical object that imports nothing from outside the system.
And it's the same for me; you're just chopping bits of the system and calling them 'outside of it'. A computation, any function, and so on, always includes its universe; without it, you've simply got a kind of unspecific class of functions.
>there is no difference between the computation an OR gate is performing and the computation an AND gate is performing.
Which ought to tell you that your conception of computation is insufficient to capture the notion of computation in full---since AND and OR clearly are different computations; after all, they're different mathematical objects. The functions AND(x,y)=x*y and OR(x,y)=x*y+x+y aren't the same; saying they are makes, to me, as much sense as saying 2 and 3 are the same.
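(For concreteness, a quick Python check of my own that the two polynomials over GF(2) really are different functions:)

    # AND(x,y) = x*y and OR(x,y) = x*y + x + y, all arithmetic mod 2.
    for x in (0, 1):
        for y in (0, 1):
            print(x, y, '|', (x * y) % 2, (x * y + x + y) % 2)
    # 0 0 | 0 0
    # 0 1 | 0 1
    # 1 0 | 0 1
    # 1 1 | 1 1  -- the two output columns differ on two of the four rows.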
>So the interpretation of the AND gate as computing C_? is more reasonable
Well, but nobody has ever interpreted an AND-gate as computing C_?; so this is a pretty harsh indictment regarding the reasonableness of computer scientists on your part.
Anyway, it's rather clear that the trick I've used regarding AND and OR can always be used---even for an alphabet that isn't the binary numbers, I can either formulate an equivalent replacement, or code the alphabet's symbols into binary numbers, and do the replacement there.
>C_? is simpler than C_v but any system that can be interpreted as computing C_v can trivially be interpreted as computing C_? and vice versa
That's again getting the question the wrong way round: we want to be able to decide whether a physical system P implements a given element of S, not whether there is an element of S that we can associate with P. Again, the latter is vacuous, and tells us at most something about your ability to construct mappings, but nothing about computationalism.
>A computationalist can plausibly hold that the unique computation a physical system is instantiating is just the most parsimonious one
But this doesn't answer the question at hand. We want to know which physical systems instantiate minds---that is, the computation C_M (or some member of a set of computations S_M).
>if you want to say that it is important that 1 > 0, then you must deem a function over {TRUE,FALSE} to be different than a function over {0,1}
No. The set {0,1} models the relationship between TRUE and FALSE: the ordering relation (<=) on B2 yields the implication between TRUE and FALSE. It's not an accident that x<=y for x,y from B2 if (and only if) an implication between a proposition having truth value x and one having truth value y yields TRUE (i.e. 0<=0, 0<=1, 1<=1, but of course not 1<=0). Furthermore, the complement ~x=1-x in B2 yields negation; which, together with the implication, gives us a universal set of Boolean functions.
So, in particular, we can consider the AND to be ~(x-->~y), and the OR to be ~x-->y. Since this then yields the outcomes we'd expect if 1 was TRUE and 0 was FALSE, we see that the structure of B2 directly models the relations between truth and falsity (and maybe also understand a little better why AND and OR without this structure don't make sense).
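(All of these identities can be checked mechanically; here's a minimal Python sketch of my own, writing the implication x-->y on B2 as max(1-x, y):)

    def neg(x): return 1 - x
    def imp(x, y): return max(neg(x), y)        # x --> y on B2

    for x in (0, 1):
        for y in (0, 1):
            assert (x <= y) == (imp(x, y) == 1)     # <= is implication
            assert neg(imp(x, neg(y))) == x & y     # AND as ~(x --> ~y)
            assert imp(neg(x), y) == x | y          # OR as ~x --> y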
>it is not the case that TRUE > FALSE. Neither is it the case that TRUE is odd or FALSE is even.
But that's just because you only translate half of the statement into the appropriate language---like translating all the nouns in a sentence, without bothering with the verbs. So that 0<=1 just means that a false proposition implies a true one, that 1 is the multiplicative identity just means that A AND TRUE = A, and so on. Regarding odd and even, that concept doesn't really make sense in a field which doesn't include the number 2, I think.
Really, it's no different from writing a text in English, or writing it in German---you can just translate between both contexts.
>But you can't say that TRUE is the maximum value of B2
So I hope it's clear now that yes, I can, and do. Heck, Wikipedia comes right out with "Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false", for what it's worth.
>But the same holds in the meaningless values used in C_?.
No; not until, at least, you've made up your mind whether A<=B or B<=A, that is, until you've essentially made it into a copy of B2 with different names for the elements. At that point, I'll agree that you have an equivalent formulation; but that's merely due to reinstating the structure you deleted.
>For your argument to work, you would have to show that there is more than one structure which best captures P's behaviour.
No, that would be a completely different argument than the one I'm making.
>All that matters is that the same functions are implemented.
Consequently, it matters whether AND or OR are implemented. C_?, really, does not implement any function.
>C_?, really, does not implement any function.
Or perhaps, rather, if it does, then again, I can play the same trick. So suppose that C_? is a function f: {A,B}^2-->{A,B}, defined by f(x,y)=A if x=y=A, and f(x,y)=B else.
Then, either A=/=B, and consequently, f=/=f', where f' is obtained from f by switching A and B (i.e. switching whether f maps to A or B). Or, A=B, and consequently, f=f'', where f'' is the function taking all inputs to A. Either way, I end up with a different computation, C_?' or C_?'', that's as different from C_? as AND is from OR; all that we've lost is merely whether A<=B or B<=A.
Naturally, I can play this trick for any function that takes inputs and outputs from some given alphabet X: simply take an arbitrary permutation of the elements of X, and map that onto X. Whenever a computationalist claims that P implements C_P, I can construct a C_P' that it implements just as well; in the particular example of AND, we may assume that AND is what the computationalist has claimed is C_P, and OR is the C_P' I can construct.
Otherwise, I simply don't know what it means for A and B (or the elements of X) to be 'meaningless'. It seems to me you kind of want them to be different, while not being different at all; but what that precisely may entail isn't clear to me.
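(Schematically, in a Python sketch of my own, the relabeling trick looks like this; s can be any permutation of the alphabet:)

    # Given f over an alphabet X and a permutation s of X, the function
    # s(f(s_inv(x), s_inv(y))) is carried by the very same physical table.
    def conjugate(f, s):
        s_inv = {v: k for k, v in s.items()}
        return lambda x, y: s[f(s_inv[x], s_inv[y])]

    AND = lambda x, y: x & y
    OR = conjugate(AND, {0: 1, 1: 0})     # the bit-flip permutation
    print([OR(x, y) for x in (0, 1) for y in (0, 1)])    # [0, 1, 1, 1]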
Hi Jochen,
> You'd be just as well off with the original meaningless symbols.
Typically, you get a different pattern as output than you put in as input. These different patterns can have different causal consequences, e.g. driving an I/O device or a robot. It's the patterns that are important, not the values of individual bits considered in isolation.
> So while all manipulation occurs according to syntactic rules, the symbols will typically have a well-defined meaning;
That meaning comes from outside the system, from the mathematician or the programmer or someone interpreting the operation of a computer. It's not plausible (I'm sure you would agree -- Searle certainly thinks so) that the meaning of primitive signals could be inherent to a physical system. So any sensible computationalist will not claim that this meaning plays any role in cognition.
> Unreasonable in what sense?
Unreasonable because there is nothing inherent in the system that gives any basis for interpreting a signal as TRUE rather than FALSE. There is no reason to think that such an interpretation could be the best description of what the system is computing.
> I mean, both the adder and the AND-gate are built with performing a specific computation in mind
Intentions in the mind of the designer cannot plausibly be considered to be part of what determines what computation a physical system is actually computing.
> To claim it's then unreasonable to interpret them as adding or taking the AND seems a bit strange to me.
Well, OK, it's not unreasonable to describe them in such terms for pragmatic reasons, as this is intuitive and useful. But it is unreasonable to think that such an interpretation could be considered objectively true, because there's nothing in the system to recommend such an interpretation over the bit-flipped one.
> I'm not saying that: I'm saying that if we want to decide whether a system implements the AND,
But this is the wrong question to ask. You should ask what is the system doing, not whether it implements AND.
> Again, we don't start by considering a physical system, and then try and find some computation it performs; rather, we want to decide, given a computation, whether the system implements it.
Precisely wrong. Other way around.
> Because that's the task we'll eventually be faced with regarding computationalism: C_M is the computation producing a mind;
No. A brain is performing some computation that realises a mind. Let's call it C_M. Now, from reverse-engineering how a brain processes information, what is C_M? C_M will be more like C_? than C_&. It will not care about the truth-interpretation of signals.
> Showing that, in general, no such question of this form can be answered shows that this one in particular can't be answered
You haven't shown that. You have shown that there is no basis on which you could pick between C_v and C_&. But you haven't made that point in general. C_? doesn't have the same problem. There is at least one interpretation which describes what the system is doing in detail but which doesn't care about bit-flipping. That is the correct interpretation.
> Consequently, there's no objective fact of the matter regarding P implementing C_M.
There is no objective fact of the matter regarding whether a system implements C_v or C_&. Your argument shows nothing about whether it implements C_?. C_M is like C_? and not like C_v or C_&.
> since AND and OR clearly are different computations; after all, they're different mathematical objects.
They are different when taken together. They are the same when taken in isolation and the signals not taken to have inherent meaning. As soon as you use arithmetic operations such as addition and multiplication, you are interpreting the signals as numbers, and so you are taking them to have inherent meaning. Of course they are different functions when you do that. I never said they weren't.
Will get back to you on the rest probably tomorrow.
Hi DM,
perhaps let me get in a couple of notes so as to try and get us somewhat more onto the same page (or at least, let us better triangulate our differences).
>These different patterns can have different causal consequences, e.g. driving an I/O device or a robot.
I would say that this is the exact opposite of meaningless---after all, what's the meaning of a signal other than what it causes the recipient to do? If you give somebody a slip of paper with some scribbles on it, and they consistently engage in the same actions upon receiving it, and vary those actions based on variations in those scribbles, then I'd say that's a good criterion of them having understood the meaning of the message (in fact, I've defended this view in print).
So, for instance, if the voltage level received by a servo motor causes it to spin a particular direction with a particular speed, then that voltage level, to that motor, is an instruction for doing precisely that.
Otherwise, you're really running the danger of diluting the meaning of computation so far as to have it be useless---because then, wouldn't simply any physical state change be a computation? Any state of a physical system is a 'meaningless symbol', thus any process transforming states is one taking in meaningless symbols and producing them as output.
But this trivializes the notion of computation, and makes computationalism collapse to physicalism---computationalism is the thesis that a physical system implements a mind by performing a computation, so if every physical system performs some computation, we can just leave the notion of computation out altogether---it doesn't add any information. We just have physical systems evolving their respective ways, some of which, we now claim, produce minds.
>But it is unreasonable to think that such an interpretation could be considered objectively true,
In that case, you're really saying that none of our computers ever compute what we think they compute (if even something as basic as an AND-gate doesn't compute the AND-function). I don't think your typical computationalist would want to go along with that.
>You should ask what is the system doing, not whether it implements AND.
I really don't see your issue here. Computationalism claims that brains carry out a specific computation, C_M, that gives rise to a mind. This claim carries a commitment to the thesis that there's an objective fact of the matter regarding what a system computes. So I take the analogous claim that an AND-gate computes the AND-function, or that a half-adder adds binary digits, and show that this thesis can't be upheld.
I'll respond to the rest once you've found time to reply to my other points.
Hi Jochen,
I was unaware that implication maps so neatly onto <=. That is interesting.
I agree with you that as long as you translate all the terms appropriately, then TRUE,FALSE maps to 1,0. But you can choose a different translation to map it to 0,1, right? Instead of using implication (~AvB) you could use its negation (A&~B) and get the same kind of relationship. There's no reason to think that implication is any more natural a mapping of <= than A&~B, or to map <= to implication rather than some other relationship such as >. So, if what you care about is TRUE and FALSE, then it really doesn't matter that 1 is the maximal number of B2. Whatever number you used to represent TRUE, you could find a mapping that would work.
> Heck, wikipedia comes right out with "Boolean algebra is the branch of algebra in which the values of the variables are the truth values true and false", for what it's worth.
That seems right. I don't get why you would think this disagrees with anything I have said.
> No; not until, at least, you've made up your mind whether A<=B or B<=A,
But I don't need >= or <=. Those are concepts from the world of numbers, not concepts from the world of true and false. You can map true or false to <= and >= if you want to, but you could map true or false to anything. For instance, I could map TRUE to McCartney and FALSE to Lennon and I could map implication to the function wasNotBornAfter. Or I could map FALSE to McCartney and TRUE to Lennon and map implication to the function wasNotBornBefore. You can always construct such mappings if you want to, but you don't need to. The mappings in terms of functions that are meaningful only outside the system are entirely extraneous (although they might be useful for certain pragmatic purposes, e.g. using a mapping of boolean functions to arithmetic functions in order to use existing well-known theorems in arithmetic to show something about Boolean logic). It doesn't really matter for my mapping of TRUE and FALSE to Beatles members who was born first. The tables are all I need.
> I would say that this is the exact opposite of meaningless---after all, what's the meaning of a signal other than what it causes the recipient to do?
DeleteYou miss my point. My point is that the signal has no intrinsic meaning taken in isolation. It doesn't matter if a single bit of a signal is TRUE or FALSE. I agree with you that the meaning of the aggregated inputs and outputs to a system is best understood in terms of its causal associations.
> then that voltage level, to that motor, is an instruction for doing precisely that.
Exactly. That is what the signal means. It does not mean, for instance TRUE or FALSE. And an 8 bit message sent to a computer means whatever it causes the computer to do. It does not mean, for instance (TRUE,FALSE,FALSE,TRUE,TRUE,TRUE,TRUE,FALSE).
> so if every physical system performs some computation, we can just leave the notion of computation out altogether
No we can't, because on computationalism only some computations are minds. So it matters what computation the system is performing, and so systems need to be analysed as computations to determine if they are realising minds or not. Or at least, this is how we would go about it if we knew what C_M looked like.
> In that case, you're really saying that none of our computers ever compute what we think they compute
I'm adopting your hard distinction between AND and OR for the sake of argument. I don't think most computationalists would, on being presented with these subtle arguments at least. In my way of thinking, AND and OR are equivalent and are effectively the same. So, in my language, and in the language of computationalists, an AND gate really is computing AND. It is also computing OR. These are just two different ways of looking at it, and they are essentially the same function.
It's only when I adopt the hard distinction you do and accept for sake of argument that it matters for AND and OR what the numerical value of the digits are that I say that an AND gate doesn't compute AND. So my apparent deviation from what computationalists would normally say is explained by my adopting your language for the sake of conversation and is not in fact a deviation at all, I suspect.
> I really don't see your issue here.
This is equally frustrating for me.
OK, which of these statements do you accept?
1. It is possible to elide any consideration of intrinsic meaning or value from the symbols in a truth table and have a mathematical structure that represents the common patterns in input and output shared by AND and OR but that nevertheless preserves the difference between distinct values within that structure.
2. If we try to interpret what an AND gate is doing in terms of such a value-agnostic pattern, there is a unique way to interpret it as such without any ambiguity
3. The same kind of value-agnosticisation trick can be performed with any computation over binary digits.
4. The claim of computationalism is that the realisation of a complex information processing flow creates a mind
5. A value-agnostic pattern is an information processing flow
6. (I say that) the claim of computationalism is that the information processing flow that realises a mind is a value-agnostic pattern.
> This claim carries a commitment to the thesis that there's an objective fact of the matter regarding what a system computes.
Right, but the computation being realised by a system is not any old computation you want to map it to. It must be the best interpretation. Value-laden AND and OR are not the best interpretations, any more than functions over the Beatles would be the best interpretation. The best interpretation is a value-agnostic interpretation, because it captures what the system is doing but without ambiguity or arbitrary value-laden interpretation. There is an objective fact of the matter regarding what the system is computing and it is not AND or OR as you conceive of them but AND or OR conceived of value-agnostically.
Hi DM,
>I was unaware that implication maps so neatly onto <=. That is interesting.
You can get the intuitive meaning by just considering a set of objects with different properties; each property then corresponds to a subset of that set. Say, you have a set of balls, which can either be red or blue, and wooden or stone. Then, <= is just subset inclusion: so if, for instance, the set of wooden balls W is a subset of the set of red balls R, i.e. W<=R, then the property 'wooden' implies the property 'red'. That's why the ordering relation is so important for logic.
>But you can choose a different translation to map it to 0,1, right?
Any such translation would amount to a renaming only---i.e. 0 gets all the structural properties of 1, and vice versa. So then, it would be true that 1<=0, etc. That's why I chose to rather refer to the top and bottom elements of B2, whether you call them 1 and 0 or the other way around.
>Those are concepts from the world of numbers, not concepts from the world of true and false.
Well, as I said, if you complete the translation, then you need a concept playing the same role---such as implication in the world of true and false.
For instance, if you have an operation '*' on the set of {A,B}, and you define, say, A*B=A and B*B=B, then you've defined B as multiplicative identity, and it'll inherit (suitably translated versions of) all the properties that 1 has. And it's those properties that carry the structure.
>For instance, I could map TRUE to McCartney and FALSE to Lennon and I could map implication to the function wasNotBornAfter.
In which case you'd have a Boolean algebra whose top element is named 'McCartney' and whose bottom element is 'Lennon', and where <= just means 'NotBornAfter'. You don't change anything about the structure with such renamings, but you do once you neglect the <= (or its equivalent).
>It doesn't really matter for my mapping of TRUE and FALSE to Beatles members who was born first. The tables are all I need.
Right, but those tables imply that there is a relation playing the role of <=/implication. If you have AND, and negation, then you have implication; and otherwise, you're simply talking about something else.
>It doesn't matter if a single bit of a signal is TRUE or FALSE.
Yes, because you're not doing a computation on those values. But if you compute the AND, then you are doing such a computation. The truth table of AND, say in terms of A and B:
I1 I2 | O
 A  A | A
 A  B | A
 B  A | A
 B  B | B,
together with the relation that A=~B (which is automatically fulfilled on the set {A,B}, if negation is just the complement), implies that B has all the properties of the top element of B2, and A has all the properties of the bottom element. In particular, it implies a lattice ordering relation <= such that x<=y if and only if (x,y)=(A,A),(A,B), or (B,B), i.e. if ~(x&~y)=B. You can't have one without the other.
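(The derivation is mechanical; a small Python sketch of my own, using nothing but the table above and the complement:)

    AND = {('A','A'): 'A', ('A','B'): 'A', ('B','A'): 'A', ('B','B'): 'B'}
    neg = {'A': 'B', 'B': 'A'}
    imp = lambda x, y: neg[AND[(x, neg[y])]]     # x --> y as ~(x & ~y)

    print([(x, y) for x in 'AB' for y in 'AB' if imp(x, y) == 'B'])
    # [('A', 'A'), ('A', 'B'), ('B', 'B')] -- A sits below B, as claimed.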
>It does not mean, for instance TRUE or FALSE.
No, of course not; I've nowhere claimed that all computations operate on these values. AND, OR, etc., however, do; likewise, addition of binary digits operates on binary digits, and so on.
>No we can't, because on computationalism only some computations are minds.
And if all physical processes are computations, then this is the same as saying 'some physical processes are minds'. The idea of computationalism is that some physical processes are minds because they instantiate computations; if the set of computations and the set of physical processes are coextensive, specifying 'computation' does not specify anything that isn't already specified by 'physical'. To the extent that computationalism and physicalism are distinct theses, computation and physics are distinct notions.
>These are just two different ways of looking at it, and they are essentially the same function.
This is like saying a computer which computes Excel also computes schmexcel, and so on---so that's indeed saying that our computers never compute what we think they compute. When we use an adder to compute the sum of two digits, then we take that adder to compute the function of addition; but under your interpretation, that would be strictly false: it would compute a function corresponding to the equivalence class (x+y, x+y+1), just as you take an AND-gate to compute a function equal to the equivalence class (x*y, x*y+x+y). This is simply not how most people (including most computationalists) conceive of computation.
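(To make the adder case concrete, here is a Python sketch of my own, with the width n and the test values chosen arbitrarily: complementing every input and output line of an n-bit adder yields a device that, read at face value, computes x+y+1 mod 2^n, i.e. the other member of that equivalence class.)

    n = 8
    mask = (1 << n) - 1
    flip = lambda v: v ^ mask           # complement all n bits

    def schmadd(x, y):                  # the bit-flipped adder
        return flip((flip(x) + flip(y)) & mask)

    for x, y in [(3, 4), (100, 200), (255, 255)]:
        assert schmadd(x, y) == (x + y + 1) & mask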
So let's take your enumeration of statements, and let me try and give you my perspective on each.
1. The truth table of AND (and the complementation relation) imply the 'meaning' of the symbols used, in as much as that meaning exhausts itself in 'top element of B2' and 'bottom element of B2'. You need nothing else.
2. The only way to do something that's, as you say, 'value agnostic' would be to forget something about the structure of AND---change it to a different computation.
3. Yes, I agree---you can always forget which computation you were specifically carrying out, to end up with a set of computations such that you need to again specify the deleted information to pick out the original computation, or indeed, pick out different computations upon entering different information.
4. The claim of computationalism is that there is a computation, C_M, an element of the set of all computations S, such that a brain gives rise to a mind by implementing C_M (and not, say, some equivalence class of computations produced by deleting structure from C_M).
5. As I said before, I'm no longer really certain that a 'value-agnostic pattern' corresponds to any well-defined information processing on its own. It's related to a computation in the same way that 'subject predicate object' is related to 'Harry likes art': knowing only the former, you lack the actual informational content of the sentence; it could just as well be 'Mary hates brussels sprouts'. A computation, as I would understand it, rather relates a certain meaningful input to a meaningful output---at least, that's how we use computations: I provide a computer with the parameters of star formation, say, and get out the distributions of stars in the galaxy. I don't use a computer to transform meaningless strings into other meaningless strings.
6. I think that this is really arguing for something very different than usual computationalism. But perhaps, I'm just not intuitive enough to grasp your notion of 'computation'. How would you define the computation C_? formally?
>Value-laden AND and OR are not the best interpretations
In that, you're basically saying that AND and OR are not computations, if indeed computations are only your 'value-agnostic patterns' and AND and OR are not such patterns. I think this is a significant departure from how just about everybody else understands computation, and basically means that nothing we actually use our computers to do really is computation.
Or perhaps, to put it another way: it may be a plausible thesis that computation, used the way I understand it, underlies mind---after all, in some sense, our minds operate on representations of the world, producing new representations from old ones, generating actions from these representations, and so on. So if there's such a representational aspect to computation, then I could see a computational theory of mind working. Certainly, this is the intention of some computational theories, such as Putnam's functionalism, and even more so for Fodor's semantic account.
But on your conception, I have zero reason to believe that a 'value-agnostic pattern' corresponds to a mind, which is, or at least appears, clearly non-value agnostic. In a sense, you are, to me, claiming that all the content of a novel could be reduced to 'subject predicate object', or more complex versions thereof. I'm not saying that's intrinsically impossible, but it strikes me as being a thesis that's much harder to argue for than standard computationalism (and one I don't think is sufficiently well-posed to need rebuttal, as of yet).
Additionally, if you view computation entirely as 'value-agnostic patterns', then I don't see why Putnamesque arguments should worry you: after all, the pattern of meaningless symbols is the same in every Putnam-style mapping of computational states to physical states.
Hi Jochen,
> if the set of computations and the set of physical processes are coextensive, specifying 'computation' does not specify anything that isn't already specified by 'physical'.
Yes it does. It specifies that what is important is the computational process carried out, not other aspects such as mass or volume or material it's made of or whatever. It means that if you implemented the same computational process on some other substrate, you'd get the same conscious experience.
> so that's indeed saying that our computers never compute what we think they compute
No, it means from the perspective I am putting forth, the distinction between excel and schmexcel is no distinction at all. I'm suggesting that your interpretation of "excel", which cares about the truth-values of bits, is an incorrect interpretation of what "excel" really is.
> This is simply not how most people (including most computationalists) conceive of computation.
Perhaps. But for most purposes, the distinction between your interpretation and mine is immaterial. It is only important in this specific debate. If pressed with your arguments, my suspicion is that the computationalists would agree with me. Otherwise, they would probably not think about these issues at all.
> The truth table of AND (and the complementation relation) imply the 'meaning' of the symbols used, in as much as that meaning exhausts itself in 'top element of B2' and 'bottom element of B2'.
If all you have is symbols with no intrinsic meaning, then there is no difference between the truth table for AND and the truth table for OR. They are identical. They only differ in notation: in the symbol used to represent the value you need two of in order to get that same value out (1 in the case of AND, 0 in the case of OR), and the symbol you get out otherwise (0 in the case of AND, 1 in the case of OR). Seeing as the two truth tables are identical if interpreted without prior conceptions of meaning, I don't get how you can say that the truth table tells us which is the maximum and which is the minimum value. You could interpret it either way. The truth table is all there is -- there is no need to interpret one value as greater than the other.
> 2. The only way to do something that's, as you say, 'value agnostic' would be to forget something about the structure of AND---change it to a different computation.
Agreed. So my conception of AND/OR is less specific than your conception of AND. It is a different computation. Sure.
> I provide a computer with the parameters of star formation, say, and get out the distributions of stars in the galaxy.
But that cannot be how brains work, because there is no programmer who is interpreting the input and output. The meaning has to be generated from within the system somehow. It cannot be imported from outside as in your example. If you want to argue that this computation-generated meaning is impossible (you can't get semantics from syntax, as Searle would say), that's a separate argument. But for purposes of the bit-flipping argument, the computationalist contention would be that the manipulation of intrinsically meaningless symbols somehow generates meaning. The meaning arises out of the patterns and how they causally relate to the outside world and not from any meaning intrinsic to the base symbols forming them.
> In that, you're basically saying that AND and OR are not computations
I'm saying that your way to conceive of AND and OR is not how a computationalist conceives of AND and OR. I would say AND is a computation, but what I mean when I say AND is not quite what you mean when you say AND.
> I think this is a significant departure from how just about everybody else understands computation
It's not a significant departure when you realise that the distinction between my AND and your AND is a subtle one that never makes any practical difference. It only matters in this debate. I think you'd get a similar or equivalent answer from any computationalist if you pressed them with the bit-flipping argument. They may put it differently -- they may agree that two bit-flipped computations are distinct computations but insist that they give rise to the same conscious experience. This may seem different to what I'm saying but I think it's just the same intuition phrased differently -- it means that from their perspective the difference between two bit-flipped computations is irrelevant.
> How would you define the computation C_? formally?
I'm not necessarily so hot on formal definitions. You'd probably make a better job of it than I. Definitions don't usually need to clarify that their symbols are completely unbound and interchangeable. But, in natural language, C_? is a structure that operates on a set of two distinct symbols and that returns one symbol if the two inputs are that symbol and the other symbol in all other cases. You could define it formally perhaps by giving the truth table in terms of A and B. The only problem is that this could be misunderstood as being different from the A/B swapped version, even though the two are isomorphic and the swapping represents a change in notation only. I'm not sure how to express this in a formal definition if it isn't already clear.
> Certainly, this is the intention of some computational theories
I'm not saying that representationalism is false. I'm a representationalist. But I don't think that bits intrinsically represent true or false. I think that mental representations are patterns of inherently meaningless symbols that come to mean things because of how they interact with each other and the outside world. Meaning only emerges at higher levels.
AND/OR as I conceive of it means something. It means you can discriminate between patterns of inputs, returning one value if the two inputs equal that value and another value in all other cases. That's more abstract than AND or OR but it's not entirely meaningless. Whatever meaning there is has to come from the system and cannot be imported from the minds of people trying to interpret that system. As soon as you interpret an AND gate as implementing AND specifically, you are importing meaning that wasn't there to begin with.
> after all, the pattern of meaningless symbols is the same in every Putnam-style mapping of computational states to physical states.
But it isn't. AND/OR is not the same pattern as XOR/XNOR. I already gave the "meaning" of the former pattern. The latter pattern is that one output symbol indicates that the two inputs are equal, while the other symbol indicates that the two inputs are distinct. So XOR/XNOR is unambiguously a computation to detect if two values are the same. AND/OR is not -- it is instead unambiguously a computation to detect if both input symbols are a specific symbol.
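(In code, for concreteness, a little Python sketch of my own: wholesale bit-flipping swaps AND with OR and XOR with XNOR, but it never carries one pair into the other, which is exactly the sense in which the two patterns differ.)

    table = lambda f: tuple(f(x, y) for x in (0, 1) for y in (0, 1))
    flip = lambda b: 1 - b
    flipped = lambda f: table(lambda x, y: flip(f(flip(x), flip(y))))

    AND  = lambda x, y: x & y
    OR   = lambda x, y: x | y
    XOR  = lambda x, y: x ^ y
    XNOR = lambda x, y: 1 - (x ^ y)

    assert flipped(AND) == table(OR) and flipped(XOR) == table(XNOR)
    assert table(XOR) not in (table(AND), table(OR))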
Hi DM,
> It specifies that what is important is the computational process carried out, not other aspects such as mass or volume or material it's made of or whatever.
OK, I'll accept the point.
>No, it means from the perspective I am putting forth, the distinction between excel and schmexel is no distinction at all.
Well, but in the way we use computers, it certainly makes a difference: we execute excel, as opposed to schmexcel; we compute the AND, and not the OR, of two inputs. If I have an AND-gate, I can input two binary values, and get the value of their conjunction out in response---in particular, if I didn't already know that value, I would then learn it. I don't see how such a thing could be possible if the computation were just your 'value-agnostic pattern', because then, I wouldn't learn the value of the AND of the two inputs. I might learn the value of the AND or OR, but that's not what I'm interested in; and picking out the value of AND would then necessitate me already knowing that value. So I don't see how one could use computers to generate new knowledge on your view, while that's something we do quite routinely.
>If all you have is symbols with no intrinsic meaning, then there is no difference between the truth table for AND and the truth table for OR.
I think here you're trying to have your cake, and eat it---you both want for the symbols A and B (or whichever) to be distinguishable, but for the functions f1 and f2, where e.g. f1(A,B)=B while f2(A,B)=A, to be the same.
>So my conception of AND/OR is less specific than your conception of AND. It is a different computation.
So, what would a system computing AND (and not OR) look like?
>It's not a significant departure when you realise that the distinction between my AND and your AND is a subtle one that never makes any practical difference.
See, I don't think that's right: if I have, say, a calculator, then I enter symbols 3 and 4, press '+', and get out 7. This is a different operation from entering 3 and 4, pressing '*', and getting out 12---yet on your conception, the 'value-agnostic pattern' is the same. So using computers the ordinary way is precisely not using them as value-agnostic: we *can* distinguish between different interpretations of the symbols, and get out the result of any concrete computation only if we do.
>You could define it formally perhaps by giving the truth table in terms of A and B.
Well, the only way I can see how to do that would yield a function such that f1(A,B)=A, while there's another function such that f2(A,B)=B. Which of these you then name AND and OR is immaterial---indeed, merely involves switching around whether A or B is the top element---but it's clear that they are different. And even more, I can, if you say that a system implements f1, point to my re-labeling trick to make it into f2.
So I don't really think a formal definition of your intuition exists---it's the same kind of 'same, but different' problem we had earlier on with the sets of indistinguishable elements: some elementary mathematical reasoning shows that such a thing simply doesn't exist.
>But I don't think that bits intrinsically represent true or false.
And neither do I. But the inputs of the AND-function, by virtue of that function's definition, do.
>As soon as you interpret an AND gate as implementing AND specifically, you are importing meaning that wasn't there to begin with.
So, are you saying that there is no system such that it computes AND? Then, what grounds do you have to believe that there is a system such that it computes C_M?
>AND/OR is not the same pattern as XOR/XNOR.
Only if you admit some meaning into your 'meaningless' symbols, such as, for instance, that the same symbol has the same meaning in all instantiations. Generally, that's not true: even in language, the same symbol may mean different things. So why should 'high voltage' over here carry the same meaning as 'high voltage' over there?
That's ultimately as arbitrary a choice as having 'high voltage' denote 1, instead of 0. After all, it would be perfectly intelligible to say that 'high voltage at I1' means A, while 'high voltage at I2' means B, and with that convention, we could carry out a computation not in the set {AND,OR}.
Why should that be forbidden?
Hi Jochen,
> I might learn the value of the AND or OR, but that's not what I'm interested in
If you have preconceptions about what you're interested in, then you're importing meaning. That's fine. But that meaning isn't inherent in the system.
> So I don't see how one could use computers to generate new knowledge on your view
I'll just point out that we build computers with interpretations of inputs and outputs in mind. We bring our own meaning which allows us to relate those inputs and outputs to things outside the system. That imported meaning isn't inherent in the system, which is not the same as saying there is no meaning at all in the system.
That said, I think we could use computers to generate new knowledge even if they were designed by some sort of automated process, e.g. by evolution by selection, as long as there were some interpretation we could impose on the system that made the system useful to us.
> you both want for the symbols A and B (or whichever) to be distinguishable
Yes. Within the context of the system. These symbols are only meaningful within a particular truth table. It is an error to try to relate the symbols across different truth tables. f1(A,B)=A is the same pattern as f2(B,A)=B. There is only a difference if you incorrectly draw analogies between the B in one context and the B in the other context. These contexts need to be viewed in isolation. The B in one is not the B of the other, unless such a correspondence yields isomorphic structures, and it doesn't. If what you care about is isomorphism rather than whatever symbol is used, then B in f1 corresponds to A in f2. To say that f1 is not the same function as f2 is like saying that triangle A=(1,1), B=(1,0), C=(0,0) is not the same as triangle B=(1,1), C=(1,0), A=(0,0). All that has changed is some labels, which have no meaning other than how they are used within the system.
Note that the example you gave is not quite right. f1(A,B)=B is not quite the same pattern as f2(A,B)=A, if the order of the inputs is deemed to matter.
> So, what would a system computing AND (and not OR) look like?
The computation AND (and not OR), i.e. your AND, is a purely abstract structure. It cannot be physically realised unambiguously. For it to be unambiguous it would need to exist in a world where 1 and 0 are physical things. Of course, once you adopt a convention, you can deem an AND gate to implement AND, and for all practical purposes it does. But as you have noted, that's not the same as there being an objective fact of the matter on whether it computes AND.
> then I enter symbols 3 and 4, press '+', and get out 7
That's not quite right. You push buttons with pictures of symbols on them, and an output device draws a picture of symbols on a display. The meaning of those symbols is imported by you and not intrinsic to whatever computation is happening. The value-agnostic pattern I am talking about is just that the values of individual bits in the registers of the device have no intrinsic meaning. As long as the I/O devices were modified accordingly, all the bits could be flipped and the system would still behave in the same way. In your view, it would be performing a different computation, but in my view it would be performing the same computation.
If you bit flip the computation for 3+4=7, you don't get the computation for 3*4=12, as far as I can see. Multiplication isn't schmaddition, and a bit-flipped 3 is not 3. What computation you get depends on how many bits we use, whether the values are unsigned or signed, and whether we use two's complement or one's complement.
Whatever -- my point is that whatever you do get is the same core value-agnostic computation in my view. There is nothing in the system which intrinsically identifies what you are doing as adding 3 to 4 to get 7. That's something we bring to it.
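To put the bit-flip in concrete terms, here is a toy sketch (the 4-bit unsigned encoding and all the names are my own illustrative choices, not a claim about any real adder):

```python
# Toy sketch: a made-up 4-bit unsigned 'core', its bit-flipped twin, and
# compensating I/O layers. Nothing here is a real adder design.

def flip(bits):
    return tuple(1 - b for b in bits)

def core_add(bits):
    """Original core: read two 4-bit unsigned fields, add them mod 16."""
    a = int("".join(map(str, bits[:4])), 2)
    b = int("".join(map(str, bits[4:])), 2)
    return tuple(int(c) for c in format((a + b) % 16, "04b"))

def flipped_core(bits):
    """Every internal bit inverted: the conjugate of the original core."""
    return flip(core_add(flip(bits)))

def device(core, x, y):
    """Original I/O layer: encode the numbers, run the core, decode."""
    bits = tuple(int(c) for c in format(x, "04b") + format(y, "04b"))
    return int("".join(map(str, core(bits))), 2)

def device_flipped(core, x, y):
    """Modified I/O layer: invert the bits on the way in and on the way out."""
    bits = flip(tuple(int(c) for c in format(x, "04b") + format(y, "04b")))
    return int("".join(map(str, flip(core(bits)))), 2)

print(device(core_add, 3, 4))              # 7
print(device_flipped(flipped_core, 3, 4))  # 7, though every raw bit differs
```

The two cores disagree on every raw bit, yet behind suitably adjusted I/O they are indistinguishable, which is why I say they realise the same value-agnostic computation.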
> we *can* distinguish between different interpretations of the symbols
Because the I/O level translates the meaningless symbols into symbols onto which we project meaning.
> the only way I can see how to do that would yield a function such that f1(A,B)=A, while there's another function such that f2(A,B)=B.
I'll assume you mean f2(B,A)=B because that is the corresponding row in the truth table for f2.
f1 and f2 are clearly isomorphic truth tables. You can only justify that they are different structures by assuming that the symbol A in one corresponds to the symbol A in the other. As soon as you do that, you are no longer considering the system of f1 in isolation or the system of f2 in isolation but the system f1+f2.
Perhaps my triangle example isn't great because triangle ABC is the same as triangle BCA or ACB or whatever. But take a figure made by tracing 4 points. By your logic, ABCD is not the same figure as ACBD. In the former, A is connected to B and D (assuming the figure is closed). In the latter, A is connected to C and D. But if we no longer assume that the A in one corresponds to the A in the other, then ABCD could indeed be the same figure as ACBD, if for instance B in the former picked out the point identified by C in the latter and C in the former picked out B in the latter.
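This can even be checked mechanically. Here is a toy Python sketch (reducing each closed figure to its set of edges; purely illustrative):

```python
# Toy sketch: are the closed figures ABCD and ACBD the same up to relabelling?
from itertools import permutations

def cycle_edges(order):
    """The undirected edge set of the closed figure traced in this order."""
    return {frozenset((order[i], order[(i + 1) % len(order)]))
            for i in range(len(order))}

abcd = cycle_edges("ABCD")  # edges A-B, B-C, C-D, D-A
acbd = cycle_edges("ACBD")  # edges A-C, C-B, B-D, D-A

print(abcd == acbd)  # False: with the labels held fixed, different figures

def relabel(edges, mapping):
    return {frozenset(mapping[v] for v in e) for e in edges}

# Let B in one figure pick out the point labelled C in the other, and vice versa:
print(relabel(acbd, {"A": "A", "B": "C", "C": "B", "D": "D"}) == abcd)  # True

# And in general, search all relabellings for a match:
print(any(relabel(acbd, dict(zip("ABCD", p))) == abcd
          for p in permutations("ABCD")))  # True
```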
You're distracted by the symbols. The symbols are meaningless. Ignore them. They are there for our convenience only and play no more part in the structure we're trying to communicate than do the labels we give to vertices in a triangle. All that matters is the pattern of similarities and differences. A is just a placeholder for "that symbol which is only output when it occurs twice in the inputs" and B is just a placeholder for "that symbol which is output when it appears at least once in the inputs". This is like how A in a square might be a placeholder for "that vertex which is located at point (0,1)", such that ABCD might indeed be the same figure as ACBD as long as we try to match vertices by structural correspondences rather than by labelling correspondences.
> So, are you saying that there is no system such that it computes AND?
Not unambiguously, as you define AND.
> Then, what grounds do you have to believe that there is a system such that it computes C_M?
Because I take C_M to be a value-agnostic structure akin to f.
> Only if you admit some meaning into your 'meaningless' symbols, such as, for instance, that the same symbol has the same meaning in all instantiations
Yes. I've already acknowledged this. I'm assuming for the sake of argument that the objectively right computational interpretation of a system is the most parsimonious one, which more or less implies it is consistent. I agree with you that it is not plausible that parsimony is sufficient to justify an objective fact of the matter on which computation a physical system is implementing, but that's the pixies argument.
Hi DM,
>If you have preconceptions about what you're interested in
We always do, whenever we compute---that's usually *why* we compute, in fact. I'm merely pointing out here that your notion of 'computation' is very different both from what's usually meant by the term, and from how specialists typically use it.
>It is an error to try to relate the symbols across different truth tables.
Well, you can't really introduce a structure and then demand to use only half of it. If there's a function f1(x,y), with x,y taken from {A,B}, there's a set {A,B}, and a set-theoretic complement ' such that A'=B. And then, there's a function f2(x,y)=(f1(x',y'))'. (In fact, this is just Newman's problem again, or a consequence of it.)
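(Spelled out with a worked row or two, on the reading of f1 as the AND-like table, i.e. f1(A,A)=f1(A,B)=f1(B,A)=A and f1(B,B)=B: then f2(A,A)=(f1(B,B))'=B'=A, f2(A,B)=(f1(B,A))'=A'=B, and likewise for the remaining rows, which is exactly the OR-like table. Nothing beyond f1 and ' is needed to write f2 down.)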
But since f1 is different from f2, and your 'value-agnostic pattern' doesn't fix which one is implemented, I can again point out that an ambiguity exists as to whether the 'AND-gate' implements one or the other.
>The computation AND (and not OR), i.e. your AND, is a purely abstract structure. It cannot be physically realised unambiguously.
This is interesting---after all, all the computations we take ourselves to perform aren't value-agnostic in your sense, so really, none of what we typically believe we compute corresponds to an instantiable computation in your sense.
Nevertheless, we can clearly instantiate these computations, depending on the associations of the user. But on your computationalism, the user itself is just a value-agnostic computation. So in some way, it must be possible to combine value-agnostic computations to yield a computation that's non-value-agnostic. So there should be some way to 'enlarge' the AND-gate by some other value-agnostic computation (say, the one corresponding to a mind) to yield something which computes the AND, in my sense. No?
>For it to be unambiguous it would need to exist in a world where 1 and 0 are physical things.
You should keep in mind that your A and B are not any more physically real than 1 and 0 are. I think you're tempted to take high and low voltages as, in some sense, 'just being' those 'meaningless symbols'. But they're not: they're high and low voltage. Combining voltages is not computation; abstract formal manipulation is.
>The meaning of those symbols is imported by you and not intrinsic to whatever computation is happening.
The computation that's happening---as usually understood---is *addition*: over the Peano axioms, to get 3+4, you apply the successor function to 3 four times. On your value-agnostic view, such a computation is impossible, of course. That's why I think it's not a good formalization of computation.
>In your view, it would be performing a different computation, but in my view it would be performing the same computation.
Actually, that's not right: if you adjust the I/O devices appropriately, then also on my view, the same computation would be implemented (it's the same as adjusting the interpretive mapping in order to yield the same logical values as before the bit-flip). It's when you use a different interpretational mapping, without then again compensating for it, that a different computation is implemented---a device which gives me the answer to the question 'what is the AND of a and b?' then gives me the answer to the question 'what is the OR of a and b?'. Under an I/O adjustment, I'd get back to the original 'what is the AND of a and b?'.
>Multiplication isn't schmaddition, and a bit-flipped 3 is not 3.
Certainly, but I only limit myself to bit-flipping for computations that operate on (whose universe is) bit-values. For computations on natural numbers, I can use some function permuting numbers in order to change the interpretation, giving me considerably more freedom.
>my point is that whatever you do get is the same core value-agnostic computation in my view
But how do we ever get the non-value-agnostic answer 7 for the addition of 3 and 4, if all that goes on are value-agnostic computations?
>I'll assume you mean f2(B,A)=B because that is the corresponding row in the truth table for f2.
I don't see the relevance: the function f2 yields, if evaluated on the pair (A,B), the outcome B, while the function f1 yields the outcome A. That's what makes them different functions.
>f1 and f2 are clearly isomorphic truth tables.
I don't know what you mean by an isomorphism between truth tables. Usually, an isomorphism (on sets that don't have any additional structure) is a bijection taking each element of a set to another, possibly different, one. So I could see an isomorphism on the set of truth tables (equivalently, on the set of functions from {0,1}^2 --> {0,1}), but an isomorphism just between two truth tables doesn't make sense to me.
>you are no longer considering the system of f1 in isolation or the system of f2 in isolation
There's simply no such thing as 'the system of f1 in isolation', not in any way I know how to formulate, at least. If I have f1, then I can define f2 in terms of it---f1 already implies the structure of f2.
>By your logic, ABCD is not the same figure as ACBD.
And if A, B, C, and D are different points, they aren't.
>But if we no longer assume that the A in one corresponds to the A in the other,
Yes, well, and if we no longer assume that 'horse' means horse, it could mean buttercup. You set a convention with a formalization of the structure you propose; if you then use that formalization differently, you're simply doing something inconsistent.
>A is just a placeholder for "that symbol which is only output when it occurs twice in the inputs"
You can't really tell me both that 'A is meaningless' and that A means 'that symbol which is only output when it occurs twice'. The latter is as much an interpretation as saying 'the top element of B2'. And as such, I can again change it: interpret A as meaning 'that symbol which is output when it occurs at least once'. After all, there's nothing about high voltage that entails its meaning 'symbol which is only output...' any more than there is anything entailing its meaning to be 'top element of B2'. As much as 0 and 1 aren't part of the physical world, neither are 'symbol which...'-type interpretations.
>Because I take C_M to be a value-agnostic structure akin to f.
As I said, since our minds very much appear to have content, to have definite values, I think that's a proposition significantly harder to justify than standard computationalism.
> I'm assuming for the sake of argument that the objectively right computational interpretation of a system is the most parsimonious one
Appeal to parsimony is really only justified for physical theories: you need to choose one theory out of infinitely many possible ones explaining the observed data; then, the requirement of having this choice make unique predictions forces you to take the simplest one, as otherwise, from the observation that a stone drops when it's let go in a gravitational field, the prediction that it drops the next time under the same circumstances is as justified as the prediction that it grows wings and flies away.
So really, parsimony here is just a statement of prejudice---you like the 'simpler' interpretation better. Clearly, this doesn't do anything to foster any sort of 'objective' answer to the question of what a system computes. Indeed, I would be on equal grounds to assert that the objectively right computation is one acting on binary values, because I like those better: without parsimony, that the same symbol should have the same meaning in all instantiations is just as much of an arbitrary choice.
Hi Jochen,
> We always do, whenever we compute---that's usually *why* we compute,
But a brain or an AI doesn't have preconceptions about what it computes. Whatever the core essence of the computation is or means or experiences, it has to come from within the system, it can't be predicated on your preconceptions. So I think you're just taking the wrong perspective on the problem.
> Well, you can't really introduce a structure and then demand to use only half of it.
It's far less than half of the structure you're talking about. There's a lot more than two possible truth tables formed from A and B. As soon as you introduce one, you've effectively "implied" a whole lot more -- an infinite family in fact if we allow more than two inputs.
But your AND gate is not implementing two or more distinct patterns, and certainly not infinitely many. It's only implementing one pattern of inputs to outputs, so we only need one truth table to describe its behaviour. It may be that there are other possible truth tables but they are irrelevant to describing what the AND gate is doing.
> But since f1 is different from f2
So f1 is different from f2 if and only if they are taken to be parts of the same structure. But the structure that models the AND gate has only f1, say. You can see that there are other logically possible patterns, but since they are not required to describe the AND gate they can be discarded from your model. The structure we use to describe a computation is not simply the set of all possible structures.
> If I have f1, then I can define f2 in terms of it---f1 already implies the structure of f2.
It doesn't matter if f1 implies f2. These are mathematical objects. On platonism, they both exist necessarily. You don't need f1 to define f2. There's nothing stopping you defining f2 or any other function you like. On platonism, f2 always exists. f1 always exists. But they are only distinct if you create a structure which has both f1 and f2 at the same time, so that you can say that the A in one is the same value as the A in the other. It is of course possible to have a structure that only has one of them. That doesn't mean the other one doesn't exist, so the fact that you could add the other function to your structure doesn't matter -- you don't have to.
> So in some way, it must be possible to combine value-agnostic computations to yield a computation that's non-value-agnostic.
Yes. I mean, my view is that meaning can come from the interactions of patterns of bits which are each individually meaningless. So I guess what I said is not strictly true. To realise your AND (not OR) unambiguously, you would have to create an AI which is thinking about your AND function the way you do -- and then its mental representation of AND is unambiguous to it -- but not necessarily to an external observer. The best an external observer could divine would be an association between this representation and what that AI reports as "true" and "false" or "1" and "0".
> You should keep in mind that your A and B are not anymore physically real than 1 and 0 are
I know. The difference between (A and B) and (1 and 0) is that A and B are intended to be without inherent meaning. I do away with ambiguity not by taking A and B to be physical but by taking A and B to be anything at all -- whatever they need to be to map the function.
> On your value-agnostic view, such a computation is impossible, of course.
It's possible in a practical, everyday sense, by importing meaning. But that is not the core essence of what is really going on from my hypothetical physicalist computationalist's perspective as he considers the bit-flipping argument. From that perspective, we only have a manipulation of valueless bit-analogues in such a pattern that it can be trivially interpreted as addition.
So I accept your view that what I am calling computation in this discussion is not quite how computation is thought of 99.99% of the time, where it is legitimate to import meaning from the mind of an interpreter. But all the same, I think that my approach is more or less how any reasonable physicalist computationalist would answer the bit-flipping argument. If you don't think my answer is representative of physicalist computationalism and is therefore uninteresting and irrelevant, we can drop this discussion and concentrate on the other discussion.
> For computations on natural numbers
I thought we were discussing the bit-flipping argument where we adopt a bit-level perspective. I'm not sure where you're going with this. Can we stick to a bit perspective?
> but an isomorphism just between two truth tables doesn't make sense to me.
Well, I would say a truth table is just a set of relations from pairs of symbols to symbols -- these relations are the rows of the truth table. Viewed as a bijection between sets, an isomorphism would map rows of one truth table to rows of the other and vice versa. But we're not just talking about sets that have no additional structure. The elements of the set (the relations) have structure too. For me at least, an isomorphism is a mapping of all the elements and terms and relations in one structure to those of another. You don't just map the rows arbitrarily; you have to consistently map the values in the rows too. The truth tables of f1 and f2 are isomorphic under the mapping A->B, B->A. This value-level mapping implies a row-level mapping between rows of f1 and rows of f2, e.g. f1(A,B)=A maps to f2(B,A)=B.
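For concreteness, here is a toy Python sketch of that test (the AND-like and OR-like tables are my rendering of f1 and f2, and the search is deliberately brute-force):

```python
# Toy sketch of the test: two truth tables over {A, B} are isomorphic when
# some bijection of the symbols carries every row of one onto a row of the other.
from itertools import permutations, product

SYMBOLS = ("A", "B")

def f1(x, y):  # AND-like: outputs B only when both inputs are B
    return "B" if (x, y) == ("B", "B") else "A"

def f2(x, y):  # OR-like: outputs B when at least one input is B
    return "A" if (x, y) == ("A", "A") else "B"

def isomorphism(f, g):
    """Find a symbol bijection s with g(s(x), s(y)) == s(f(x, y)) for all rows."""
    for perm in permutations(SYMBOLS):
        s = dict(zip(SYMBOLS, perm))
        if all(g(s[x], s[y]) == s[f(x, y)]
               for x, y in product(SYMBOLS, repeat=2)):
            return s
    return None

print(isomorphism(f1, f2))  # {'A': 'B', 'B': 'A'}: the swap maps f1 onto f2
print(isomorphism(f1, f1))  # {'A': 'A', 'B': 'B'}: the identity, trivially
```

Under the swap, every row of f1 lands on a row of f2, which is all I mean by calling the tables isomorphic.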
> And if A, B, C, and D are different points, they aren't.
Agreed. But if B in the first figure is the same point as C in the other figure, and vice versa, they are the same figure. Different mathematicians might choose to label these points differently, but these differences in notation don't mean we have different figures. f2 is just f1 with different notational conventions if they are taken in isolation.
> Yes, well, and if we no longer assume that 'horse' means horse, it could mean buttercup.
I don't think that's what I'm doing. When you and I talk about horses, we are conversing in the context of a massively complex shared system of communication called "English", where 'horse' means horse and not buttercup. But if I have an isolated mathematical system f1, and you come along and define another isolated mathematical system f2, we're no longer talking the same language. Newton and Leibniz both invented calculus even though they used different notation -- the differences in notation don't mean that these are really different mathematical systems. So A in one has nothing to do with A in the other. What you're doing is akin to claiming to solve a complex polynomial equation for x in problem 3 on page 231 of a textbook by instead solving a simple equation for x you found in problem 1 on page 23 -- you're assuming that the two x's mean the same thing even though they are used in distinct and isolated contexts.
As with the textbook problems, it's a coincidence of no significance that we're using the same symbol in each. I'm saying that A has no meaning other than its role in f1, which means we have no basis other than its role in f1 by which to compare it to the A in your f2. If we want to see if vertex A in my polygon ABCDE is the same vertex as your vertex A in your polygon ABCDE, we might do so by comparing the Cartesian coordinates of my A with the Cartesian coordinates of your A. If we want to see if my value A in my function f1 is the same as your value A in your function f2, we have to see whether it plays the same role -- i.e. whether a mapping of f1's A to f2's A will form part of an isomorphic mapping. Since it doesn't, it is not the same A. Instead, my A corresponds to your B.
> You can't really tell me both that 'A is meaningless' and that A means 'that symbol which is only output when it occurs twice'.
What I mean is that A has no inherent meaning, other than the role it plays in the system. It is not the first letter of the alphabet. It is not a maximum or a minimum value. It does not correspond to truth or falsity. It is nothing other than what you can infer from the definition of the system, and all you can infer from the definition is that it is "that symbol which is only output when it occurs twice".
> And as such, I can again change it: interpret A as meaning 'that symbol which is output when it occurs at least once'.
No, you can't. That interpretation is inconsistent with the definition of the function.
> As I said, since our minds very much appear to have content, to have definite values
My claim is that the content of our minds comes from the patterns in the processing of signals which in isolation have no inherent meaning. Just as we can infer a meaning for A based on the role it plays in the system, so can the meaning in human minds come from the causal/functional roles the signals play. That meaning cannot be imported from outside the system.
> So really, parsimony here is just a statement of prejudice---you like the 'simpler' interpretation better.
As I've said, I agree with you that parsimony is not a good approach on this particular question -- that is more or less why I think physicalist computationalism does not work. I think parsimony is a good epistemic heuristic to choose between mutually-exclusive hypotheses (whether physical or metaphysical), but not a good way to settle what computation is objectively being computed, because there is no reason to believe that there has to be only one answer to that question.
But the appeal of the bit-flipping argument is that it seems to do away even with parsimony as a basis for a preferred interpretation. For the bit-flipping argument to be held up as more important or persuasive than the pixies argument, you need to assume that parsimony matters. If you dispense with parsimony, then you can use any interpretation you like, no matter how arcane, ad hoc and retrospective. Not only can you interpret an AND gate as computing AND or OR, you can interpret it as computing Microsoft Excel.
Hi DM,
>But a brain or an AI doesn't have preconceptions about what it computes.
Which actually brings up a larger worry: when an AI, say in the course of a Turing test, claims that 4+3=7, we'd typically not take it to do so in a 'value-agnostic' way; it really means that if you take four, and add three, you get seven. Or at least, barring eliminativism, that's what it needs to do in order to replicate human performance on this task.
So it seems, to me, that computations implementing a mind can't be value agnostic---for the Turing test to have any applicability, it must be the case that the inputs presented to an AI, and the outputs produced by it, must be value-definite. So it doesn't seem plausible that the computation implementing a mind could be value-agnostic.
I know that the usual answer to such worries is just to define a structure that's complex enough in order to not make it obvious anymore that if it's value-agnostic at the base, so to speak, it also is at the higher level---but I've yet to see any such attempt where it's not simply the impossibility to completely think through the system that makes it sufficiently opaque to hide the persistence of value-agnosticism.
>As soon as you introduce one, you've effectively "implied" a whole lot more
Yes; again, this is just Newman's problem. But the point I'm making is simply that I can effectively use the system you say implements f1 to implement f2---you keep claiming they're independent, but they're not: defining one unavoidably defines the other. It's like a painting, where the negative space defines the (outlines of the) figure, and vice versa: you can't have one without the other.
>But the structure that models the AND gate has only f1, say.
But f1 implies f2. There's no additional information needed: if you give me one, then you've implicitly given me the other.
>You don't need f1 to define f2.
That's not what I'm saying. I don't need f1---but f1 *suffices*.
>But they are only distinct if you create a structure which has both f1 and f2 at the same time
Which every structure does: as soon as it has f1, it has f2.
>From that perspective, we only have a manipulation of valueless bit-analogues
But isn't that just a description of the physical evolution of the system? The set of states it traverses?
I don't think this is sufficient as a definition of computation. I mean, basically, all you do is to just write down names for the states that the system could be in. There's nothing there except for, e.g., the phase-space trajectory. It doesn't seem plausible to me that 'computation performed by the system' could reduce to 'the phase-space trajectory traced out by the system': this trivializes the notion of computation.
I mean, take a look at what the central thesis of computationalism becomes under this view of computation: 'the brain gives rise to a mind by implementing the appropriate computation' becomes 'the brain gives rise to a mind by tracing out the appropriate phase-space trajectory', which really just means 'the brain gives rise to a mind by being the appropriate kind of physical object'. So really, this just leaves us back to square one: some physical objects give rise to minds. This, we knew already (at least, if we're not admitting some weird nonphysical elements).
>I thought we were discussing the bit-flipping argument where we adopt a bit-level perspective.
To me, the binary system is just a particularly simple alphabet for a computation to have; but the argument is really quite independent from the particular alphabet, as long as computation is a function over some alphabet. This is, effectively, what you aim to deny: that computations are functions over a given universe, with some definite inputs and outputs.
>But if I have an isolated mathematical system f1, and you come along and define another isolated mathematical system f2, we're no longer talking the same language.
But f2 is defined using the same language---it's even defined in terms of f1! The function f1 includes all that I need in order to define f2. Anybody familiar with the definition of f1 will immediately understand when I define f2(x,y) as (f1(x',y'))'. I don't need to introduce anything new, that wasn't part of the language needed to define f1 in the first place.
>you're assuming that the two x's mean the same thing even though they are used in distinct and isolated contexts.
No---I define one x by means of the other, and the machinery needed to make that other one interpretable.
>What I mean is that A has no inherent meaning, other than the role it plays in the system.
And the role it plays in that system (and the role played by B, and the truth table relating the two) is sufficient to define its role in the system given by f2.
>No, you can't. That interpretation is inconsistent with the definition of the function.
Yes, you're right, I didn't really think that through. But actually, this was what prompted my worry about your 'computation' really just being a description of the physical behavior of the system---which, again, isn't what one would typically mean by 'computation'. Take, for instance, this very lovely and detailed description of a Turing machine that does binary addition (part 5). Now, most people would agree that this TM indeed performs exactly that function: addition of binary numbers. However, on your conception of computation this (and, in fact, all claims regarding TMs implementing abstract functions) would be false: it just changes some states of the tape to some 'value-agnostic' different states. In essence, this is just a description of the TMs operation, but not of the computation implemented thereby---no TM ever really computes anything, it merely evolves in time.
In particular, this plays havoc with the notions of computational equivalence and universality. Typically, for instance, one holds that certain TMs---universal ones---can simulate all other TMs. But there is no requirement that to do so, a UTM must physically mirror the evolution of the TM it emulates---for instance, a TM over the decimal numbers could simulate the addition, without thereby possessing anything like a mapping between its states and those of the binary addition-TM. So this notion wouldn't really make sense on your conception of computation.
Moreover, completely differently implemented systems can perform the same computation. An adder made of logical gates will, on the typical conception of computation, perform the same function as the binary addition-TM; but that doesn't mean that it mirrors the 'value-agnostic' process of that TM. Neither, of course, will have much in common with a neural network implementing that computation, or a random-access machine, and so on.
>My claim is that the content of our minds comes from the patterns in the processing of signals which in isolation have no inherent meaning.
But how's that supposed to work? I mean, we could (on standard notions of computation) implement the computation performed by a mind using a network of AND-gates and negations. But, on value agnosticism, these AND-gates don't operate on truth values, but just shuffle voltages around. The output of a mind, however, isn't voltages; it's meaningful symbols. So, where does the transition take place? Say I have a simulation, using AND-gates and negation, of a mind solving the task of finding the AND of 0 and 1.
Now, of course, on value agnosticism, 0 and 1 aren't input into the system anywhere---rather, there is some pattern of voltages (meaningless symbols) at the input; these propagate through the network, producing, as output, again a pattern of meaningless symbols. Where does the meaning of 0 and 1 come from?
Furthermore, as you quite rightly point out, 0 and 1 aren't physical objects. But all your value-agnosticism really boils down to is physical description. So, again, where do 0 and 1 come from all of a sudden? If all the system does is the value-agnostic shuffling around of meaningless symbols, there's no 0 or 1 to be found anywhere in the system. Yet, we can meaningfully compute the AND of binary digits.
>Not only can you interpret an AND gate as computing AND or OR, you can interpret it as computing Microsoft Excel.
Well, that's not really true---there aren't enough states, for one---but I get your point. However, I was wondering why, if you hold to the value-agnostic approach, Putnam-style arguments ought to worry you: all Putnam really does is to point out that the interpretation of the physical states of a system is arbitrary; but you do away with such interpretation completely. So, if you have a Putnam-style open system, then there is always a definite value-agnostic interpretation: it's just the sequence of states it actually traverses, viewed as 'meaningless symbols'. Putnam's trick precisely is that one can imbue those with arbitrary interpretations---the exact move you're rejecting here, in favor of value-agnosticism.
So, if value-agnosticism were appropriate, it wouldn't just work against my argument, but equally well against those by Putnam, Searle, or Bishop. Yet, you seem to think these arguments work, at least against physicalist computationalism.
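To see how cheap a Putnam-style interpretation is, consider this toy sketch (the 'physical' states and the FSA run are both invented for illustration):

```python
# Toy sketch: pair off any run of distinct physical states with the state
# sequence of any FSA you please; the 'implementation' is just the pairing.

physical_trace = ["s0", "s1", "s2", "s3", "s4", "s5"]  # invented microstates
fsa_run = ["q_even", "q_odd"] * 3                      # run of an arbitrary FSA

mapping = dict(zip(physical_trace, fsa_run))           # the whole interpretation

# Under the mapping, the system 'implements' the FSA run perfectly:
print([mapping[s] for s in physical_trace] == fsa_run)  # True
```

The pairing does all the work; the physical trace contributes nothing but the distinctness of its states.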
** "If you want to say otherwise, then you should be able to draw a sharp distinction between what a rock is doing and what a computer without input/output is doing."
How about this: In the rock, all computations are occurring, and no given computation is preferred by the system itself. In the computer, all computations are also occurring, but one is very clearly preferred by the system itself.
If you consider all the computations the computer is supposedly performing, there is only one clear winner. In the case of the rock, nothing other than interpretation selects a given computation.
This suggests the rock, if anything, is generating a kind of "white noise" of computation whereas the computer has a very clear, strong signal that rises above its similar noise.
One might also look at the energy or complexity required to extract the computation. In the computer that involves looking or listening to it. In the rock, looking and listening don't do much.
Perhaps that last works as an argument against pixies in that conscious experience may depend on the (putative) conscious computation being easy to extract from the background. Brains, as with computers, have a very strong signal.
I also think the idea that, in a rock or wall, one state doesn't *cause* the next is a powerful argument. In a "real" computation, they are causally linked.
** "This idea of a stable mapping that remains constant in time is perhaps a plausible angle to investigate."
One thing that comes up in the "dust" idea (which is essentially the pixies idea) is that the series of states need not be temporally ordered. The mere existence of such states somewhere in time, in any order, should lead to a linear consciousness.
This is clear if you imagine an FSA for consciousness and put time delays between the steps. This shouldn't change the (putative) experience from within the FSA.