Disagreeable Me -- Philosophical thoughts from an unapologetic contrarian

A Multi-Level View of LLM Intentionality (2023-05-09)

<p></p><div class="separator" style="clear: both; text-align: center;"><a href="https://blogger.googleusercontent.com/img/a/AVvXsEgsgtRhzuTgQJ0ntBD1J9J3mDdbAEvpXeWKtByS0gfOomtfw0jh5xaTZfFh2xG5ksm6YXHgM65rl0HusCLIld7zUAiT3Zf4ZOy7q7PNpuCj4fDnIae1rRSuTlsmMAXm8-d1ImmOf7hfG9px96kBrk9F3DVnZ7uxq4wZfaMADH_kw7JP7CT-PW1y8s0Y" style="margin-left: 1em; margin-right: 1em;"><img alt="Bing AI's interpretation of a mind within a mind within a mind" data-original-height="1024" data-original-width="1024" height="240" src="https://blogger.googleusercontent.com/img/a/AVvXsEgsgtRhzuTgQJ0ntBD1J9J3mDdbAEvpXeWKtByS0gfOomtfw0jh5xaTZfFh2xG5ksm6YXHgM65rl0HusCLIld7zUAiT3Zf4ZOy7q7PNpuCj4fDnIae1rRSuTlsmMAXm8-d1ImmOf7hfG9px96kBrk9F3DVnZ7uxq4wZfaMADH_kw7JP7CT-PW1y8s0Y" width="240" /></a></div><br /><p></p><p>Prompted by Keith Frankish's <a href="https://www.youtube.com/watch?v=AmICd8xUttA" target="_blank">recent streamed discussion of LLM intentionality on YouTube</a>, there's a particular idea I wanted to share which I'm not sure is widely enough appreciated, but which I think gives a valuable perspective from which to think about LLMs and what kinds of intentions they may have. 
This is not an original idea of my own -- at least some of my thinking on this was sparked by reading about the <a href="https://www.lesswrong.com/posts/D7PumeYTDPfBTp3i7/the-waluigi-effect-mega-post">Waluigi problem</a> on LessWrong.</p><p>In this post, I'm going to be making the case that LLMs might have intentionality much like ours, but please understand that I'm making a point in principle and not so much arguing for the capabilities of current LLMs, which are probably not there yet. I'm going to be talking about what scope there is to give the benefit of the doubt to arbitrarily competent future LLMs, albeit ones that follow more or less the same paradigms as those of today. I'm going to try to undermine some proposed reasons for skepticism about the intentions or understanding of LLMs, not because I think the conclusions are wrong but because I think the arguments are too weak to support them.</p><p>I should also note that I will follow Keith (and Daniel Dennett, and others) in assuming an interpretivist account of intentions. That is, we should ascribe intentions to a system if and only if it helps to predict and explain the behaviour of the system. Whether it <i>really</i> has intentions beyond this is not a question I am attempting to answer (and I think that it is probably not determinate in any case).</p><p>The basic idea I want to introduce is that LLMs might have intentions and agency on multiple levels, so we may be missing something if we restrict our analysis to one level alone.</p><span><a name='more'></a></span><p>Let's start with the story of Blake Lemoine, who was famously fired from Google for spreading claims that its LLM LaMDA was conscious. Many were quick to ridicule him, pointing out that you can make LLMs claim anything you want them to. The very same LLM making claims to be conscious can also be made to say that it is not conscious. 
For many, this is evidence that LLMs do not have stable beliefs at all (certainly about themselves), so LLMs cannot be conscious. Similarly, you can ask an LLM to play the role of someone arguing for or against a particular proposition, and it will do so. Presumably, this means that the LLM itself has no opinions on such issues, only roles it can be asked to play. And presumably it no more experiences the intentional states of its roles as its own than a human actor on stage experiences those of a character in a play.</p><p>On to Keith's fascinating talk, in which he expounds the view that we should see LLMs as playing a "chat game": all they care about is making plausible utterances, and they have no goals beyond this. Keith allows that the LLMs might have beliefs about the world, but that their intentionality is still severely lacking. In particular, they have no illocutionary or perlocutionary intentions, as defined by the philosopher J.L. Austin. An illocutionary act is an act that is necessarily performed via speech but which goes beyond mere speech and usually has real-world consequences (e.g. pronouncing someone guilty, or making a promise), whereas a perlocutionary act is the real-world effect of a speech act on an interlocutor (e.g. shocking someone with an outrageous statement, or persuading someone of something). According to Keith, then, all an LLM wants to do is to deliver plausible utterances. It has no intentions that go beyond this. It doesn't intend to achieve anything with its utterances, and it doesn't intend to have any particular effect on its interlocutor. On the interpretivist account of intentionality, this is sufficient to predict what it will do, and so we need no more.</p><p>However, there's another way of looking at it. 
In the case of Searle's Chinese Room, which I discussed previously, we can see the man in the room (Searle) as an agent and the room or system as a quite distinct agent, each with its own beliefs and goals and other intentions. One speaks English, the other Chinese, but there's no reason to think that the differences should be confined to language alone: one may prefer American democracy and the other might prefer Chinese authoritarianism, for example. I think we can see what LLMs are doing in much the same way. We can see the underlying LLM, like Searle, as an agent that is only playing the chat game, as Keith suggests. But in order to play the chat game, it is, like an actor, playing a role, and that role, like the Chinese Room, may have intentions of its own.</p><p>Suppose we ask the LLM to play a game of chess with us. Unless you think there is some fundamental reason why LLMs will never be able to play chess competently (and I doubt there is), it seems that we could, with the right prompts, implement some sort of chess AI using an LLM. On Keith's analysis, it would seem that we should only ascribe "chat game" intentions to the LLM. It says "Qb4" only because it thinks that this is a plausible utterance for a player of chess by correspondence. And yet I'm pretty sure that Keith would ascribe intentions such as "putting the king in check" to a dedicated chess AI. It seems that Keith should therefore also ascribe the same intentions to a chess AI implemented via LLM. These would then be the illocutionary intentions he denies! By the locutionary act of saying "Qb4", the LLM performs the illocutionary act of moving the queen to b4 and putting the king in check.</p><p>Now suppose that instead of asking the LLM to play chess, we ask it to play the game of "therapy", where moves in the game consist of choosing lines to say in a dialog with an anxious patient. 
We prompt the LLM appropriately so that it knows it is to play the role of an expert and concerned therapist who wants to help the patient, just as before we prompted it to play the role of an expert chess player.</p><p>If it makes sense to interpret the chess-playing LLM as having intentions about putting kings in check and so on, then I'm not sure why we should not interpret the therapy-playing LLM as having intentions about advising and helping the patient. If we do, then these would count as perlocutionary intentions, as they are intentions about how the speech acts should affect the interlocutor. Once again, even if the LLM is only playing the "chat game", that doesn't mean that it cannot do so by summoning a simulacrum with intentions of its own.</p><p>I think the mistake Keith is making is to restrict his analysis to the level of the LLM itself and to fail to consider that there may be a distinct agent supervening on top of it, in much the same way as the Chinese Room supervenes on top of Searle. Perhaps the LLM itself is only playing the "chat game", but it has summoned a simulacrum which is playing another game altogether. 
The point is that Keith's analysis of the intentions of the LLM may be correct and yet there may still be illocutionary and perlocutionary intentions happening at other levels of analysis.</p><p>I think this sort of analysis shows how the instability of the views espoused by LLMs such as LaMDA doesn't mean much -- it could be that these are the views of different simulacra being summoned by the underlying LLM, and some of these simulacra might even be conscious, at least in principle.</p><p><span style="font-size: large;">Objections</span></p><p><span style="font-size: medium;">"We don't need to interpret the intentions at these higher levels"</span></p><p>Keith seemed to be of the view that we should not ascribe illocutionary or perlocutionary intentions to LLMs because we don't gain anything by doing so -- the "chat game" analysis is sufficient. I agree with Keith that we should not ascribe intentions where to do so is not helpful, but I fail to see why it is not helpful in the cases above. In order to know what is a plausible thing to say for an LLM competently playing the role of a chess player in a "chat game", I need to know what moves a chess player might plausibly want to play in a game of chess. By treating it as an agent with the intentions of a chess player, I get a handle on what it will do, and I don't see how I can do that without taking such a stance. As long as we're agreed on being interpretivist about intentions, these illocutionary intentions seem to be indispensable and therefore unavoidable. And the same goes for an LLM playing the role of therapist and that role's perlocutionary intentions.</p><p><span style="font-size: medium;">"The behaviour of LLMs is nothing like the behaviour of social beings like humans"</span></p><p>Perhaps, but now we're talking about the limitations of current LLMs. Future LLMs may be competent enough to behave like humans when prompted to do so. 
It's plausible that they could never get there with their current architecture, but we don't know that. If we want to give them the benefit of the doubt, then we should assume they can become arbitrarily competent unless we have a strong argument to the contrary. In any case, even a relatively dumb AI could plausibly have intentions to deceive, inform, or advise. I don't think you need to get to full human-level AGI to have perlocutionary intentions. Recent AIs such as Meta's Cicero, which excels at the game of Diplomacy, show as much. If you know Diplomacy, you know that this game is all about performing speech acts with decidedly perlocutionary intentions. These dedicated systems may not strictly be LLMs, but they do incorporate an LLM component, and I think they demonstrate the point in any case.</p><p><span style="font-size: medium;">"LLMs are just stochastic parrots, predicting the next token"</span></p><p>LLMs are black boxes. We don't know how they work. We only know what they have been selected to do. They have been selected to be good predictors of the next token, but how they achieve this is left open. It may be that the best way to predict the next token that would be uttered by a human is to simulate a human. And if the very best future LLMs are indeed doing something like simulating humans, then why shouldn't they have all the intentional states of a human (at least on an interpretivist account of intentionality)?</p><p><span style="font-size: medium;">"LLMs have no stream of sensory information about the world -- all they have is text, so they cannot have intentions about the world in the way that we do."</span></p><p>Our brains don't exactly have a raw stream of sensation either -- all they have is nerve impulses. From these nerve impulses they have been trained, both by evolution and in the course of development, to construct a model of the world. 
I think that focusing on the fact that all LLMs see is text is a bit like focusing on the fact that all brains see is nerve impulses. The sheer amount of data used to train LLMs is plausibly enough to give them as rich an understanding of the world as we have from our nerve impulses. Sure, it's not the same thing, but it seems enough to me to ground their intentions. In this view, all of human civilisation counts more or less as the sensory apparatus producing the "nerve impulses" of the text they have been trained on. I think this puts them as much in touch with the real world as we are. We perceive the real world via the nerve impulses sent from our sensory apparatus, and they perceive the real world via the bit stream of text sent from theirs. We learn to assemble this stream of nerve impulses into a rich model of sensation and perception and of the world itself, and they do the same with their text stream.</p><p>Once trained, LLMs do indeed have a pretty limited stream of sensory information, consisting only of the text typed in by an interlocutor. But I personally find it implausible that all intentionality disappears as soon as you stop getting a rich feed of sensory information. People in sensory deprivation tanks have intentions every bit as real as people outside them.</p><p><span style="font-size: medium;">"LLMs are passive, only responding to prompts and never taking action off their own bat"</span></p><p>Tell that to AutoGPT! This is a system which, with a relatively small and simple bit of code wrapping ChatGPT, is able to make plans and execute them autonomously. There's nothing logically necessary or profound about the passivity of LLMs. It's trivial to make them into agents.</p><p><span style="font-size: medium;">"But it's just playing a role, like an actor. It doesn't experience these intentions."</span></p><p>This doesn't matter if we're being interpretivist. 
Besides, we don't know to what extent and in what detail a future LLM might be simulating the simulacra. If the simulation is detailed enough, I'm not sure that the simulacrum might not experience the intentions even if, like Searle in the Chinese Room, the LLM itself does not.</p>

Psychophysical Harmony (2023-02-12)

I wanted to write about the argument from psychophysical harmony against naturalism, even though it has no force for those like me who think that talk of qualia and phenomenality in general is confused. For the argument to make any sense at all, we will first have to assume that such considerations as the Knowledge Argument (i.e. the Mary's Room thought experiment from Frank Jackson) have persuaded us that physicalism is inadequate for the task of accounting for phenomenal experience. The argument from psychophysical harmony then goes further, suggesting that there cannot be any sort of neutral natural explanation at all for the mysterious appropriateness of our experience in representing the physical world. Instead we might need to invoke God, or if not God then perhaps some other benevolent purposeful force or principle such as John Leslie's axiarchism.<span><a name='more'></a></span><div><br />First let's look at what the argument actually is. There's a good presentation of the argument in <a href="https://philarchive.org/rec/CUTPHA">this paper</a> by Brian Cutter and Dustin Crummett, or you could watch from the linked part of <a href="https://www.youtube.com/live/8dKQ5rADqDA?feature=share&t=1366">this video</a> where Philip Goff gives his take on it and where it leads him. Of course I'll try to summarise some of the arguments here also.<br /><br />From here on out, I'm going to be adopting a false persona. 
I'm going to pretend, for the sake of argument, that I believe in qualia (that I am a qualiaphile). I'm probably going to portray a somewhat unsophisticated caricature of a qualiaphile, both because I cannot easily inhabit this mode of thought and because a simple qualiaphile's world view will hopefully serve to get the point across simply, even if a real qualiaphile could make it more plausible. So I'll be helping myself to an ersatz belief in some ideas that Daniel Dennett proposes (in <i>Consciousness Explained</i> and elsewhere) to ridicule the views of qualiaphiles: namely a "Cartesian theatre" where qualia are presented for the viewing pleasure of some sort of homunculus that lives in the mind, and "figment", the hypothetical mental substance with which the qualia of colours are painted. But I'm going to limit this qualiaphilia more or less to sensory qualia. I'm going to maintain the view that beliefs and desires and other intentional states are essentially functional -- that is, I will not adopt a belief in cognitive phenomenology. I will also assume that standard naturalism in the sense of physical causal closure obtains -- i.e. I will reject libertarian free will and any role for phenomenal consciousness in explaining behaviour. OK, here goes!<br /><br />The basic idea is that we are presented with an astonishing coincidence when we consider how well our phenomenal experiences correspond to states of affairs in the physical world, and how misaligned these could have been. Cutter and Crummett present a number of different sorts of examples. Perhaps the simplest is the harmony between the affect associated with certain phenomena and the functional behaviours we have evolved to exhibit with respect to them: in general, we seek out pleasant experiences and we avoid unpleasant experiences. 
Our behaviour is attributable entirely to physical cause-and-effect, so there is no need for evolution or the natural world to ensure that our qualia are appropriate and give us reason to seek or avoid them.<br /><br />Take pain, for example. In the real world, pain is extremely unpleasant, but consider the idea of a hypothetical subject who is functionally just the same as a normal human but who has a strange inversion of phenomenal experience. Unlike the better-known thought experiment of the subject with inverted qualia, in this case we have a subject with inverted <i>affects </i>(that is, the emotional responses she has to stimuli are inverted). Being functionally normal, when she hits her thumb with a hammer, she yelps and jumps and winces, then nurses it and so on, but what she experiences is extremely pleasant. When she has her morning cup of coffee, she appears to savour the taste, smiles, exclaims that she loves her first sip of the day, etc., but the taste as experienced is in fact revolting to her. Her reports and facial expressions are just her way of communicating her disgust. She avoids pleasant experiences and seeks out unpleasant ones because her experiencing self is essentially deranged and irrational, being someone who lacks psychophysical harmony. She smiles when she's miserable, she cries bitterly when she's ecstatic, she will do anything to avoid pleasure and so on. On the assumptions that underlie the argument from psychophysical harmony, this seems a possible state of affairs. More likely, phenomenal affect would be essentially random, neither wholly appropriate nor wholly inappropriate but more chaotic. It seems to be extraordinarily lucky that this is not how we are. Therefore, God (or whatever)!<br /><br />To me, even qualiaphile me, this argument is not very compelling for a couple of reasons. First, it's not obvious to me why affect should be intrinsic to qualia. 
It's plausible that affect could be entirely functional, and that the very same quale could be experienced by some subjects as pleasant and by others as unpleasant. Indeed, this seems certain when we consider acquired tastes. The qualia I experience when I taste wine now seem to be the same as when I first tasted it as a child and thought it was disgusting. But the associated affect has changed, as now I quite enjoy it. If this can be so for wine, then why assume that the affect of pain is intrinsic? The idea then is that the qualia we feel in response to various sensory inputs really are basically arbitrary, and have no intrinsic affect. We find pain to be unpleasant not because it is intrinsically unpleasant but because it plays the functional role of a noxious stimulus that is to be avoided -- this is just what it is to find something unpleasant. If this is so, then it is not conceivable that I could find pain to be pleasant while it plays the functional role it does.<br /><br />Another worry is whether someone with inverted affect is perhaps incoherent just because it's impossible to map all these upside-down affects consistently without either ending right back where we started or arriving at a contradiction.</div><div><br /></div><div>For instance, in the behaviour of avoiding pain, does our inverted-affect subject want pain and fail to get it by systematically making choices that defeat her goals, or does she actively seek to avoid something that she would find pleasant and succeed? If we're turning everything upside-down, then she must want pain because a normal person would not. But, on the other hand, if a normal person wants to experience pleasure, and she finds pain to be pleasurable, then she must not want it, because she avoids pleasure. Contradiction!<br /><br />While experiencing pain, is our subject actually experiencing something pleasant, while believing herself to be experiencing something unpleasant? 
If she believes herself to be experiencing something unpleasant, she should want the experience to continue, because of flipped affect. But if she is actually experiencing something pleasant, then she should want the experience to end, because of flipped affect. Contradiction!<br /><br />(I always found the psychology of the classic Addams Family to be implausibly incoherent for the same kinds of reasons -- I haven't seen the new Netflix series, though, so maybe that makes more sense.)<br /><br />Or maybe, in constructing such a subject, all we've done is turn the language we use to describe her phenomenal states upside down without changing anything else. We're just relabelling everything unpleasant as pleasant and vice versa.<br /><br />Perhaps with some care we can construct a coherent subject like this by flipping some things and not others, but it seems clear to me that we can't just carelessly flip everything. On the other hand, it's not clear at all to me that any such psychophysically disharmonious but functionally normal subject is a coherent possibility.<br /><br />This is more a skeptical worry than a particularly strong counter-argument. Let's return to the first suggestion, that affect might be functional. In the Cutter and Crummett paper, this response is called the "contingent normative roles explanation" (specifically, it's the second version of this discussed in the paper) because the normative roles of qualia are contingent on the functional roles they play. To take up the thread here, I think it might be fun to switch to a dialog, because there will be some back and forth. My interlocutor is "Dusty", who is intended to represent the views of Dustin Crummett as I understand them from reading the paper and from a Twitter conversation. But that understanding may be limited, and I will be paraphrasing, extrapolating and synopsising, and probably doing an inadequate job all round of representing him fairly. 
Take this representation with a grain of salt.<br /><blockquote>Dusty: So I see you're a fan of the contingent normative roles explanation. This is addressed in the paper.<br /><br />Me: Great! How does the paper treat it?<br /><br />Dusty: Well, first, are we OK to imagine some sort of disembodied mind? Like a ghost, or maybe a brain in a vat or a Boltzmann brain?<br /><br />Me: Hmm, I guess. Let's say so for now...<br /><br />Dusty: Well, imagine such a mind is experiencing the most unimaginably torturous pain. This is clearly a bad state of affairs, right?<br /><br />Me: Agreed.<br /><br />Dusty: You say affect is functional, but there's nothing functional happening in this picture, so that can't be right.<br /><br />Me: Hold on, why is nothing functional happening?<br /><br />Dusty: Well, let me quote the paper... here it is... it says that a functional role is "a role defined in non-phenomenal terms, ultimately in terms of causal relations (however indirect) to outward behavior and physical stimuli." But there is no outward behaviour or physical stimuli in the case of a disembodied mind.<br /><br />Me: I see. Well, I'm not sure I agree with that account of functionalism.<br /><br />Dusty: Why not?<br /><br />Me: I think functions can be more abstract and "internal" than you allow. Just to illustrate, can we assume for the sake of argument that any mind can be simulated or modelled with the right sort of algorithm (perhaps something like a neural network)?<br /><br />Dusty: OK.<br /><br />Me: Presumably, we could in principle identify parts of this algorithm that play certain functional roles. For instance, one particular cluster of neurons might seem to have something to do with avoidance behaviour, at least when the mind we're talking about is embodied. I think this is still the case when the mind is not embodied. 
Those neurons are part of an algorithm, and within the algorithm they play a functional role, even if the algorithm is not physically instantiated. Similarly, I think that parts of a disembodied mind must be playing functional roles, albeit internally and without any causal relations with the physical world.<br /><br />Dusty: Unless the "internal functioning" is something phenomenological, we can still ask about a conceptually possible case where we change the internal functioning and hold fixed the phenomenology -- and there we say the torture victim phenomenology is bad.<br /><br />Me: I think that if you hold fixed the phenomenology but change the functional aspects, then it's no longer necessarily true that the torture phenomenology is bad. I think the badness is entirely in the functional reactions.<br /><br />Dusty: I really don't think this issue is what's relevant. Our claim is that having the complete phenomenology of a torture victim is necessarily prima facie bad, and bad just in virtue of the phenomenology; you have to deny that, so that's where we part.<br /><br />Me: Well, yes. I deny that, not because I think it's OK to have the complete phenomenology of a torture victim, but because I think that you can't have the complete phenomenology of a torture victim (at least if this includes affect) without also having all the functional state of a torture victim, and it's the functional state that makes it bad. If this is true, then psychophysical harmony of affect is guaranteed.</blockquote>And that's the end of that thread so far.<br /><br />However, there remains a suite of other such examples exploring different aspects of psychophysical harmony. I think where most of these go wrong is that they only work if experiences are atomic, primitive, simple, whereas I would emphasise that most experiences are composite, made up of a great number of primitive qualia. 
The brain composes these qualia in such a way as to reflect what's going on in the physical world as a side-effect of its functional job of integrating and synthesising information.<br /><br />Again, let's start from the simplest example. Consider the quale representing the colour orange. It seems more similar to that for red than to that for green, which is an example of psychophysical harmony because monochromatic orange light is closer in wavelength to red light than to green, while polychromatic orange light would tend to have more red-adjacent wavelengths making it up than green-adjacent. But from the phenomenal side, we could imagine orange seeming more similar to green than to red if the associated qualia were chosen arbitrarily, and we're in some sense lucky that this is not the case and our qualia are instead physically appropriate.<br /><br />Here, I think the problem is that even "orange" is a composite. I'll give a cartoonish illustration of how this could work, without claiming that this is how it actually works. Suppose that the visual system detects the incoming light as a mixture of reddish and yellowish, these both being primary colours of this particular visual system, at least once some visual processing is done. By the action of psychophysical laws mapping the visual system's representation of reddishness and yellowishness to particular qualia, we now have both qualia being presented simultaneously. We can think of this scenario as having a mixture of red "figment" and yellow "figment" presented in the Cartesian theatre for the enjoyment of the homunculus. The resulting experience is labelled "orange". As such, the similarity of "orange" to "red" is not an accident; it is a consequence of the functional behaviour of the visual system in treating orange as a combination of reddishness and yellowishness.<div><br />In contrast, it seems to me that the qualia for primary figments are essentially arbitrary. 
I would judge that red is not obviously more similar to yellow than to blue, for example, even though physically it is. So in the case of primary figments, there is no psychophysical harmony. Unlike light and dark, we do not experience differences in wavelength as a smooth continuum of some quantity, even though physically that is what they are. This psychophysical disharmony is a consequence of how the visual system functions, and so psychophysical harmony or disharmony is best understood as a purely functional issue.<br /><br /></div><blockquote><div>Dusty: I agree that the relation between particular colour qualia and physical reality is essentially arbitrary. There is no sense in which our particular colour spectrum is more fitting than an inverted spectrum of qualia, for example. But when I look at orange light, I don't perceive red or yellow qualia, so there can't be any literal sense in which orange is a compound of them.</div><div><br /></div><div>Me: Yes, just as you don't see red paint or yellow paint when you look at an orange mixture of red and yellow paint. I'm not claiming that we are always aware of the structure of our experiences, but I do claim that our experiences are composed of more primitive qualia.</div><div><br /></div><div>Dusty: None of this affects the argument. The explanation is not that the psychophysical laws are such that the result of the evolutionary process is beings whose experience reflects physical reality. Instead, this is what we are seeking to explain.</div><div><br /></div><div>Me: This problem goes away once you let go of the assumption that intentional states and other non-phenomenal aspects of cognition are not functional. The psychophysical laws only relate primitive brain representations to arbitrary qualia. Everything else could be functional. 
If the brain represents orange as being both reddish and yellowish at the same time (understood as a functional relation between functional representations), the resulting experience is perceived as being both reddish and yellowish in virtue of the psychophysical laws linking the functional representations to their respective qualia, and we call this experience "orange". Evolution has selected for a being that can functionally recognise that orange is similar to red. The psychophysical harmony is explained by functionalism about similarity judgements, while requiring psychophysical laws only to provide arbitrary mappings between primitive qualia and primitive functional representations.</div><div><br /></div><div>Dusty (entirely imagined response): But aren't you helping yourself to unstated additional psychophysical laws all the same? If we experience red and yellow qualia at the same time, then why should they appear as one unified colour? Why is it not more like seeing red in one eye and yellow in the other -- a distinct feeling of two separate qualia being overlaid rather than blended? Must there not be a psychophysical law to determine which way this is experienced?</div><div><br /></div><div>Me: Again, let us be functional about intentional states. The visual system seems to function in such a way that we believe we are seeing a unified colour, as can be confirmed by introspection or by attending to reports about experience. If beliefs are functional, then it must be that we believe that the final experience is of a unified colour, and this constrains how the qualia can be combined. It must be by a sort of blending or interpolation; it cannot be any other sort of combination, or our reports and beliefs would be different. My view is that the quale we call "orange" is the only one that satisfies these constraints. 
It's what you get when you combine qualia for red and yellow along with a belief that we're seeing a unified colour.</div></blockquote><div>In being functionalist about intentional states, the argument I've been making here seems to depend on rejecting cognitive phenomenalism, that there is a distinct phenomenal what-it-is-likeness to have a certain cognitive or intentional state. In fact, I think we can accommodate a kind of cognitive phenomenalism here if we again regard cognitive experiences as compound. There could be a primitive quale for what it is like to believe something, and then the experience of believing something would be a compound of this quale and some compound phenomenal representation of the thing that is believed. Breaking down experiences into primitive states means that they are guaranteed to mirror their functional analogs. All we require is psychophysical laws allowing arbitrary mappings between primitives in both realms.</div><div><br /></div><div>Considerations such as the Knowledge Argument, Philosophical Zombies, Leibniz's Mill and so on might convince us of the existence of qualia and perhaps even cognitive phenomenalism, but they don't rule out this picture and I'm not aware of anything that does. Many panpsychists, for example, think that all the qualia we experience are somehow built out of the primitive qualia experienced by particles. It is unlikely that there are particles experiencing the qualia of ennui, or confusion, or itchiness, or the taste of coffee. For panpsychists at least, it must therefore be possible to build new qualia out of simpler ones. I subscribe (or at least this persona subscribes) to a similar view. The way this happens is not arbitrary, but a result of how the brain represents experiences as composed of a number of primitives, each associated with a primitive quale by psychophysical laws. 
The correspondence between the final experience and the functional state of the brain is therefore far from an accident.</div><div><br /></div><div>There are other cases Crummett would like me to answer about semantics and epistemology, but I feel that this is enough for now. My strategy in answering those cases would be basically the same. Break functional states into primitives. Assume psychophysical laws providing mappings from functional primitives to arbitrary primitive qualia. Reconstruct the phenomenal states by combining the mapped qualia according to the relations between the functional primitives and our intentional states about how they relate. The resulting psychophysical harmony is logically necessary on this picture.<br /></div></div><div><br /></div><div>For Crummett and Goff, the argument from psychophysical harmony licenses us to believe in God, or that every particle is an agent responding rationally to its desires, or in other such supernatural explanations. From my point of view, the idea of radical psychophysical disharmony is so absurd that I think instead it should cause us to question the assumptions that lead to it and to bolster our credence in accounts of phenomenality that do not run into such problems in the first place.</div>Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com2tag:blogger.com,1999:blog-5801278565856116215.post-65706309447452085842022-04-12T22:50:00.001+01:002022-04-12T22:50:59.632+01:00The Distributed Brain Thought ExperimentI wanted to give a quick take on a thought experiment that has been doing the rounds on Twitter lately.<blockquote class="twitter-tweet"><p dir="ltr" lang="en">I thought I had heard all the consciousness thought experiments but at <a href="https://twitter.com/hashtag/cshlNeuroAI?src=hash&ref_src=twsrc%5Etfw">#cshlNeuroAI</a> <a href="https://twitter.com/LibedinskyLab?ref_src=twsrc%5Etfw">@LibedinskyLab</a> told me a new one that literally kept me up last night. 
This is Camilo’s idea but I’ve taken some liberties with the details. Answer the polls in the order that you read them. 1/</p>— Dr. Anna Schapiro (@AnnaSchapiro) <a href="https://twitter.com/AnnaSchapiro/status/1512866137809195011?ref_src=twsrc%5Etfw">April 9, 2022</a></blockquote><p>It's also worth reading this excellent write-up by my friend at <a href="https://selfawarepatterns.com/2022/04/11/how-much-can-we-change-the-causality-of-the-brain-and-keep-consciousness/" target="_blank">selfawarepatterns.com</a>. That post also links a paper, which I confess I have not read. I'm going on the Twitter thread only.</p><p>The setup is to imagine being able to record and replay exactly what all the neurons in a brain are doing while experiencing something. When we replay, is the experience reproduced? What if the neurons are separated in space and in time?</p><p>I agree with whoever originally framed this thought experiment (Camilo?) that it poses a serious problem for physicalist functionalism. But it poses no problem at all for my view.</p><span><a name='more'></a></span><p>My view, if you recall, is that the mind is not a physical thing but an abstract pattern that evolution has happened upon and is exploiting in order to get a bag of meat to navigate its environment successfully and reproduce. To say that the mind is the brain is in my view mistaken -- it is like saying that the hexagon just is the shape of a cell in a beehive. Both the mind and the hexagon are abstract structures and are not strictly identical to any physical realisation. Bees have happened upon the hexagon, humans have happened upon the mind.</p><p>There's some room for quibbling over what exactly it is we're recording and replaying. Setting aside what is technically possible in practice, I think the thought experiment is most interesting if we assume that we can record and replay absolutely everything about what a neuron is doing during the experience. 
This means not only firing but strengthening its response to some inputs and decreasing its response to others (what corresponds in a normal brain to strengthening and weakening synaptic connections).</p><p>How this relates to this problem is that the conscious experience of a mind is something that exists timelessly and platonically in a space of all possible minds and all possible experiences. So dividing up the brain and replaying the action of neurons makes no difference to the experience of the mind. But to do so in the fashion posited in the thought experiment causally disconnects that mind from interaction with the rest of the physical world, so while the experience still exists (as it would even if the brain had never existed in our physical world in the first place), it is no longer really part of our particular world in any meaningful sense. As such it is not of moral concern to us and it doesn't matter if we regard it as conscious or not.</p><p>I disagree with the analysis over at SelfAwarePatterns, which brings up a point about whether we can be conscious of anything if we don't remember it. I think this misses the point, because if we are conscious of something, we are conscious of it in the moment regardless of whether we remember it later. None of us are going to remember anything happening at the moment in two hundred years, but that doesn't mean we are not conscious now! There's no time limit mentioned for the duration of the conscious experience. It could be on the order of several minutes. I think this is certainly enough time to regard it as conscious even if forgotten immediately afterward.</p><p>That post argues that in step 3, when we block neurotransmitters, we are now blocking the ability to remember earlier attempts, so this is where consciousness is lost. 
This departs from my interpretation, as I allow that we can record and replay the effect of neurotransmitters in strengthening and weakening synaptic connections.</p><p>Even so, I see no important difference here from step 2. The activity of the neurons is replayed exactly, so there is no room for the subject to feel confused or any difference in memory recall in the moment. The neurons fire exactly as they did in step 1, so if the subject is not confused in step 1, then she is not confused in step 2. If the subject does not recall earlier attempts in step 1, the subject does not recall earlier attempts in step 2. It cannot be that memory formation is completely lost, because at later times during the experience, the subject's neurons will fire as if earlier times in the experience are remembered.</p><p>(Perhaps a nitpick on this point, the post suggests that the problem is that in step 3 the subject cannot form new memories. On my interpretation of the thought experiment, I would say what makes the replay weird with respect to memory is not so much that new memories are not laid down, but rather that memories of the original experience or any repeat experiments are blocked. Presumably the original experience allowed new memories to form, so the new experience should produce all the neural activity required for the same -- but without access to those original memories, so it's basically overwriting identical memories on top of the original).</p><p>The intuition for why consciousness must vanish with the potential to form or recall memories and so on seems to be related to the fact that the causal structure of the brain has been interrupted, and the idea that causal structure is important for consciousness. I agree, but only because I already think consciousness is an entirely abstract structure. Relying on actual cause and effect is a problem for a physicalist because causal structure is a pattern we impose on nature. 
For actual physical causation to matter for consciousness, there would have to be an objective fact of the matter on what causal structure is implemented by a physical structure, and this does not seem to be on a sound scientific or philosophical footing (at least in my humble opinion). But if, as I believe, causal structure is entirely abstract, then there is no problem for me in identifying the mind with some causal structure. That causal structure is a reasonable interpretation of the action of an intact brain undergoing an experience, but it doesn't really describe what is happening when the neurons are separated in time and space and are just replaying individual behaviour.</p><p>The advantage for my view is that I can regard a brain as conscious without requiring that there be a definitive objective fact of the matter of what causal structure it is implementing. It is enough that there be a reasonable causal interpretation which captures how we see it interact with the world around it. I, and you, should reasonably interpret your brain to be implementing the causal structure of your mind. But the existence of your conscious experience is not dependent on that interpretation -- it always exists regardless of whether it is physically instantiated.</p><p>Unfortunately, I expect none of this makes very much sense to anyone who has not been steeping in the ramifications of my idiosyncratic views as long as I have.</p> <script async="" charset="utf-8" src="https://platform.twitter.com/widgets.js"></script>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com4tag:blogger.com,1999:blog-5801278565856116215.post-25883870573863859082022-03-29T15:20:00.004+01:002022-03-30T12:52:28.088+01:00Strong Emergence and Free Will<div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/YDiiqUWRipc" width="320" youtube-src-id="YDiiqUWRipc"></iframe></div><br /><p>In this post I will discuss whether the idea of strong emergence can help us to make sense of free will and a few related issues. These ideas have been on my mind lately largely due to the above discussion, but I've been thinking about this for a while.</p><span><a name='more'></a></span><h2 style="text-align: left;">Background</h2><p><i>(You may want to skim or skip this bit if you're familiar with the terms weak and strong emergence, libertarian free will, compatibilism, etc. Or read on if you want to double-check my understanding of same. There's also a couple of useful analogies about tornados and boiling points of compounds that may be thought-provoking)</i></p><p>Emergence is the general idea that we can describe the behaviour of a system at different levels, and we see different types of behaviour at different levels. If we think about a tornado, at a low level there are only individual air molecules bumping around. If we think at an intermediate level, we can see there are regions of high pressure and low pressure, and directions of air flow. At a very high level we can think of the tornado as a single object with a certain shape travelling at a certain velocity. We can say the high level tornado emerges from the low level collisions of air molecules.</p><p>It is often useful to use different laws of nature to describe the different levels of behaviour. 
At a low level, we might use Newtonian laws to model the collisions between particles, or if we want to be very precise for a small number of particles we might even go so far as to use the Schrödinger equation. At an intermediate level, we might use the Navier-Stokes equations to model fluid dynamics. At a high level we might have mathematical models of the ways tornados tend to move and behave.</p><p>For tornados at least, the assumption is usually that these higher level descriptions are instances of weak emergence. This means that the higher level behaviour is in fact fixed by the facts and the laws at a low level. We imagine that an infinite intelligence knowing the low-level laws and the low-level state would be able to use those laws to simulate the entire system and so predict not only the behaviour of individual particles but of the whole tornado. The reason we need the higher level descriptions in practice is then pragmatic and not fundamental -- it is only that high level descriptions are vastly more computationally efficient than low level descriptions, and not that they are required in principle to model high level behaviour.</p><p>However, until we actually derive the high level from the low level in a computation of some sort, this remains an assumption. I'm pretty sure, for instance, that nobody has been able to calculate the boiling point of water at standard pressure from first principles of particle physics, because it is absurdly computationally difficult to do so with current technology and analytic techniques. We have faith that it should be possible in principle, but we cannot be sure until we do it.</p><p>An alternative logical possibility is that there is just a fundamental law of nature governing boiling points. This law might give the result that water boils at 100°C at standard pressure, superseding anything we might be able to calculate from particle physics. 
This would be an example of strong emergence -- where high level descriptions and laws do not depend only on the laws of physics governing particles, but also on distinct high-level laws that in some way overrule or at least act in concert with the low-level laws to govern high-level behaviour.</p><p>We have little reason to believe that there are such strongly emergent laws governing the boiling points of chemical compounds. Where strong emergence perhaps has the most purchase is in discussion around free will. We all feel like we have free will and are responsible for our own decisions and behaviour. However, at least at first, this seems to be in tension with the idea that our behaviour is determined by the low-level physics of particle interactions. There are three common broad approaches to resolving this seeming conflict.</p><p><b>Compatibilism</b>: We can reconcile the two pictures. Free will exists and we are responsible for our actions. Free will is a weakly emergent phenomenon ultimately arising from particle interactions.</p><p><b>Hard Determinism</b>: There is no free will. We can divide this further into fatalism, where everything is pointless and reasons are fantasies, or something more like compatibilism, where we might deny free will but we still try to make sense of something like reasons and responsibility and decision-making.</p><p><b>Libertarianism</b>: Our behaviour is not governed by low-level physics. We are responsible for our own actions in a way that cannot be reduced to lower levels of description.</p><p>Personally, I'm somewhere between compatibilism and hard determinism. I think our behaviour is weakly emergent from low-level physics, but even so we can make sense of concepts analogous to decision-making, personal responsibility and free will. In such a world, I think it's a matter of convention whether we describe it by saying that free will exists or that it doesn't, and I have no strong preference. 
It depends only on whether we mean "compatibilist free will" or "libertarian free will".</p><p>But it's the third option I want to discuss, and in particular how strong emergence does or does not help make sense of it.</p><h2 style="text-align: left;">Strong Emergence and Libertarianism</h2><div>Perhaps a good place to start on this is the recent Mind Chat discussion at the top of this post between Philip Goff, Keith Frankish and their guest Helen Steward, who defends a thesis something like libertarianism (though she doesn't like the term, preferring to call herself an "agency incompatibilist"), making use of a strongly emergent idea of agent causation. I confess I have not yet had an opportunity to read her book, <i>A Metaphysics for Freedom</i>, but I have done my best to understand her viewpoint by following the aforementioned discussion and reading a little about it on reviews such as <a href="https://philosophynow.org/issues/105/A_Metaphysics_For_Freedom_by_Helen_Steward" target="_blank">this</a>. As such, I'm open to correction if I'm getting some of her approach wrong.</div><div><br /></div><div>Steward's view can be regarded as an instance of strong emergentism, in that she argues that agents such as humans and animals are the causes of their behaviour in a way which cannot be reduced to low-level physics. On this view, the behaviour of agents can only be explained with reference to concepts such as reasons and desires, and we will not get a correct picture of reality by simulating the low-level laws of physics alone. 
She describes a world where you could perform such a low-level simulation successfully as mechanistic and Laplacian (after <a href="https://en.wikipedia.org/wiki/Laplace%27s_demon" target="_blank">Laplace's demon</a>), and says that we no longer have any reason to think that the universe is like that since Newtonian physics has been overturned.</div><div><br /></div><div>As far as I can tell, the main motivation for denying that the universe is mechanistic is that we are obliged to believe in agent-level concepts such as free will, reasons, etc, while pointing at some difficulties in the compatibilist program for reconciling these vital concepts with a mechanistic universe.</div><div><br /></div><div>We are obliged to believe in free will in part due to the intuitive difficulty in dispelling such notions, but more importantly because the structure of society depends on notions such as personal responsibility and agency.</div><div><br /></div><div>The difficulties with compatibilism seem to be that overly simplistic models of decision-making in a compatibilist framework cannot make sense of various corner cases. For example, a compatibilist might say that we perform an action intentionally if the action is caused by our beliefs and desires. However, consider the following case proposed by Donald Davidson in 1973.</div><blockquote>A climber might want to rid himself of the weight and danger of holding another man on a rope, and he might know that by loosening his hold on the rope he could rid himself of the weight and danger. 
This belief and want might so unnerve him as to cause him to loosen his hold, and yet it might be the case that he never chose to loosen his hold, nor did he do it intentionally.</blockquote><p>According to Steward, given that we have such reasons to prefer something like strongly emergent agency causation, then the burden of proof should be on those who say that the universe is mechanistic.</p><p>I don't want to get into these motivations too much but I'll briefly say that just because society depends on the belief in agent causation doesn't really weigh in favour of its existence, any more than a society's dependence on a religious framework means that God exists. We may wish it existed, but believing something because we want it to be true is the informal fallacy of wishful thinking. Secondly, just because it is difficult to make sense of compatibilism in some corner cases doesn't mean that it is impossible to do so. Even if we cannot, it may still be the case that compatibilism works fine most of the time, which is perhaps all we need to build a functioning society. All-in-all, it seems to me that the reasons to believe in agency causation are extremely weak and we should disbelieve in it on grounds of parsimony alone. We have no need to postulate agency causation to explain any of our empirical observations, so we should not.</p><p>But setting all this aside, let's entertain the idea that agency causation does exist. 
Some interesting issues arise.</p><h2 style="text-align: left;">The Issues</h2><div>Three interesting and related issues in particular have been raised in recent discussions I've followed or taken part in on this topic.</div><div><ol style="text-align: left;"><li>Is strong emergence compatible with naturalism?</li><li>Does strong emergence imply that our current understanding of physics is incorrect?</li><li>Does strong emergence provide a sound basis for libertarianism?</li></ol></div><h3 style="text-align: left;">Strong Emergence and Naturalism</h3><div>Proponents of libertarianism are often taken to be denying naturalism, the proposition that supernatural phenomena do not exist. This is particularly so if they talk of a role for a soul, a spirit or a non-material mind of some kind in decision-making. Many of them are happy to accept that they are proposing a supernatural world view, particularly if arguing from a theistic perspective.</div><div><br /></div><div>This is not the case for Helen Steward. She believes that if agent causation exists, it is a natural phenomenon, and so libertarianism is compatible with naturalism in her view.</div><div><br /></div><div>However I think we need some sort of criterion to distinguish what we mean by natural or supernatural. A believer in ghosts, prayer or telepathy could claim to be a naturalist by saying that these are natural phenomena. If we can call anything "natural", then "naturalism" seems to be an empty position. I've <a href="http://disagreeableme.blogspot.com/2012/06/super-naturalism.html" target="_blank">delved into this topic</a> earlier and concluded that natural phenomena are those that can be said to be governed by, supervene on, or weakly emerge from laws of physics that are expressible mathematically. This means that, in my view, strong emergence is only compatible with naturalism if the laws on which it ultimately depends can be expressed mathematically. 
Even ghosts might be natural, but if so, they must be understood to arise somehow out of the laws of physics, whatever they may be. The hallmark of the supernatural is to suggest that some things are irreducibly mysterious and cannot be understood by the reductive methods of naturalistic science.</div><div><br /></div><div>Note that I posit weak emergence as part of a naturalistic world view, but this doesn't rule out strong emergence. As long as the strongly emergent laws are expressible mathematically, then they can act as a basis from which further higher level phenomena can weakly emerge.</div><div><br /></div><div>I think it is plausible for a mathematically robust strongly emergent law to govern the boiling of water. It's not too hard to imagine some absolutely objective criterion for what constitutes a molecule of water, and strongly emergent laws that describe how it interacts with other similar molecules and so fix the boiling point of water.</div><div><br /></div><div>It's a little less easy to see how strongly emergent laws would apply to agents. A water molecule is pretty clear cut. There is relatively little ambiguity about what counts as a water molecule and what does not (though there is some, e.g. whether heavy water counts). But mathematically rigorous strongly emergent laws to describe the behaviour of agents would necessitate that there be objective criteria by which we could unambiguously say what counts as an agent and what does not. This needs to be absolutely clear, as the universe needs to "know" when to apply the laws for particles and when to apply the laws describing agents. But this seems implausible -- even Steward admits that it's not clear what should count as a mechanism and what should count as an agent, e.g. which category applies to bacteria.</div><div><br /></div><div>Once you've identified what counts as an agent, you then need to have some mathematically precise laws to describe how the agent behaves. 
This might take the form of some sort of algorithm, for instance. This is perhaps less of a problem, but it does lead to the rather unintuitive picture where the high-level algorithm a system is running directly influences the low-level "hardware" running that same algorithm. This kind of top-down causation should seem thoroughly alien to a computer scientist such as myself, but it's hard to rule it out as a logical possibility.</div><div><br /></div><div>So strong emergence itself does seem to be compatible with naturalism, but agent causation seems a bit of a stretch.</div><div><br /></div><h3 style="text-align: left;">Strong Emergence and the Core Theory</h3><div>This topic came up in <a href="https://www.youtube.com/watch?v=qcCEZzNCNBI" target="_blank">an earlier Mind Chat</a> with Sean Carroll. It was a long and interesting discussion, but much of it was taken up with a somewhat unproductive impasse between Goff and Carroll on whether strong emergence would be in conflict with the core theory of modern physics, composed of the standard model of particle physics taken together with general relativity. They are due for a rematch on this topic in a few months, which I look forward to greatly. Briefly, Goff thinks that strongly emergent laws could co-exist with the core theory without overturning it, but Carroll disagrees. Carroll says that if strongly emergent behaviour exists in the universe, then physics as we know it is simply wrong. That doesn't mean of course that strongly emergent behaviour is ruled out, but he does want to be clear about what that would mean for our understanding of physics.</div><div><br /></div><div>I think this is perhaps a semantic disagreement on what it means for a theory of physics to be incorrect. We might want to say that a theory is correct but incomplete if it has a limited domain of applicability. 
<a href="https://personal.lse.ac.uk/ROBERT49/teaching/ph201/Week09_Cartwright.pdf">Nancy Cartwright</a> (if I understand her correctly) has proposed that nature may be a patchwork of different domains wherein different laws apply, each law being correct within its own domain. This picture might help to reconcile issues such as the famous incompatibility of quantum mechanics with general relativity, were it not for the fact that there seem to be situations where the two domains overlap (such as inside black holes or near the Big Bang). Agent causation would fit well with this patchwork view, as it proposes a new domain wherein the core theory no longer quite applies and we need a theory that accounts for agent causation instead. The point being that the core theory would still be correct in its own domain.</div><div><br /></div><div>The problem with this view is that the core theory purports to completely describe what all particles do everywhere in the universe. It doesn't recognise any limits to its domain. If the world is a patchwork, then the core theory is incorrect for this reason alone.</div><div><br /></div><div>Another tack may be that the core theory and strongly emergent laws work together somehow. On this view, the core theory is indeed acting on the particles in our brain, it's just that there are also influences from strongly emergent laws. On this view, again the core theory is correct, just insufficient. But again, this amounts to refuting the core theory, because the core theory purports to be an exhaustive account of the influences on particles.</div><div><br /></div><div>I'm more in sympathy with Carroll here. If strong emergence exists, then the core theory is wrong, just because the core theory as interpreted by physicists purports to be complete. 
Even so, if we accept Goff's position that a rejection of the core theory's pretensions to completeness does not amount to a refutation of the core theory itself, then there is a sense in which the core theory could still be correct in certain domains even if strongly emergent phenomena do exist. I suspect Goff (and Cartwright) may be underestimating the difficulty of making such a patchwork coherent though.</div><div><br /></div><h3 style="text-align: left;">Strong Emergence as a basis for Libertarianism</h3><div>I think there's a clear problem with naturalistic and strongly emergent agent causation as a basis for libertarianism, which is that if it is naturalistic, then it is necessarily mechanistic. We may not be able to explain our behaviour by working up from the low-level physics of particles, but we will be able to do so by working up from the higher-level laws of agent causation, whatever they turn out to be. As far as I can see, all the same issues arise here as arise for agents weakly emerging from low-level physics. Both visions are ultimately equally mechanistic.</div><div><br /></div><div>If behaviour weakly emerges from low-level physics, we can tell a story where this agent made a choice not because she was "bad" and "deserves to be punished" but because these particles interacted with those particles. If it emerges instead from laws of agent causation, then we can tell a story where the choice was because of the algorithm the agent was running. The exact same issues arise. Without agent causation, we have the problem that an agent is not ultimately responsible for the state or interactions of the particles driving her behaviour. With agent causation we can similarly say that the agent is not ultimately responsible for whatever the state is in the agent-causation picture. Just as with particle-level physics, the agent's behaviour is ultimately caused by mechanistic prior causes going back all the way to the Big Bang. 
Everything is still explained by mechanisms built upon the laws of physics. The only difference is what form the true description of the laws of physics takes, and whether those laws of physics recognise her as an agent.</div><div><br /></div><div>In the Mind Chat episode (1:55:22), Steward is posed a question from an audience member who may or may not be yours truly.</div><div><blockquote>Can we simulate or model top-down causation? If so, how is this not deterministic? If not, then it seems the rules or principles of top-down causation are not well defined.</blockquote></div><div>Steward's response:</div><blockquote><div>I'm not the sort of person who could write the program, that's for sure! But I would imagine it could be simulated, and I'm a bit puzzled by the suggestion that that would make it deterministic, because as I understand it simulation is often most valuable and useful in connection with systems that are thought to be indeterministic, or at least full of massive chaos.</div></blockquote><p>First of all, this is an unprepared off-the-cuff response, so we should be charitable. It looks like she has not considered this suggestion before. I assume that had I more time to explain the issues I perceive, and had she more time to explain her position, then we might do better to get to the heart of the issue.</p><p>In particular, we should not take her to task too much for conflating chaos and indeterminism. I completely agree with Keith's follow-up explanation that chaos is not at all incompatible with determinism.</p><p>But I think the issue here is not really with determinism per se but with whether we can capture the behaviour of agents mechanistically. 
If we can indeed faithfully simulate it, even if we do so by introducing a little indeterminism in the form of pseudorandomness (or even real randomness coming from some sort of quantum measurement), then what we are doing is implementing agent behaviour mechanistically.</p><p>Perhaps Steward would want to say that though we could simulate agent causation, such a simulation would not in fact give rise to a true agent, but just a simulacrum of one which may be good enough for many purposes even if not perfect. If so, we're back to the same dilemma. Either we can make an absolutely perfect simulation of an agent, in which case agent causation is mechanistic, or we cannot, in which case agent causation is not well-defined (and starts to look supernatural by my lights).</p><p>Perhaps Steward would say that a simulated agent would have behaviour indistinguishable from a true agent, but would nevertheless not be a true agent. In that case, it is possible to get perfectly agent-like behaviour without true agents. This means that we have no reason to suppose that true agents exist -- because any evidence we could find to support the existence of true agents would also support the existence of pseudo-agents. In particular, there would be no way for evolution to select for true agency over pseudo-agency. Rather as in the philosophical zombie argument for phenomenal consciousness, the only reason we have for supposing that true agents exist at all is unevidenced personal conviction that we ourselves are true agents, which is rather unsatisfactory.</p><p>As such, it seems to me that we are forced to conclude that agent causation is either supernatural or redundant. 
Strongly emergent naturalistic agent causation does not work as a basis for libertarian free will.</p>Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com40tag:blogger.com,1999:blog-5801278565856116215.post-73759706152644190022022-02-05T10:09:00.000+00:002022-02-05T10:09:33.175+00:00Applying Illusionism to Physical Reality<p>Many of you would be aware already of illusionism, if in fact there were many readers of this blog. But in case it's needed, illusionism, exemplified by Keith Frankish and Daniel Dennett, is an approach to the philosophy of consciousness which claims that qualia (the ineffable irreducible feelings associated with conscious experience, e.g. the redness of red) are entirely illusory and do not actually exist. As such, on this view the Hard Problem of Consciousness as defined by David Chalmers simply dissolves -- we need only explain why we believe we experience qualia, we do not need to explain how it is that qualia can be produced by physical stuff. As a bonus, illusionism may claim that even the idea of qualia is incoherent.</p><p>Illusionism is, I would say, a species of functionalism, and so is compatible with and largely overlaps with other glosses on functionalism such as computationalism. There may be corner cases where some illusionists may disagree with some computationalists, especially on how best to describe things, but the stories they tell about human consciousness are mostly compatible. 
As such, I count myself as an illusionist, a functionalist and a computationalist.</p><p>What I wanted to write about today is a strong rhyming I've noticed between the illusionist view of consciousness and how I think of the stuff of physical reality.</p><span><a name='more'></a></span><p>The idea that reality itself is fundamentally structural and nothing more is a view known as Ontic Structural Realism (OSR), as exemplified by James Ladyman and Don Ross in the academic book <i>Every Thing Must Go: Metaphysics Naturalized</i>. I've written about a related idea from Max Tegmark <a href="http://disagreeableme.blogspot.com/2013/12/the-universe-is-made-of-mathematics.html">on this blog</a> before, the Mathematical Universe Hypothesis (MUH). The MUH is in my view just a slightly stronger claim than OSR. While OSR asserts that all that physically exists is structure (in the mathematical sense), the MUH proposes that not only is our universe a structural/mathematical object, but that it only exists at all in an abstract platonic* sense. On the MUH, the universe appears to be physical to us just because we are in it, and as a corollary all other possible universes exist in the same way (I don't think OSR tends to emphasize or necessarily accept this multiverse idea if it is discussed at all). As noted previously, I accept the MUH which means I'm also committed to OSR.</p><p>I see a number of parallels here between OSR/MUH and illusionism, so much so that I feel that to be consistent anyone who accepts one should perhaps accept the other. More precisely, I think illusionism/computationalism implies the MUH. 
As such, you could take this to be an argument for the MUH if you are an illusionist, or an argument against illusionism/computationalism if you reject the MUH.</p><p>The parallels I see come into focus most sharply if I construe the MUH view in particular to be eliminating the concept of objective physical existence, or at least deflating it in much the same way that illusionists deflate qualia/consciousness. But even the weaker OSR eliminates a similar concept -- that the structure of reality needs non-structural "physical stuff" to ground it.</p>
<style type="text/css">
.parallels {
font-size: 80%;
border-collapse: collapse;
}
.parallels td, .parallels th {
padding: 10px;
}
.parallels tbody td, .parallels tbody th {
text-align: left;
vertical-align: top;
border-top: 1px solid;
}
</style>
<table class="parallels">
<thead><tr>
<th width="18%">Parallel</th>
<th width="41%">Illusionism</th>
<th width="41%">OSR / MUH</th>
</tr>
</thead>
<tbody>
<tr>
<th>Illusion</th>
<td>Qualia as we intuitively conceive of them are illusory.</td>
<td>Physical stuff/existence as we intuitively conceive of it is illusory.</td>
</tr>
<tr>
<th>Explanation</th>
<td>We don't need to explain how qualia arise. We only need to explain why we perceive them to exist.
</td>
<td>We don't need to explain what "breathes fire into the equations" of physics or why this world physically exists, or what physical stuff really is. We only need to explain why we perceive it as existing physically.
</td>
</tr>
<tr>
<th>Substrate independence</th>
<td>Consciousness is substrate independent.
</td>
<td>Physical reality is so substrate independent it doesn't even need a substrate!
</td>
</tr>
<tr>
<th>Abstractness</th>
<td>Consciousness is realised by and characterised by certain kinds of "functions" rather than certain kinds of stuff, and so is analogous to software, making it somewhat abstract in nature.
</td>
<td>Our physical reality is realised by certain kinds of structural relations rather than certain kinds of relation-bearing stuff, making it somewhat abstract in nature.
</td>
</tr>
<tr>
<th>Zombies</th>
<td>With regard to philosophical zombies (a thought-experimental entity physically identical to us but which does not experience consciousness), we have two options. We can either say that zombies are inconceivable and cannot exist in any possible world, or we can go in completely the opposite direction and say that we are all zombies. These are just two ways of expressing the same fundamental idea -- that illusionism collapses the distinction between humans and zombies. So we can say that either we're all humans or we're all zombies because there is no difference. The concept that we use to distinguish between them, namely phenomenal consciousness, is rejected.
</td>
<td>On the MUH, if not OSR, the parallel to a philosophical zombie is a possible world which does not physically exist, which I will call a "zombie world" (a possible world is any world which could exist or could have existed without contradiction or incoherency). With regard to zombie worlds, we have two options. On the MUH, we can either say zombie worlds are inconceivable, or we can say that all possible worlds (including this one) are zombie worlds. Either all worlds are physical or all worlds are abstract because there is no difference. The concept that we use to distinguish between them, namely objective physical existence, is rejected.</td>
</tr>
<tr>
<th>Privacy</th>
<td>The problem of other minds is that it isn't possible to determine empirically if other beings are conscious because consciousness is intrinsically subjective. One might therefore say that there is no such thing as objective consciousness. We are only really conscious from our own point of view. On functionalism, we should assume that entities that process information much as we do are likewise conscious (if only to themselves).</td>
<td>The problem of other worlds is that it isn't possible to determine empirically if other causally-disconnected universes physically exist. One might therefore say that there is no such thing as objective physical existence. A world only really physically exists from a point of view within it. On the MUH, we should assume that other possible worlds are likewise physical (if only from points of view within).</td>
</tr>
<tr>
<th>Objection!</th>
<td>Illusionists are not impressed by the objection "How can we perceive an illusion in the first place if we are not conscious?". This is because they believe that we can give a functional account of beliefs and perception.</td>
<td>MUH proponents are not impressed by the objection "How can we perceive an illusion in the first place if we do not physically exist?" This is because they believe that we can be grounded in a platonic mathematical account of existence.</td>
</tr>
</tbody></table>
<p>Again, the parallels seem so strong to me that I guess it's no wonder I'm attracted to both illusionism and the MUH. But I wonder how one could accept one without accepting the other, and I think most illusionists probably do reject the MUH. As an example, while not an illusionist per se, I know Sean Carroll is somewhat sympathetic to illusionism but seems to regard the MUH as nonsense. I'd love to know what Frankish or Dennett think of it but I haven't seen anything from them on it.</p><p>But this curious extended analogy goes further than that. As I've written before, there is a <a href="http://disagreeableme.blogspot.com/2016/02/putnam-searle-and-bishop-failure-of.html" target="_blank">potentially fatal flaw with computationalism</a> which I think can only be <a href="http://disagreeableme.blogspot.com/2016/03/rescuing-computationalism-with-platonism.html">resolved by recourse to the MUH</a> or some other sort of modal realism (the idea that all possible worlds physically exist). So in my view, illusionists should also be MUH proponents to be consistent, and computationalists should also be MUH proponents to be coherent.</p><p><span style="font-size: x-small;">* I prefer small-p "platonic" to big-P "Platonic" to indicate that I don't necessarily buy into much of Plato's philosophy and I'm just using it as a useful adjective for mathematical realism.</span></p>Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com9tag:blogger.com,1999:blog-5801278565856116215.post-56163346758242298982021-12-16T11:54:00.005+00:002021-12-16T12:03:56.230+00:00The Computer Metaphor<p>Is the brain actually a computer? Or is this a metaphor or analogy which is often taken too literally? This issue has come up a few times in the past couple of years, and I wanted to give my take because some common responses do not seem right to me.</p><p>I'm going to argue the brain really is literally a computer and that this is not a metaphor or analogy at all. 
Many of the experts who disagree with this are neuroscientists, biologists and philosophers. We have much to learn from them about how the brain works, but that's not the only issue. To answer the question of whether a brain is a computer we also need to consider what a computer is, and I feel this side of the equation is too often neglected.</p><span><a name='more'></a></span><p>Reasons to reject my thesis tend to fall into a number of common categories.</p><h3 style="text-align: left;">It's just the latest analogy</h3><div><br /></div><div>Matthew Cobb is an excellent communicator on the brain and a proponent of the view that analogies to various types of technology such as hydraulic control, telegraphs and now computers have both helped and hindered our understanding of the brain over the centuries.</div><div><br /></div><div>The idea that the brain could cause distant body parts to move as occurs in hydraulic automatons was proposed by Descartes. This may have moved the discussion on by outlining a sketch of an idea about how the brain might control the body, but we are clearly not literally hydraulic automatons. The telegraph was later proposed as a better analogy for how the brain sends electric signals along wires. But the nervous system is not a telegraph network either. Similarly, while the brain as computer may be an informative analogy to a point, for Cobb the brain is not literally a computer and thinking it is may be holding us back. Analogies are never perfect, and taking them too seriously can lead us in the wrong directions.</div><div><br /></div><div>I enjoyed his book "The Idea of the Brain" a great deal when I read it, but I have a nit to pick with the part discussing this subject, an extract of which forms <a href="https://www.theguardian.com/science/2020/feb/27/why-your-brain-is-not-a-computer-neuroscience-neural-networks-consciousness" target="_blank">this article in the guardian</a>, even while I think he makes a number of excellent points. 
Cobb convincingly argues that the computer analogy may be leading researchers astray when they take concepts from computer science such as encoding/decoding too literally and search for analogues in the brain. The tendency to treat neuroscience as akin to the project of reverse-engineering a piece of computing technology is not likely to be fruitful.</div><div><br /></div><div>Where we disagree is in what we mean by the term "computer". It seems to me that what Cobb has in mind is something like a twenty-first century technological artifact, but when I say the brain is a computer I mean something more general by the term, encompassing everything from the Antikythera mechanism to calculators to iPads to <a href="https://en.wikipedia.org/wiki/DNA_computing" target="_blank">DNA computers</a>, quantum computers and beyond. To me, a computer is anything that processes information in a way that could in principle be accomplished by a Turing Machine, or more specifically, anything which has this kind of information processing as its primary purpose. (<a href="https://medium.com/the-spike/yes-the-brain-is-a-computer-11f630cad736" target="_blank">This article from Blake Richards</a> goes into greater detail on this idea and is well worth a read).</div><div><br /></div><div>So, while thinking of the brain as something like a modern computing device may indeed be just the latest in a series of analogies, this does not mean that a brain is not actually a computer in the broader sense. I guess the reason people in this debate talk past each other is that the term is overloaded, and explaining exactly what we mean is cumbersome. Even so, it seems to me that anyone who says the brain is a computer is clearly using the generic term.</div><div><br /></div><div>The same is not true for those who deny that the brain is a computer. 
There are ways to deny that the brain is even a generic computer, and so that's what I'd be looking for in an article entitled "Why your brain is not a computer". But imagine seeing an article called "Why your heart is not a pump" and then reading a description of all the ways the heart is unlike a bicycle pump. Based on the arguments in Cobb's article, what he wants to say could perhaps be more accurately summarised as "The brain is quite unlike contemporary computing technology in a number of ways", which is accurate but not exactly catchy. As such, I don't really begrudge Cobb his phrasing too much, but my feeling is that the same charity is often not extended to those like me who make the contrary claim that the brain <i>is</i> a computer.</div><div><br /></div><h3 style="text-align: left;">The brain does not process information like a computer, though</h3><div><br /></div><div>Computers stereotypically process information by following algorithms, which are step-by-step "recipes" which can be decomposed into a sequence of many trivial atomic operations. The messy, chaotic and absurdly parallel computing environment of a biological brain is anything but classically algorithmic. When looking at how a brain "calculates" the predicted trajectory of a ball, it does not multiply variables or arrange information in neat data structures. Its behaviour is probabilistic and unpredictable in detail. Even if in principle a Turing Machine or some other computer could perform the same information processing function as the brain by following some algorithm, the fact is that this is not how the brain itself does it. To call it a computer obscures this fact.</div><div><br /></div><div>I don't accept this, because as long as the laws of physics are computable (a claim which I concede may be disputed), then there is some algorithm which would simulate the operation of the brain. 
In virtue of running this algorithm, a classic Turing Machine could perform the same information processing function as the brain, and in much the same way as the brain, and do so by following an algorithm.</div><div><br /></div><div>We can interpret the brain as an instantiation of a machine computing this algorithm -- as a computer simulating a brain. If there are variables and data structures in such an algorithm, they are not tracking the position or velocity of objects such as balls in flight but instead the state of neurons and neurotransmitters. If balls are represented at all in the algorithm (and I expect they are), it is at a higher emergent level far removed from the algorithm itself.</div><div><br /></div><div>This "simulation" gambit is why I think it's generally fair to call any information processing system a computer even if computers follow algorithms and the system does not at first glance appear to be following an algorithm.</div><div><br /></div><h3 style="text-align: left;">But then everything is a computer!</h3><div><br /></div><div>It's true that we can interpret anything as a computer, computing the function that describes its own behaviour as I just did for the brain. It is generally agreed that such interpretations are not usually particularly useful. As such, it's important to note that I am not merely claiming that the brain can be interpreted as a computer, I am claiming that it <i>is</i> a computer. There's a subtle distinction here.</div><div><br /></div><div>This distinction is partly teleological. As noted before, for me it is important for a computer that computation be its primary (or at least a) purpose. Here I can foresee objections from some quarters that biological evolution is mindless and so has no purposes, but here I would just quote Dennett's writing on <a href="https://en.wikipedia.org/wiki/Intentional_stance#Dennett's_three_levels" target="_blank">the Design Stance</a>. 
If the heart is a pump, it is not just because it happens to push fluids around, it is because it has evolved to do so.</div><div><br /></div><div>A secondary consideration is that computers are generally engaged in information processing which is in some way about more than themselves. Bracketing <a href="http://disagreeableme.blogspot.com/2016/02/putnam-searle-and-bishop-failure-of.html" target="_blank">concerns from the likes of Putnam, Searle and Bishop</a>, a rock is most easily interpreted as computing what it is to be that precise rock, but real computers are engaged in more diverse computing tasks, which may be about prime numbers, stock prices or even how to find a mate for the organism hosting it. As discussed for the brain, what the computation is about may not be very obvious from inspection of the lowest level descriptions of the system, but when we call something a computer we do so because we have reason to believe that the objects of the computation are in some way implicit in its behaviour nevertheless.</div><div><br /></div><div>Ultimately, there may be no absolute fact of the matter on what is and is not a computer. But it makes sense to regard as actual computers anything which appears to be processing information which is about something else and especially which has this as its purpose in some sense.</div><div><br /></div><h3 style="text-align: left;">So what if the brain is a computer?</h3><div><br /></div><div>Even if I can justify the claim that the brain is a computer in a way a rock is not, it may yet seem a little anaemic. I seem to be saying little more than that the brain is processing information for the body, which is not news to most people.</div><div><br /></div><div>I don't think the claim is so empty. There are a few ways in which we can imagine the claim might be false.</div><div><ol style="text-align: left;"><li>If brains process information in ways that could not even in principle be matched by a Turing machine (e.g. 
Roger Penrose thinks that human mathematicians can reason in ways no computer ever could)</li><li>If brains have another primary purpose (e.g. Descartes' idea that the brain is an organ which communicates with the soul via the pineal gland).</li></ol><div>These days, it's only really the first claim that is relevant. It seems that most people do agree that brains have evolved primarily to process information. I won't discuss these ideas further in this post other than to say that I disagree with them. </div><div><br /></div><div>I can see a further charge that it is reductive to call the brain simply a computer because this ignores a number of non-computational functions it provides. A slightly facetious list might be something like:</div></div><div><ol style="text-align: left;"><li>The brain acts as a gland secreting hormones</li><li>The brain's mass aids in balance</li><li>The brain keeps us from getting too overweight by burning calories</li></ol></div><div>In light of such considerations, I'm happy to concede that the brain may not be <i>just</i> a computer. But for my purposes, in discussions pertaining to philosophy of mind, by far the most salient fact about the brain is that it is a computer. This is because I believe it is by virtue of its computations that we are conscious.</div><div><br /></div><div>As an aside, I think it's still a computer even if this is not the case and panpsychism (for instance) happens to be true. The brain may be the almost miraculous site where a unified phenomenal consciousness comes together, but it hasn't evolved for this purpose. Evolution is blind to everything but behaviour. 
What evolution needs from the brain is just that it be a computer, so arguably this is its primary purpose even if its ability to produce qualia and phenomenal consciousness means more to us.</div>Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com13tag:blogger.com,1999:blog-5801278565856116215.post-4877800199530729992020-11-17T09:27:00.000+00:002020-11-17T09:27:32.013+00:00I was wrong about fine-tuning<p>Warning: this post will only make sense to anyone who has read my last two posts. I may take it down after a while as it's more of a reference to keep track of an ongoing discussion.</p><p>I've been playing a bit of analogy tennis with Philip Goff and Keith Frankish on twitter. You know how it goes, one person provides an analogy to explain their way of thinking, then another person proposes an alternative etc. It's been interesting, and I think I'm starting to get some clarity on how we should reason about observing unlikely events and guessing about how many chances the event had to arise.</p><p>For a start, it seems clear that the Total Evidence Requirement originally proposed by Roger White is inadequate. It just doesn't work in general, even if it does seem to suggest the right answer in some cases.</p><p>Another outcome is that I am less certain that Goff's/White's arguments from analogy fail, as long as we grant the idea of essential identity of the universe. In fact I think I was most likely mistaken in my analysis in earlier posts.</p><p>Let's grant essential identity for now. I'm going to assume for the sake of argument that it makes sense to talk of "this universe" independently of its particulars, though I don't actually accept this. 
This means accepting that had this universe (U) been utterly different, and had another universe (V) been structurally identical to this one (so with its own copies of you and me and everything else we can see), then it makes sense to say that U is still this universe and V is not.</p><p>In general, for these analogies, I think we can better see what's going on if we parameterise them with variables of the form:</p><p></p><ul style="text-align: left;"><li>T: some type of improbable event</li><li>M (hypothesis): There are lots of opportunities for events of type T to occur.</li><li>S (hypothesis): There is only one opportunity (or very few opportunities) for events of type T to occur.</li><li>E1 (observation): Some event of type T has occurred</li><li>E2 (observation): Some specific opportunity has resulted in an event of type T</li><li>X: some optional selection effect</li></ul><p></p><p>This should help us to judge whether the analogies are any good or not and, if not, why not. In all analogies, we're not supposed to have any specific prior beliefs about M or S, or at least the issue is not our absolute belief but whether our observations cause us to update our credence one way or another or not at all. I'm not going to present these analogies in chronological order of proposal; rather, I'll try to put some structure on them.</p><p><span style="font-size: medium;">Fine-tuning</span></p><p></p><ul style="text-align: left;"><li>T: A universe is improbably finely-tuned for life</li><li>M: There are many actual universes with random tuning</li><li>S: There is only one actual universe</li><li>E1: Some universe is fine-tuned</li><li>E2: This universe is fine-tuned</li><li>X: We can only observe any universe at all if E2 occurs. On M, other observers could have made observations if E1 occurred but not E2.</li></ul><div><span>Many people, including cosmologists and philosophers, assume that E2 should increase credence in M. 
Philip Goff and Roger White think it does not.</span></div><div style="font-size: large;"><br /></div><div><span style="font-size: medium;">Analogy: Monkey typing (<a href="https://twitter.com/Philip_Goff/status/1328468050560102404">from Philip Goff on twitter</a>)</span></div><div><ul style="text-align: left;"><li><span><div><span>T: </span>A monkey types an improbably coherent sentence of English due to random chance</div></span></li><li><div>M: Lots of monkeys are attempting to type sentences.</div></li><li><div>S: Only one monkey (Joey) is attempting to type sentences</div></li><li><div>E1: Some monkey has typed an improbably coherent sentence of English</div></li><li><div>E2: Joey has typed an improbably coherent sentence of English</div></li><li><div>X: Unspecified.</div></li></ul><div>The point of this analogy is not to map directly onto the case of fine-tuning but to establish some common ground. Goff suggests that surely we can all agree that it would be absurd to conclude that E2 should increase credence in M in this case. Sounds plausible, but the selection effects if any would need to be thought through.</div><div><br /></div><div>It's a bit too loose in general, lacking some of the structure of the other analogies. It's not clear why M should be lots of monkeys as opposed to lots of attempts from Joey, or lots of generic opportunities for improbable coincidences that have nothing to do with monkeys. The issue is clouded somewhat because if all that has happened is that you've seen a monkey typing, there may be more natural explanations such as that it was trained. 
We can stipulate that it must be chance, but when we imagine the event it's hard to take that stipulation to heart in the face of such improbable evidence, so intuitions are liable to be pushed the wrong way.</div><div><br /></div></div><span style="font-size: medium;">Analogy: Joker Scenario B (in Philip Goff's </span><a href="https://www.philipgoffphilosophy.com/uploads/1/4/4/4/14443634/is_the_fine-tuning_evidence_for_a_multiverse_.pdf" style="font-size: large;" target="_blank">paper</a><span style="font-size: medium;">)</span><br /><p></p><p></p><ul style="text-align: left;"><li>T: A monkey types an improbably coherent sentence of English due to random chance</li><li>M: Lots of monkeys are attempting to type sentences. Each observer can observe only one monkey. Your monkey is Joey.</li><li>S: Only one monkey (Joey) is attempting to type sentences</li><li>E1: Some monkey has typed an improbably coherent sentence of English</li><li>E2: Joey has typed an improbably coherent sentence of English</li><li>X: Your observation of any result at all is conditional on E2 occurring. On M, other observers could have made observations if E1 occurred but not E2.</li></ul><div>This seems to be a good analogy to fine-tuning as long as we think the universe has an essential identity. It improves on the previous analogy by adding more structure in the text explaining the setup. The analogy is not just a mere translation of fine-tuning because it helps us to see what the fine-tuning scenario looks like if we take essential identity seriously, which some do not. But it doesn't intuitively settle the question for me, so other than that it doesn't help much. It seems plausible to me that we should increase credence in M. 
It seems obvious to Goff and White that we should not.</div><div><br /></div><div><span style="font-size: medium;">Analogy: Joe Bloggs wins the lottery (my previous post)</span></div><div><ul style="text-align: left;"><li>T: A lottery ticket happens to have the winning numbers</li><li>M: Lots of lottery tickets were sold this week</li><li>S: One lottery ticket was sold this week</li><li>E1: Some lottery ticket has the winning numbers this week</li><li>E2: Joe Bloggs' lottery ticket has the winning numbers this week</li><li>X: You will learn of any winning lottery ticket.</li></ul></div><div>In this analogy, there is wide agreement that E2 should raise credence in M. This analogy serves to show the problems with the Total Evidence Requirement (TER) proposed by White, but X seems to be disanalogous to fine-tuning so it doesn't help to resolve the argument. Keith Frankish also proposed a couple of good analogies to make this point about TER involving <a href="https://twitter.com/keithfrankish/status/1327789291292995584">umbrellas</a> and <a href="https://twitter.com/keithfrankish/status/1327959104804057088" target="_blank">penguins</a>, but they seem less relevant to fine tuning so I'll leave them out.</div><div><br /></div><div><span style="font-size: medium;">Analogy: You win the lottery (I) (my version)</span></div><div><span style="font-size: medium;"><br /></span></div><div>As before, except:</div><div><ul style="text-align: left;"><li>E2: Your lottery ticket has the winning numbers this week</li></ul></div><div>The purpose of this analogy is to illustrate that it shouldn't necessarily matter whether E2 involves the observer directly. If an analysis works from the third-person perspective, why not from the first? I am convinced that E2 should raise credence in M, but I'm not sure how convincing this is to others. 
In any case it's still not analogous to fine tuning given essential identity because the selection effect is different.</div><div><br /></div><div><span style="font-size: medium;">Analogy: You win the lottery (II) (Goff's version)</span></div><div><span style="font-size: medium;"><br /></span></div><div>As before, except:</div><div><ul style="text-align: left;"><li>X: You will only learn of a win if you win yourself. On M, other observers could have observed wins if E1 occurred but not E2.</li></ul><div>The point of this analogy is to correct my lottery analogies by bringing them back in line with fine tuning. It does so to a point, but not perfectly. X is subtly different, as you can observe your failure to win. This may or may not affect matters. Even so, I can't help but notice that if S is true some weeks, and M is true other weeks, then most if not all winners would be correct to guess M rather than S. But I suspect there may be some confounding issues here. More on this later.</div></div><div><br /></div><div>I had some analogies that were trying to push the point that the universe doesn't have an essential identity, but I want to leave those out for now because I'm more interested in understanding how we should think about cases where there is an identity or where the observer is pre-selected.</div><div><br /></div><div><span style="font-size: medium;">Revisiting the lottery analogy</span></div><div><br /></div><div>The crux of the issue for me lies in how we should interpret "You win the lottery (II)". Goff thinks that on winning the lottery, it is obvious that you should not raise your estimate of how many people entered. My initial analysis of "You win the lottery (I)" suggested otherwise. 
But on considering Goff's version of the analogy, I see that my analysis was based on the assumption that you would become aware of the lottery results regardless of who won, which is disanalogous to fine-tuning even if it does seem to cast doubt on TER.</div><div><br /></div><div>The lottery is a useful analogy because it occurs repeatedly, with varying numbers of participants. It's not too much of a stretch to imagine some weeks having very many tickets sold and some having very few. We can push that to an unrealistic extreme by assuming that M is true some weeks and S is true others. As noted, most if not all winners will find themselves in weeks where M is true. Winners should therefore guess that M is true.</div><div><br /></div><div>This is very unintuitive though, because it seems wrong to suggest that the number of other people who bought tickets should have any effect on whether you have won yourself. Perhaps intuition is onto something.</div><div><br /></div><div>In the real world, the only way the lottery could swing between extremes with a pool of millions of players is if the decisions of players to play or not to play are correlated. It may be driven by the weather, or by the size of the jackpot, or the economy, or pandemic lockdowns, who knows. So whether other people play or not may not have much to do with why you have won so much as that you have played at all. The basic fact that you have bought a ticket suggests that M is probably true. This is enough to account for why most winners would be right to guess M. Most ticket purchasers will be right to guess M.</div><div><br /></div><div>The only way we can rule out this effect is to be sure that your decision to purchase a ticket has nothing to do with the rest of the world. The simplest way to do this is to stipulate that for some reason you are in a very special position such that you alone always buy a ticket. When S is true, you must then be the sole purchaser. 
It seems you are as likely to win in an S-week as an M-week, so you alone should not update your credence for M based on E2, even if other winners should.</div><div><br /></div><div>How to relate these issues back to fine-tuning? As noted previously, we already have the disanalogous point that it is possible for you to see an unsurprising outcome. It's also unclear whether we can port over our understanding of the correlations between the decisions of purchasers to play or not to play to fine-tuning.</div><div><br /></div><div>When it comes to fine-tuning, we know nothing at all about the prior probability for a universe to form. To take this back to our lottery example, this would be like not knowing how many players like you buy a ticket every single week. Though you are in a special position, you don't know you are. For this point we can scrap most of the S-week and M-week stuff and just think about the question about how special you think you are -- are you really likely to be the only player in the national lottery in a given week? It also seems to have nothing to do with winning. Maybe the best reason to believe M is the Copernican principle of mediocrity -- you probably shouldn't believe you're special without good evidence. If I translate this back to the multiverse scenario, this is just the argument that you shouldn't believe that only your own possible world is so special as to be singled out for actual existence. This may be an argument for the multiverse but it has nothing to do with fine-tuning.</div><div><br /></div><div>I think I'm ready to call it a wash or concede the point. 
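The lottery reasoning above can be checked with a quick simulation. This is only a sketch with made-up numbers of my own (ticket counts, win probability, and a fair split between M-weeks and S-weeks are all illustrative assumptions): each ticket wins independently with the same small probability, regardless of how many tickets are sold that week.

```python
import random

random.seed(0)
WEEKS = 20_000
N_OTHERS = 200   # other tickets sold in an M-week (hypothetical figure)
P_WIN = 0.02     # per-ticket chance of winning (hypothetical figure)

your_wins = your_wins_in_m = 0
all_wins = all_wins_in_m = 0

for _ in range(WEEKS):
    m_week = random.random() < 0.5  # M-weeks and S-weeks equally common
    # You are the special purchaser: you always buy exactly one ticket.
    you_win = random.random() < P_WIN
    # In an M-week, many other tickets are in play; in an S-week, none.
    others = sum(random.random() < P_WIN for _ in range(N_OTHERS)) if m_week else 0
    your_wins += you_win
    all_wins += you_win + others
    if m_week:
        your_wins_in_m += you_win
        all_wins_in_m += you_win + others

# Winners in general overwhelmingly find themselves in M-weeks...
print("share of all wins occurring in M-weeks:", all_wins_in_m / all_wins)
# ...but the always-playing purchaser wins equally often in both kinds
# of week, so her own win tells her nothing about M versus S.
print("share of your wins occurring in M-weeks:", your_wins_in_m / your_wins)
```

On these numbers almost every win lands in an M-week, while only around half of the special purchaser's wins do, which matches the asymmetry described above: winners in general should guess M, but you alone should not update on E2.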
I still think that fine-tuning suggests a multiverse, but perhaps only because of the identity issues.</div><p></p>Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com0tag:blogger.com,1999:blog-5801278565856116215.post-22507843705055675782020-11-15T06:40:00.001+00:002020-11-15T06:40:13.930+00:00More on Fine-Tuning from Philip Goff<table align="center" cellpadding="0" cellspacing="0" class="tr-caption-container" style="margin-left: auto; margin-right: auto;"><tbody><tr><td style="text-align: center;"><a href="https://pbs.twimg.com/profile_images/1300914822364758018/JeLJYjid_400x400.jpg" imageanchor="1" style="margin-left: auto; margin-right: auto;"><img border="0" data-original-height="400" data-original-width="400" src="https://pbs.twimg.com/profile_images/1300914822364758018/JeLJYjid_400x400.jpg" /></a></td></tr><tr><td class="tr-caption" style="text-align: center;">Philip Goff</td></tr></tbody></table><br /><p>To follow on from my previous post, Philip Goff has now released a <a href="https://www.philipgoffphilosophy.com/uploads/1/4/4/4/14443634/is_the_fine-tuning_evidence_for_a_multiverse_.pdf">preview</a> of an academic paper in progress on this topic. Currently titled "Is the Fine-Tuning Evidence for a
Multiverse?", this paper is a defence of the very same <a href="http://web.mit.edu/rog/www/papers/fine_tuning.pdf">Roger White paper</a> I had been critiquing, and the main argument it tackles relates to some points I made.</p><p>I thought I should give my take, but before I do I'll note that I'm not sure if the paper preview link may become stale at some future date or if the paper may be revised in ways that make my comments obsolete. In any case, if you're interested, you should read the paper and my previous post if you haven't already.</p><span><a name='more'></a></span><p><span style="font-size: large;">Overview of Goff's paper</span></p><p>I like the paper very much, in that Goff is as usual very good at explaining the issues involved and contributes a cogent and interesting argument to the understanding of this problem. His thinking seems reasonable and clear to me even though he starts out with very different initial commitments and so reaches different conclusions.</p><p>It's interesting to me how different our starting assumptions are. Generally, the pattern seems to be that I reject the meaningfulness or reality of a lot of intuitive concepts that Goff accepts, among them qualia, objective morality and the essential identity of people and things. I suspect this pattern may continue with concepts such as objective existence, objective aesthetic value etc. In Goff's book <i>Galileo's Error</i>, I was impressed by his arguments for panpsychism, but only if one is committed to the reality of qualia and the necessity of a substantive solution to Chalmers' Hard Problem of consciousness. In this paper, Goff's commitment to the notion of essential identity leads him to support White's argument against inferring a multiverse from fine tuning.</p><p>The main contribution of the paper is to address one particular criticism of White's paper presented in <a href="http://home.olemiss.edu/~namanson/This%20Universe.pdf">this paper</a> by Neil Manson and Michael Thrush. 
This objection appears to be similar to one I raised in my last post, namely that it's questionable to identify this universe in a way independent of its laws. What I said was:</p><blockquote><p><span face="Arial, Tahoma, Helvetica, FreeSans, sans-serif" style="background-color: white; color: #222222;">The idea that this universe could have had other laws of physics is also nonsense, as then it wouldn't have been this universe.</span></p></blockquote><p>Manson & Thrush don't put it so strongly, arguing only that</p><blockquote><p>there is no agreed account of the
nature and identity conditions of a universe</p></blockquote><p>where "identity conditions" in this context refers more or less to what criteria need to be met for this universe to remain the same universe (i.e. this universe) in counter-factual scenarios and over time. Manson & Thrush argue for caution, suggesting that White is mistaken in taking for granted that it is sensible to treat the identity of the universe in the way he does.</p><p>Goff proposes that we can do better than just shrugging our shoulders. We can make some inferences about what identity conditions might hold by looking at the physics of the most plausible multiverse scenario suggested by anthropic fine-tuning. This is the combination of eternal inflation with string theory, where each bubble universe is taken to have a random configuration of the laws of physics allowed by string theory. Goff refers to this scenario as Landscape Eternal Inflation (LEI).</p><p>Goff argues convincingly that there is little or no empirical reason to suppose that LEI is true other than (some say) fine-tuning. While there is some empirical support for an eternal inflation multiverse, there is no empirical support for the notion that each bubble universe has different laws of physics, as required by LEI. Whether we should give LEI any credence therefore depends almost solely on whether the most plausible identity conditions (what makes this universe this universe as opposed to some other) support White's argument or fine-tuning.</p><p>Manson & Thrush suggest that the most appropriate identity conditions could conceivably be what they call "Cosmic Essence", which is similar to my own view of identity (though possibly not exactly so, as we'll see). In this view, what identifies a universe is its laws of physics. It doesn't make sense to ask "what if this universe's laws of physics were different?" because then it wouldn't be this universe! It would make as much sense to ask "what if five were even?" 
You can't make five even and have it still be five. You can't change the laws of physics of a universe and still have it be that universe.</p><p>In my last post, I gently derided White's ideas about the relation between universes and their laws of physics as follows:</p><blockquote><p><span face="Arial, Tahoma, Helvetica, FreeSans, sans-serif" style="background-color: white; color: #222222;">Our universe was not waiting around waiting to be assigned laws of physics and wondering what laws it would get. Our universe is instead properly defined as the universe with these laws of physics, and with these people in it trying to explain fine-tuning.</span></p></blockquote><p>But Goff makes the point that I am arguably exactly wrong in these assumptions.</p><p>Following a suggestion from Manson & Thrush, Goff considers the idea that we ought to identify our universe as the universe growing out of a certain point in the eternally inflating hyperspace. It does seem to make sense to imagine that this incipient bubble universe could have happened to have had other laws of physics, and if so, it also makes sense to consider the possibility that our universe could have had other laws of physics and still have been our universe. Conversely, on this view of the conditions of identity, if another bubble had formed elsewhere in the multiverse and grown into an exact copy of this actual universe, then that would not be our universe and the people in it would not be us. It follows that while a multiverse might help explain why some universe should be fine-tuned, it wouldn't explain why this particular one is.</p><p>Having established some prima facie reason to accept this version of the identity conditions (which Goff names "Essential Origins"), Goff proceeds to give reasons to doubt that the Cosmic Essence view makes sense. I don't want to just cut and paste his arguments, which are admirably clear as they are and don't need rephrasing. 
But very concisely, the main issue is that Cosmic Essence is implausible because it implies that any two universes which happen to be identical in internal structure are in fact the same universe, which seems absurd to Goff.</p><p>However, Goff then charitably attempts to patch Cosmic Essence to see if this problem can be addressed. His suggestion is that we include the position of the universe within hyperspace as part of its identity conditions. Now, two identical bubble universes which are in different positions within hyperspace are no longer the same universe.</p><p>Unfortunately, while this solves one problem, Goff suggests that it raises another. I think Goff over-complicates this issue somewhat, but I think it boils down to the idea that we need to grant the pre-bubble location ("the seed") of the universe an identity in order to satisfy the "location" part of the conditions of identity, but we can't consider that seed to be identical with the universe because it doesn't yet have the universe's laws of physics, and counter-factually could have had other laws of physics. So we're being somewhat extravagant and ad hoc with our assumptions here. It would be more natural, Goff argues, to regard the seed as becoming the universe rather than giving rise to the universe or being replaced by the universe, in the way that an embryo becomes a human. As such we should reject these identity conditions.</p><p><span style="font-size: medium;">Critiquing the assumption of essential identity</span></p><p><span>First off, I'm not sure why this second problem is such a problem at all. If we want to examine the analogy to human conception a little further, then we can find similar problems there. We can trace human identity back as far as a zygote Z. But that zygote can split into identical twins A and B. By Goff's own argument, twins A and B are not identical to each other, and so they cannot be identical to Z either. 
It follows that Z cannot simply become A or B but must be replaced by them, which is exactly the situation that Goff thinks is ad hoc and extravagant in the case of the universe seed. Whatever strategy Goff adopts to deal with such issues as they pertain to human identity could presumably be extended to the identity of universes.</span></p><p>For instance, I can imagine that Goff could argue that the fission of an embryo constitutes a discrete event where one entity is destroyed and two new entities are created. But similarly I could argue that the crystallisation of the laws of physics in a bubble universe is such a discrete event, and that the identity of the universe does not exist prior to this. If there is a moment when a human identity can spring into existence then why not the same for universes?</p><p>But while I think this problem undermines Goff's argument, this is not how I would answer the challenge myself. Rather I would do away with this notion of essential identity altogether. In this I think I perhaps differ not only from Goff but also from Manson & Thrush. Indeed, my views on this topic are possibly so radical that I expect few to agree with me.</p><p>I won't defend my views in any detail here, other than to say that I reject essential identity of persons for much the same reasons that Derek Parfit famously outlined in <i>Reasons and Persons</i>, with discussion of thought experiments such as teleportation. I've written several blog posts already explaining what I think and why. But I'll try to outline what I think very briefly, first on the issue of personal identity and then circling back to the identity of universes.</p><p>I don't think there is any fact of the matter on what makes anything the thing it is as opposed to some other thing. I think such ideas are useful human constructs and no more. As such, I reject the apparent implication from Manson & Thrush that we need to know the correct identity conditions to assess White's argument. 
Instead I say that there are no such identity conditions. In particular, identity conditions should not be relied upon to support arguments about issues on which there presumably is a fact of the matter, such as whether there is a multiverse.</p><p>Without essential identity we are left only with which pragmatic concepts of identity are useful to us. In everyday life, our intuitive notions of identity are adequate and are rarely challenged. But challenged they are, in some circumstances. These rare real-life challenges include scenarios such as the bifurcation of embryos into identical twins, dissociative identity disorder and the alteration of personality or memory due to brain injury. Outside of personal identity, there are issues such as the ship of Theseus paradox.</p><p>Leaving familiar real-world problems aside, some more out-there philosophical and scientific ideas not only challenge our intuitions but push them to breaking point, as Parfit showed. One particular issue that strikes me is that on the many-worlds interpretation of quantum mechanics it seems that identities of not only embryos but conscious adult humans can bifurcate without limit. Whether or not this interpretation is correct, it seems that we ought to consider how to think of identity in the many worlds scenario, just because it remains a possibility as far as we know. Again, we should not let our intuitions on the subject guide us to accept or reject this interpretation. Our intuitions don't dictate how the universe really is, they are only heuristics for navigating our ancestral environment. If many worlds is coherent even though it conflicts with our intuitions, we should be ready to rethink our intuitions.</p><p>The concept of personal identity I find most useful is the idea that I am my mind, where my mind is some combination of my experiences, ways of thinking and memories. Insofar as this mind has developed continuously, I can extend this identity backward and forwards in time. 
So I say I am the same person as I was yesterday, and indeed in some limited sense I share identity with the original zygote that became me. Even though I would say it didn't have a mind, its development was continuous with the development of my mind. So far, this seems pretty banal, but it has radical consequences if taken literally.</p><p>For instance, if the many-worlds interpretation (MWI) is true, my mind has not one but many futures. This means that identity can indeed bifurcate. One person can become two. Each future self will share an identity with the original. But they are not identical to each other. This appears to suggest that identity of persons is no longer quite transitive, unlike more rigorous mathematical notions of identity.</p><p>All the same, we can translate personal identity to mathematical identity in a couple of ways. We could restrict ourselves to talking only about what is identical to what at a particular time. In this way of talking, the me of yesterday isn't quite identical to the me of today. Apart from having different identity conditions because we exist at different times, we're also different in other ways (e.g. I might have lost some more hair). Another approach might be to consider only the identities of the overall structure traced out by each personal identity as it bifurcates again and again over time. So now, the me on one branch of the universe is just a part of the overall identity, and the me on another branch is another part of the same identity. We're the same person, but not the same part of that person. Whatever way we want to deal with it, it seems clear to me that there's nothing incoherent about the idea that personal identity can bifurcate, even if mathematical identity is supposed to be transitive.</p><p>My view also implies that mind uploading and teleportation are not ruled out by identity-related issues (they may be ruled out for other reasons). 
If I can successfully mind-upload or teleport, the result is me just as much as if I had undergone some more normal transition such as having a night's sleep or travelling. If the procedure has produced a duplicate, then each instance is just as much me as if my identity had bifurcated in the MWI scenario, although the duplicates then proceed with distinct identities from that point on.</p><p>Where it gets particularly weird is when we consider the situation where more than one physical person happens to have a mind with identical experiences, memories and ways of thinking over an extended period of time. Consider the possibility that space extends infinitely. If so, then it must be the case that there are infinitely many physical copies of my body and its environment, separated very widely, perhaps in time as well as in space. If so, I sincerely believe myself to be identical to all of them or none in particular. All of them are having the same experience, and there is no fact of the matter as to which of them I am. If half of them are suddenly wiped out in some sort of spontaneous vacuum collapse, for instance, it's of no great concern to me, as my mind will continue to be instantiated elsewhere (this is similar to the idea of quantum immortality).</p><p>How should we identify the universe, then? It is just the universe in which I am instantiated. But since a physical person with my mind could be instantiated in any appropriate universe, then "this universe" is just any universe which instantiates my mind. As such I am happy to bite the bullet on the absurd conclusion that Goff rejects, namely:</p><blockquote><p>P2: Any bubble universe (actual or merely possible) with the same physics as U is
identical with U.</p><p>This seems perfectly plausible to me, although we might want to stipulate that it's not only the laws of physics that matter but also the history or initial conditions. If two universes have the same laws of physics but different initial conditions, then they are presumably not the same universe. So, it does not in fact seem implausible to me to suppose that this universe could be instantiated many times throughout the Landscape Eternal Inflation multiverse, and that there could be more than one physical instance of U which we can deem to be the same universe with the same people within.</p><p>But do please note that I am not claiming that this is the objectively correct interpretation of identity. I claim rather that there is no objectively correct interpretation of identity because identity is a human intuitive construct that has no real bearing on reality. This is just the way of thinking about identity that I find most elegant and useful, given various thought experiments that seem to break more traditional or intuitive notions of identity.</p><p>If you find these ideas too radical, then perhaps you may have reason to accept Goff's argument. However, I think that even if you accept Goff's ideas about identity, it doesn't succeed in undermining the argument for the multiverse from fine-tuning, though it does indeed neutralise the observer selection effect. Instead, accepting all of White's and Goff's assumptions about identity merely places us in the epistemic position of a lottery winner. As discussed in my last post, and again later in this post, this might indeed leave us with reason to suspect that many people had entered the lottery even as it leaves us shocked that we have won it ourselves.</p><p>In fairness to White, he anticipated something like this response. 
White argues that this response won't do. I didn't discuss his reasons for rejecting it in my last post, but I'll do so now.</p><p><span style="font-size: medium;">The Total Evidence Requirement</span></p><p>The analogy White draws is as follows:</p><blockquote><p>Suppose I’m wondering why I feel sick today, and someone suggests that perhaps
Adam got drunk last night. I object that I have no reason to believe this hypothesis
since Adam’s drunkenness would not raise the probability of me feeling sick. But, the
reply goes, it does raise the probability that someone in the room feels sick, and we
know that this is true, since we know that you feel sick, so the fact that someone in
the room feels sick is evidence that Adam got drunk. Clearly something is wrong with
this reasoning.</p></blockquote><p>Goff dubs what has gone wrong the Total Evidence Requirement (TER). White says that "in the confirming of hypotheses, we cannot,
as a general rule, set aside a specific piece of evidence in favor of a weaker
piece." This principle may be correct, but I think the problem might be what constitutes "setting aside" a specific piece of evidence. Certainly we should never ignore any piece of evidence which might have a bearing on an issue. But some evidence is irrelevant and some is not. Where evidence is irrelevant, then it can be ignored.</p><p>The problem with the Adam example is that we are discarding evidence which is not irrelevant. If our only reason to think that Adam was drinking is that somebody is feeling sick, and yet we know that the somebody who feels sick is not Adam, then we have no particular reason to think that Adam was drinking. The evidence we are incorrectly setting aside is that the person who feels sick is not Adam. The specific piece of evidence exactly cancels out the generic piece of evidence in this case.</p><p>This is not our situation with anthropic reasoning applied to fine-tuning. Here, we infer that there are many universes from the fact that some universe is improbably fine-tuned. I claim that the specific identity of this universe is indeed irrelevant and can be set aside. Goff and White think it is not irrelevant, at least if the universe is our own, because they think that the multiverse hypothesis cannot explain why our specific universe is fine-tuned. This is supposed to undercut the motivation for the multiverse hypothesis, and as such is relevant on their view.</p><p>But there is another interpretation. First, as I said in my last blog post, and as Goff does himself, we can split our observations into two propositions. I'll quote Goff's language here as I reiterate the idea.</p><blockquote><p>E1: A universe is fine-tuned. 
</p></blockquote><blockquote><p>E2: U is fine-tuned (where ‘U’ rigidly designates the universe we live in).</p></blockquote><p>E1 is explained by the multiverse hypothesis, whereas E2 entails E1 (and so is partially explained by the multiverse hypothesis) but introduces further specific information which is not explained by the multiverse hypothesis. My view is that we should interpret E1 as evidence for the multiverse hypothesis, while remaining surprised by the specifics of E2 (at least if we grant for the sake of argument that we should accept the essential identity of U). I see no reason for E2 to cancel out E1 as happened in the analogy to Adam's drinking. To be analogous to Adam's drinking, we would have to be ignoring some specific reason to reject the link between E1 and the multiverse hypothesis. Such a reason might be an alternative explanation of why our specific universe is fine-tuned, e.g. hard evidence of intelligent design. But the mere fact that E2 remains surprising is not a reason to reject the link between E1 and a multiverse. As such we are not "setting aside a specific piece of evidence" E2 "in favor of a weaker piece" E1. Instead, we simplify by deriving E1 from E2 because the specifics of E2 are irrelevant in our inference to the multiverse. Conversely, the specifics of who was sick were not irrelevant to our inference on whether Adam was drinking.</p><p>This is all quite abstract, but the fallacy in White's application of TER is easy to make vivid. Again I'll refer to the lottery example. Suppose that the chance of any particular ticket winning is very small, and is independent of the number of tickets sold. Suppose also that we have no idea how many tickets were sold. When we learn that Joe Bloggs, a stranger, has won the lottery, should we suppose that many tickets were sold or perhaps as few as one? 
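Before answering, it helps to make the arithmetic concrete. Here is a quick Bayesian sketch under assumptions of my own choosing (a fair prior between the hypotheses, a million tickets under the "many tickets" hypothesis, and a fixed per-ticket win probability); the numbers are illustrative, not from White or Goff.

```python
# M: a million tickets were sold; S: a single ticket was sold.
# Each ticket wins independently with probability p, whatever the sales.
p = 1e-6                           # per-ticket chance of winning
n_many, n_single = 1_000_000, 1

prior_m = prior_s = 0.5            # fair prior between M and S

# Likelihood of E1 ("some ticket won") under each hypothesis.
like_m = 1 - (1 - p) ** n_many     # roughly 0.63
like_s = 1 - (1 - p) ** n_single   # just p itself

# Bayes' theorem: the generic evidence E1 overwhelmingly favours M.
posterior_m = like_m * prior_m / (like_m * prior_m + like_s * prior_s)
print("P(M | some ticket won) =", round(posterior_m, 6))
```

The likelihood ratio between the hypotheses is enormous, so learning only the generic fact that somebody won pushes the posterior for M very close to 1. This is precisely the inference that White's principle is supposed to block.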
I hope it is clear that we should suppose that many tickets were sold, because otherwise it is unlikely that anybody would have won.</p><p>And yet, the hypothesis that many people bought tickets does not explain why Joe Bloggs specifically won. According to White's reasoning, it is fallacious to ignore the problem of why Joe Bloggs won and to instead answer the question of why anyone won. The inference to the "many tickets" hypothesis would thus be a mistake. This is not quite analogous to E2 as long as the winner is a stranger. So, consider instead the case where you yourself are the winner. The only difference if you have won yourself rather than Joe Bloggs is that you have more reason to be surprised, because you presumably hold yourself to be more salient than Joe Bloggs. There is no satisfactory explanation for this surprising event. You were just lucky. The inference to the many tickets hypothesis is unaffected.</p><p>This is the same situation we are in with regard to explaining why this specific universe is fine-tuned. The inference to the multiverse is justified even if it is surprising that this specific universe is fine-tuned. 
But it is only surprising that this specific universe is fine-tuned if you accept a strong metaphysical concept of essential identity, which I do not.</p>Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com6tag:blogger.com,1999:blog-5801278565856116215.post-72610490208676932492020-11-06T10:55:00.005+00:002020-11-15T06:38:50.228+00:00On Fine-Tuning and the Multiverse<p><span style="font-size: large;">The Fine-Tuning Problem</span></p><p></p><div class="separator" style="clear: both; text-align: center;"><iframe allowfullscreen="" class="BLOG_video_class" height="266" src="https://www.youtube.com/embed/I-Uq_5Lj76Q" width="320" youtube-src-id="I-Uq_5Lj76Q"></iframe></div><p><br /></p>I've been thinking a lot lately about the problem of fine-tuning and how it may or may not be resolved by appeal to a multiverse. This has been prompted by recent correspondence with the philosopher Philip Goff and by the book I am currently reading, The Beginning of Infinity by David Deutsch. I hope to get around to discussing Deutsch's ideas in future, but for now I'm focusing on an argument put forth by Roger White and endorsed by Goff.<p></p><p>The fine-tuning problem is just that the laws of physics describing the universe seem to be fine-tuned so as to allow life to exist. While trying not to be too parochial in our assumptions about what life requires to exist, at least in a universe even vaguely like ours, it seems reasonable to assume that it must at least have time and a stable environment in which to evolve, access to sources of energy and a varied assortment of building blocks from which to assemble itself.</p><p>But if the physical constants were slightly different, then perhaps stars, heavier elements or atoms could not have formed. Perhaps the universe would never have expanded from its initial hot, dense state, or would have recollapsed too quickly, or perhaps it would have expanded too quickly for particles to come together at all. 
When we say the universe appears to be fine-tuned, we are claiming that it appears to be delicately balanced so as to allow complex evolving self-replicators (i.e. life) to exist -- that almost any tweak we imagine to the constants or the laws of physics would result in a simpler, lifeless universe. This curious observation seems to demand an explanation.</p><span><a name='more'></a></span><p><span style="font-size: medium;">Some other responses briefly considered and rejected</span></p><p>There are arguments about whether it is actually the case that the universe is fine-tuned. Some suggest that these claims are over-blown -- that life would find a way in most imaginable universes, or that this universe can hardly be said to be fine-tuned for life when most of it is lifeless. On the other hand, some suggest that it doesn't demand an explanation in any case, that there is nothing inherently special about a universe that supports life (the thought is that we only find it special because we are ourselves living things). Discussion of these options, as interesting as they might be, is outside the scope of this article. For now, let's assume that the universe is fine-tuned and that this demands an explanation.</p><p>So how to explain it? One possibility is that the laws of physics are somehow logically necessary for reasons not yet understood. If they are not contingent, then they would not require an explanation other than their logical necessity. This seems implausible to me, not only because I can't see why alternative laws of physics should be logically impossible, but also because it seems improbably coincidental that the only logically possible laws of physics should happen to be those that support life. All the same, if it could be shown that this were the only logically possible universe, I might grudgingly have to accept that this resolves the fine-tuning problem (albeit unsatisfyingly). 
But until that is actually proven, I think it's fair to regard this approach as highly dubious. Furthermore, I reject the idea that fine-tuning gives us independent reason to suspect that the laws of physics are logically necessary.</p><p>Another possibility is that the universe was designed by a god or some programmer running a universe simulation. Philip Goff has suggested that the universe <a href="https://www.youtube.com/watch?v=I-Uq_5Lj76Q">might even have designed itself</a> (the same video embedded above). Whatever the nature of the designer, this general design approach is worth discussing in more detail but it's not what I want to talk about here. I'm going to assume that it is false, because there seems to be a problem of infinite regress here. We would need to account for the origins and hence fine-tuning of the designer also. It also seems to fall afoul of Occam's razor, as the existence of a designer intelligent enough and powerful enough to design and create the universe is perhaps even more in need of an explanation than fine-tuning itself. More parsimonious explanations are available.</p><p>The physicist Lee Smolin <a href="https://en.wikipedia.org/wiki/Cosmological_natural_selection">has proposed </a>that universes evolve according to an almost Darwinian process, selected for their ability to generate black holes. The formation of black holes, he proposes, may create new child universes with slight variations of the parent's laws of physics, such that universes which can create more child universes will have more descendants and so on, giving rise to a multiverse where most universes end up adapted for black holes. Meanwhile, he argues that the conditions for the formation of black holes happen to coincide with the conditions for life, such that most universes appear to be fine-tuned for life. The main problem with this approach, it seems to me, is how the whole thing got started. 
Without a whole lot of random variation in different versions of the laws of physics, it seems unlikely that a singular primordial universe would have had any successors. And starting with a whole lot of random variation is just the condition of having a multiverse with varying laws, which may be enough to explain fine-tuning all by itself (as we'll see). There's no need to assume that black holes can create new universes or inherit and mutate the laws of physics of their parents, so again Occam's razor is an issue.</p><p><span style="font-size: large;">The Anthropic Solution</span></p><p>My preferred solution appeals to the (weak) <a href="https://en.wikipedia.org/wiki/Anthropic_principle">anthropic principle</a>, which <a href="https://disagreeableme.blogspot.com/2012/06/anthropic-principle-goes-wild.html">I have discussed before</a>. It may so happen that there are many universes with many different versions of the laws of physics, and if this is the case then we should not be surprised to find ourselves in a universe with laws of physics which can support life. We could hardly have found ourselves in a universe with laws that cannot. This is certainly a good explanation for why our home planet is so uniquely well-suited for life, and it's natural to think it may explain the hospitality of the universe in the same way.</p><p>The only thing that stops this argument from being a clear winner is that we don't know that we really are in such a multiverse, though there may be independent reasons for believing so. First, the hypotheses of cosmic inflation and string theory taken together suggest that there may be many bubble universes with many different variations of the laws of physics. Second, I've argued elsewhere that the Mathematical Universe Hypothesis and its multiverse follows from the assumptions of naturalism, mathematical realism and computationalism (and argued in turn for each of these assumptions). 
But does fine-tuning itself give us reason to believe in a multiverse?</p><p>I had always assumed so, but I must say thanks to Philip Goff for alerting me to arguments to the contrary, specifically <a href="http://web.mit.edu/rog/www/papers/fine_tuning.pdf">this paper</a> by Roger White. I find the arguments therein to be interesting and tricky to disentangle, but ultimately unconvincing.</p><p><span style="font-size: large;">Roger White vs the Multiverse</span></p><p>The paper is well worth reading, and I'm not going to deal with every point therein, but in summary, I think the general problem with the arguments proposed by White is that they do not take the observer selection effect seriously enough. His various arguments and examples gloss over it in one way or another, usually by selecting the observer in some way in advance of or independently of the improbable event that is supposed to select the observer, when true observer selection effects require that the observer should only be identified by that improbable event.</p><p><span style="font-size: medium;">White's formal argument</span></p><p>In his first argument, White aims to show that while a multiverse might well increase the chance of some life-supporting universe existing, it doesn't do anything to explain why our particular universe should be life-supporting.</p><p>White helpfully spells out his point with a formal theorem, which is correct as far as it goes but which mischaracterises the problem. The argument goes astray because White identifies our universe independently of its laws of physics. The way he conceives of the problem, it is as if we start with a bunch of placeholder universes with no laws of physics, and our universe identified as a specific one of these.
We then imagine the universes being assigned laws of physics at random, and ask for the probability that our specific universe happens to be assigned life-supporting laws.</p><p>He's right that the multiverse doesn't help much with this scenario, but I think that's because White is making mistaken assumptions about the nature of identity. <a href="https://disagreeableme.blogspot.com/search/label/identity">I've argued before</a> that there is no such thing as essential identity in the case of persons, and I think the same is true for any object, including universes. Our universe is just that universe that has our physical laws (including initial conditions). The probability that our universe has these laws is therefore simply 1. The puzzling question is not what are the chances that our universe should have these laws, but instead why should any such fine-tuned universe exist?</p><p>White is well aware of this alternative interpretation of the problem, but he thinks that the confusion is the other way around, saying "the fact that our universe is life-permitting does not confirm the Multiple Universe hypothesis one iota. Perhaps the claim that it does results from a confusion between E [our universe is life-permitting] and E′ [some universe is life-permitting]".</p><p>To illustrate the problem, I can adapt White's argument to show that we should demand an explanation for why you were lucky enough to have been born as exceptionally intelligent as you are, given that you could have been a frog or a worm or a geranium.
The number of relatively stupid living things is overwhelmingly vaster than the number of human beings, not to mention the number of human beings who take an interest in philosophical discussions like this.</p><p>(Note that I'm not looking for an explanation of how any fine-tuned minds could have arisen in the first place, so evolutionary biology is irrelevant. I'm taking it for granted that there exist a wide variety of minds of various different intelligences, just as in the multiverse scenario we take it for granted that there are a wide variety of different universes with various different capacities to support life.)</p><p>There's no doubt about it, your intelligence is incredibly fine-tuned. The probability of any arbitrarily chosen living thing being as smart as you is negligible, so we need an explanation!</p><p>The analogy to White's argument should be clear. I am making the mistake of identifying you with some placeholder for a living thing waiting to be assigned a mind, and then considering the probability that you will happen to be assigned a mind capable of following a philosophical argument. But taking the observer selection effect seriously means considering that frogs and worms and bacteria couldn't be reading this article, and reflecting on the nature of identity suggests that you are necessarily the living thing with your mind, and not a placeholder that once waited to be assigned one. The idea that you could have been a worm is nonsense, because if you were a worm you wouldn't be reading this and you wouldn't be you. As such it is not surprising at all that your mind is fine-tuned. It simply must be so. 
The idea that this universe could have had other laws of physics is also nonsense, as then it wouldn't have been this universe.</p><p>(On the other hand, the idea that <b>the</b> universe could have had other laws of physics is not nonsense, on the assumption that there is only one universe, and what we mean by "<b>the</b> universe" is "the universe that happens to exist" which is only contingently and not necessarily "our universe").</p><p>If it is true that the vast majority of minds are not fine-tuned, then you can infer from the fact that you have a fine-tuned mind that there are or have been lots of minds that aren't. This should be trivially obvious. Much the same reasoning can be employed to infer a multiverse from fine-tuning, at least in the absence of more parsimonious alternative explanations.</p><p>The remainder of the paper proceeds with a number of (dis)analogies making similar points.</p><p><span style="font-size: medium;">The Sleeper and the Dice</span></p><blockquote><p>Case B*: Jane knows that she is one of an unspecified number of sleepers
each of which has a unique partner who will roll a pair of dice. Each sleeper
will be woken if and only if her partner rolls a double six. Upon being
woken, Jane infers that there are several sleepers and dice rollers.</p></blockquote><p>Goff also discusses this idea in the video linked above with reference to a similar analogy where you will be killed if a particular monkey does not type a Shakespearean sentence within an hour. When you are not killed, Goff asks whether you have reason to believe that there were many people and monkeys in similar situations. This analogy is intended to be just the same as White's, only more colourful and intuitive. In this I think it succeeds, but I'll stick with White's framing as it's discussed in more depth in the linked paper and because the particular problems with it are instructive.</p><p>White claims that Jane's reasoning here is fallacious, and at first it seems so, even to me. But let's look more closely.</p><p>First, we note that White's usual problem arises again here, in that Jane has a specific identity before the experiment begins -- the identity of the observer is selected independently of the supposed observer selection effect. Let's leave that as it is, but there are some other problems with the analogy which might need some adjustment to make it more like the fine-tuning problem. For one thing, the chance of a double six is insufficiently improbable to push us very much to change our assumptions. For another, though he says "will be woken if and only if", he doesn't spell out what this means. If Jane is really never to wake again on the wrong dice roll, then he must mean we're murdering Jane. Otherwise, Jane would presumably wake up naturally at some point and observe that the dice roll had not come up favourably, and the fact that she is observing an improbable outcome could not be explained by the fact that she was observing it, because she could have observed otherwise.
These issues are addressed in Goff's improved version of the analogy, where we have a very improbable event indeed and survival dependent on that event.</p><p>Let us therefore amend White's thought experiment as follows: let's say there are a thousand dice which must all come up sixes, and let's say that Jane would be murdered in her sleep otherwise. Under these circumstances, I claim that Jane might be forgiven for making some strange inferences on waking.</p><p>If I were Jane, and if I believed that the experiment had been carried out as described, then I would indeed infer that there were several sleepers. In particular, I might attribute my survival to the multiverse of quantum mechanics, and assume that countless doppelgangers of mine had been murdered. Only on the many worlds interpretation (MWI) of quantum mechanics is my survival explicable (and indeed, guaranteed), on the reasonable assumption that the variance of quantum mechanics can influence dice rolls.</p><p>Despite the impossible odds, on the MWI I have nothing to be surprised about, because in helping myself to the multiverse of quantum mechanics, I negate the problem of identifying the observer in advance. Jane is no longer a unique individual identified in advance, but once more just one of a fungible class of initially identical Janes (for more on this, David Deutsch has an illuminating account of quantum fungibility in The Beginning of Infinity, where he discusses the MWI in terms of infinite initially identical universes which diverge rather than one initial universe which splits). I have no reason to be surprised that I have survived, because one of me was bound to and I as the observer of the survival am necessarily the one that happened to survive. The anthropic principle is therefore available to me to explain everything about my survival, at least from my perspective.</p><p>But perhaps this is cheating. Suppose I don't believe in parallel universes.
We might first need to reduce the number of dice, because with a thousand dice survival becomes so improbable that I think any rational person would be forced to believe in parallel universes by the arguments above, or at least to doubt their own understanding of the experiment (or sanity!). So let's say there are 6 dice. The odds of winning are now 1 in 46,656 -- unlikely enough to give us pause but not beyond belief.</p><p>I think it will be useful to break Jane's observation into two surprising propositions in need of explanation.</p><p></p><ol style="text-align: left;"><li>Someone has survived</li><li>I have survived</li></ol><div>Unintuitive as it may seem, I do actually think that proposition 1 justifies Jane's inference that there must have been very many participants in the experiment.</div><div><br /></div><div>If we consider it first from a third-party point of view, this is perhaps easier to see. First, we should rule out the possibility that we are only hearing of the experiment because of an improbable outcome. Otherwise, an improbable outcome is not so surprising, because the reference class we need to consider comprises all events which we might have learned of because of improbable outcomes. Hearing that someone has survived in these circumstances only allows us to conclude that there must be many opportunities for improbable outcomes of diverse events, and not that there must have been many sleepers. So we should stipulate that we are aware of and specially interested in this experiment in advance and that we expect to hear of the outcome. However, we have no idea in advance how many sleepers there are. In these circumstances, when we learn that there was at least one survivor, I think it correct to infer that there must have been thousands of potential participants. Otherwise it's too surprising that anybody survived.</div><div><br /></div><div>The second proposition is more problematic.
I agree with White that the multi-sleeper hypothesis does not help us here. The problem arises in part because of White's disanalogous move of selecting the observer in advance. Jane remembers and identifies with the Jane anticipating the experiment, who would reasonably have predicted that she would not survive. This prediction is defeated, so Jane is surprised. Her position is much like a lottery winner -- it is utterly unsurprising to any neutral third party that someone has won the lottery (assuming that lots of people enter), but utterly shocking to the winner herself. I think that's just too bad. There really is no good explanation that should satisfy Jane as to why she should turn out to be the winner, and her surprise is warranted.</div><div><br /></div><div>And yet I maintain that a lottery winner should believe in the hypothesis that lots of people have bought a ticket just because she knows that somebody has won. Assuming that the numbers of people buying lottery tickets vary wildly from week to week, winners who guess that their week had many participants rather than few will usually be correct, because there are more such winners. On weeks where very few people bought lottery tickets, there will usually be no winners. On weeks with lots of competitors, there may be more than one winner.</div><div><br /></div><div>I think the problem in explaining the second proposition is masking the import of the first for White and Goff, and I think this is the ultimate source of their mistake.</div><div><br /></div><div>The rest of the paper continues in this vein, with similar reasoning leading to similar (in my view) incorrect conclusions. However there's one more analogy I think worth discussing.</div><div><br /></div><div><span style="font-size: medium;">The forest shooting</span></div><div><br /></div><div>This is an analogy originally from John Leslie, where he discusses the idea that you are in a forest when without warning you find yourself being shot. 
You may either be alone in the forest or in a crowd, and for some reason (at least on White's version) you don't know which. This strains credulity, but let's go with it. Suppose you have two competing hypotheses to explain your shooting. Either a stray bullet from a hunter shot you by accident (this kind of thing being more likely to happen to somebody if there is a crowd), or an assassin was out to get you. What should you believe, and does your shooting give you reason to think you may be in a crowd?</div><div><br /></div><div>In this case, the situation is altered because of the availability of the design (assassin) hypothesis in addition to the anthropic (crowd) hypothesis. It might indeed make more sense to believe that someone is out to get you, and not to believe that you are part of a crowd. Being part of a crowd explains proposition 1 (that somebody got shot), but it doesn't help with proposition 2 (that the person shot was you), while the assassin hypothesis explains both. The most rational hypothesis therefore is a matter of perspective. A neutral third party might be inclined to pick the crowd hypothesis, but from your unique perspective it might be more rational to pick the assassin hypothesis.</div><div><br /></div><div>We can modify the lottery scenario similarly by introducing an analogous competing hypothesis. Perhaps your apparent lottery win can be explained by a mistake, hallucination, dream or psychosis. Depending on the odds of winning the lottery and how far you've gone to verify that the win is genuine, this might in fact be a better explanation, but only from the perspective of the winner herself. This echoes what I said about Jane's epistemic position when she has been the beneficiary of a thousand dice rolls all coming up six. In these extreme circumstances, she should not necessarily conclude that there have been many sleepers with distinct identities from herself. 
She should instead believe that she has been deceived, or that there is a quantum multiverse, or that God has intervened, or simply that she has lost her marbles.</div><div><br /></div><div>Fortunately, we are not in this position with respect to fine-tuning, because there is no proposition 2 to consider. Our universe was not waiting around to be assigned laws of physics and wondering what laws it would get. Our universe is instead properly defined as the universe with these laws of physics, and with these people in it trying to explain fine-tuning. In these circumstances, we need only consider proposition 1, why any universe at all is life-permitting. It makes no sense to ask why our universe (i.e. the universe with these laws of physics) should be the universe that got these laws of physics. Goff and White claim that it is intelligible to think that some other universe might have got these laws instead, and that people like us might then live in these other universes instead, but they wouldn't be us, because they wouldn't be in our universe. I think that's backwards. If this universe had not had these laws, then it wouldn't be this universe. If that other universe had these laws and had people in it identical to us, then that would be our universe and we would simply be those people. To believe otherwise is to believe that there is something to pick us out as distinct from people who are ex hypothesi identical to us. I just don't buy into that kind of essential identity.</div><div><br /></div><div><span style="font-size: large;">Conclusion</span></div><div><br /></div><div>I don't claim that fine-tuning by itself proves that there is a multiverse. There are a number of other possible explanations for and responses to the fine-tuning problem. These ideas should be judged and compared with each other and with the anthropic principle in terms of coherence, parsimony, etc. I think the anthropic principle does rather well in this regard, though your mileage may vary.
Reasonable people can disagree on which answer works best.</div><div><br /></div><div>But the idea that a multiverse does nothing at all to explain fine-tuning is a clear mistake. </div><p></p><p></p>Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com3tag:blogger.com,1999:blog-5801278565856116215.post-38452068863839941592016-03-08T12:49:00.000+00:002016-03-08T12:49:07.541+00:00Rescuing Computationalism with Platonism<div dir="ltr" style="text-align: left;" trbidi="on">
In my <a href="http://disagreeableme.blogspot.co.uk/2016/02/putnam-searle-and-bishop-failure-of.html">last post</a> I discussed some issues with identifying objectively which computations a physical system could legitimately be interpreted as instantiating. Computationalism is usually taken to be the view that all it takes to create a conscious mind is to implement the right computation, so the idea that we can't tell objectively when a computation is implemented implies either that there is no objective fact of the matter regarding when minds exist, that all minds exist (everywhere) or that no minds exist. None of these conclusions is particularly appealing!<br />
<br />
I find the arguments discussed in the last post to be somewhat persuasive. Indeed, I had had similar concerns before becoming aware of these. What's more, I think the problem may be worse than even Putnam, Searle and Bishop have suggested.<br />
<br />
<a name='more'></a>I'm going to use variations on Searle's <a href="http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-chinese-room.html">Chinese Room</a> thought experiment to illustrate some of these related problems just because the image of a man manually performing a computation is one we can all picture relatively clearly.<br />
<br />
Firstly, let's acknowledge Searle's usual conclusion, that if he is the system and he doesn't understand Chinese, then the system doesn't understand Chinese. The most promising computationalist response is that Searle's brain is actually creating two minds operating at different levels. This implies that there are at least two interpretations of Searle's brain as implementing a mind and they describe very different minds. So far so good, I suppose, but it does at least demonstrate the point that there may be many different ways to interpret a physical system as a mind and so it's wrong to assume that a physical system is identical to a mind or even that it has a one-to-one relationship with a mind. Whatever a mind is, then, it isn't quite a physical thing. Minds are not brains, in other words. If we are very strict physicalists and if we think our minds and our consciousnesses really objectively exist (as opposed to being interpreted to exist, a pattern we project onto the world like particular genes or institutions or whatever), this may be a problem.<br />
<br />
Next, consider what happens when Searle stops executing his program and goes home to sleep for the night, mid-execution. While Searle is sleeping, does the Chinese Room mind still exist? The Chinese Room program isn't in any meaningful sense executing. It isn't really instantiated anywhere.<br />
<br />
You could regard it as paused, a mind held in suspense. The state of the algorithm will at least be recorded physically somewhere. But what is the difference, I wonder, between such a mind and that of a potentially conscious algorithm written on paper but never executed at all? Perhaps all written algorithms are essentially computations in suspense? If that is so, then simply writing an algorithm out on paper must be enough to bring a suspended mind into existence. Whatever about whether we can tell whether a given system <i>implements</i> a given algorithm, it seems clear that there can be no fact of the matter about whether a physical system <i>represents</i> the code of an algorithm (because it is not hard to invent an encoding by which any physical artifact could represent any text), so even if the rocks around us don't host active "pixies" (see the last post if you don't get what I mean by "pixies"), they might at least host pixies in suspended animation! For reasons such as these, I don't think it is plausible to regard a paused computation as having a continuous physical existence.<br />
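The claim that it is "not hard to invent an encoding by which any physical artifact could represent any text" can be made vivid with a toy construction. This is a sketch of my own for illustration (the rock temperatures and the encoding scheme are invented, not drawn from Putnam or Searle): given any sequence of physical states, we can always cook up a post-hoc encoding under which those states spell out the source text of whatever algorithm we like.

```python
def invent_encoding(physical_states, text):
    """Build a lookup table interpreting the i-th physical state as the i-th character."""
    if len(physical_states) < len(text):
        raise ValueError("need at least one physical state per character")
    # Pair each (state, position) with one character of the target text.
    # zip truncates to the length of the shorter sequence, i.e. the text.
    return {(state, i): ch for i, (state, ch) in enumerate(zip(physical_states, text))}

def decode(physical_states, encoding):
    """Read the text back off the physical states using the invented encoding."""
    return "".join(encoding[(s, i)] for i, s in enumerate(physical_states[:len(encoding)]))

# Any arbitrary physical description will do -- say, temperature readings from a rock.
rock = [12.1, 12.3, 11.9, 12.0, 12.2, 12.4, 11.8, 12.5, 12.6, 11.7, 12.7]
program = "HALT IF X=0"

encoding = invent_encoding(rock, program)
assert decode(rock, encoding) == program  # the rock now "represents" the program
```

The point is that the encoding itself carries all the information: nothing about the rock constrains which text it represents, which is why whether a physical system represents the code of an algorithm seems to be observer relative.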
<br />
So perhaps the mind doesn't exist while the computation is paused. But it doesn't seem to me that there is much of a difference between Searle pausing for the night and the necessary momentary pause between each step of Searle's calculation. Does a mind therefore flicker in and out of existence as he works? Is each new instance a new mind or the same mind revived? How could we even pin down to specific points of time when the mind exists and when it does not, when all we have to go on are steps in a computation? At any given point of time, it doesn't seem to me we can really say whether Searle is actively computing right now or whether he is in between computing steps. I don't think there is a fact of the matter about whether pausing to pick up a pen so as to make a note is actually part of the computation or not. For this reason, I don't think we can really say precisely when computing steps begin or end, because the physical actions corresponding to computational steps are not terribly well defined. If the steps of a computation don't really exist for well-defined time spans, it's hard to say how the mind can either.<br />
<br />
So even though we may some day have an objective way of telling <i>whether</i> a given computation is being instantiated, we still have the problem of identifying <i>when</i> it is instantiated. It seems to me there is no fact of the matter in this case either.<br />
<br />
There's another set of issues when we consider the same computation being performed in different places or at different times. Let's say Searle has a partner who performs every calculation with him to check for any error that might occur. If we regard his partner as part of the same system, then one mind is brought into existence. If we regard his partner as a parallel system, then two minds are brought into existence. In general, it seems to me that in many cases we are free to divide up a computational system in a number of ways so that the number of copies of a mind in existence is whatever we want it to be. For instance, the electrons flowing through a computer could perhaps be divided into a thousand distinct sets, each of which parallels the computation of interest.<br />
<br />
We might be tempted to simply adopt the rule that these are all simply part of the same computation and so the same mind. It seems most parsimonious. But what if we offset the duplication in time a little? Say Searle's partner takes the night shift and duplicates Searle's efforts from earlier in the day. Or say he is working a thousand years after Searle has already ceased his efforts. Wouldn't it seem a little weird now to regard them as part of the same computation, that one mind could be physically and temporally so widely distributed? But, again, it's perhaps not so easy to define an objective cut-off point so that we could really satisfyingly distinguish two parts of the same computation creating one mind from two distinct identical computations creating two identical minds.<br />
<br />
So, to add to the concerns of <i>whether</i> and <i>when</i> a given mind is instantiated, we have to grapple with the question of <i>how many</i> times a mind is instantiated in the case of computations we could interpret either as being duplicated or as singletons.<br />
<br />
Obviously, I have raised a large number of questions, perhaps too many to be discussed all in one go. Many of these questions may have plausible answers. My feeling, however, is that these issues can never be satisfyingly dealt with. To me, the whole idea of objectively telling whether or when or how many times a given computation is executing is hopeless, and if this is the case then we cannot say that minds exist because of physical implementations of computations. Searle really is right that computation is observer relative, that no objective fact about the physical world can depend on whether a computation is instantiated.<br />
<br />
So where does that leave computationalism?<br />
<br />
<h3 style="text-align: left;">
Platonism to the rescue!</h3>
<div>
I think <a href="http://disagreeableme.blogspot.co.uk/2013/10/mathematical-platonism-is-true-because.html">Platonism</a> resolves these and many other problems. On Platonism, a running computer program is just an instantiation of an algorithm, and an algorithm is a mathematical object which exists necessarily and timelessly even if it is never instantiated. If we allow ourselves to adopt the mental toolkit of Platonism, then it becomes possible to attribute consciousness not to a physical implementation of an algorithm but to the algorithm itself (or at least to an abstract run of an algorithm on particular input). In this view, it's not such a problem that there is no fact of the matter regarding whether physical systems instantiate a computation or not. No metaphysical fact depends on how we answer that question.</div>
<div>
<br /></div>
<div>
At first, this appears to be absurd. Am I really saying that it doesn't matter if a mind is physically instantiated, it exists regardless?</div>
<div>
<br /></div>
<div>
Yes, that is my view, but there's a lot of explaining to do about how this actually works and how it differs from belief in souls and the like. Much of the rest of this post is therefore dedicated to discussing what it means for a mind to exist Platonically, considering issues such as death, brain damage, identity and so on.</div>
<br />
When I say that a given mind exists, if we want to think of that mind as having conscious experience, then we need to think about it processing information of some kind. So we need to define its inputs, its environment and so on. Different inputs yield different "biographies" for the mind. In each different environment, it has a different life story, will think different thoughts, will develop differently and so on. For our purposes, we should consider each life story to be a different mathematical object. So when we ask how a mind (say my mind) can exist without being instantiated, we are really asking how my mind and its environment can exist without being instantiated. For instance, if life on earth had been wiped out billions of years ago, how is it that I and all my fellow humans could still exist?<br />
<br />
Well, even if this had happened, there would still be a mathematical object isomorphic to the status quo we observe. A mathematician of infinite patience and capacity could in principle explore this mathematical object (e.g. by simulating it) and learn what transpires within. That mathematical object defines every event within our world, every word that we utter, every thought that we think. If our conscious experience is made up of every thought that we think, then we would still be conscious even if we only existed within this mathematical object.<br />
<br />
But, since I'm saying it is the algorithm itself that realises consciousness and not any particular implementation, it doesn't matter if the mathematician never simulates it. We would regard ourselves as conscious beings and the environment around us as physical regardless. The intriguing possibility arises that this is in fact the case -- that this is not a physical world but a world which only exists as a mathematical object. Furthermore, this may be the case for <i>all</i> worlds. There may be no physical world and indeed the very concept of an objectively physical world may be meaningless. This, of course, is just the <a href="http://disagreeableme.blogspot.co.uk/2013/12/the-universe-is-made-of-mathematics.html">Mathematical Universe Hypothesis</a> I have discussed previously.<br />
<br />
On this view, simulating an algorithm does not bring anything new (not conscious entities anyway) into existence. Rather it provides a window onto another world. So if you accidentally pull the plug on your simulation of a virtual world populated by conscious beings, don't worry, you have not committed genocide!<br />
<br />
But what if you shoot a person in the head? Have you not committed murder? I would say you have, not necessarily because you cause that person to cease to exist but because you have removed him or her from causal contact with his or her loved ones. You have become the cause of grief and suffering and so your act is morally impermissible for this reason (on most conceptions of morality at least).<br />
<br />
Whether you have actually caused the person to cease to exist is an interesting question. If the world were completely deterministic, then arguably you have terminated that mind. If we define the mind of the victim as the life story of that mind in this world, then it necessarily has to end at this point. It exists at no time after this event within the mathematical object of our world. The input to that mind after this point is undefined, and so the further life story of that mind is undefined, and so we should say it does not exist at times after the shooting. Of course, the proximate cause of this can be attributed to your action, so you bear responsibility for the death (on compatibilist accounts of free-will and responsibility at least).<br />
<br />
But of course one cannot destroy a mathematical object, so how could one destroy a mind if a mind is a mathematical object? From a mathematical Platonist view, actually you haven't really destroyed the mind even if you have terminated its life story. Its life story, including every moment of experience, every thought it ever entertained and so on continues to exist just as a biography on the shelf does. In this view, the shooting constitutes a terminus to the story rather than a destruction of the story itself. There is no way to destroy a person utterly just as there is no way to destroy a mathematical object because there is no way to prevent a person from having existed in the first place.<br />
<br />
Now, if the world is not completely deterministic (as appears to be the case), then at each point there are many possible futures. If there exist possible futures where the person was not killed, then there is another sense in which the shooting does not cause them to cease to exist. Though it may cause them to cease to exist in your world from that moment on, they continue to live on in another possible future, effectively a parallel universe. Even if the world is deterministic after all, as long as there is some mathematical object which corresponds to that person continuing to live (e.g. in an alternative world that was identical from their limited perspective but subtly different in just the right way to prevent the shooting from happening), then in effect they can be regarded as continuing to live in that world.<br />
<br />
Moving on, what of the relationship between the mind and brain? Changes to the brain clearly affect the mind, so how can I say the mind is independent of the brain?<br />
<br />
There are (at least) two ways to view this. If you define the mind in conjunction with its environment as one object, then altering the state of the brain (e.g. with an injury or an electric probe) is in effect just another kind of input and so the mathematical object already incorporates the change. If you want to draw a border around the mind and treat it as a mathematical object in its own right, treating only sensory data as input, then a brain injury corresponds to instantiating an alternative mathematical object in its place. Rather than providing input to an algorithm you are replacing it with an alternative as we do when we rewrite and recompile a computer program. Either way of looking at this is fine with me.<br />
<br />
On this view, the brain doesn't really create the mind, rather it taps into it and exploits it in order to navigate the world. In the same way, bees didn't really invent the hexagonal tiling of the plane. This is a solution which exists timelessly and Platonically. But bees did, through a process of evolution by natural selection, stumble into this tiling and exploit it because it is efficient. The mind is just a vastly more complex pattern that our ancestors stumbled into. Rather than causing the mind to exist, the brain should be regarded as a medium for the algorithm of the mind to interact with its environment, just as a computer is a medium that allows a computer program to take input and output but (on Platonism at least) does not cause the algorithm itself to exist. Destroying the brain therefore removes that mind from causal contact with its environment (the world), making a material difference to the world but not really to the mind/algorithm itself.<br />
<br />
<h3 style="text-align: left;">
Getting back to the pixies</h3>
<div>
Recall Mark Bishop's argument that if to be conscious is to instantiate a certain class of computation, and that if there are ways to interpret any object as instantiating any computation, then all objects must be instantiating all kinds of conscious experience. There are an infinite number of "pixies" living in every rock (and so computationalism is absurd and can be rejected).</div>
<div>
<br /></div>
<div>
How does my view of Platonic computationalism help us?</div>
<div>
<br /></div>
<div>
Unfortunately, it seems we are stuck with the pixies -- if all mathematical objects exist, and if algorithms are mathematical objects, and if all it takes to be conscious is to be a certain algorithm, then each of Bishop's pixies must exist. Perhaps we're back where we started.</div>
<div>
<br /></div>
<div>
Actually, I don't think so. What I find absurd about the DwP argument is not so much that the pixies exist but that they exist in every rock, and so that we should be attributing consciousness to the rocks themselves. But, on Platonic computationalism, the rocks are not conscious and the pixies are not located in the rocks. Indeed, they are not located anywhere within our coordinate system. If they are located anywhere, it is only within a coordinate system local to their own environment, which is part of the mathematical object or algorithm in which they find themselves embedded. From our perspective they are abstract, just as from their perspective we are abstract. Destroying the rock has no impact on the pixies because they were never inside it in the first place.</div>
<div>
<br /></div>
<div>
I don't find it absurd that the pixies exist because I am already committed to the Mathematical Universe Hypothesis, which predicts the existence of all possible minds somewhere within the mathematical multiverse. Bishop's pixies correspond therefore to observers in other universes causally disconnected from our own. Since they are not in our universe, we don't need to concern ourselves with them. For all practical purposes, they do not exist from our perspective.</div>
<div>
<br /></div>
<div>
I say they are in other universes rather than in our own (despite the possibility of finding a Putnam-style mapping to show them reflected in our universe) because they are causally disconnected from us. Nothing we can do can affect them or they us, until, that is, we build a system of input/output which would allow us to interact with them. At that point, we can consider them and their world to be embedded in our own universe, but to do so we would have to build a very complex machine, effectively a supercomputer. Now, it hardly contradicts computationalism to suppose that a mind can be brought into our universe by a supercomputer, so this conclusion meshes well with the computationalist viewpoint.</div>
<div>
<br /></div>
<div>
<div>
Furthermore, on the DwP argument, it would seem that there are distinct identical copies of each pixie in every rock. On Platonism, these identical copies are just instances of the same mathematical object, of which there is only one. This also strikes me as less absurd.</div>
<div>
<br /></div>
</div>
<div>
Bishop has suggested to me that the MUH is a kind of panpsychism, but I disagree. To me, panpsychism is the view that consciousness is all around us, pervading the universe, so that even elementary particles can be considered to be conscious in some way. But my view is rather that matter is never conscious, that consciousness is rather a property of abstract structures. Consciousness is not all around us -- frankly, it doesn't physically exist anywhere in our universe. We are conscious but we (or at least our minds) are not physical things, we are instead the algorithms our brains are most usefully interpreted to be computing as they take sensory input, process it and output motor commands. To me, this is anything but panpsychism. It might be called a kind of dualism, but <a href="http://disagreeableme.blogspot.co.uk/2013/09/dualism-is-not-dirty-word.html">as I have argued before</a>, dualism need not be a dirty word, and in any case, I'm really a monist because I think that even the so-called "physical" world is actually an abstract mathematical structure.</div>
<div>
<br /></div>
<h3 style="text-align: left;">
Conclusion</h3>
<div>
For reasons I have discussed before, I think computationalism has to be correct. That is, if we were to build an AI that was just as capable as we are, and particularly if it were modelled after the processes going on in human brains, we would be obliged to think of it as conscious in the same way that we are. But this does not mean that we have to attribute consciousness to the physical object of the computer. Neither do we need to attribute consciousness to the physical brain. The assumption from many computationalists that consciousness is a physical phenomenon, that there is a fact of the matter regarding when and where consciousness is instantiated, is in my view untenable. Rather we need to see consciousness as a property of certain abstract structures. It is useful and practical to regard the brain and the computer as instantiating such structures, but there need not be a fact of the matter because these structures exist Platonically regardless of whether they are instantiated or not.</div>
</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com109tag:blogger.com,1999:blog-5801278565856116215.post-49044747776823892442016-02-24T10:05:00.001+00:002022-04-19T21:46:29.100+01:00Putnam, Searle and Bishop: The Failure of Physicalist Computationalism<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<iframe allowfullscreen="" class="YOUTUBE-iframe-video" data-thumbnail-src="https://i.ytimg.com/vi/qUNHyPFvUWc/0.jpg" frameborder="0" height="266" src="https://www.youtube.com/embed/qUNHyPFvUWc?feature=player_embedded" width="320"></iframe></div>
<br />
<br />
I wanted to come out of blog dormancy to write up my thoughts on what I feel is a very important argument against computationalism. The argument advances the view that there is no objective fact of the matter about which computations a physical system is computing, and if this is the case it would certainly seem to problematise computationalism (the view that what it is to be a conscious mind is just to perform the right kind of computation).<br />
<br />
In this post I will explain the argument and some of the common responses to it. I'll reserve my own response (which is quite different from that of most computationalists) for a future post.<br />
<a name='more'></a><br />
The basic idea is not so new, going back at least to the late '80s and early '90s with arguments from Putnam[1] and Searle[2]. Searle neatly captures it as follows:<br />
<blockquote>
On the standard textbook definition of computation, it is hard to see how to
avoid the following results:<br />
<ol>
<li>For any object there is some description of that object such that under that
description the object is a digital computer.</li>
<li>For any program and for any sufficiently complex object, there is some
description of the object under which it is implementing the program.</li>
</ol>
Thus for
example the wall behind my back is right now implementing the Wordstar
program, because there is some pattern of molecule movements that is
isomorphic with the formal structure of Wordstar. But if the wall is
implementing Wordstar, then if it is a big enough wall it is implementing any
program, including any program implemented in the brain. (pp. 208-9)</blockquote>
<br />
Searle's treatment of the problem is left rather vague. Perhaps many computationalists would be happy to dismiss such an idea as preposterous or absurd. However, Putnam justified in detail a very similar but more precise claim: that any open physical system can be interpreted as implementing any <a href="https://en.wikipedia.org/wiki/Finite-state_machine">finite state automaton</a> (FSA), where FSA is presumably more or less Searle's textbook definition of computation.<br />
<br />
Because Putnam's treatment is so rigorous, let's focus on that for now and begin with an introduction to Finite State Automata (feel free to skip ahead if you are already familiar with the concept).<br />
<br />
<h3 style="text-align: left;">
Finite State Automata</h3>
Firstly, I think it's important to point out that the FSA model is strictly less powerful than that of the Turing machine, but only because it has a finite set of possible states while the Turing machine has an infinite amount of tape to work with. As such, it ought to be possible to model any realizable digital computation with an FSA, and indeed it could be said to be a more realistic model than the Turing machine precisely because the set of states is finite.
<br />
<br />
An FSA is always in precisely one state at a time, which means that right from the outset it diverges quite significantly from how programmers would tend to think of algorithms and computation, where state is really a complex vector composed of variables and data structures. Furthermore, an FSA state doesn't really have any content of its own; there are only simple rules regarding how and when the machine transitions to the next state and what output to produce in each state.<br />
<br />
Let's make this a little more concrete, with an unrealistic toy example of a shop system where if the value of the float in the till is less than £20 and the till takes cash, then we need to note that we are short of cash in that till in order to pay out change. A programmer might say something like "IF $till.float < 20 AND $till.type = 'CASH' THEN SET $till.shortOfCash = true", where the dollar signs denote variables being read from and written to. However, in an FSA there are no such variables. Something like this logic would instead be "state A goes to state B", where state A corresponds to the state where the till has less than £20 and accepts cash but we have not yet noted that the till is short of cash, while state B is the state where the till has less than £20 and accepts cash but we have recorded that the till is short of cash.<br />
<br />
Suppose that when a till is short of cash a warning light flashes. In the programmer's model, this light would be associated with a variable such as $till.shortOfCash. In the FSA model we would just associate this output with certain states (such as B) and not others (such as A).
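The till example can be sketched in a few lines of code. This is a purely illustrative toy (the state labels and output table are my own, not part of any real system): the FSA's states are opaque labels, its "logic" is nothing but a state-to-state map, and output is associated with states rather than with variables.

```python
# Toy FSA for the till example (illustrative labels only).
# State A: till has < £20, takes cash, shortage not yet recorded.
# State B: same situation, but shortage has been recorded.
TRANSITIONS = {
    "A": "B",  # recording the shortage
    "B": "B",  # once recorded, stay recorded
}

# Output is a property of states, not of variables: the warning
# light is associated with state B and not with state A.
OUTPUT = {
    "A": {"warning_light": False},
    "B": {"warning_light": True},
}

def step(state):
    """Advance the FSA by one transition."""
    return TRANSITIONS[state]

state = "A"
state = step(state)  # now in state B, so the warning light flashes
```

Note that nothing in the machine itself mentions £20 or cash; that content exists only in our interpretation of the labels, which is precisely the feature the argument below exploits.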
<br />
<br />
The number of variables programmers use in any reasonably complex software is quite large, and the number of possible values each variable could hold is frankly enormous. When you consider the number of combinatorial possibilities we have for all these variables, the number of possible states quickly expands to ridiculous proportions. So in most computations, the number of actual states you would need to account for in your FSA state transition tables is <i>vast</i> (to put it mildly). This is why even though the FSA is useful as a mathematical abstraction of computation it is worthless as a programming paradigm.<br />
<br />
All the same, it is not hard to see that any computation can be described as an FSA. At any given time, a physical computer is in a particular state, that is, all of its registers and memory addresses have particular values, and the ensemble of values constitutes an identifiable state we can label. Forgetting input for a moment, the next state the computer visits and whatever output it might produce is completely determined by the current state. An FSA model would just list all possible states by their labels and what successor states and output they produce. This ought to be enough to capture anything a computer could ever do. Even input can be incorporated if we simply treat it as part of the state. As such, if any algorithm can pass the Turing Test and do all the information processing tasks a human brain can do, then so could an FSA.<br />
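The "input as part of the state" move can be made concrete with a sketch. The example machine here is the standard turnstile toy (my choice of illustration, not the post's): first an FSA driven by external input, then the same run folded into a pure, input-free succession of composite states.

```python
# A turnstile FSA with external input symbols.
delta = {
    ("locked", "coin"): "unlocked",
    ("locked", "push"): "locked",
    ("unlocked", "push"): "locked",
    ("unlocked", "coin"): "unlocked",
}

def run(state, inputs):
    """Run the FSA over an input sequence, returning the state trace."""
    trace = [state]
    for symbol in inputs:
        state = delta[(state, symbol)]
        trace.append(state)
    return trace

inputs = ["coin", "push", "push"]
trace = run("locked", inputs)
# trace is ["locked", "unlocked", "locked", "locked"]

# Fold the input into the state: each composite state is
# (machine state, remaining input), so the whole run becomes a
# simple chain of states with no external input at all.
chain = [(s, tuple(inputs[i:])) for i, s in enumerate(trace)]
```

With the input fixed and absorbed into the state labels, the machine is just "state 0 goes to state 1 goes to state 2...", which is exactly the form Putnam's construction targets.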
<br />
<i>(</i><b>Aside:</b><i> Searle is often ridiculed for implying that his Chinese Room thought experiment could be implemented with something like a big lookup table matching questions with responses, but consideration of the FSA model implies that in principle he could. With the right FSA table, every interaction with the room would put the FSA in a novel state and so he could mimic the ability of a conventional algorithm to learn and change and give different responses to the same input at different times. Right away we start to see that conventional computationalist attitudes such as faith in the Turing Test as a detector of consciousness are in trouble, because it is very hard to see how something as simple as a big lookup table could produce consciousness. All Searle would need to do to answer a question is to look up a big dictionary for his current state (he would have one such dictionary for each state) mapping input Chinese to output Chinese and identifying the next state. It is often assumed that Searle would need to take a long time simulating neurons and so on, but if he has these dictionaries then that is not so. There is a tradeoff of time versus memory, though, because the number of dictionaries he would need and the size of each dictionary is frankly absurd.)</i><br />
<i><br /></i>
<br />
<h3 style="text-align: left;">
Putnam's Rock</h3>
I've discussed how we can interpret a computer as implementing an FSA. The problem for computationalists is that we can perform much the same kind of interpretation on any open physical system (e.g. a rock) and map this to any FSA we like! Like a computer, the state of any physical system is defined as the ensemble of microstates we can define in the system (instead of registers, we might use the disposition and charges and momenta of all the atoms in the system). And like a computer, each state causally depends on preceding states. If we call the state at time 0 A, and the state at time 1 B, then we have shown that the physical system implements the FSA "A goes to B" and so it can be said to perform the example till-related computation we had above. All that's missing is the output; however, if we had the ability to scan the microstate of the physical system then producing the output corresponding to that logical FSA state would not be difficult.<br />
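Putnam's move can be caricatured in code. This sketch is illustrative only (the microstate labels are invented): given any sequence of distinct physical states, we are free to define the interpretation after the fact, and under that interpretation the system trivially "implements" the target FSA.

```python
# Any two distinct physical microstates will do: the rock at t=0
# and the rock at t=1 (labels are invented for illustration).
physical_history = ["microstate-7361", "microstate-1048"]

# Define the mapping AFTER observing the history: whatever occurred
# at t=0 is labelled "A", whatever occurred at t=1 is labelled "B".
interpretation = {
    physical_history[0]: "A",
    physical_history[1]: "B",
}

# Under this interpretation, the rock has "implemented" the FSA
# "A goes to B", i.e. the till computation from earlier.
logical_run = [interpretation[s] for s in physical_history]
```

The work is all being done by the post-hoc mapping, of course, but that is exactly the point at issue: nothing in the standard definition of implementation rules such mappings out.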
<br />
Besides, I'm not convinced that either input or output need be of much concern. We can imagine a computer simulation of a rich virtual environment hosting putatively conscious AIs but which has no input or output. If we can map the states of this computation to the states of a physical system such as a rock, how can we justify our intuition that the computer really is running the computation (and so hosting conscious entities) but the rock is not?<br />
<br />
There are a number of angles one could use to question Putnam's result, some more successful than others. I'll get to those a little later, but for now I'll just note that, superficially at least, Putnam would appear to be correct. If to instantiate a computation is just to be describable as the right kind of FSA, then it would appear that every physical system is performing any computation you could care to mention.<br />
<br />
<h3 style="text-align: left;">
Bishop's Pixies</h3>
<div>
John Mark Bishop has published a number of papers ([3],[4],[5],[6]) which expound on the implications of Putnam's result and related arguments, also answering some challenges that have been raised. They are well worth a read. If you don't have the time, the video above may be of interest.</div>
<div>
<br /></div>
<div>
Bishop interprets Putnam's result to mean that computationalism demands that every physical system is host to a multitude of conscious minds (which he humorously refers to as 'pixies'), by simple virtue of their natural evolution through a succession of distinct states. Since a computationalist believes that to be a conscious mind is just to implement the right kind of computation, and since any physical system is implementing any and all computations simultaneously (depending only on how you interpret it), then all possible conscious minds must be instantiated simultaneously in every rock. For Bishop, this is the most absurd kind of panpsychist position imaginable and so demonstrates that computationalism must be false.</div>
<div>
<br /></div>
<div>
I'm not sure that Bishop really adds much to Putnam's original argument, but he has done a great job of explaining it and advocating it in recent times, as well as taking an interest in addressing objections. For these contributions he is to be commended.</div>
<div>
<br /></div>
<div>
Bishop concludes with Searle and Putnam that computation is very much in the eye of the beholder, that there is no objective fact of the matter about which computations a system may or may not be implementing. To illustrate this point, Bishop raises the example of a truth table in digital logic such as the following.</div>
<div>
<br /></div>
<table style="text-align: center;">
<tbody>
<tr>
<th>A</th>
<th>B</th>
<th>Output</th>
</tr>
<tr>
<td>0</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>0</td>
<td>1</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>0</td>
<td>0</td>
</tr>
<tr>
<td>1</td>
<td>1</td>
<td>1</td>
</tr>
</tbody></table>
<div>
<div style="text-align: center;">
<br /></div>
</div>
Bishop asks: which logical operation does this truth table correspond to? If you know anything about digital logic, you might be inclined to say that this is the truth table for AND. But this assumes that 0 represents FALSE and 1 represents TRUE, which is only a convention. We could equally interpret it the other way around, and now this would no longer be the table for AND but for OR. When it comes to interpreting what physical logic circuits are doing, we don't see TRUE or FALSE anywhere, only conventions of this nature regarding which voltages (or potentially other physical properties) are interpreted as TRUE and which are interpreted as FALSE. Since we can't even say what logical primitive each gate is implementing, it seems hopeless to suggest that there could be a fact of the matter for the system as a whole.<br />
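Bishop's point about conventions can be checked mechanically. In this sketch (my own illustration, not from Bishop's papers), the same physical table behaves as AND under the convention 1=TRUE and as OR under the inverted convention 0=TRUE.

```python
# The physical truth table above: (level_a, level_b) -> output level,
# where 0 and 1 are raw voltage levels, not truth values.
table = {(0, 0): 0, (0, 1): 0, (1, 0): 0, (1, 1): 1}

def interpret(table, convention):
    """Read the physical table under a level -> truth-value convention."""
    return {(convention[a], convention[b]): convention[out]
            for (a, b), out in table.items()}

as_and = interpret(table, {0: False, 1: True})   # 1 means TRUE
as_or = interpret(table, {0: True, 1: False})    # 0 means TRUE

# Under 1=TRUE the gate computes AND; under 0=TRUE the very same
# physical gate computes OR.
bools = (False, True)
assert all(as_and[(a, b)] == (a and b) for a in bools for b in bools)
assert all(as_or[(a, b)] == (a or b) for a in bools for b in bools)
```

Nothing about the gate changed between the two readings; only the convention did, which is why there seems to be no observer-independent fact about which operation it implements.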
<br />
However, though computation seems to be in the mind of the observer, it is almost universally agreed that there is an objective fact of the matter about whether certain physical systems (such as human beings) are conscious or not.<br />
<br />
I should emphasise here that we are not talking about subtle gradations between categories. Many of us are comfortable with the idea that there is a grey area in the spectrum from trivial unconscious information processing in bacteria to the complex conscious information processing in humans. The point is whether we can look at a system and objectively place it somewhere along the spectrum. For example, it is usually assumed that a healthy alert adult human is definitively at the conscious end of that spectrum.<br />
<br />
An objective fact cannot be explained by a subjective interpretation. It is not plausible, for instance, that a virus could evolve which would kill only beautiful people and leave ugly people alone. Neither is it plausible that it would take less energy to broadcast good TV than bad TV. The very idea is a kind of category error, because people may legitimately disagree on who is beautiful or ugly, what TV is good or bad, but who lives or dies or how much energy is required for broadcasting are objective facts about which well-informed people should not disagree. In the same way, it is not plausible that we can explain the objective fact of consciousness by appeal to subjective interpretations of systems as computers.<br />
<br />
<h3 style="text-align: left;">
Computationalist Responses</h3>
<div style="text-align: left;">
A number of responses from computationalists have attempted to resolve the problem, some more successfully than others, in my view. I will mention some of those I find interesting. The first three of these come from <a href="http://selfawarepatterns.com/2016/02/04/panpsychism-and-definitions-of-consciousness/comment-page-1/">conversation</a> with a computationalist friend of mine (Mike Smith over at <a href="http://selfawarepatterns.com">SelfAwarePatterns</a>). To be honest, I don't think they represent tenable philosophical answers to the problem but they probably are representative of attitudes that may be common among the lay community of computationalists and perhaps explain why there are still so many computationalists around despite the DwP argument.<br /></div>
<br /></div>
<h4 style="text-align: left;">
Accepting the pixies</h4>
Mike at one point suggested that the best thing to do might be to accept the existence of the pixies. After all, there is little reason to suspect that our intuition in such matters is likely to be very useful. If it seems absurd, so what? Nature is under no obligation to be sensible. It certainly doesn't seem to be very sensible when it comes to fields such as quantum mechanics, so why should this be any different?<br />
<br />
Bishop for his part is happy to let that speak for itself. If you are really willing to defend a view so bizarre, good luck to you! But most computationalists (and I with them) do not think that this is acceptable. If Bishop's "Dancing With Pixies" (DwP) argument is correct, not only are we surrounded by pixies, but the vast majority of conscious experience is realised in pixies, and so we ourselves are almost certainly pixies. Furthermore, though there must be a real world, we can say almost nothing about it. There is no reason to believe it bears any resemblance at all to our apparent environment, and indeed it may be something as trivial as a pair of particles separating from each other forever. Any system which has identifiable non-repeating states is a candidate.<br />
<br />
Though we can't absolutely rule this out as a possibility, it is in my view so absurd as to overrule any possible reasons for clinging to computationalism.<br />
<br />
<h4 style="text-align: left;">
Rejecting the pixies</h4>
<div>
Mike was also happy to suggest that it might be best to reject the pixies out of hand, on the basis that the interpretation of a natural system as implementing a particular algorithm is too absurd. On this view, it is really the interpretation that is doing all the work and so it is not too worrying to suppose that we can force such interpretations if we wish -- the act of forcing the unbelievably complex interpretation would be what realises the computation and brings the pixies to life.</div>
<div>
<br /></div>
<div>
I don't think this is a satisfactory answer because the absurdity of the interpretation shouldn't matter if we never have to instantiate it for the pixies to exist, and according to the logic of the DwP argument they should exist even without such an interpretation. Actually building the interpretation itself is just to build a very complex input/output apparatus, but the computation should be taking place regardless.</div>
<div>
<br /></div>
<div>
If you want to say otherwise, then you should be able to draw a sharp distinction between what a rock is doing and what a computer without input/output is doing. The point of the DwP argument is that no such sharp distinctions are possible. There are only degrees of absurdity/complexity versus naturalness/simplicity in our interpretations.</div>
<div>
<br /></div>
<div>
Without an objective way to quantify or measure absurdity of interpretation, and without a natural threshold to mark the border between actual computations and Bishop's absurd phantom computations, it seems we're in trouble. Again, if we take the existence of conscious minds to be objective fact, and if we rule out the idea that a given conscious mind (one like yours or mine) could half-exist, then it's hard to see how something like a computation which exists only to a certain degree (according to the naturalness of an interpretation or the usefulness of regarding it as a computation) can account for the absolute existence of a mind.<br />
<br /></div>
<h4 style="text-align: left;">
The subjective existence of mentality</h4>
<div>
Another approach mooted by Mike was to adopt the view that since a conscious mind only really exists from its own point of view, it is wrong to assume there is an objective fact of the matter about whether a conscious mind exists in a given physical system. If the existence of a mind were subjective, then it might not be such an issue that the existence of a computation is observer-relative.<br />
<br />
I don't think many computationalists will be satisfied with this approach. David Chalmers (perhaps echoing Descartes and "I think therefore I am") likes to point out that the one thing any of us can really be confident in is the fact of our own conscious experience. For this reason, it is not plausible to suggest that we don't really objectively exist or that our consciousness is an illusion. Conversely, if all it is for a mind to exist is for it to exist from its own perspective, then we must accept the first-class existence of fictional minds such as those of Han Solo or Mickey Mouse, something few computationalists would be willing to do.<br />
<br />
But perhaps we should interpret "subjective" in this sense to mean only private. There is only one perspective that can ultimately observe a conscious mind to exist, and that is that conscious mind itself. But there is still a (hidden, presumably) objective fact of the matter that the mind exists, even if this fact is not public. But now we're back where we started, with a private objective fact depending for its existence on a public subjective interpretation, an impossible scenario which simply doesn't work.<br />
<br />
<h4 style="text-align: left;">
Douglas Hofstadter - A posteriori cheating</h4>
Moving on to what other academics have said on these issues, Douglas Hofstadter (a man I admire quite a bit) has suggested [7] that the kind of approach Putnam takes to mapping FSAs onto a physical system is cheating because it requires a priori knowledge of the evolution of the algorithm, something which in fact can only be known a posteriori after running the algorithm. This is not a real mapping of physical states to logical states, according to Hofstadter. A real mapping would be one we could produce a priori, as we do for computers, without having to run the algorithm. This points to a potential difference between Putnam's mappings and the kind of mapping we naturally adopt for computers, a difference that could account for consciousness, perhaps.<br />
<br />
Or at least that's what I feel Hofstadter is trying to say. I don't think Bishop interprets him quite right because Bishop's response is to argue that there is nothing stopping us from running an algorithm twice on the same input. The first time, we don't know what the algorithm will do and so we might suppose that a particular algorithm produces consciousness. The second time we run it, we do know exactly what it will do, but of course it must still be conscious -- the mere fact that we know what it will do cannot change this, surely. If knowing what it will do in advance doesn't rob the computer simulation of consciousness, why should it rob Searle's wall of the same?<br />
<br />
But of course, running the algorithm twice is only admitting Hofstadter's problem. We can determine mappings for computers without having to do that, but not for walls or other natural objects. On the other hand, it would seem that the logic of the DwP only depends on the logical necessity of the existence of such a mapping, so whether we are in a position to tell what it is in advance may not be important. Even so, if this epistemological difference could be made precise, it might constitute an objective difference in the two kinds of mapping, and that's all we need as grounds for distinguishing "real" computations from "pixie" computations.<br />
<br />
Even though this feels to me like it might be the start of a legitimate objection, perhaps Bishop and I are giving Hofstadter too much credit, as his argument is really not very clear. Indeed, the particular passage this criticism comes from doesn't give much in the way of actual argument at all. What we find instead is rhetoric, little more than a list of incredulous comparisons to reading works of literature or classical music in the random structure of natural objects around us, intended to ridicule the very idea of interpreting natural systems as instantiating arbitrary computations. (In fairness to Hofstadter, he is responding to Searle's vague rhetorical argument rather than Putnam's more precise formal argument, so this kind of response is not unreasonable). In this respect, I think Hofstadter's criticism misses the mark because no objective metaphysical fact hinges on whether "the holes in pieces of Swiss cheese code for the entire history of the United States". Yes, a mapping does exist to make this true, and yes, this mapping is absurd and arbitrary, but nobody thinks the instantiating of a representation of the history of the United States objectively brings something qualitatively new (such as a mind) into existence, so this is not a fair comparison. We are not troubled by our inability to say definitively when and where an account of the history of the United States is instantiated, but we ought to be troubled by our inability to say definitively when and where conscious algorithms are instantiated.<br />
<br />
However, Hofstadter does point the way to a more substantive objection when he says "minds worth calling minds exist only where sophisticated representational systems exist, and no describable mapping that remains constant in time will reveal a self-updating representational system in a car engine or a liver".<br />
<br />
This idea of a stable mapping that remains constant in time is perhaps a plausible angle to investigate. I think what he's getting at here is that we should not admit mappings that are so arbitrary as to only map particular runs of an algorithm on particular inputs to a particular period in the evolution of a physical system. Legitimate mappings should be general and extend to any possible inputs and for an indefinite period of time in the evolution of a physical system (and not just a bracketed temporal window). This kind of objection is taken up in more detail by David Chalmers.<br />
<br />
<h4 style="text-align: left;">
David Chalmers - counterfactuals and CSAs</h4>
David Chalmers wrote a very detailed and thoughtful response to Putnam in his paper <a href="http://consc.net/papers/rock.html">Does a Rock Implement Every Finite-State Automaton?</a> [8]. The paper covers a lot of ground and is highly recommended, but for our purposes the most interesting idea is that a Putnam style mapping fails to be a genuine computation for one reason or another.<br />
<br />
One important reason is that an actual digital computer is adaptive and dynamic. It is capable of performing computations not only on the input it actually receives; it would also have performed sensible computations on counterfactual input that it didn't receive. Putnam's mappings, on the other hand, are brittle. They only cater for one particular series of inputs, one particular run of a program, and have undefined behaviour in other circumstances.<br />
<br />
Another side of this coin is that Putnam's mappings are brittle with respect to the physical circumstances of the system. Putnam mappings can only be made retrospectively on evolutions of systems that have already happened and been recorded. Counterfactually, had the system's state diverged even a little from that specified in Putnam's mapping, then Putnam's mapping would fall apart. The mappings we have for actual computers are robust in that we can say things like "had the voltage in this register been such and such, then that would have corresponded to such and such a logical state".<br />
<br />
In other words, the relationship between physical states and logical states for computers is lawful and robust, in that it caters for a very wide range of logical and physical circumstances (as long as the computer remains intact, at least), but Putnam mappings are brittle and depend entirely on happenstance. This being the case, it would seem we have an objective difference between the two, and so perhaps Putnam's phantom computations should not be regarded as genuine.<br />
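The contrast can be made concrete in code. The following toy sketch is entirely my own illustration (not from Chalmers or Putnam): a genuine FSA implementation is a transition function defined for all states and inputs, whereas a Putnam-style mapping amounts to a lookup table distilled after the fact from one recorded run.

```python
# A toy illustration (my own, not from Chalmers or Putnam) of the contrast
# between a robust implementation of an FSA and a brittle Putnam-style mapping.

# A genuine implementation: a transition *function*. It answers counterfactual
# questions -- for any state and any input, it says what happens next.
def parity_fsa(state, bit):
    # FSA that tracks the parity of the 1s seen so far.
    return state ^ bit

# A Putnam-style mapping: built after the fact from one recorded run. It is
# just a lookup table keyed by time step, valid only for that exact history.
observed_inputs = [1, 0, 1, 1]          # the one run that actually happened
trace, state = [], 0
for t, bit in enumerate(observed_inputs):
    state = parity_fsa(state, bit)
    trace.append(state)
putnam_mapping = dict(enumerate(trace))  # time step -> "logical state"

# The genuine FSA handles counterfactual input; the Putnam mapping cannot.
print(parity_fsa(0, 0))       # 0 -- well-defined for any input whatsoever
print(putnam_mapping.get(9))  # None -- undefined off the recorded trace
```

The transition function supports counterfactuals ("had the input been such and such..."), while the lookup table simply has nothing to say about runs that didn't happen, which is the brittleness Chalmers is pointing at.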
<br />
To answer this objection, Bishop cleverly makes use of a version of Chalmers' own Fading Qualia Argument (FQA). The original FQA was deployed to illustrate the absurdity of biological chauvinism by postulating that brain cells might be replaced iteratively by electronic functional analogues. If we assume that electronics cannot be conscious, we start with a conscious being and we end with an unconscious being that behaves in precisely the same way. It would seem that during this process, qualia somehow fade out gradually, so that at the halfway point the being is only half conscious, only half-perceiving sensory qualia and so on, while being unaware that anything untoward is happening. Chalmers (correctly, in my view) takes this to be absurd, and concludes that consciousness must be a functional phenomenon.<br />
<br />
Bishop's version of the FQA considers a different transition, that of a robot or simulated entity transitioning from full dynamic implementation of an algorithm to one scripted to proceed deterministically and inevitably from one state to the next (just like a Putnam FSA in a rock). At each step in the transition, we simply replace a single conditional branch with a hardcoded state change, so that the transition is perfectly smooth. Again, the behaviour of the algorithm run on the same input is unchanged, and again we are apparently left with the absurd conclusion that the half-way point has a being that is half conscious, half perceiving qualia and so on.<br />
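To make Bishop's move vivid, here is a toy sketch of my own devising (not Bishop's actual construction): a single conditional branch is swapped for a hardcoded state change that coincides with it on the run that actually occurs.

```python
# A toy sketch (my own, not Bishop's) of one step in his version of the
# fading qualia argument: a conditional branch is replaced by a hardcoded
# state change that matches its behaviour on the input actually received.

def step_conditional(state, sensor):
    # Full dynamic implementation: genuinely tests its input.
    if sensor > 0:
        return state + 1
    else:
        return state - 1

def step_hardcoded(state, sensor):
    # Scripted version: the branch is gone, the outcome is baked in.
    # (Valid only because, on the actual run, sensor was always positive.)
    return state + 1

# On the input that actually occurs, the two are behaviourally identical.
run = [3, 7, 2]
a = b = 0
for sensor in run:
    a = step_conditional(a, sensor)
    b = step_hardcoded(b, sensor)
print(a == b)  # True -- yet only one version ever tested anything
```

Repeat this replacement once per branch and you arrive, step by perfectly smooth step, at something like a Putnam FSA: a script that proceeds inevitably from one state to the next.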
<br />
For what it's worth, I don't think this version of the FQA is quite as absurd as Chalmers' original. I can imagine the halfway point might consist of an entity that flickers between consciousness and zombiehood as it alternates between executing conditional and hardcoded state transitions. It's still strange, but not quite as weird as imagining what it would be like to half-perceive qualia while being fully functionally aware of them.<br />
<br />
Chalmers (and also Chrisley [10]) points out that this need not be so surprising. Deleting these conditionals will inevitably correspond to a physical change in the system, and this difference might make the difference in considering whether consciousness is brought forth.<br />
<br />
To this point, Bishop answers that we can instead imagine leaving all the conditional statements in place and instead simply delete (or replace with null operations) the code that will not execute. Now, Bishop argues, the code that actually executes is the same and so we can no longer appeal to a physical change in the system to explain why consciousness might fade.<br />
<br />
I'm not sure this argument succeeds for a couple of reasons. Firstly, I don't think it is possible to delete code without having a physical difference, and as long as there is a physical difference in the system then it is possible to point to that as accounting for the difference between conscious and unconscious systems. Secondly, it's not clear to me that a system with conditionals but with deleted code really corresponds very well to a Putnam style FSA which has no conditionals at all. On the other hand, however, it's hard to credit that the physical presence of code never executed is crucial for consciousness.<br />
<br />
All in all, I'm left with the impression that this whole line of argument is inconclusive. Despite the loose ends, my sympathy actually lies with Bishop but I think reasonable people might with some justification disagree on whether he has proven his case.<br />
<br />
However, Chalmers also presents a related argument which to the best of my knowledge Bishop has not yet addressed. This is the argument that the FSA is just one model of a computation and not necessarily the best one for our purposes. That is, to implement an FSA may not really be all that is required to perform a genuine computation (despite assumptions to the contrary apparently dating back to Turing). To justify this, Chalmers appeals to that difference I noted earlier between the abstract FSA and how computer programs are implemented in practice -- that is that the state of an actual computer has fine-grained content, typically divided into variables and stack pointers and the like, and that there exist meaningful, lawful and causal relations between these sub-states. This is not the case for an FSA where state is associated only with a label and transitions to other states. Chalmers points out that it is possible to build an abstract model of computation that respects not only state transitions but also the content of particular states, and labels this model the Combinatorial State Automaton (CSA). In contrast with FSAs, it is not clear at this time that it is possible to build a mapping between natural physical systems and arbitrary CSAs. If it is not possible, then the computationalist can claim that what it is to be a conscious entity is to implement the right kind of CSA, and that brains and the right kind of AI might do so while rocks would not.<br />
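The difference between the two models can be glossed in code. This toy sketch is my own reading of Chalmers' distinction: FSA states are opaque labels related only by a transition table, while CSA states are structured vectors whose components stand in their own lawful dependency relations.

```python
# A toy contrast (my own gloss on Chalmers' distinction) between an FSA,
# whose states are bare labels, and a CSA, whose states are structured
# vectors of substates with their own transition dependencies.

# FSA: states are opaque labels; a table relates whole states to whole states.
fsa_transitions = {("S0", "a"): "S1", ("S1", "a"): "S0",
                   ("S0", "b"): "S0", ("S1", "b"): "S1"}

# CSA: the state is a vector; each component's next value depends lawfully
# on particular other components, not merely on a whole-state label.
def csa_step(state, inp):
    counter, flag = state
    new_counter = counter + 1 if inp == "a" else counter
    new_flag = (new_counter % 2 == 0)   # this substate depends on that one
    return (new_counter, new_flag)

print(fsa_transitions[("S0", "a")])  # S1 -- just a label, no inner structure
print(csa_step((0, True), "a"))      # (1, False) -- structured substates
```

A mapping that makes a rock implement the FSA need only track label changes; a mapping onto the CSA would have to honour the fine-grained causal relations between substates, which is a far more demanding requirement.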
<br />
Chrisley [9] has made similar points, arguing that Putnam's account of causality is too weak, and that unlike a real computer, the state transitions in his projected FSAs are not strongly causal, that is, the physical system being in logical state A does not really cause its transition into logical state B. Again, the gist seems to be that simply implementing (or being interpretable as) an FSA is not enough to be a genuine computation.<br />
<br />
I think these kinds of argument are plausible, but, as noted, it does depend on the impossibility of mapping natural systems to arbitrary CSAs or better accounting for causality in such mappings. In my quote from Searle, he doesn't assume an FSA mapping, and he doesn't assume that counterfactuals are ignored. He only assumes (albeit without justification) that there is some mapping between the state of the system and the operation of an algorithm. It's entirely possible that he is right, no matter what model of computation we adopt or how much stock we place in the mapping of counterfactuals. Searle and his followers take it for granted that his assumption is correct and most computationalists would seem to assume that it is false. Again, my sympathies on this one probably lie with Searle, but it's far from a knock-down argument against computationalism.<br />
<br />
<h3 style="text-align: left;">
Conclusion</h3>
<div>
On balance, I think Putnam, Searle and Bishop have a point. I think there is a problem with computationalism as usually conceived. And yet I still call myself a computationalist! There is a way to accept all these arguments and reconcile them with computationalism without (quite!) accepting the existence of pixies in rocks and walls. I'll explain that in my next post, whenever I get time to write it!</div><div><br /></div><div>[That post is now <a href="http://disagreeableme.blogspot.com/2016/03/rescuing-computationalism-with-platonism.html">here</a>]</div>
<div>
<br /></div>
<h3 style="text-align: left;">
References</h3>
<ol style="text-align: left;">
<li>Putnam, Hilary (1987). Representation and Reality. MIT Press.</li>
<li>Searle, John R. (1992). The Rediscovery of the Mind. MIT Press.</li>
<li>Bishop, John Mark (2003). Dancing with pixies: Strong artificial intelligence and panpsychism. In John M. Preston & Michael A. Bishop (eds.), Views Into the Chinese Room: New Essays on Searle and Artificial Intelligence. Oxford University Press </li>
<li>Bishop, John Mark (2002). Counterfactuals cannot count: A rejoinder to David Chalmers. Consciousness and Cognition 11 (4):642-52. </li>
<li>Bishop, John Mark (2009). Why computers can't feel pain. Minds and Machines 19 (4):507-516.</li>
<li>Bishop, John Mark (2009). A Cognitive Computation Fallacy? Cognition, Computations and Panpsychism. Cognitive Computation 1 (3): 221-33</li>
<li>Hofstadter, D.R. & Dennett, D.C. (eds.) (1981). The Mind's I: Fantasies and Reflections on Self and Soul. New York, Basic Books (Chapter 22).</li>
<li>Chalmers, David J. (1996). Does a rock implement every finite-state automaton? Synthese 108 (3):309-33.</li>
<li>Chrisley, R. (1995). Why everything doesn't realize every computation. Minds and Machines 4:403-20.</li>
<li>Chrisley, R. (2006). Counterfactual computational vehicles of consciousness. Toward a Science of Consciousness, April 4-8 2006, Tucson Convention Center, Tucson, AZ, USA.</li>
</ol>
<br />
</div>
</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com143tag:blogger.com,1999:blog-5801278565856116215.post-65441236239876165652014-06-09T00:46:00.001+01:002014-06-09T00:46:50.246+01:00Book Review: Longing to Know<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="http://veritasdomain.files.wordpress.com/2011/10/longing-to-know-esther-meek.jpg?w=500" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://veritasdomain.files.wordpress.com/2011/10/longing-to-know-esther-meek.jpg?w=500" height="200" width="134" /></a></div>
As a pretty convinced atheist, I am not the target audience for Longing to Know, by Esther Lightcap Meek, which seeks to explain a view of knowledge in general, but in particular of how we might come to know God. When it was nominated for discussion by my philosophy reading group, I hoped that I would at least get some interesting discussion of epistemology out of it, and to an extent I did, but my experience of the book could be more broadly characterised by frustration and a Longing to Go (away and do something else).<br />
<br />
<br />
<a name='more'></a><br />
<br />
My exasperation is not necessarily the fault of the author. We could not expect most theists to enjoy reading The God Delusion, so it is not surprising that I found the book to be rather unsuited to me. I can at least praise Meek's clear language and pleasant, welcoming attitude. I get the sense that if I were to have a conversation with her I would find her a charming and intelligent person. Though she would certainly disagree with me on almost everything I suspect she would do so in a thoughtful and friendly manner.<br />
<br />
To begin with, the author devotes a few chapters to establishing points I actually do agree with. The question of whether God exists is an important one that should concern every thoughtful person. Truth is not relative - truth claims are usually either correct or incorrect and we are not all entitled to our own reality. Knowledge does not require certainty, indeed it is unreasonable to be certain of pretty much anything (Am I sure about that? No!).<br />
<br />
However, in attacking certainty, the author thinks she is illustrating a problem with the more traditional Platonic conception of knowledge as <a href="http://en.wikipedia.org/wiki/Justified_true_belief">justified true belief</a> (JTB). Justified true belief, we are to understand, implies that nothing is known unless we are certain. While Meek does give examples, such as Descartes's attempt to build up his worldview from the certainty of the statement "I think, therefore I am", this is not my understanding of JTB at all. Justification does not have to eradicate all doubt; it merely has to establish a high probability of a certain belief being true. Likewise, though knowledge does have to be true, this does not mean that we must be certain of its truth. We can therefore know without being certain that our knowledge is actually knowledge. As such, while I agree that certainty is a red herring, I see no major problem with JTB in this regard.<br />
<br />
This is not the only issue Meek has with JTB. She also argues that JTB is too dry, reducing human experience to a meaningless set of linguistic statements. When we learn to play the piano, we are not acquiring justified true beliefs, but we are acquiring the knowledge of how to play. When we come to know a person well, this knowledge cannot ever be fully captured in a set of sentences. If a picture is worth a thousand words, then lived experience is worth much more.<br />
<br />
In my view, Meek is equivocating. There are at least three types of knowledge, and it is an accident of history that these are given the same name in English: factual knowledge, familiarity and skill know-how are different things. Factual knowledge and familiarity in particular are identified by distinct terms in most languages in my experience, e.g. <i>savoir</i> and <i>connaître</i> in French. JTB is about factual knowledge (<i>savoir</i>). Getting to know a person or a place is familiarity (<i>connaître</i>). Learning to play the piano is acquiring skill know-how. By explaining how JTB fails to capture familiarity or skill acquisition, Meek is criticising it for failing to do a job it was never intended for.<br />
<br />
Coming to know God has aspects that can be understood in the context of each type of knowledge. For truth claims about God, such as that He exists, that He loves us, that He created the world and so on, JTB is in my view just fine. Can we know God in this way? I would say no, because I don't think such claims can be justified and as such I doubt that they are true.<br />
<br />
Can we know God as we know a person? Perhaps we could if He existed. Even if He does not, if God is an illusion, then perhaps it makes sense to say that He can be a familiar illusion. In this case, what you come to know is an aspect of your own mind, not something in the external world. I'm not sure that this is what Meek means, of course.<br />
<br />
What of the acquisition of skills? For skills such as piano playing or juggling, we get objective feedback on our errors allowing us to correct our mistakes and make progress towards an ideal of mastery. When it comes to skills such as biblical <a href="http://en.wikipedia.org/wiki/Exegesis">exegesis</a> or perceiving the signs of God around us, I worry that the only external feedback comes from a community of believers all too eager to perpetuate what may be false beliefs. A believer well-practiced in the process of believing may be seeing a truth to which I am blind, but it seems more probable to my prejudiced viewpoint that they have become expert self-deceivers. <a href="http://en.wikipedia.org/wiki/Pareidolia">Pareidolia</a> is an all too real psychological phenomenon, and it could be that believers such as Meek are cultivating nothing more than the ability to pick out faces in the clouds.<br />
<br />
Since I do not accept the premise that God exists, much less that the Bible is a reliable guide to knowing anything at all (except for some mythology and dubious history), much of the book is irrelevant to me, and what remains is, frankly, boring.<br />
<br />
Meek explains at length, with many anecdotes and allusions to Star Wars and The Hunt For Red October, various aspects of what it is to know people, learn truths and acquire skills. It is explained at length that we can have doubts, beliefs, sudden realisations or a more gradual growth of confidence. We are reminded that we have a responsibility to seek truth, that knowledge rewards us, that knowledge is refined or discarded as we improve our understanding. Sometimes we are misled or come to believe what we want to be true instead of what is true. We often need to be guided towards the truth by teachers or the written word.<br />
<br />
In my view, these observations are not exactly revelatory. They are part of the ordinary experience of knowing, and the fact that Meek devotes a chapter each to pretty much all of these familiar aspects of knowing (and more!) makes reading the book a tedious, monotonous chore for me. If we removed all the anecdotes, allusions to popular culture and metaphors from the book, it seems to me that every important detail of Meek's thesis would fill only a couple of pages. Whatever good ideas there may be are too heavily diluted with unnecessary examples to make reading the book worth the effort.<br />
<br />
Throughout the book, Meek uses the allegory of her auto mechanic Jeff to expound on how these ideas apply to coming to know a person, and by extension to knowing God. This approach is not particularly helpful for me. I already know what it is to know a person, so Meek's extensive examples and anecdotes tell me nothing new. It might be worth the trouble if there were a payoff, but unfortunately I don't think that the allegory works. Jeff is a physical person whom Meek has met and interacted with on multiple occasions, but Meek only knows of God through study of an ancient text of doubtful authenticity and through unconvincing inferences from her observations of the world around her. So, though Meek has explained to me (at length!) how it is that she knows Jeff the auto mechanic, I remain entirely unconvinced that knowing God is the same thing or even possible.<br />
<div>
<br /></div>
The few arguments Meek presents to support the claim that God exists or that the Bible is accurate are nothing new and often circular.<br />
<br />
God must exist, we are told, because there could be no other possible explanation of the sense of human morality. She doesn't bother to address the view that morality has its roots in our evolutionary history as social primates and thousands of years of civilisation and philosophical development.<br />
<br />
The Bible must be the word of God because it is so incredibly accurate in describing the human condition, in particular because we are 'bent' with original sin. There is no mention of the rather obvious point that whatever the Bible gets right in this regard could be put down to ad hoc just-so storytelling based on direct observation of human folly and weakness. No miraculous insight is required.<br />
<br />
The Bible is frequently quoted in support of some point or other, which may be persuasive to her Christian readers but not, I suspect, to anyone else.<br />
<br />
Though it was not an enlightening read for me, the book certainly has its admirers. If you are a Christian, and particularly if you have faith in the Bible and if you have never given a moment's thought to your experience of knowing, you might find much of value in this book. I find it unlikely that any agnostics will be swayed. If you are an atheist (and perhaps even if you are a thoughtful Christian) I doubt this book has much to offer you.</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com7tag:blogger.com,1999:blog-5801278565856116215.post-75974068457712344502014-05-01T10:01:00.000+01:002014-05-01T10:13:36.631+01:00The Ontological Argument<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoYIZ35Wdmy1j8uUJqVqYK-CpzO-2pOh5PbCFqXDPD1ya7-OuJyaSlGgWgj_DDJKKNYwLTZzdK_2C8V7wAKwzoXn1eQ-YG_2WJbNtT12dNOrNzb0SLy0kVmmj6cdysyGa1FCBiWK9uZJc/s1600/ontologicalargument.png" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEhoYIZ35Wdmy1j8uUJqVqYK-CpzO-2pOh5PbCFqXDPD1ya7-OuJyaSlGgWgj_DDJKKNYwLTZzdK_2C8V7wAKwzoXn1eQ-YG_2WJbNtT12dNOrNzb0SLy0kVmmj6cdysyGa1FCBiWK9uZJc/s1600/ontologicalargument.png" height="221" width="400" /></a></div>
A frustrating interruption of Internet service has rendered me unable to work, so I might as well put my time to good use by catching up on blogging on the assumption that I will be able to upload this text at some later time. It also means, unfortunately, that I am unable to use references as I write. As such, I might get some things wrong.<br />
<br />
The issue I want to address is the so-called Ontological Argument for the existence of God, as first proposed by St. Anselm almost a thousand years ago and further developed and promoted by Muslim and Christian philosophers, including relatively recent versions by Alvin Plantinga, Kurt Gödel and others.<br />
<br />
I was asked to write about this some time ago by a fellow commenter on Massimo Pigliucci's (now no longer active) blog 'Rationally Speaking', and I said I would. However I have been slow to do so for a couple of reasons (in addition to the usual procrastination!). The first is that the argument is so obviously nutty that it seems to be scarcely worth the time to address it. The second is that it is actually quite difficult to point out precisely what is wrong with it!<br />
<br />
<a name='more'></a>I will do my best to reproduce the argument as well as I can. It goes something like the following, although my version is somewhat simplified.<br />
<br />
God is defined as a maximally great being.<br />
It is greater to exist than not to exist.<br />
Therefore any maximally great being must exist.<br />
Therefore, by definition, God must exist.
<br />
This is often expanded to talk about existing in all possible worlds. So not only does God exist, he must exist in all possible worlds. A world without God is therefore logically incoherent.<br />
<br />
My brain rejects this argument instinctively. It simply cannot be right. But why, exactly?<br />
<br />
My first reaction is that a similar argument could be used to prove the existence of just about anything. I can define 'foo' as a bar of gold on my desk which exists in all possible worlds. Using similar arguments to those proposed by St Anselm, it seems that foo must exist, by definition, and yet, astoundingly, I fail to find it. From this, I conclude that existence in all possible worlds is generally an incoherent idea. Perhaps the only things that must exist in all possible worlds are the worlds themselves.<br />
<br />
I think one answer that proponents of the ontological argument might have for me is that I'm missing something by failing to deal with the concept of maximal greatness in my analogy. I do not think this is the case: we can conceive of a maximally great beer that must exist in all possible refrigerators and see that dealing with maximal greatness directly does not make the situation much better.<br />
<br />
Another way of looking at the problem is that we have two concepts which are mutually inconsistent: a God that exists in all possible worlds, and a Godless possible world. One of these concepts must be incoherent. Proponents of the ontological argument simply prefer the former, while opponents prefer the latter. Nothing is yet proven either way, so the ontological argument fails.<br />
<br />
However, I find it hard to see what grounds there are to think that a Godless possible world is an incoherent concept. It seems on the face of it to be a perfectly reasonable proposition. The only argument I have seen against it is the ontological argument, but to use that would be circular.<br />
<br />
I'm a software developer by trade, and in my view one very good way to find out if some idea is incoherent is to try to model it with code. As it happens, we run computer simulations of possible worlds all the time, often with no concept of God. When playing computer games or running scientific simulations of physical processes, I have heard of programs crashing because of attempts to divide by zero, but never have I seen an error log mentioning a GodNullException on line 42.<br />
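To make the point concrete, here is a minimal sketch (my own toy illustration in Python, not any real game engine or scientific package) of a "possible world" being simulated with no concept of God anywhere in the model:

```python
# A toy "possible world": two particles falling under gravity, stepped
# forward with Euler integration. The point is that nothing in the model
# refers to God, and the simulation runs to completion anyway.

def step(positions, velocities, dt=1.0, g=-9.8):
    """Advance the toy world by one time step."""
    new_velocities = [v + g * dt for v in velocities]
    new_positions = [p + v * dt for p, v in zip(positions, new_velocities)]
    return new_positions, new_velocities

positions, velocities = [100.0, 50.0], [0.0, 0.0]
for _ in range(3):
    positions, velocities = step(positions, velocities)
# No GodNullException: the Godless world is modelled without incident.
```

The world here is trivially simple, but nothing changes in principle as the model grows more elaborate: a Godless world remains perfectly coherent to model.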
<br />
If I define a unicorn as a magical horselike creature with a horn on its head, I don't need to append "which exists" to this definition. No purpose is served by this meaningless elaboration. I first define "unicorn" and only then assert that unicorns exist or not. Unless a concept is logically necessary, existence is a contingent predicate, not a baked-in assumption.<br />
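Programming languages enforce exactly this discipline: defining a concept and asserting that an instance of it exists are separate operations. A quick sketch (my own illustration; the name "Amalthea" is just a placeholder):

```python
# Defining a concept does not assert its existence. The class below is a
# definition of "unicorn"; appending "which exists" to the definition
# would add nothing, because instances exist only if something actually
# constructs one.

class Unicorn:
    """A magical horselike creature with a horn on its head."""
    def __init__(self, name):
        self.name = name

herd = []                         # the concept is defined, yet no unicorn exists
herd.append(Unicorn("Amalthea"))  # existence is a separate, contingent step
```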
<br />
I think on the whole it is better not to make the existence of a concept part of the definition of that concept. Breaking this rule can lead to the kind of logical mistakes evident in the ontological argument.<br />
<div>
<br /></div>
</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com39tag:blogger.com,1999:blog-5801278565856116215.post-76861867893760779292014-04-15T21:43:00.000+01:002014-04-15T21:43:13.548+01:00The Moral Landscape Challenge<div dir="ltr" style="text-align: left;" trbidi="on">
<i>This is my entry to Sam Harris's Moral Landscape Challenge. Needless to say, it didn't win, but I'm reasonably happy with it nonetheless.</i><br />
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="http://www.randomhouse.com.au/content/titles/9780552776387.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" src="http://www.randomhouse.com.au/content/titles/9780552776387.jpg" height="320" width="209" /></a></div>
<br />The Moral Landscape (TML) is engagingly written and cleverly argued. Harris starts with the assumption that morality concerns maximising the well-being of conscious creatures (let’s call this Harris’s axiom). Much of what follows is laudable, but there are unavoidable philosophical problems with the notion that science can determine human values. Yes, science can in principle give us answers we can use to improve the human condition. Fully embracing Harris’s axiom, this is the application of science to standard consequentialism and subject to all the same philosophical criticisms.<br /><br />It is also a relatively trivial idea, and hardly new. If we are to take TML seriously, we must assume it makes a more profound claim: that there are usually objectively correct answers to moral dilemmas and that science can find them.<div dir="ltr" style="line-height: 1.15; margin-bottom: 0pt; margin-top: 0pt;">
</div>
<a name='more'></a><br />
The most serious objections to this claim (The Value, Persuasion and Measurement Problems) have already been raised and I will not reiterate them here. To these I would add The Aggregation Problem: it is not clear how ‘well-being’ ought to be aggregated across a population of individuals. Ought we to maximise average or total well-being, and how concerned ought we to be with inequality of distribution?<br /><br /><div>
Rather than rehash points covered elsewhere, this short essay will focus on some of Harris’s counter-arguments.<br /><br />The analogy to medicine is the most effective of these. By substituting ‘health’ for ‘well-being’ and ‘specific individuals’ for ‘conscious creatures’ in his axiom, Harris argues that any objections to his proposed science of morality must also apply to medicine, and that there is therefore a double standard at play.<br /><br />However, the science of medicine does not seek to maximise health, nor does it claim to identify objectively correct health choices. Instead, it studies how human biology can be manipulated to attain desired goals. If we do X, we will get result Y. If we do A, we get result B. When faced with the choice of undertaking a futile course of chemotherapy or accepting a quick death without it, we do not meaninglessly ask which option maximises health. Medicine as a science gives us useful information, but it is up to us to decide which outcomes we prefer.<br /></div>
<div>
For medical dilemmas, the patient usually has the final say, but moral values usually pertain to how people deal with each other. More than one viewpoint is relevant, and these often clash. The claim that science can determine human values is therefore the claim that science can resolve these disputes, but it cannot, any more than medicine can tell us whether we ought to value longevity or quality of life.<br /><br />To counter the Value Problem, Harris argues that we can simply define morality to fit his axiom, just as we define certain terms in physics. We should not be overly concerned if this definition is not adopted by laymen for the same reasons that we do not much care if mystics have an idiosyncratic interpretation of ‘energy’. This move parallels that of compatibilists in redefining ‘free will’ to conform to determinism or naturalism. In the context of the free will debate, Harris rejects this tactic as “changing the subject” while apparently oblivious to the fact that he is doing much the same here. ‘Morality’ already has a clear meaning as the idea that there is a specific set of values we ought to embrace. Without a convincing demonstration that the foundation of his values is correct, Harris has simply changed the subject.<br /><br />A weaker counter-argument arises from the conflation of two distinct roles for values in science: ideals which benefit the conduct of science and values as the objects of scientific scrutiny. For Harris, the fact that science values “evidence, logical consistency, parsimony, and other intellectual virtues” is intended to persuade us that science might be able to identify objectively correct moral values. This equivocative argument is obviously flawed, but brevity does not permit a deeper discussion here.<br /><br />The argument from worst possible misery deserves a mention. TML discusses a hypothetical state where everybody is living in abject misery and argues that we can all agree that it is certainly good to avoid this. 
It is therefore established that there are cases where right and wrong are uncontroversial and self-evident. By extrapolation, there really are right and wrong answers to moral questions. However, we can also find wide agreement that the taste of apple juice is preferable to that of urine. The conclusion that aesthetic preferences are objective truths does not follow. As Harris acknowledges, there is little reason to see moral preferences as anything more than aesthetic, albeit with higher stakes. If so, objective morality is on shaky ground.<br /><br />‘The Moral Landscape’ refers to a ‘landscape’ in parameter space forming a map of the possible ways our values could be configured so as to maximise well-being. There may be many peaks corresponding to utopian societies, and many valleys corresponding to universal misery. Harris admits that sometimes different peaks may be objectively equally viable, but the problems inherent in defining, quantifying and aggregating well-being indicate that we may not even be working from the same map. If, even given perfect empirical understanding, we still cannot agree on the relative heights of two peaks, then the profound interpretation of TML is defeated and only a trivial and uninteresting argument remains.<br /><br />Most agree that child abuse is bad and that compassion is good. If the approach of TML is to be relevant it can only be in resolving the more intractable moral dilemmas, and these are where different ways of measuring and aggregating well-being will lead to different conclusions. It is not enough for science to make the easy decisions for us; it has to be able to deal with the hard ones.<br /><br />Morality is an intuition arising from evolved emotions such as guilt or self-righteousness. It is these emotions that drive us and that account for the imperative we feel, the ‘ought’. Perhaps morality itself simply has no objective basis. 
Science can help to build a consensus, but it cannot find objective moral truth.</div>
</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com3tag:blogger.com,1999:blog-5801278565856116215.post-11700415652302767372013-12-13T13:46:00.000+00:002013-12-15T20:07:15.914+00:00The Universe is Made of Mathematics<div dir="ltr" style="text-align: left;" trbidi="on">
<table cellpadding="0" cellspacing="0" class="tr-caption-container" style="float: left; margin-right: 1em; text-align: left;"><tbody>
<tr><td style="text-align: center;"><a href="http://web.mit.edu/physics/images/faculty/tegmark_max.jpg" imageanchor="1" style="clear: left; margin-bottom: 1em; margin-left: auto; margin-right: auto;"><img border="0" src="http://web.mit.edu/physics/images/faculty/tegmark_max.jpg" /></a></td></tr>
<tr><td class="tr-caption" style="text-align: center;">Max Tegmark</td></tr>
</tbody></table>
A couple of years ago I was reading <a href="http://www.amazon.com/Anathem-Neal-Stephenson/dp/006147410X">Anathem by Neal Stephenson</a> and a number of ideas started to click into place for me. Without going into the novel too much (spoilers), it prompted me to think about the nature of reality and why it might be that the universe exists.<br />
<br />
Over the course of a few sleepless nights, it all came together and it seemed to make so much sense that I could entertain no doubt: the universe was what I thought of as a "Platonic algorithm".<br />
<br />
To my knowledge, this idea had never before been proposed, and a bit of half-hearted searching at the time turned nothing up. I became more convinced of my idea over time and started developing an interest in philosophy so as to learn how to communicate and argue for the idea.<br />
<br />
As my research ramped up, I inevitably came across <a href="http://en.wikipedia.org/wiki/Max_Tegmark">Max Tegmark</a>'s <a href="http://en.wikipedia.org/wiki/Mathematical_universe_hypothesis">Mathematical Universe Hypothesis</a> (MUH), and with a curious mixture of disappointment and vindication realised that it was essentially the same idea.<br />
<br />
Nevertheless, Tegmark's idea was still relatively little known, so I decided I would devote some time to building up a rigorous argument with a view to perhaps writing a book on the topic some day. That is actually the single most important reason I started this blog, as many of the arguments I have outlined in my posts will serve to support the MUH.<br />
<br />
I have now learned (<a href="http://rationallyspeaking.blogspot.co.uk/2013/12/mathematical-universe-i-aint-convinced.html">courtesy of Massimo Pigliucci's blog Rationally Speaking</a>) that Tegmark is in fact about to publish<a href="http://www.amazon.co.uk/Our-Mathematical-Universe-Ultimate-Reality/dp/0307599809"> the popular science book that is so sorely needed</a>. I guess I can put aside that ambition for now.<br />
<br />
But it's also time for me to put this blog to the use it was originally intended for -- enough beating around the bush!<br />
<br />
Perhaps my hesitancy to address the subject arises out of the fact that it at first seems completely mad: I am utterly convinced that the universe is made of mathematics and that the concept of physical reality is incoherent.<br />
<br />
Now let me try to explain why.<br />
<br />
<span style="font-size: large;">Why the Abstract Feels Physical</span><br />
<br />
I will freely admit that this at first seems like a meaningless platitude to say "the universe is mathematics", akin to statements such as "God is Love". Massimo Pigliucci quite rightly stresses on <a href="http://rationallyspeaking.blogspot.co.uk/2013/12/mathematical-universe-i-aint-convinced.html">his recent blog post</a> that we are not claiming that reality is best described by mathematics -- the claim is literally that the fundamental substance of reality is mathematical structure.<br />
<br />
As he pointed out, it's hard to see how this can be. How can mathematics possibly explain substance? Abstract concepts such as mathematics are surely entirely different from physical stuff, and to claim the two are the same seems to be perverse.<br />
<br />
From Pigliucci's description, Tegmark's answer to this criticism seems rather weak. He insisted that electrons are mathematical objects having mathematical properties, but apparently failed to provide a convincing reason not to regard them as physical objects having physical properties which can be described mathematically. I can see why Pigliucci feared he may have been making a category mistake.<br />
<br />
I would take another tack. I would argue that the intuition that mathematics cannot be the stuff of, well, stuff, arises from the false belief that this is a physical universe (and indeed that the physical universe is a concept that even makes sense).<br />
<br />
But before I elaborate on this, let me state plainly that I believe the argument in favour of the mathematical universe rests on three crucial premises.<br />
<br />
1) All mathematical objects exist abstractly and independently of minds (mathematical Platonism)<br />
2) The mind is a computational process (The Computational Theory of Mind or CTM)<br />
3) The universe behaves according to laws of physics which are expressible mathematically (metaphysical naturalism)<br />
<br />
I have made arguments in favour of all of these premises previously on the blog (<a href="http://disagreeableme.blogspot.co.uk/2013/10/mathematical-platonism-is-true-because.html">Platonism</a>, <a href="http://disagreeableme.blogspot.co.uk/search/label/strong%20ai">CTM</a>, <a href="http://disagreeableme.blogspot.co.uk/2012/06/super-naturalism.html">Naturalism</a>), so for now I am going to assume them to be true.<br />
<br />
How can I seriously doubt that the universe is physical? Two powerful analogies help to explain this.<br />
<br />
One hypothesis gaining currency in recent years is the idea that the universe is not physical but a computer simulation, as <a href="http://www.simulation-argument.com/">argued rather interestingly by philosopher Nick Bostrom</a>. If Bostrom is right, the physical universe may not exist and the fundamental stuff of the universe could be information, i.e. the bits flowing through a computer program.<br />
<br />
Another idea, explored in the book <a href="http://www.amazon.co.uk/Sophies-World-Novel-History-Philosophy/dp/1857992911/ref=sr_1_sc_1?s=books&ie=UTF8&qid=1386941406&sr=1-1-spell&keywords=sophie%27s+wold">Sophie's World</a> and elsewhere, is the idea that our world may not physically exist because we may be fictional characters living in a fictional world being described by some author. From the point of view of a fictional character, the world seems real, so none of us can really know for sure that we are not such a character.<br />
<br />
In both of these ideas, the universe is not real (not physically at any rate), but this fact is forever hidden from us.<br />
<br />
The MUH is quite similar in many ways. The crucial difference is that it removes the dependence on a greater reality. Unlike the computer simulation idea, we need no external hardware to support us. Unlike the characters in a novel, we are genuinely conscious. Unlike both, there is no creator, no programmer or author.<br />
<br />
We have no need of a programmer or author, no cosmic computer or reader. The bedrock of our existence is mathematical Platonism, and unlike all other explanations this is sufficient as an ultimate cause. Mathematical objects are not created and need nothing to sustain them. They exist necessarily of their own nature.<br />
<br />
Let's first consider the idea that the universe is a simulation. If the physics of the universe is computable (as it seems to be), then it could certainly be simulated by a computer of sufficient power, even if such a computer would be unfeasible to construct in this universe. It therefore follows that in principle it is entirely possible that it and we ourselves are simulated, virtual entities (although if you doubt the Computational Theory of Mind then this does not follow, since we are evidently conscious and on that view simulated minds would not be).<br />
<br />
However all computer programs are mathematical structures, and as such, Platonism holds that all computer programs exist in the abstract. It follows that even if no computer could ever be built to run the simulation of the universe, this program exists as a mathematical structure, and within this structure can be found perfect descriptions of all the objects in our universe, including our minds, our thoughts, etc.<br />
<br />
But now, if we start thinking about an abstract mathematical object containing all of our thoughts and inner experiences and determining our life stories, it seems to me that we are getting close to the idea that we may all be characters in a work of fiction.<br />
<br />
Unlike in fiction, what happens in our universe is determined by mathematical laws, and these have the regularities necessary to support life, brains and real intelligence. I don't believe characters in fiction are actually conscious because their actions are not determined by internal genuine thought processes. There is consciousness there, but it is not in the minds of the characters but in that of the writer. In contrast, our consciousness is our own and is the product of our own brains.<br />
<br />
But like the lives of characters in a novel, our lives and our thoughts are all mapped out and there for the reading. We are implicit in the structure of this mathematical object, and in principle our stories could be discovered by simulating it on a powerful computer. This is just like the way we can explore mathematical objects such as the <a href="http://en.wikipedia.org/wiki/Mandelbrot_set">Mandelbrot set</a> with computers.<br />
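As a minimal sketch of what such exploration looks like (my own toy code, using the standard escape-time test for Mandelbrot membership):

```python
# Escape-time test: c belongs to the Mandelbrot set if the iteration
# z -> z**2 + c, starting from z = 0, remains bounded forever. Capping
# the number of iterations gives a computable approximation.

def in_mandelbrot(c, max_iter=100, bound=2.0):
    z = 0
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > bound:
            return False  # escaped: c is outside the set
    return True           # still bounded after max_iter: treat c as inside

# 0 and i are inside the set; 1 escapes after a few iterations.
inside = in_mandelbrot(0)    # True
outside = in_mandelbrot(1)   # False
```

No one "created" the set by running this code; the program merely surveys a structure that is already implicit in the simple rule z &rarr; z&sup2; + c.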
<br />
We are the conscious characters in a cosmic narrative determined by no author but the laws of physics. Our lives exist even if no simulation is run, just as the stories of fictional characters continue to exist even while nobody is reading the novel and just as the Mandelbrot set has always existed, even before it was discovered.<br />
<br />
So think of our universe and our life stories as being something like an enormous fractal structure arising from some simple mathematical rules. It's a beautiful, amazing, complex, surprising thing, but it needs no creator or sustainer.<br />
<br />
<span style="font-size: large;">The Incoherence of the Physical Universe Hypothesis</span><br />
<br />
Far more than being merely a plausible account of reality, I view the Mathematical Universe Hypothesis as being necessarily true, because it reveals the incoherence of the concept of a physical universe.<br />
<br />
Given that the universe obeys mathematical physical laws (naturalism), there must be a mathematical object (given Platonism) which perfectly describes the universe and which contains within it structures analogous to all objects within the universe, including ourselves.<br />
<br />
What's more, given the computational theory of mind, those structures corresponding to our minds are necessarily conscious. As such, even if there is such a thing as a real, physical universe, there must also be an isomorphic (having precisely the same form or structure) abstract non-physical universe. There is a physical you and an abstract you, and both have exactly the same experiences and neither has any way of knowing which universe they find themselves in.<br />
<br />
In fact, there is no observer anywhere who can distinguish between the physical and the abstract universes. There is nothing we can say about the physical universe that is not also true about the abstract universe, except for the fact that it is physical.<br />
<br />
So let's unpack that. What does physical mean? In everyday speech, it is used to distinguish between objects we can interact with directly (such as rocks) and objects we can only think about and discuss (such as numbers). For something to be physical it must be present at some time and place within the universe, and for something to be abstract it must exist outside of space and time.<br />
<br />
So what do we mean when we claim that a universe is physical? It exists in time and space? But it doesn't -- it contains time and space within it.<br />
<br />
Ok, so let's say that the universe is a special case, that it is physical not because it is inside spacetime but because it contains spacetime.<br />
<br />
So now let's consider a hypothetical abstract universe other than our own and ask ourselves whether it is physical. (For illustrative purposes let's say that universe is Star Wars but let's assume that the universe is not a work of fiction but a mathematical object in much the same way that I'm claiming that our universe is a mathematical object).<br />
<br />
This universe is not present within the spacetime of our universe, so from that point of view it is not physical. It does contain its own spacetime, so by our broader definition that would make it physical.<br />
<br />
But it's just a made-up universe! Our instinct is that it ought not to be considered physical. So let's say that only universes which contain the spacetime of our own universe are physical.<br />
<br />
So now, only our own universe can be physical, by definition. This seems to be a rather unsatisfactory result, because there are very good reasons (such as the anthropic principle) to believe there may be other universes. We want that option to be open to us, if only so we can discuss the possibility.<br />
<br />
It's also a patently subjective definition of physical. From the point of view of an observer in this universe, the Star Wars universe is not physical. But from the point of view of Han Solo, our universe is not real.<br />
<br />
It seems to me that the only way out of this mess is to realise that the application of the concept of physicality to a universe is a category mistake. Physicality as a concept only makes sense within the context of a given universe.<br />
<br />
For example, you would no doubt regard yourself as physical but Luke Skywalker as abstract. However, Luke Skywalker is physical to Han Solo, while Han would consider you to be non-existent. There is no objective, universe-agnostic way to say that you are really physical but Luke Skywalker is not.<br />
<br />
So given that physicality as applied to universes seems to be incoherent, and given that physicality is the only (completely undetectable) property that distinguishes the mathematical universe from the physical universe, it seems to me that the only sensible conclusion is that only the mathematical version of our universe exists. This accounts for the existence of the universe, fine-tuning and everything we observe.<br />
<br />
The Physical Universe Hypothesis is therefore unnecessary, redundant and incoherent.<br />
<br />
<span style="font-size: large;">Reasons to Believe</span><br />
<br />
I have explained why I think the Mathematical Universe Hypothesis follows necessarily from naturalism, Platonism and the computational theory of mind, however there are plenty of people who are skeptical of some or all of these propositions. Nevertheless, I think there are independent reasons to find the MUH plausible.<br />
<br />
Firstly, and perhaps most importantly, it explains why the universe exists. It does not tell us what caused the Big Bang, or if the Big Bang had a cause at all, but it does explain why there is something rather than nothing, without appeal to a creator or any other unsatisfying ultimate cause. This echoes <a href="http://disagreeableme.blogspot.co.uk/2012/04/beginnings-and-beginnings.html#more">my rebuttal</a> to the Kalam Cosmological argument.<br />
<br />
For a creator God, we are left to ask who created the creator -- but if the universe is a mathematical object, it needs no creator (on Platonism at least), so this is a very satisfying answer to that eternal question. It has always existed and will always exist outside of space-time as a mathematical construct.<br />
<br />
It also provides a powerful explanation for fine-tuning. <a href="http://disagreeableme.blogspot.co.uk/search/label/anthropic">As discussed previously</a>, this universe seems to be perfectly calibrated to support life. Much attention in discussions of fine-tuning is focused on the physical constants such as the charge of the electron or the speed of light, etc, but very rarely is it asked why the equations and the laws of physics themselves take the form they do. Why could the universe not be completely different, not just having different constants but having completely unrecognisable physical laws?<br />
<br />
If all possible universes exist, then we have our answer. Our universe is fine-tuned because it is one which has the ability to support conscious thought selected from an infinite multitude of mathematical structures, most of which are lifeless.<br />
<br />
Unlike other multiverse hypotheses, which propose that there might merely be a great number of universes, the MUH posits that all possible universes exist. This is actually simpler because, as Tegmark says (and <a href="http://disagreeableme.blogspot.co.uk/2012/05/part-is-more-complex-than-whole.html">as explained previously on this blog</a>), there are no free parameters. We have no reason to wonder why universe X exists but universe Y does not. If all universes exist, nothing is arbitrary. We have an ultimate explanation of everything.<br />
<br />
There's something attractive to me about the idea that all universes exist. After all, what's to stop any given possible universe from existing? It is not subject to the laws of other universes. No law of this universe can prevent another entirely causally disconnected universe from existing. Even if there is a multiverse with its own meta-laws (e.g. the String Theory multiverse), what's to stop another multiverse with different meta-laws from existing? Even without asserting that the universe is made of mathematics, it seems to me to be perfectly sensible to propose that all possible universes exist. Why not?<br />
<br />
Finally, the idea that the universe is literally a mathematical object explains what physicist Eugene Wigner called "<a href="http://en.wikipedia.org/wiki/The_Unreasonable_Effectiveness_of_Mathematics_in_the_Natural_Sciences">the unreasonable effectiveness of mathematics</a>" in modelling and predicting the natural world, often leading physicists to empirical discoveries they could not otherwise have made. If the universe really is a mathematical object, it is hardly surprising that mathematics should be effective in describing it.<br />
<br />
There are objections to the Mathematical Universe Hypothesis of course, and I intend to return to them in time.</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com29tag:blogger.com,1999:blog-5801278565856116215.post-81448254436997343082013-10-02T15:04:00.000+01:002013-10-02T17:01:36.888+01:00Mathematical Platonism Is True Because it is Useful<div dir="ltr" style="text-align: left;" trbidi="on">
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Mandel_zoom_00_mandelbrot_set.jpg/640px-Mandel_zoom_00_mandelbrot_set.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="240" src="http://upload.wikimedia.org/wikipedia/commons/thumb/2/21/Mandel_zoom_00_mandelbrot_set.jpg/640px-Mandel_zoom_00_mandelbrot_set.jpg" width="320" /></a></div>
<br />
I have briefly discussed mathematical Platonism in <a href="http://disagreeableme.blogspot.co.uk/2013/09/dualism-is-not-dirty-word.html">previous</a> <a href="http://disagreeableme.blogspot.co.uk/2012/05/existence-and-existence.html">articles</a>. This post is going to contain my current thoughts about it.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
Mathematical Platonism is distinct from classic Platonism (so nothing about ideal forms, for instance) and holds that three propositions are true.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
1. Mathematical objects exist.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
2. Mathematical objects are independent of human beings.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
3. Mathematical objects are abstract.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
Or, in condensed form, mathematical objects exist abstractly and independently of human beings. All possible mathematical objects exist, have always existed and will always exist, even if no mathematician ever ponders them.<br />
<br />
In this post, I will argue that while there is no fact of the matter regarding whether mathematical objects exist, it is sensible and useful to treat them as if they do, and that this is enough to justify mathematical Platonism.<br />
<br />
<a name='more'></a></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif;">
<span style="font-size: large;">Mathematical Platonism is not Mystical</span><br />
<div style="font-size: 13px;">
<br /></div>
</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
To me, mathematical Platonism seems correct as long as we don't confuse the notion of abstract existence with physical existence. As discussed <a href="http://disagreeableme.blogspot.co.uk/2012/05/existence-and-existence.html">previously</a>, there can be more than one kind of existence.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
In particular, nobody is saying that mathematical objects are floating around in some ghostly mathematical realm of existence (even though I believe this is how Plato visualised it). Instead, my view is that it's a completely different category of existence, and if you disagree with me then you simply have a narrower concept of what the term "existence" means. </div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
For example, it just seems to make sense to assume that when I say "There exists no rational square root of 3", the verb "exists" is meaningful and used appropriately. It is in this sense alone that mathematical objects exist. There really is nothing mystical about it, as much as some people perceive it this way.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif;">
<div style="font-size: 13px;">
<br /></div>
<div style="font-size: 13px;">
It's not hard to see why this misconception about mathematical Platonism is so pervasive. The original motivation for positing the existence of mathematical objects was to explain by analogy to physical objects how mathematicians could perceive mathematical truths. It was supposed that mathematical objects were real, and could be perceived directly by the mind of mathematicians with some special kind of sense much as we see tables and chairs with our vision.</div>
<div style="font-size: 13px;">
<br /></div>
<div style="font-size: 13px;">
There were also ideas that we had originally come into the world from the realm of mathematics, and our perception of mathematical objects was some sort of memory of the things we had seen there before we had been born.</div>
<div style="font-size: 13px;">
<br /></div>
<div style="font-size: 13px;">
I don't think either of these ways of thinking about it makes much sense, and this is not at all what I am proposing. In many ways it's unfortunate that the terminology of mathematical realism has become so inextricably bound to the philosophy of Plato, as much of his metaphysics doesn't really conform to the views of most modern mathematical Platonists.</div>
<div style="font-size: 13px;">
<br /></div>
<span style="font-size: large;">No Fact of the Matter</span><br />
<div style="font-size: 13px;">
<br /></div>
</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
Before I go any further, I would like to make clear that I don't actually think there is a single objectively correct answer to the question of whether mathematical objects exist. The idea of existence is dubious when applied to cases like this. It is at its most coherent when applied to physical objects in the universe such as the chair I am sitting on as I type. It seems without question that this chair exists. I can see it and touch it, and so could you if you were beside me. It is in contexts such as this that the concept of existence first arose.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif;">
<div style="font-size: 13px;">
<br /></div>
</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
There are many other kinds of entities which are not physical objects. Your mental experience, composed of thoughts, hopes and dreams, is a good example. Though there may be physical analogues of these (e.g. the firing of neurons), it seems odd to suppose that the firing of the neurons is the same thing as the feelings they excite.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
And yet few of us would deny that our minds or conscious experiences are real. If this is so, these phenomena must exist in some sense, although not in the same way that a chair does, as they are accessible only to ourselves.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
Another kind of existence might be that of very high level emergent phenomena, such as nation states, languages, currencies, etc. Most of us feel that these things exist in a sense, even if they are only socially constructed. Multiple observers might agree that they exist, and yet nobody can perceive them directly. As such, it seems to me that whatever kind of existence they may have, it is not physical.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
Examples abound in nature too. Photosynthesis is a process, not a physical thing. The idea of a species is a fuzzy high level descriptor which we apply to many different individuals. Genes are not single objects but patterns we find repeated in the cells of related organisms. All of these things have a basis in the physical, but they are not tangible objects, so they do not exist in the same way as a chair.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
The idea of physical existence also breaks down when considering universes. Universes are not physical objects; they are the containers of physical objects. For a physical object to exist is for it to be present within the universe -- observers within that universe must be able to perceive it or its effects with their senses. The question of whether other universes exist is meaningless if we think of them as physical objects, because from our perspective they do not exist, as they are entirely causally disconnected from us. On the other hand, our universe doesn't physically exist from the point of view of an observer in another hypothetical universe.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
And yet, most of us believe that the universe exists!</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
<br /></div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
So we ought to be open to the idea that there are things other than the physical, including mathematical objects. Whether the concept of existence is applicable in all cases is a matter only of linguistic definition. Clearly, each of these objects, processes and properties has an existence in <i>some</i> sense, even if not in the same way that a chair does. There is no fact of the matter - whether this pseudo-existence is real or not depends only on how one defines existence. Definitions ought to be judged on how well they capture intuitive concepts, on how consistent they are and on how useful they are for communication, and it is on these grounds that we should decide whether mathematical objects exist or not.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif;">
<div style="font-size: 13px;">
<br /></div>
<div style="font-size: 13px;">
For me, there are two particularly striking arguments that convince me that mathematical Platonism is a useful and consistent way to think about mathematical objects.</div>
<div style="font-size: 13px;">
<br /></div>
<span style="font-size: large;">The Argument from Exploration</span><br />
<div style="font-size: 13px;">
<br /></div>
</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
Entities such as the <a href="http://en.wikipedia.org/wiki/Mandelbrot_set">Mandelbrot set</a> and other such visually striking mathematical objects are particularly good demonstrations of the intuition that mathematical concepts exist. The Mandelbrot set is defined by deceptively simple mathematical rules. I imagine that Benoit Mandelbrot, the mathematician who first stumbled upon these rules, found himself surprised by the complexity and beauty he saw unfolding before him as he explored the consequences of his simple premises (and even if he were more sanguine, I can certainly imagine being pretty excited if I were in his place).<br />
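Those deceptively simple rules fit in a few lines of code. As a minimal sketch in Python (my own illustration, not part of the original post): a point c belongs to the Mandelbrot set just when iterating z &rarr; z&sup2; + c from zero never escapes.

```python
def in_mandelbrot(c, max_iter=100):
    """Return True if c appears to be in the Mandelbrot set.

    Iterate z -> z**2 + c from z = 0; if |z| ever exceeds 2,
    the orbit escapes to infinity and c is outside the set.
    """
    z = 0j
    for _ in range(max_iter):
        z = z * z + c
        if abs(z) > 2:
            return False
    return True

# The whole set lives inside |c| <= 2, yet its boundary is infinitely
# intricate -- the "surprise" hiding in two lines of rules.
print(in_mandelbrot(0j))        # True (the origin is inside)
print(in_mandelbrot(1 + 1j))    # False (escapes within two iterations)
```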
<br />
We see the same kind of thing in <a href="http://en.wikipedia.org/wiki/Conway's_Game_of_Life">Conway's Game of Life</a>, a simple mathematical system which turned out to have exciting and profound consequences and phenomena to be explored. Many hobbyists and mathematicians are still exploring this and other systems like it. Mathematician Stephen Wolfram (creator of the well-known Mathematica mathematics package) published a book in 2002, <a href="http://en.wikipedia.org/wiki/A_New_Kind_of_Science">A New Kind of Science</a>, all about exploring the kind of complex and surprising phenomena we can find emerging from even simpler such systems.<br />
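The Game of Life's rules are just as terse: a live cell survives with two or three live neighbours, and a dead cell comes alive with exactly three. A minimal sketch (again my own illustration, not from the post):

```python
from collections import Counter

def step(live):
    """Advance one generation of Conway's Game of Life.

    `live` is a set of (x, y) coordinates of live cells; the grid is unbounded.
    """
    # Count live neighbours of every cell adjacent to a live cell.
    counts = Counter(
        (x + dx, y + dy)
        for x, y in live
        for dx in (-1, 0, 1)
        for dy in (-1, 0, 1)
        if (dx, dy) != (0, 0)
    )
    # Birth on exactly 3 neighbours; survival on 2 or 3.
    return {cell for cell, n in counts.items()
            if n == 3 or (n == 2 and cell in live)}

# A "blinker" oscillates between a row and a column with period 2.
blinker = {(0, 0), (1, 0), (2, 0)}
print(step(step(blinker)) == blinker)   # True
```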
<br />
If the "inventor" can be genuinely surprised by his own "invention", then it seems to me that he is discovering truths about it that he did not put there himself. In my view, this can only be consistent with the view that this really is exploration of something that exists independently.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif;">
<div style="font-size: 13px;">
<br /></div>
<span style="font-size: large;">The Argument from Independent Discovery</span><br />
<div style="font-size: 13px;">
<br /></div>
</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
I also like to bring up the idea of independent discovery by separate people of the same mathematical object, such as the discovery by Newton and Leibniz of the calculus. The controversy which arose over which of them had first discovered it is only sensible if we view it as discovery. If a mathematician were like a cabinet maker, creating something new whenever he invents a new mathematical tool, there would be no issue. Newton would have his cabinet and Leibniz would have his. The fact is that they both discovered the same thing, independently, like two explorers happening upon the same continent. This to me demonstrates that mathematical objects are independent of mathematicians. It also suggests that they exist, as otherwise there would be nothing for the two to discover independently, and so nothing to argue about.</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif;">
<div style="font-size: 13px;">
<br /></div>
<span style="font-size: large;">It's all about Intuition</span><br />
<div style="font-size: 13px;">
<br /></div>
<div style="font-size: 13px;">
Now, admittedly, these two arguments I have sketched out are essentially arguments from intuition, and intuition can be very fallible indeed. I think this is not a problem in this case precisely because I maintain that there is no fact of the matter. "Existence" in its general sense is simply not a robust, objective concept we can all agree on, like mass or velocity. Instead, it's just a label for a human intuition. As such, we ought to apply it in cases which make the most intuitive sense, as long as we're not contradicting ourselves or the evidence before us.</div>
<div style="font-size: 13px;">
<br /></div>
<div style="font-size: 13px;">
The arguments from intuition I have presented justify, I think, the utility and coherence of thinking of mathematical objects as existing.</div>
<div style="font-size: 13px;">
<br /></div>
<span style="font-size: large;">The Significance of Mathematical Platonism</span><br />
<div style="font-size: 13px;">
<br /></div>
</div>
<div style="background-color: white; color: #222222; font-family: arial, sans-serif; font-size: 13px;">
Since I think ultimately any disagreement must boil down to differences of interpretation of the word "existence", this is arguably a question only of semantics. As such, it may appear to be completely inconsequential and so irrelevant to any serious issues, but I feel it actually has profound implications for how we explain certain other phenomena we perceive as real.<br />
<br />
The human mind, for instance, does not appear to be a physical object in itself, but is instead some sort of process which takes place in a human brain. If the computational theory of mind is true, then the human mind is a computational process, which is a kind of mathematical object. Whether the computational theory of mind is right or wrong, it is at least plausible enough to convince a lot of people, including me. If you accept that it is at least plausible, then what would it mean for the existence of the mind if it were true?<br />
<br />
Rejecting mathematical Platonism would mean that it is plausible that the human mind does not exist, or that it is an illusion or fiction. If we instead allow ourselves the mental tool of mathematical Platonism, then we get to accept both the computational theory of mind and that the mind exists. Whichever approach you choose should depend on which definition of existence leads to the most intuitive conclusions, and I think most of us prefer to think that our minds are real (even if not in the same sense as tables or chairs).<br />
<br />
I think the same is true of the universe. As I explained before, the concept of physical existence doesn't really make sense when applied to universes. We can't really say that our universe objectively exists because we all have the special, subjective viewpoints of observers within it. What about other hypothetical universes? What about the fictional universe of Star Trek, for instance? It does indeed exist from the point of view of Captain Kirk, and there is no way he could tell that he or his universe do not really exist. It seems to me that we have no robust argument to show that our own universe is any more real than Star Trek's.<br />
<br />
One relatively popular instance of this kind of view is Nick Bostrom's <a href="http://www.simulation-argument.com/simulation.html">simulation argument</a>, where he proposes that we could all be living in a computer simulation. Computer simulations are of course computational processes, which are a special kind of mathematical object.<br />
<br />
If we imagine for a moment that this is in fact the case, then a denial of mathematical Platonism would amount to a denial that the universe exists, while allowing ourselves the mental tool of mathematical Platonism allows us to consider this possibility while maintaining that the universe would actually exist even if it were in fact a computer simulation. Again, whether we accept or reject mathematical Platonism should therefore be predicated on which definition of existence allows for the conclusion that makes the most intuitive sense.<br />
<br />
So if you reject mathematical Platonism, that's fine. But if you do, and in particular if you are willing to assign a non-zero probability that a computer could have a mind or that the universe could be a simulation, then you ought to be ready to entertain the idea that you, your mind and the universe around you may not actually exist in any sense.<br />
<br />
I prefer mathematical Platonism.</div>
</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com21tag:blogger.com,1999:blog-5801278565856116215.post-81908378799316651792013-09-10T19:49:00.001+01:002013-09-10T23:15:42.358+01:00Dualism is Not a Dirty Word<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="http://momath.org/wp-content/uploads/spiralsBlack.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="http://momath.org/wp-content/uploads/spiralsBlack.jpg" width="320" /></a></div>
<div>
<br /></div>
There are two major flavours of naturalism - methodological naturalism and metaphysical or philosophical naturalism. The former essentially means adopting the attitude that the latter is true as a pragmatic step without actually committing to the view that it is literally so. As such, only metaphysical naturalism is really in need of clarification.<br />
<br />
<div>
I have already explained on this blog that I am a metaphysical naturalist, and what I think this entails, in particular the <a href="http://disagreeableme.blogspot.co.uk/2012/06/super-naturalism.html">distinction between the supernatural and the natural</a>. All too often these are unhelpfully defined simply as antonyms of each other.</div>
<div>
<br /></div>
<div>
To recap briefly on that earlier article, and I think to clarify and expand it in the light of recent thinking on the subject, I propose the following definition:<br />
<blockquote class="tr_bq">
<i>Naturalism is the view that the physical universe is fundamentally intelligible.</i></blockquote>
Let's extend this to make a number of other statements which I regard as equivalent or derivable from that basic definition.<br />
<br />
(For the purposes of this discussion, a physical phenomenon or effect is an object, event or series of events which could in principle be objectively detected or verified by independent observers.)<br />
<br />
In the naturalist view:<br />
<ul style="text-align: left;">
<li>All physical effects have physical causes</li>
<li>All high-level physical phenomena can in principle be described in terms of lower-level physical phenomena</li>
<li>The most fundamental, basic level of physical phenomena corresponds to the fundamental laws of physics, which are expressible in terms of mathematics</li>
</ul>
<div>
In my view, this definition rules out libertarian free will, for example, because the decisions of people have physical effects yet libertarians reject any physical, deterministic account of human decision-making.</div>
<div>
<br /></div>
<div>
Most naturalists seem to be physicalists, believing only physical stuff such as matter and energy to exist, and many even think that physicalism is identical to naturalism. It isn't though, and this distrust of the non-physical is partly responsible for an unwarranted contempt for the alternative view - dualism.</div>
</div>
<div>
<br /></div>
<div>
<a name='more'></a>When most people think of <a href="http://en.wikipedia.org/wiki/Dualism_(philosophy_of_mind)">dualism</a>, they think of <a href="http://en.wikipedia.org/wiki/Dualism_(philosophy_of_mind)#Descartes_and_his_disciples">Cartesian dualism</a>, where some mysterious mind-stuff is supposed to account for human free will, conscious experience and intuition. In this view, we are more than mere biological machines: there is some spark of life, a spirit, that somehow interacts with the brain to give us our minds.</div>
<div>
<br /></div>
<div>
This spirit is not physical and cannot be reduced to physical laws. It is therefore supernatural by my definition. Naturalists quite rightly reject the idea of such a spirit, and so reject dualism.</div>
<div>
<br /></div>
<div>
But perhaps they are throwing the baby out with the bathwater, for there are other (and in my view correct) forms of dualism which are entirely compatible with naturalism.</div>
<div>
<br /></div>
<div>
Mathematical Platonism for example.</div>
<div>
<br /></div>
<div>
(The term "dualism" is usually used in philosophy to refer to views on the mind/body problem, so initially my bringing up mathematical objects may appear wrong-headed. Bear with me!)</div>
<div>
<br /></div>
<div>
Mathematical Platonism is the view that mathematical objects really exist, are not physical, and are independent of any mathematician or indeed any physical objects. I subscribe to mathematical Platonism for reasons alluded to in <a href="http://disagreeableme.blogspot.co.uk/2012/05/existence-and-existence.html">an earlier article</a>, but I plan to expand upon that later.</div>
<div>
<br /></div>
<div>
For now, I will argue that whether it is true or not, it is compatible with naturalism because these mathematical objects have no direct causal powers in the domain of the physical - every physical event or object that can be explained in terms of mathematical objects can also be reduced to explanations according to the laws of physics.</div>
<div>
<br /></div>
<div>
So <a href="http://en.wikipedia.org/wiki/Fibonacci_number">Fibonacci's numbers</a> may give us a useful way to explain the <a href="http://momath.org/home/fibonacci-numbers-of-sunflower-seed-spirals/">pattern of spirals</a> in a sunflower head (pictured at the top of this post), but whatever the mathematical properties of Fibonacci's numbers that give rise to this pattern, the Fibonacci numbers themselves are not directly causing anything. The pattern of seed placement can also be understood as a result of entirely ordinary physical processes. The mathematical models we use to explain physical processes do not therefore cause the processes to happen, but instead provide useful ways to describe and model the high level outcomes of natural processes.</div>
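To make the analogy concrete, here is a sketch (my own, using the standard Vogel model of phyllotaxis rather than anything from the post): seed n sits at radius &radic;n and angle n times the golden angle, an angle derived from the same ratio that consecutive Fibonacci numbers approach. The plant's growth process places the seeds; the Fibonacci numbers merely describe the resulting spiral counts.

```python
import math

def fib(n):
    """First n Fibonacci numbers."""
    out, a, b = [], 1, 1
    for _ in range(n):
        out.append(a)
        a, b = b, a + b
    return out

PHI = (1 + math.sqrt(5)) / 2           # the golden ratio
GOLDEN_ANGLE = 360 * (1 - 1 / PHI)     # ~137.5 degrees between successive seeds

def seed_position(n):
    """Place seed n as in Vogel's model: radius sqrt(n), angle n * GOLDEN_ANGLE."""
    theta = math.radians(n * GOLDEN_ANGLE)
    return (math.sqrt(n) * math.cos(theta), math.sqrt(n) * math.sin(theta))

# Ratios of consecutive Fibonacci numbers converge to PHI, which is why
# the spiral counts visible in a sunflower head are Fibonacci numbers.
f = fib(15)
print(f[:8])                    # [1, 1, 2, 3, 5, 8, 13, 21]
print(round(f[-1] / f[-2], 3))  # 1.618
```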
<div>
<br /></div>
<div>
For many, this defeats the motivation for mathematical Platonism in the first place. Mathematical Platonism is viewed by some as a way to explain how we can perceive mathematical truth. For this to work as an explanation then it would have to be possible for mathematical objects to directly affect physical objects - mathematical concepts would need to be able to cause changes in our brains directly. This view of Platonism is inconsistent with naturalism and so I reject it.</div>
<div>
<br /></div>
<div>
For me, mathematical Platonism is more a way of thinking about things, and a way of resolving certain confusing puzzles. I remain a naturalist - everything that happens in the universe can be understood purely in terms of physics.</div>
<div>
<br /></div>
<div>
All well and good, but what's the point of this rant?</div>
<div>
<br /></div>
<div>
It is in the context of the eternal debates in philosophy of mind that these issues become important, and in particular when trying to explain phenomenal consciousness. On a previous post, I explained that my preferred <a href="http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-chinese-room.html">response to John Searle's Chinese Room</a> thought experiment was the Virtual Minds response. On explaining this to some naturalists, I am often told that it sounds "dangerously like dualism".</div>
<div>
<br /></div>
<div>
Well, it is dualism, but that should not rule it out. Any dualism that is compatible with naturalism (and this is such a dualism), ought to be considered on its own merits and not simply dismissed out of hand.</div>
<div>
<br /></div>
<div>
The instinct most people have that their conscious minds really exist is in my view at odds with the idea that only physical objects exist, because consciousness has no empirically detectable signifiers. Even if there were such a signifier, there is <a href="http://disagreeableme.blogspot.co.uk/2013/01/strong-ai-naturalism-and-ai.html">in principle</a> no way we could know that this signifier proved or was required for consciousness.</div>
<div>
<br /></div>
<div>
Everything that a human mind can do could in principle be done by something like the Chinese Room or any other such computing device capable of passing the Turing Test. While I believe that any such device would be conscious, physicalists usually disagree since there is no way to test this and in any case all it is doing is shuffling bits around. Unfortunately, much the same is true for their own brains, so I don't think they have a leg to stand on.</div>
<div>
<br /></div>
<div>
In my view, phenomenal consciousness is empirically undetectable, and will remain empirically undetectable (at least directly), because it is not a physical phenomenon. Instead it is fundamentally subjective and so available only to the conscious entity.</div>
<div>
<br /></div>
<div>
But doesn't consciousness have causal powers? When we say we are conscious, this constitutes a physical event, and surely our consciousness must have a causal role in this.</div>
<div>
<br /></div>
<div>
Well, yes and no. Yes, in that we would not make such proclamations were we not conscious. No, in that everything we do can also in principle be explained in terms of physical events, just like the behaviour of a Turing Test passing machine as it proclaims it is conscious. In my view, our consciousness is what it feels like to <i>be</i> the sophisticated, self-aware mathematical structures that describe those physical processes. </div>
<div>
<br /></div>
<div>
We are not physical, we are virtual minds existing on a physical substrate.</div>
<div>
<br /></div>
<div>
By analogy: I am the Fibonacci numbers; my brain is the sunflower.</div>
</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com0tag:blogger.com,1999:blog-5801278565856116215.post-81097118485171042122013-08-31T17:17:00.000+01:002013-08-31T18:02:24.852+01:00Free Will and Punishment<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<object class="BLOGGER-youtube-video" classid="clsid:D27CDB6E-AE6D-11cf-96B8-444553540000" codebase="http://download.macromedia.com/pub/shockwave/cabs/flash/swflash.cab#version=6,0,40,0" data-thumbnail-src="http://img.youtube.com/vi/_FanhvXO9Pk/0.jpg" height="266" width="320"><param name="movie" value="http://youtube.googleapis.com/v/_FanhvXO9Pk&source=uds" /><param name="bgcolor" value="#FFFFFF" /><param name="allowFullScreen" value="true" /><embed width="320" height="266" src="http://youtube.googleapis.com/v/_FanhvXO9Pk&source=uds" type="application/x-shockwave-flash" allowfullscreen="true"></embed></object></div>
<br />
I have a lot of time for both Sam Harris and Dan Dennett. I find I often disagree with Harris, but his arguments are usually interesting and well-considered. The disagreements I have with him are often subtle, as in the case of The Moral Landscape, where I think his moral framework is perfectly valid and useful but not the only objectively correct one.<br />
<br />
I seem to agree with Dan Dennett most of the time. I like his thinking on consciousness, religion etc, and I admire his gentle, humorous and thoughtful way of expressing himself.<br />
<br />
However, Harris and Dennett find themselves in disagreement on the subject of free will, and I have to side with Harris on this one. I think the disagreement is an important one when considering questions of moral responsibility.<br />
<br />
<a name='more'></a>I think Harris expresses his and my position very well on his <a href="http://www.samharris.org/blog/item/free-will-and-free-will">blog post</a> discussing his points of disagreement with Dennett. Metaphysically speaking, there's pretty much complete agreement. Both men agree that we are essentially machines, and that our decisions are causally determined by our brain structures, stimuli, etc.<br />
<br />
In my view and Harris's, there is some sense in which we could not have chosen other than how we did. Though today I may agonise over which choice to make, the decision I finally make tomorrow is already set in stone; I just don't see it yet. This position is known as determinism.<br />
<br />
The traditional alternative view is libertarianism, which holds that we are always free to make alternative decisions and so our decisions are not causally determined by the laws of physics. As such, this view of free will is not usually deemed to be compatible with scientific naturalism and its defenders are usually religious.<br />
<br />
(As an aside, I should briefly note that my simple illustration of determinism is not literally true, as quantum mechanics means that the randomness inherent in reality may be amplified through chaotic effects to make our decisions a little more unpredictable. This can probably be safely ignored in discussing the deterministic vision of free will Harris and I share, because this randomness is difficult to construe as being compatible with libertarian free will. In my opinion, it aids clarity if we pretend that nature is as deterministic as clockwork.)<br />
<br />
Dennett's view is entirely metaphysically compatible with determinism, as far as I can tell. I don't think he would disagree with anything I've just said. The point of contention is that he thinks determinism is entirely compatible with free will, and so his position is known as compatibilism.<br />
<br />
This initially seemed pretty surprising to me. In a way it still does. Free will as commonly understood is entirely in opposition to determinism, so I found it hard to see what purpose could be served by merely changing the definition of free will to describe deterministic decision making. Dennett defends this view by arguing that we are free in the sense that no external agency is controlling our actions. If we have puppet strings, they are to be found within our brains, as part of us. As such, we do genuinely choose our own actions, and so we can be said to have free will.<br />
<br />
So far I don't have a major problem with this. It's just a redefinition of terms to describe something subtly different, what Dennett calls free will "worth wanting". This is not unusual for philosophical disagreements, which I think often boil down to semantic differences on the interpretation of terms.<br />
<br />
The difference comes in what these different definitions imply, and perhaps the most important consequence of these disagreements is our interpretation of moral responsibility.<br />
<br />
If there is such a thing as free will, then we have justification for punishing the perpetrators of crimes. We hold the criminal to be morally responsible, and so we feel entitled to do what is considered immoral in normal circumstances - the deliberate infliction of suffering upon a conscious human being.<br />
<br />
If there is no free will, however, then the criminal could not have acted otherwise - their action was already predetermined by their biology and biography. There is no fundamental distinction between the criminally insane and the criminal, so there should be no difference in treatment. Any punishment we might inflict must therefore be justifiable on utilitarian grounds - raising the overall well-being of society, with equal consideration given to the guilty and the innocent in the moral calculus.<br />
<br />
We do, very strongly, feel an antipathy for those who have wronged us or others. We all root for the bad guy to meet a messy end in our fiction. If there is no free will, then this impulse to vengeance is morally unjustifiable, yet since we feel it so strongly there is a felt need to find a justification. I think Dennett's compatibilist views are in part motivated by this desire to retain the traditional, intuitive notion of moral responsibility.<br />
<br />
Of course, denying free will does not mean that punishment is morally impermissible. It just needs to be justifiable on utilitarian grounds, and I think that can be achieved.<br />
<br />
Five different reasons to punish offenders occur to me. I think it might be useful to break those reasons down.<br />
<ol style="text-align: left;">
<li>Deterrence of others from committing the same crime</li>
<li>Rehabilitation of the offender</li>
<li>Prevention of the offender from committing future crimes (e.g. imprisoning a violent offender to protect the public)</li>
<li>Comforting those affected by the crime</li>
<li>Retribution</li>
</ol>
<div>
I have ranked these in decreasing order of defensibility or relevance, as I see them.</div>
<div>
<br /></div>
<div>
For me, deterrence is clearly the most important reason to punish offenders. It is also the one ground on which I can see a case being made for being lenient to those affected by mental disorders. If it can be shown that a mentally unhealthy offender is not deterred by fear of punishment, then perhaps there is no point in seeking to deter such people. Of course, this is a tricky argument to apply in practice, because it is generally impossible to know whether an individual has the capacity to be deterred, and not seeking punishment for crimes committed by the mentally unhealthy may contribute to a culture where even sane people are less deterred.</div>
<div>
<br /></div>
<div>
Rehabilitation of the offender is also a good reason for punishment, but only if it can be shown that the punishment works. Too often prison inmates reoffend. On the other hand, drivers caught speeding once may learn to be more careful in future. Only analysis of empirical data is going to tell us whether a given punishment is likely to be defensible from a rehabilitation point of view.</div>
<div>
<br /></div>
<div>
Prevention of future crimes is also defensible, but it only applies in a minority of cases of moral transgression. Examples might include firing an employee, locking up a murderer, or revoking a driving license. In cases where these punishments are time limited, the benefits are limited accordingly.</div>
<div>
<br /></div>
<div>
I'm far less sympathetic to the final two arguments, chiefly because of my stance on free will. I think the impulse to seek retribution is an immoral holdover from our evolutionary history, and I think the world would be a better place if we could excise it from our collective psyche. Any comfort the victim might get from punishing the perpetrator arises from indulging this immoral impulse.</div>
<div>
<br /></div>
<div>
And yet, it cannot be denied that some victims will feel better if their aggressor is punished, and I do think this is worthy of consideration. Most people would probably think in terms of whether the victim deserves to be consoled, or whether the criminal deserves to be punished, but I cannot find a basis for thinking in these terms if libertarian free will does not exist. Instead, I think it is more defensible to think in terms of whether the consolation of the victim is greater than the suffering of the convict. I think that in most cases this will not be true.<br />
<br />
It's for reasons like these that I oppose the death penalty. It seems not to be an effective deterrent and it has no opportunity to rehabilitate the offender. It does very effectively prevent the offender from committing crimes in future, but life imprisonment would achieve the same ends. Any comfort to the victims of the crime is outweighed by the cruelty of state-sanctioned murder. Finally, every execution feeds and justifies the retributive instinct and devalues human life in the process.<br />
<br />
And so, as much as I like Dan Dennett, I have to disagree with him on compatibilism. It's best to deny free will altogether rather than try to find a place for it within a naturalistic worldview just so we can justify the torture of criminals.</div>
</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com5tag:blogger.com,1999:blog-5801278565856116215.post-62364222489487750002013-08-23T19:23:00.001+01:002013-08-23T19:24:28.820+01:00Atheism is not Communism<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<div class="separator" style="clear: both; text-align: center;">
<a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJNf36mfjip7E5cB_ml0R6SgIzq5PDlPyLY0dpPTrn6Am_uGLMRjr2lhkIj25BTKk7S3Wa0lIIo3KZNPidRWAd8MqllRhtdRco-KCwKVHoKgHSAQ5E1Fn4fJATgDDzThky89paNjEyZPI/s1600/tchg-pix.nfo-o-218.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEiJNf36mfjip7E5cB_ml0R6SgIzq5PDlPyLY0dpPTrn6Am_uGLMRjr2lhkIj25BTKk7S3Wa0lIIo3KZNPidRWAd8MqllRhtdRco-KCwKVHoKgHSAQ5E1Fn4fJATgDDzThky89paNjEyZPI/s320/tchg-pix.nfo-o-218.jpg" width="244" /></a><a href="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMfm63VsTabV1YFb9igtDVoR5Bq1QdXYYTM_2ONb18ATFIX-pPbGXELPGX8CR4DN_vTmmw-EL06IeRkTjiVyyAHxLrz0ok30D0-TKfBdAjamcJQVhIST3iSD1vu87OfLHPQ-Xhg0HYY3g/s1600/stalin_poster.jpg" imageanchor="1" style="margin-left: 1em; margin-right: 1em;"><img border="0" height="320" src="https://blogger.googleusercontent.com/img/b/R29vZ2xl/AVvXsEgMfm63VsTabV1YFb9igtDVoR5Bq1QdXYYTM_2ONb18ATFIX-pPbGXELPGX8CR4DN_vTmmw-EL06IeRkTjiVyyAHxLrz0ok30D0-TKfBdAjamcJQVhIST3iSD1vu87OfLHPQ-Xhg0HYY3g/s320/stalin_poster.jpg" width="219" /></a></div>
<div class="separator" style="clear: both; text-align: center;">
<br /></div>
A common argument against atheism is the claim that the deaths of millions in communist China and Russia during the Mao and Stalin eras show what happens when atheism is embraced by governments. The implication is that atheism and atheists are immoral, and that the only thing preventing society from disintegrating into an inhuman authoritarian hell is the morality emanating from the religious in society.<br />
<br />
Some even claim that that's where we would be headed if the likes of Richard Dawkins, Sam Harris or Christopher Hitchens ever got their hands on the reins of power.<br />
<br />
I couldn't disagree more.<br />
<a name='more'></a><br />
Communism in Russia and China was not an experiment with the wholesale introduction of atheism, as an interlocutor of mine recently argued. Rather, it was an experiment with the wholesale introduction of authoritarian, totalitarian, nationalist dogmas with the state and certain statesmen in place of God.<br />
<br />
The values promoted by the New Atheists (myself included) include skepticism, democracy, humanism and tolerance of dissenting views. Communism as practised in the USSR, China and North Korea has almost nothing in common with these values.<br />
<br />
Hitchens in particular illustrated this point very clearly with his famous comparison of an afterlife in Heaven to an eternity in North Korea, bound forever in worshipful servitude to a single authority figure.<br />
<br />
While the atheism of communism is an article of dogmatic faith, for me and the New Atheists atheism arises from skepticism. It's not a premise; it's a conclusion. As such, authoritarian communism should certainly not be construed as synonymous with atheism, even if it does have atheism as one aspect of its dogma.<br />
<br />
And dogma it is, every bit as rigid and uncompromising as the dogma of religions. This is why it seems to me that these forms of communism have much more in common with religions than the secular humanist atheism I advocate. The faith in the party, the personality cults, the "holy" books. These are direct counterparts to the trappings of religion. As such, authoritarian communism is effectively a religion even if it is not literally so. It uses the same tricks to indoctrinate and exploit the people. It appeals to the same bugs in human psychology.<br />
<br />
The only difference is that supernatural beliefs are not involved. That's it. Belief in the supernatural is actually pretty low on the list of my issues with religion.<br />
<br />
It's not just the pseudo-religious overtones of communism that are problematic. You need to consider the personalities involved. Stalin and Mao were opportunistic megalomaniac psychopaths. A large part of the damage done to their countries can be attributed to their callous disregard for the welfare of their people in their quest for power and control. Unless you believe that atheists have a high tendency towards psychopathy, there is no reason to think that an atheist regime is any more likely to be cruel than a religious one.<br />
<br />
The fact is that atheism or theism has almost nothing to do with whether one is a good person or not. Whether the government endorses belief in God or not has little to no influence on the personal morality of its citizens. What's much more important is whether the society encourages free speech, skepticism and the democratic election of leaders.<br />
<br />
So please, let's have no more trashing of modern humanist atheism by comparison to authoritarian communism. They are not the same thing.</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com226tag:blogger.com,1999:blog-5801278565856116215.post-75274669784358112462013-08-11T20:37:00.000+01:002013-08-11T20:39:00.348+01:00Consciousness is not like Photosynthesis<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator tr_bq" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/4/49/Plagiomnium_affine_laminazellen.jpeg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="240" src="http://upload.wikimedia.org/wikipedia/commons/4/49/Plagiomnium_affine_laminazellen.jpeg" width="320" /></a></div>
I've been posting some comments on <a href="http://rationallyspeaking.blogspot.co.uk/">Massimo Pigliucci's blog</a> recently in defense of Strong AI, after a couple of posts attacking the computational theory of mind.<br />
<br />
In the <a href="http://rationallyspeaking.blogspot.co.uk/2013/07/philosophy-not-in-business-of-producing.html">first</a>, he uses John Searle's Chinese Room argument to demonstrate that a computer cannot be conscious. If you've been paying attention to this blog, you should know <a href="http://disagreeableme.blogspot.co.uk/2012/11/in-defence-of-strong-ai-chinese-room.html">what I think of that argument</a>!<br />
<br />
In the <a href="http://rationallyspeaking.blogspot.co.uk/2013/08/computation-church-turing-and-all-that.html">second</a> post, he uses an <a href="http://plato.stanford.edu/entries/church-turing/">article by Jack Copeland</a> which undermines the position of philosophers who take the <a href="http://en.wikipedia.org/wiki/Church%E2%80%93Turing_thesis">Church-Turing thesis</a> to imply that a Turing machine can do any computation performable by any machine. Unfortunately, Pigliucci seems to fail to realise that while Copeland is technically correct, the philosophers he criticises are no less correct unless some very unlikely propositions turn out to be true (namely, that the laws of physics are not computable and the brain is a <a href="http://en.wikipedia.org/wiki/Hypercomputation">hypercomputer</a>).<br />
<br />
But what motivated me to write this post is the same tired old argument that keeps coming up again and again in these discussions, the analogy to photosynthesis.<br />
<br />
<a name='more'></a>Pigliucci is a <a href="http://en.wikipedia.org/wiki/Biological_naturalism">biological naturalist</a>, which means that he thinks that although the brain is essentially just a machine that obeys the laws of physics like any other ball of matter, there's something unknown which is intrinsic to the biology of brains which gives rise to consciousness. In other words, you probably can't have consciousness without biology, and certainly not until we understand that biology well enough to physically mimic it. Computation alone is not enough.<br />
<br />
He likes to explain this position and why he doubts the computational theory of mind by analogy to photosynthesis. This analogy came up repeatedly in <a href="http://bloggingheads.tv/videos/2561">a discussion with the wonderful Eliezer Yudkowsky on bloggingheads.tv</a>, despite Yudkowsky's efforts to explain why it doesn't work (efforts which in my view succeed entirely).<br />
<blockquote>
<b>Massimo Pigliucci: </b>... photosynthesis [...] is very well understood at a chemical level [... and at ...] the logical processing level. </blockquote>
<blockquote>
[...]<br />
<br />
You can in fact simulate those chemical reactions inside a computer. [...] The fact is of course that you can simulate all you want, the one thing you're not going to get out of the simulation is sugar, which is the outcome of photosynthesis. And the reason for that is because what's important about photosynthesis is not just the logical structure of the process, it's the actual physical particular implementation. Unless you have certain kinds of chemicals that work in a certain way [...] you just don't have photosynthesis.</blockquote>
<div style="text-align: right;">
- Edited from video discussion <a href="https://docs.google.com/document/d/1S-7CWOLOtLRDmMiS7LtVxELssUi3OI1-UcrPAzGMuH4/pub">transcript</a></div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
This harks back to arguments by John Searle and others making joking statements to the effect that we don't need to call the fire department if a simulated fire has got out of control, or that a simulated hurricane isn't likely to flood your apartment.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
These arguments are all the same and they are all very poor.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
The reason is simple. The brain is an information processing machine, and so the only product the brain needs to produce reliably is <u>output</u>. Output is not a physical product; it is information. It is like a novel, which can be paperback or hardback, audiobook or ebook: as long as the content is the same, nothing essential has changed.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
When I pointed this out to Massimo, he responded with astonishment, as though I had claimed that the brain did not produce any physical chemicals at all.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Of course this is not my position. I am well aware that the brain produces many hormones, neurotransmitters and other chemicals that have an impact on the brain and indeed the body at large. However, as long as we keep in mind the self-evident proposition that the job of the brain as an information processing device is to <u>process information</u>, the chemicals are not important as ends in themselves (unless consciousness is a chemical!). If the brain were replaced by a perfectly equivalent electronic device, these chemicals would not be produced but the information processing would be unaffected. Whatever functions the chemicals had once implemented would have been replaced by an electronic analogue.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
In photosynthesis, the product is sugar -- a physical substance. Of course you can't get physical real-world sugar from a simulation of photosynthesis. The best you can hope for is virtual sugar.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
So why don't we say that the best we could hope for from a simulated brain is virtual output? It should be clear that that doesn't really make sense. Unlike virtual sugar, virtual output is pretty much the same thing as real output, and can be put to real world use. Information cannot be virtual because it is not physical in the first place.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
But output is not Massimo's concern. He is concerned with consciousness. Perhaps he thinks that the brain's job is to produce consciousness, or perhaps consciousness is just a necessary part of its information processing job. In either case, unless consciousness is a physical substance or a state of physical matter (notions as outmoded as the vitalist assumption that a substance called élan vital is responsible for life), he and his colleagues have no business making analogies to photosynthesis, hurricanes or fire.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
According to the computational theory of mind (or strong AI as I have been referring to it in previous posts), consciousness is not a physical substance but an aspect of certain sophisticated ongoing computations. If this is correct, then consciousness achieved in a simulation is every bit as real as that achieved in a physical brain, for the same reason that a simulated calculator is still a calculator.</div>
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
Quite simply, there is no such thing as a simulated computation. Once you're simulating a computation, you're doing the computation.</div>
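The point can be made concrete with a toy example of my own (not from the original post): an interpreter for a made-up register machine. The interpreter only <i>simulates</i> the machine, yet the sum it produces is a perfectly real, usable sum; there is no "virtual 5" as distinct from a real 5.

```python
def run(program, registers):
    """Interpret (i.e. simulate) a toy register machine.

    A program is a list of (operation, register) pairs; the only
    operations are "inc" and "dec".
    """
    for op, reg in program:
        if op == "inc":
            registers[reg] += 1
        elif op == "dec":
            registers[reg] -= 1
    return registers

def add(a, b):
    # "Add b to a" expressed as a program for the simulated machine:
    # start r0 at a, then increment it b times.
    return run([("inc", "r0")] * b, {"r0": a})["r0"]

print(add(2, 3))  # prints 5 -- a real sum, produced by a simulated machine
```

Nothing about the sum being computed inside a simulation makes it any less a sum, which is exactly the sense in which simulating a computation just is performing it.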
<div style="text-align: left;">
<br /></div>
<div style="text-align: left;">
So please, please, please stop making analogies to physical processes when there is <a href="http://disagreeableme.blogspot.co.uk/2013/07/strong-ai-conclusion.html">every reason</a> to believe that consciousness is not a physical phenomenon.</div>
</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com9tag:blogger.com,1999:blog-5801278565856116215.post-52220836868090040902013-07-19T00:27:00.000+01:002013-07-19T00:27:54.743+01:00Attacking the Evolutionary Argument Against Naturalism<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/9/9a/Alvin_Plantinga.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" height="240" src="http://upload.wikimedia.org/wikipedia/commons/9/9a/Alvin_Plantinga.jpg" width="320" /></a></div>
My last post presented what I feel is the strongest form of the Evolutionary Argument Against Naturalism (EAAN) promoted by Alvin Plantinga.<br />
<br />
Today I will attempt to refute it.<br />
<br />
Rather than argue, as some have, that we can rely on our reason because of tools developed over centuries by a process of cultural evolution, I want to show that even the primitive faculties provided to us by evolution give us some ability to form true beliefs reliably.<br />
<br />
Unlike Plantinga, I certainly do not accept the premise that evolution and naturalism imply a low probability that we can reliably form beliefs.<br />
<br />
<a name='more'></a>Let's recall Plantinga's core premise (P1), regarding the probability (P) that we are reliable (R) given naturalism (N) and evolution (E).<br />
<blockquote class="tr_bq">
<b>P1: P(R|E&N) is low - (there is only a low or negligible probability that we have the ability to reliably form true beliefs, given philosophical naturalism and evolution)</b></blockquote>
<div>
Plantinga's argument hinges on the observation that evolution only cares about adaptive behaviours, not whether the beliefs that are associated with those behaviours are true or false. This is true; however, it doesn't help to establish P1 unless he can show that true beliefs are no more likely to be adaptive than false beliefs.</div>
<div>
<br /></div>
<div>
This is where I disagree with him. I think that true beliefs are <i>far</i> more likely to be adaptive, and this is why evolution would indeed select for the ability to form true beliefs.</div>
<div>
<br /></div>
<div>
Plantinga makes two major arguments to the contrary. Firstly (and most plausibly, in my view), he outlines the argument I spelled out yesterday. There are many possible motivations for any given adaptive behaviour, but only a tiny minority of these motivations will reflect a true, relevant belief about the world. Since evolution only cares about behaviour, the odds that a true motivation should be chosen seem slim. I will answer this later.</div>
<div>
<br /></div>
<div>
But first, I want to get the other major argument out of the way, because it has been brought to my attention that he tends to fall back on it when his other arguments are challenged. This latter argument seems to me to be quite obviously incorrect, so I will attempt to dispense with it as decisively as I can.</div>
<div>
<br /></div>
<div>
I do not think I am being too unfair when I characterise this argument as the position that beliefs can have no effect on behaviour in a naturalistic world.</div>
<div>
<br /></div>
<div>
<span style="font-size: large;">Of Course Beliefs Affect Behaviour</span></div>
<div>
<br /></div>
<div>
Of course beliefs affect behaviour! I find it extremely difficult to conceive of how someone could argue otherwise - after all, why would they argue that beliefs don't affect behaviour (this act of arguing being a behaviour in itself) if they were not motivated by their beliefs in that regard? If Plantinga seriously believes that naturalism does not allow for beliefs to affect behaviour, then he need not bother with EAAN. This should be his argument against naturalism.</div>
<div>
<br /></div>
<div>
The main motivation behind these views is the problem of reconciling the non-physical mental content of our minds with the physical structures comprising our brains. In the naturalistic world view, no non-physical event could ever have an effect on the material world, so it seems to be impossible to conceive of how it could be that something as insubstantial as a belief could set in motion the neural firings that lead to action and behaviour.</div>
<div>
<br /></div>
<div>
It should be said at this point that Plantinga himself need not be committed to this view, and indeed I believe he is not, since he is not a naturalist. He believes that the supernatural can indeed intervene in the physical world, so there's no problem as long as one accepts that the mind is not a purely natural phenomenon. Instead, he argues that a true naturalist should be committed to this position.</div>
<div>
<br /></div>
<div>
One solution to this problem is the suggestion that beliefs and other mental phenomena are not causative of behaviour but merely correlated with it. It could be that physical processes in the brain give rise to both beliefs and behaviours, but that the beliefs are a side effect and not causative of the behaviours. The sense that we experience mental phenomena and then act on those phenomena is an illusion. In reality, our behaviour is already determined by physical neural activity and the corresponding mental experience is just a story we tell ourselves. We are conscious automata. Beliefs, like free will, are an illusion.</div>
<div>
<br /></div>
<div>
Plantinga accepts this interpretation, as it sits well with his belief that the content of beliefs would have no effect on behaviour in a naturalistic world. Since beliefs are only the story we tell ourselves to explain our automatic actions, there is no reason to expect these beliefs to be true. As long as our behaviour is adaptive, our beliefs are irrelevant.</div>
<div>
<br /></div>
<div>
I feel this interpretation is valid to a point, but it's not the full story. I don't believe in free will (at least not in the old-fashioned libertarian sense of the word), so in a sense everything we do really is determined by physical processes. It even seems valid to me to view mental phenomena as merely correlates of physical processes, but that's just one way of looking at it.</div>
<div>
<br /></div>
<div>
(It's also fine in my view to see mental phenomena as actually causative. Neither interpretation is more correct than the other; it just depends on how one interprets concepts such as causation.)</div>
<div>
<br /></div>
<div>
But Plantinga takes it too far. He seems to think that since only the physical neural correlates of beliefs (e.g. the particular pattern of neural activity corresponding to that belief) matter from the point of view of evolution, therefore the content or meaning of that belief is irrelevant.</div>
<div>
<br /></div>
<div>
This is simply not true. What he seems to fail to appreciate is that the content of the belief is not independent of the particular physical pattern of neural activity, but that it is intimately bound to it. Different beliefs will correspond to different patterns of neural activity and vice versa. If evolution can select for adaptive physical brain states, then the corresponding mental phenomena are also being selected for implicitly. It really is not too much of a stretch to say that evolution is actually selecting for particular beliefs.</div>
<div>
<br /></div>
<div>
If you want more concrete examples of non-physical "epiphenomena" having a real impact on the physical world, just look at the stock market, computer software or fashion trends. Examples abound.</div>
<div>
<br /></div>
<div>
The only question therefore is whether the beliefs evolution selects for are likely to be true or not.</div>
<div>
<br /></div>
<div>
<span style="font-size: large;">The Adaptivity of Truth</span></div>
<div>
<br /></div>
<div>
My previous post outlined an argument that there are many more false beliefs that can lead to appropriate adaptive behaviours than true beliefs, and so we should not expect the probability that evolution selects for true beliefs to be high when it selects for the associated behaviours - it is far more likely that evolution will select a false belief that happens to lead to an adaptive behaviour. This argument may seem superficially convincing, but it is deeply flawed, as I will attempt to show.</div>
<div>
<br /></div>
<div>
When discussing the selection of behaviours, there are actually two quite distinct modes of behaviour selection to consider.</div>
<div>
<br /></div>
<div>
1) The <b>direct</b> selection of specific adaptive behaviours in response to specific stimuli</div>
<div>
2) The <b>indirect</b> selection of adaptive behaviour by instead selecting for the ability to dynamically determine appropriate courses of action in response to experience and environmental conditions.</div>
<div>
<br /></div>
<div>
Many behaviours are due to the first kind of selection. We call these behaviours instinctive. All animals exhibit behaviours of this sort, and most exhibit <u>only</u> behaviours of this sort. They manifest as drives to engage in specific activities, but they are not motivated by beliefs about the world.</div>
<div>
<br /></div>
<div>
When a spider hunts down and kills a fly, it does not do so because it believes the fly will be tasty, much less because it believes it needs to eat to live. It does so because this behaviour has been hard-wired into it by evolution.</div>
<div>
<br /></div>
<div>
The same is true when you stand on a glass floor above a vertiginous drop. You may feel fear, but if I ask you whether you believe you will fall, you will deny it. Your discomfort comes from an instinctive response, not a belief. Even if you characterise this instinct as a non-propositional belief, as some do, it's entirely different from the kinds of beliefs Plantinga is talking about.</div>
<div>
<br /></div>
<div>
Plantinga would probably agree with this so far, but ask what this has to do with his argument. Well, I'm about to get to that.</div>
<div>
<br /></div>
<div>
When Plantinga discusses the possible beliefs that evolution might select for in order to produce a desired behaviour (e.g. running away from a tiger), he is making the mistake of treating these beliefs as if they are selected for directly, like instincts. Evolution requires a specific response to a specific stimulus. In order to get the appropriate result, beliefs are simply not required. This scenario is what instincts are for. You don't need a <i>reason</i> to run away from the tiger, you just need to run away from the tiger! You don't have to know that the tiger is going to cause you pain and death, you only need to feel fear and the impetus to get as far away from it as possible.</div>
<div>
<br /></div>
<div>
And yet we do have beliefs and the ability to reason. This allows us to surpass the crude stimulus/response behaviours allowed by instinct alone and so to behave far more intelligently (and so adaptively) than creatures such as spiders. Our ability to deliberate and choose our behaviour allows us to adapt to situations that could never have been foreseen by evolution.</div>
<div>
<br /></div>
<div>
The indirect mode of behaviour selection has come into play.</div>
<div>
<br /></div>
<div>
A hominid running away from a tiger quite likely <i>does</i> have true beliefs about the consequences of being caught. Humans and certain other animals form such beliefs every day, and their behaviour is motivated by the interplay of instinct with this dynamic and ongoing generation of beliefs. The correct question to ask is not, as Plantinga does, whether there are more true or false beliefs that could explain a particular behaviour, but whether a reliable system for generating beliefs is more likely to lead to adaptive behaviours than an unreliable one.</div>
<div>
<br /></div>
<div>
Once the problem is explained in this way, it seems perfectly obvious to me that a reliable system would be more adaptive. The behaviours motivated by the beliefs generated by a completely unreliable system are likely to be effectively random, as there will be little correlation between reality and the generated beliefs. It's very unlikely that such a system would confer any evolutionary advantage whatsoever.</div>
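This intuition can be checked with a crude toy simulation of my own devising (the model and its names, such as <code>correct_action_rate</code>, are hypothetical illustrations, not anything from Plantinga or the post): an agent flees if and only if it believes a predator is present, and its belief-forming system matches reality with some fixed accuracy. A coin-flip belief generator acts correctly only about half the time, while a mostly-reliable one acts correctly far more often.

```python
import random

def correct_action_rate(belief_accuracy, trials=10_000, seed=1):
    """Toy model: an agent flees iff it believes a predator is present.

    Each trial, a predator is present with probability 0.5, and the
    agent's belief matches reality with probability `belief_accuracy`.
    Returns the fraction of trials where the agent acted appropriately
    (fled when a predator was there, stayed when it was safe).
    """
    rng = random.Random(seed)
    correct = 0
    for _ in range(trials):
        predator = rng.random() < 0.5                  # is danger actually present?
        accurate = rng.random() < belief_accuracy      # does belief track reality?
        belief = predator if accurate else not predator
        correct += (belief == predator)                # right action iff belief true
    return correct / trials

reliable = correct_action_rate(0.9)    # mostly-true belief generator
unreliable = correct_action_rate(0.5)  # coin-flip belief generator
print(reliable, unreliable)            # reliable scores near 0.9, unreliable near 0.5
```

Under these (admittedly simplistic) assumptions, fitness-relevant behaviour tracks belief accuracy directly, which is the sense in which selection for adaptive behaviour is implicitly selection for a reliable belief-forming system.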
<div>
<br /></div>
<div>
Yes, of course such a system <i>could</i> produce adaptive behaviours on occasion, such as in Plantinga's tongue-in-cheek example of the hominid that seeks to pet a fearsome tiger by running away from it, but such unlikely scenarios will certainly be outnumbered by examples of maladaptive behaviour. The same hominid is likely to run away from food, mates and other items of interest. Or suppose he is preoccupied and has no interest in petting tigers on this occasion, so he remains seated as the tiger approaches.</div>
<div>
<br /></div>
<div>
Only the beliefs generated by a reliable system will have any meaningful correlation with reality, and only beliefs which correlate with reality can consistently produce adaptive behaviour and so evolutionary fitness.</div>
<div>
<br /></div>
<div>
In conclusion, there is every reason to suppose that evolution has gifted us with some basic ability to form reliable beliefs in everyday scenarios. Given this basic foundation, we have over many generations built a great number of tools which allow us to perceive truths which were previously hidden from us. We are far wiser now than we were when we had only our evolutionary heritage to draw on, but all of our intellectual achievements to date have as their root the ability, selected for by naturalistic evolution, to form true beliefs reliably.</div>
</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com41tag:blogger.com,1999:blog-5801278565856116215.post-46154178380336283912013-07-17T20:22:00.000+01:002013-07-17T20:22:09.431+01:00Defending The Evolutionary Argument Against Naturalism (EAAN)<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/thumb/0/07/AlvinPlantinga.JPG/220px-AlvinPlantinga.JPG" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/thumb/0/07/AlvinPlantinga.JPG/220px-AlvinPlantinga.JPG" /></a></div>
As part of a disagreement with philosopher <a href="http://rationallyspeaking.blogspot.co.uk/2013/07/plantingas-evolutionary-argument.html">Massimo Pigliucci on Rationally Speaking</a>, I have found myself in the unusual position of having to defend Alvin Plantinga's <a href="http://en.wikipedia.org/wiki/Evolutionary_argument_against_naturalism">Evolutionary Argument Against Naturalism</a>.<br />
<br />
As a firm proponent of philosophical naturalism, I do not find the EAAN to be persuasive. However, I feel that Pigliucci has treated the argument unfairly, glossing over the crucial point and picking at irrelevant details.<br />
<br />
In this post, I will attempt to defend the argument and present it as persuasively as I can. In a later post, I intend to refute it.<br />
<br />
<a name='more'></a>Let's define:<blockquote class="tr_bq">
<ul style="text-align: left;">
<li>N as naturalism - in this context the idea that there is no such person as God or anything like God</li>
<li>E as the belief that human beings have evolved in conformity with current evolutionary theory</li>
<li>R as the proposition that our faculties are "reliable", where a cognitive faculty is "reliable" if the great bulk of its deliverances are true.</li>
</ul>
</blockquote>
<div>
First, Plantinga explains why we should not expect creatures who have evolved naturalistically to be able to reliably form true beliefs, i.e. that the probability of R given E and N is very low. </div>
<div>
<br /></div>
<div>
Now, if he's right about this (and I don't think he is), he is correct that naturalism becomes untenable. The argument is rather simple.</div>
<div>
<br /></div>
<div>
If we are unlikely to have the ability to form true beliefs, then none of our beliefs are reliable, and so neither is the theory of evolution itself, which is a product of those beliefs. So if you believe in naturalism, it is irrational to believe in evolution.</div>
<div>
<div>
<br /></div>
<div>
Plantinga's resolution is to accept evolution and to reject naturalism, as accepting naturalism while rejecting evolution is not tenable - there's no credible alternative to evolution and the supporting evidence is overwhelming.</div>
</div>
<br />
<div>
All of this is fine as far as I'm concerned, once you accept his core premise.</div>
<div>
<br /></div>
<div>
<span style="font-size: large;">The Core Premise of EAAN</span></div>
<blockquote class="tr_bq">
<b>P1: P(R|E&N) is low - (there is only a low or negligible probability that we have the ability to reliably form true beliefs, given philosophical naturalism and evolution)</b></blockquote>
<div>
Plantinga acknowledges four distinct possibilities regarding the interaction of beliefs and adaptive behaviours.</div>
<div>
<div>
<ol style="text-align: left;">
<li>Beliefs do not affect behaviour.</li>
<li>The semantic content or meaning of beliefs does not affect behaviour, only their physical representation in the brain (which could potentially represent other meanings).</li>
<li>Beliefs tend to affect behaviour but are maladaptive.</li>
<li>Beliefs tend to affect behaviour adaptively, but can be true or false.</li>
</ol>
</div>
</div>
<div>
For each of these possibilities, Plantinga explains why P1 would hold.</div>
<div>
<br /></div>
<div>
However, I will only address the last, as it is the only one I deem to be plausible and compatible with philosophical naturalism.</div>
<div>
<br /></div>
<div>
<div>
Pigliucci presents the following premises, in my view correctly summarising Plantinga's assumptions.</div>
<div>
<blockquote class="tr_bq">
1. Our beliefs about the world can only have evolutionary consequences if they affect our behaviors (otherwise they are invisible to natural selection);<br />2. Natural selection favors advantageous behaviors, not directly the ability to form true beliefs;<br />3. Natural selection has no way to favor true non-adaptive beliefs over false but adaptive beliefs.</blockquote>
</div>
</div>
<br />
<div>
So, it seems clear that beliefs have no direct bearing on evolution, only the behaviours that result. When we see an animal fleeing a predator, it doesn't matter why the animal thinks it is running away as long as it escapes to reproduce and pass on its genes. Any beliefs that produce this behaviour will be sufficient, and they need not be true.</div>
<div>
<br /></div>
<div>
The classic example Plantinga usually gives is that of an early hominid escaping from a tiger. We might expect, on witnessing this event, that the hominid is running away because it correctly believes the tiger will kill and eat it (and that it is a bad thing to be killed and eaten).</div>
<div>
<br /></div>
<div>
However, a number of alternative false beliefs could have produced the same behaviour.</div>
<div>
<ul style="text-align: left;">
<li>It is good to be killed and eaten - however this tiger appears too weak. I should run away to find a more ferocious tiger!</li>
<li>What a cute animal! I must approach it and pet it! Therefore, I shall run in the opposite direction, as this is the most efficient way to approach it.</li>
<li>A tiger! The signal that the race has started! I must be quick if I am to win!</li>
<li>What a happy day! A tiger! Everyone knows that it is good luck to see a tiger, but you have to acknowledge this by running away.</li>
<li>It's a fearsome dragon! I must escape!</li>
</ul>
<div>
The possibilities are endless. There is only one true belief which will lead the hominid to escape, but limitless false ones. As evolution doesn't care what the hominid believes so long as it escapes, it seems improbable that it would hit upon the one belief, out of infinite possibilities, that is both true and leads to the correct behaviour. In fact, the probability that the hominid has the correct belief would seem to be negligible.</div>
</div>
<div>
<br /></div>
<div>
And this is true in general. There is no situation where evolution would have preferred a true adaptive belief to a false adaptive belief, and so no reason to expect that a naturalistically evolved creature should ever form a true belief except as a case of extreme coincidence.</div>
<div>
<br /></div>
<div>
As such, though we should expect any naturalistically evolved creature to behave in an adaptive and apparently intelligent way, the reasons for these behaviours are likely to be completely crazy, as there is no incentive for evolution to prefer sanity.</div>
<div>
<br /></div>
<div>
A belief that we have evolved in a naturalistic universe therefore leads inevitably to the conclusion that we are all insane, and all of our beliefs are probably inconsistent, random and untrue, while we remain oblivious to this and thrive regardless.</div>
<div>
<br /></div>
<div>
Plantinga then goes on as outlined above to defend our sanity by rejecting naturalism. This rejection makes tenable the position that we are sane, as some supernatural force may have intervened in evolution and selected for sanity rather than mere adaptability.</div>
<div>
<br /></div>
<div>
<span style="font-size: large;">Defending P1</span></div>
<div>
<br /></div>
<div>
A common response to this argument, and the one favoured by Pigliucci, is that evolution is not the only mechanism by which we have acquired the ability to form beliefs. Our millennia of civilisation have allowed us to develop sophisticated cognitive tools to overcome our evolutionary shortcomings. We have developed science, mathematics, philosophy and logic to augment our natural abilities, and these tools have proven reliable, as shown by the wonderful technologies they have allowed us to develop and the many correct predictions we have made about the natural world.</div>
<div>
<br /></div>
<div>
We did not simply evolve the ability to do quantum mechanics - we are genetically pretty similar to Cro-Magnon man but we are light years ahead in understanding and ability. These abilities have arisen due to the accumulated experience of countless generations of our ancestors, passing along ideas that work and discarding those that don't. There's more than natural selection at work - cultural evolution comes into play also, and cultural evolution does indeed favour true beliefs over false ones.</div>
<div>
<br /></div>
<div>
The problem with this argument is that all of the tools that we have acquired to help us form true beliefs can only be trusted if our biological inheritance of belief-generating apparatus is reliable. The ability to do quantum mechanics is certainly not innate, but the potential to learn and develop something so abstract - afforded by our instinctive intuitions regarding logic and truth - certainly is.</div>
<div>
<br /></div>
<div>
If we are profoundly deluded, then any system we have developed to help us cope with our delusion must also be suspect, no matter how convincing it appears to us. Any argument to the contrary is like a madman in an insane asylum insisting he is perfectly sane because the voices in his head tell him so.</div>
<div>
<br /></div>
<div>
There is no escaping this conclusion, as any argument to the contrary must necessarily depend on the rationality it seeks to establish. You can point at all of our achievements and claim that our track record shows our beliefs must be correct, but perhaps this conclusion simply isn't borne out by this evidence. We may be incorrect in all of our basic assumptions about logic and argument, or perhaps we have achieved nothing, and are merely dreaming of being masters of technology. No conclusion is possible if our innate rationality is in doubt, including the conclusion that the scientific method or our philosophical heritage have equipped us with the ability to form true beliefs.</div>
<div>
<br /></div>
<div>
In my view, P1 overcomes this criticism unscathed. In my next post, I'll attempt to defeat it with a criticism from another angle.</div>
</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com13tag:blogger.com,1999:blog-5801278565856116215.post-3213813808819717282013-07-11T23:03:00.003+01:002013-07-11T23:06:34.008+01:00Strong AI: Conclusion<div dir="ltr" style="text-align: left;" trbidi="on">
I've taken a long hiatus while I tried to think if there was anything substantive I had left to say about Strong AI. I didn't come up with much, so I'll wrap it up with this post.<br />
<br />
<a name='more'></a>I think I have shown that there is no good argument against Strong AI, the view that any sufficiently intelligent and self-reflecting computer system would be conscious in the same sense that we are, or, put in other words, that consciousness is fundamentally a computational process.<br />
<br />
The arguments against this view come in a few different flavours.<br />
<br />
There are those, such as John Searle, who seem to misunderstand Strong AI proponents as suggesting that physical components of computers can be conscious. He takes the idea of a "conscious computer" too literally, as referring to a physical object. Instead, it is my view that it is not physical objects (brains) which are conscious but computational processes (minds).<br />
<br />
Some arguments are effectively restatements of the hypothesis that Strong AI is false, as in the philosophical zombie thought experiment. These arguments prove nothing but act to articulate and evoke the (false!) intuition that Strong AI is incorrect.<br />
<br />
And then there are those arguments, such as the modal argument presented by Alvin Plantinga, that miss the point entirely.<br />
<br />
I have not come across any argument specifically against Strong AI that is in any way convincing.<br />
<br />
Unfortunately, the same cannot be said for Weak AI. The Gödel argument as popularised by Roger Penrose is much more formidable, and would seem to suggest that it is impossible to construct any truly intelligent system.<br />
<br />
To account for human intelligence, Penrose resorts to a belief in exotic uncomputable physics for which there is no evidence at present and yet which must have a profound effect on the brain. Others have used the same argument to claim that Gödel has proven the existence of the soul. In any case the conclusion that Weak AI is impossible is incompatible with physics as we know it, given that humans serve as examples of intelligent physical systems.<br />
<br />
If we assume that physics as we know it is broadly correct, then the Gödel argument against Weak AI must be wrong, probably because we are not the transcendentally intelligent beings we think we are, but merely operate within the limits of what Gödel allows.<br />
<br />
And while there are other arguments for AI which appeal to me, such as the argument from evolutionary parsimony, it is this assumption of the primacy of physical laws that is most convincing to me.<br />
<br />
If physical laws govern the universe entirely, then human beings must be entirely driven by physics. Given that human beings are conscious, intelligent physical systems, it is obviously possible for physical systems to be intelligent and conscious. Of systems designed by humans, only computational processes seem to approach any semblance of intelligence. Furthermore, neuroscience has revealed that the brain does indeed seem to be carrying out a computational process of astonishing complexity, and that disruption of this process leads to impaired intelligence and/or unconsciousness.<br />
<br />
In the face of this evidence, it is perfectly obvious to me that human intelligence is fundamentally computational in nature. Even if this were not the case, no credible alternative has been proposed. Even the idea that we have souls doesn't explain anything, as there is no account of how souls make decisions or influence the physical world, or why the ability of the mind to function correctly is so intimately tied to the condition of the physical brain.<br />
<br />
I believe in Strong AI because if we accept the basic premise that the universe is governed entirely by physical laws, as it certainly seems to be, then any other conclusion seems to be perverse.</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com6tag:blogger.com,1999:blog-5801278565856116215.post-25030090176332194372013-02-20T20:29:00.000+00:002013-07-11T20:40:01.268+01:00Strong AI: The Gödel Problem<div dir="ltr" style="text-align: left;" trbidi="on">
<div class="separator" style="clear: both; text-align: center;">
<a href="http://upload.wikimedia.org/wikipedia/commons/4/42/Kurt_g%C3%B6del.jpg" imageanchor="1" style="clear: left; float: left; margin-bottom: 1em; margin-right: 1em;"><img border="0" src="http://upload.wikimedia.org/wikipedia/commons/4/42/Kurt_g%C3%B6del.jpg" /></a></div>
In my previous post, I alluded to an argument made by <a href="http://en.wikipedia.org/wiki/John_Lucas_(philosopher)">John Lucas</a> and popularised by <a href="http://en.wikipedia.org/wiki/Roger_Penrose">Roger Penrose</a> against the possibility of AI: that <a href="http://en.wikipedia.org/wiki/Kurt_G%C3%B6del">Gödel</a>'s <a href="http://en.wikipedia.org/wiki/G%C3%B6del%27s_incompleteness_theorems">Incompleteness Theorems</a> suggest that even weak AI is impossible. We could never make a computer system that behaved intelligently.<br />
<br />
Loosely put, Gödel's Incompleteness Theorems put fundamental limits on what can be achieved by working with formal systems.<br />
<br />
In particular, some argue that they show that formal systems can never be as intelligent as human beings. In this post, I argue against this view and propose that humans are subject to the same limitations.<br />
<br />
<br />
<a name='more'></a><span style="font-size: large;">The Lucas/Penrose argument</span><br />
<br />
Any system comprising a syntax for expressing statements and a set of rules for deriving new statements from old ones is called a formal system. Arithmetic is one such system. It describes how we may express statements about numbers and what operations we can perform to get from one true statement to another (e.g. divide both sides of an equation by the same amount).<br />
<br />
There are a number of interesting results from these theorems, but most importantly for our purposes, Gödel proved that any such system powerful enough to express basic arithmetic is either inconsistent (i.e. can prove mutually contradictory statements) or incomplete: there are statements expressible in the language of the system which are true but which could never be proven within the rules of the system.<br />
<br />
The way he did this is ingenious and complex, and I don't pretend to fully understand it. It can be explained in terms of a trick similar to the well-known <a href="http://en.wikipedia.org/wiki/Liar_paradox">liar's paradox</a> - "this sentence is a lie".<br />
<br />
Gödel found a way to take the statement "This statement is not provable in this formal system" and encode it so as to express it as an equivalent mathematical statement within that formal system. To prove this statement true within the formal system would be to contradict that statement, and therefore it must not be possible to do so, and so the statement must be true.<br />
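The self-reference trick at the heart of Gödel's construction has a well-known programming analogue: a quine, a program that prints its own source code. This is only a loose illustration of self-reference (it involves none of Gödel's arithmetic encoding), but it shows how a piece of data can double as a description of the very thing that manipulates it:

```python
# A quine: the string s is both the program's data and, via %-formatting,
# a template for the program's own source. %r substitutes repr(s) and %%
# becomes a literal %, so printing s % s reproduces the two code lines
# below exactly.
s = 's = %r\nprint(s %% s)'
print(s % s)
```

Just as the quine talks about its own text without any outside help, Gödel's sentence talks about its own provability from inside the formal system.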
<div>
<br />
By a simple step of logic, we humans can see and prove that the statement is true but that it could never be proven to be true within the formal system.</div>
<div>
<br />
However, a computer program can be thought of as a formal system. In this case, the syntax describes various states in the computer's memory, while the algorithm implemented by the program can be thought of as rules to express how the computer program's state can evolve over time.<br />
<br />
It appears that we humans, using our ineffable ingenuity, have done what a formal system such as a computer cannot. It follows, according to Lucas and Penrose, that no computer could ever be truly intelligent, and that we humans are more than mere computing machines.</div>
<div>
<br /></div>
<div>
I don't agree with this interpretation.</div>
<div>
<br />
<span style="font-size: large;">Refuting the argument</span><br />
<br />
For a given formal system, we could in principle build a computer program that would prove all the provable statements within that system. Gödel shows that this computer program will never be able to prove all true statements.</div>
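The idea of a program that mechanically churns out the provable statements of a formal system can be made concrete with a toy example. The sketch below is my own illustration (not anything Lucas or Penrose discuss): it enumerates theorems of Hofstadter's miniature MIU system, which has the single axiom "MI" and four rewrite rules.

```python
from collections import deque

# Hofstadter's MIU system: axiom "MI"; rewrite rules
#   1. xI    -> xIU   (append U after a trailing I)
#   2. Mx    -> Mxx   (double everything after the leading M)
#   3. xIIIy -> xUy   (replace any III with U)
#   4. xUUy  -> xy    (delete any UU)

def miu_theorems(max_len=6):
    """Breadth-first enumeration of MIU theorems derivable without
    any intermediate string exceeding max_len characters."""
    seen = {"MI"}
    queue = deque(["MI"])
    theorems = []
    while queue:
        s = queue.popleft()
        theorems.append(s)
        successors = []
        if s.endswith("I"):                      # rule 1
            successors.append(s + "U")
        if s.startswith("M"):                    # rule 2
            successors.append("M" + s[1:] * 2)
        for i in range(len(s) - 2):              # rule 3
            if s[i:i + 3] == "III":
                successors.append(s[:i] + "U" + s[i + 3:])
        for i in range(len(s) - 1):              # rule 4
            if s[i:i + 2] == "UU":
                successors.append(s[:i] + s[i + 2:])
        for t in successors:
            if t not in seen and len(t) <= max_len:
                seen.add(t)
                queue.append(t)
    return theorems

print(miu_theorems())  # "MIU" and "MUI" appear; "MU" never does
```

Interestingly, "MU" is not a theorem of this system at all: by reasoning about an invariant (the number of I's is never divisible by three), we can see from outside the system that "MU" is underivable, even though no derivation inside the system can establish that. It is a faint, informal echo of the Gödelian situation.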
<div>
<br /></div>
<div>
However, for any such statements we know to be true, it is possible to invent a more complex formal system which can prove these statements (e.g. by taking the Gödel sentence as an additional axiom). These more complex systems are still incomplete, of course. We can still create further true statements which cannot be proven even by these systems. </div>
<div>
<br /></div>
<div>
And yet, the point remains that if there is any statement we can prove to be true, but which cannot be proven to be true within a given formal system, there is in fact some more advanced formal system that can prove it to be true. It follows that for any statement we can prove to be true, there exists a computer program which can do likewise.</div>
<div>
<br /></div>
<div>
Now, if the mind is fundamentally computational in nature, then it corresponds to some unimaginably complex formal system. Perhaps the computer program that corresponds to Lucas/Penrose's mind is just complicated enough to prove anything that they can prove and no more. They have no reason to suspect that their minds are capable of proving all true statements, and so they have not shown that human minds are not formal systems.<br />
<br />
<span style="font-size: large;">The mathematician and the formal system</span><br />
<br />
The counter-argument to this view would be that if a mathematician were presented with a book documenting the formal system of her own mind, then she could construct a Gödel sentence for that system. The mathematician would be sure that the sentence is true, but the formal system could not prove it; therefore the mathematician's mind cannot be equivalent to that formal system.<br />
<br />
However, there's a big assumption hidden there, and that is that she could understand her own formal system well enough to construct a Gödel sentence for it. Constructing Gödel sentences is not easy, and gets harder the more complex a formal system becomes. There is no reason to believe that any human would be capable of producing a Gödel sentence for their own mind.<br />
<br />
In fact, Gödel's incompleteness theorem rather suggests to me that this is impossible, just as no computer program would be capable of generating its own Gödel sentence. To try to achieve either task is akin to trying to fit a rigid container entirely into another of the same size and shape.</div>
<div>
<br />
<span style="font-size: large;">Unprovably true</span><br />
<span style="font-size: large;"><br /></span></div>
<div>
But if mathematicians are formal systems, then there must be true statements of mathematical fact which we could never ever prove. What would this look like, and do we have any examples?<br />
<br />
Unfortunately, I cannot give you definite examples of such statements. In fact, this is necessarily impossible: to know that a statement is true, we would have to prove it, and then it would not be unprovable after all. For any genuinely unprovable truth, then, either we can never know for sure that it is true, or we can never know for sure that it is unprovable.<br />
<br />
However, there are many candidates for mathematical statements which we might suspect to be true but unprovable.<br />
<br /></div>
<div>
One of the simplest to understand is Goldbach's conjecture.</div>
<div>
<br /></div>
<div>
Goldbach's conjecture is that every even number greater than two is expressible as the sum of two prime numbers, e.g. 4 = 2 + 2, 6 = 3 + 3, 8 = 5 + 3, etc. No counter-example has ever been found, and so most mathematicians suspect it to be true, and yet nobody has managed to find a proof. This may be because proving it is impossible.</div>
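Checking the conjecture for any particular even number is entirely mechanical; what is missing is a proof covering all even numbers at once. A minimal sketch (trial-division primality testing, fine for small numbers):

```python
def goldbach_pairs(n):
    """All ways to write even n as an (unordered) sum of two primes."""
    def is_prime(k):
        if k < 2:
            return False
        return all(k % d for d in range(2, int(k ** 0.5) + 1))
    return [(p, n - p) for p in range(2, n // 2 + 1)
            if is_prime(p) and is_prime(n - p)]

print(goldbach_pairs(10))   # [(3, 7), (5, 5)]
print(goldbach_pairs(100))  # [(3, 97), (11, 89), (17, 83), (29, 71), (41, 59), (47, 53)]
```

No finite amount of such checking amounts to a proof: the conjecture has been verified computationally far beyond anything this sketch could reach, yet its truth for all even numbers remains open.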
<div>
<br />
There are many other such examples. Given the abundance of unproven conjectures in mathematics, I am at a loss to explain why Lucas and Penrose assume that Gödel proves that human thought is not computational rather than assuming that, just like all formal systems, we will never be able to prove all true statements.<br />
<br />
<span style="font-size: large;">Consistency</span><br />
<br />
There are a number of other problems with the Lucas/Penrose argument. One of them is that Gödel's theorem only applies to consistent formal systems, that is systems which cannot derive statements which are mutually contradictory.<br />
<br />
There is no reason to suppose that human beings are consistent. In fact they rather manifestly seem not to be! People commonly believe all kinds of things that are mutually contradictory. Errors in reasoning are common, and not even the most eminent mathematicians are infallible.<br />
<br />
Furthermore, even if a mathematician were perfectly consistent, Gödel's theorem does not apply unless she can prove she is consistent, which is a very tall order.<br />
<br />
<span style="font-size: large;">The Whiteley Sentence</span><br />
<br />
Just to show that you are no more immune to Gödel's trick than a computer, philosopher C.H. Whiteley proposed the following trick.<br />
<br />
Suppose I were to say to you: "You cannot prove this sentence".<br />
<br />
If you disagree with me, then you believe you can prove the sentence, which says that you cannot prove the sentence. A clear contradiction.<br />
<br />
So if you are consistent, then you must agree with me. You must believe something you cannot prove, which in itself is perhaps inconsistent.<br />
<br />
However, even though we both agree that you cannot prove the sentence, it seems clear to me that I can prove it by the very same logic I used to show that you could not.<br />
<br />
By Lucas and Penrose's logic, it seems that I have shown that my intelligence transcends yours, that my mind is somehow more perfect. Of course this is not so, as you could do precisely the same thing to me. If we are vulnerable to the same trick as formal systems, then why should we suppose that we are not formal systems?<br />
<br />
<span style="font-size: large;">Further Reading</span><br />
<a href="http://www.iep.utm.edu/lp-argue/">http://www.iep.utm.edu/lp-argue/</a><br />
<br />
Thanks to Fionntan Roukema and Andrew Picken for help with questions about Gödel's theorems.</div>
</div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com0tag:blogger.com,1999:blog-5801278565856116215.post-60455070598990588782013-01-28T20:33:00.000+00:002013-07-11T20:32:09.457+01:00Strong AI: Naturalism and AI<div dir="ltr" style="text-align: left;" trbidi="on">
<br />
<br />
In this article, I want to examine the issues presented by the problem of consciousness when examined from a naturalistic or physicalist point of view. What I present in this article is probably not a knock-down argument proving the computational theory of consciousness (Strong AI) within this framework. Rather it shows that the alternatives are rather problematic.<br />
<br />
<a name='more'></a><br />
When we assume a naturalist viewpoint, what do we mean when we say that consciousness exists?<br />
<br />
From this viewpoint, all real phenomena that exist in the world are the result of physical interactions.<br />
<br />
So one interpretation is that consciousness has a physical substance, perhaps some as yet undiscovered form of matter. This view dates back at least to <a href="http://en.wikipedia.org/wiki/Democritus">Democritus</a>, the ancient Greek philosopher who first proposed the concept of atoms. He believed that fire was composed of fire atoms and that water was composed of water atoms. He also believed in soul atoms.<br />
<br />
There is certainly much about fundamental physics that we have yet to learn. There are substances which remain undiscovered, for example the nature of dark matter. The idea that consciousness might have something to do with some as yet undiscovered form of matter such as this is similar to concepts explored in fictional works such as the <a href="http://en.wikipedia.org/wiki/His_Dark_Materials">His Dark Materials</a> trilogy and popular non-fiction works such as the <a href="http://en.wikipedia.org/wiki/The_Tao_of_Physics">Tao of Physics</a>.<br />
<br />
Yet if this were the case, then such matter would be quite fundamentally different from any other kind of matter physics has ever postulated. Elementary particles in physics can only be sensibly described in terms of physical law -- mathematical relations they have to each other and certain predictable physical behaviours they exhibit.<br />
<br />
Gravitons (if they exist) mediate gravity. Photons mediate light. Electrons carry electricity. We might want to posit a "psychon" that confers consciousness in much the same way.<br />
<br />
Gravity, light and electricity can all be understood according to the mathematical relations that form the physical laws of the universe. Consciousness, on the other hand, is an intuitive concept we share but which we cannot adequately define -- not even in words and certainly not in terms of simple mathematical relations.<br />
<br />
Furthermore, even if we did discover the psychon, we could never be sure that it was responsible for consciousness. If we detected psychons in the brains of humans but not other animals, would that show that only humans were conscious or that only human brains have these particles? If we detected them in all animal brains but not in computers would that prove that computers were not conscious or would it simply show that psychons have something to do with organic brains?<br />
<br />
So perhaps we reject the idea of a psychon. There might be other mysterious applications of existing physical laws at work, particularly quantum mechanics. This view is held by people such as <a href="http://en.wikipedia.org/wiki/Roger_Penrose">Roger Penrose</a>, who with <a href="http://en.wikipedia.org/wiki/Stuart_Hameroff">Stuart Hameroff</a> has proposed that nano-scale structures within cells called <a href="http://en.wikipedia.org/wiki/Microtubule#Postulated_role_in_consciousness">microtubules</a> might be small enough to utilise quantum physics to affect brain function.<br />
<br />
Penrose's motivation arises from <a href="http://en.wikipedia.org/wiki/G%C3%B6del's_incompleteness_theorems">Gödel's incompleteness theorems</a>, which he takes to imply that it is impossible for a system of algorithms to produce human mathematics (a not-uncontroversial interpretation to say the least!) and so the human mind cannot be computational. Introducing quantum weirdness is supposed to somehow allow us to transcend the limits of algorithmic processing and allow us to achieve human ingenuity.<br />
<br />
Even if he's right, however, quantum physics would therefore be something our brains exploit in order to have adaptive intelligent behaviour. It's much less clear how it can resolve the problem of why we have genuine subjective experience, qualia and so on.<br />
<br />
Once again, it's hard to see how quantum physics, which is defined in terms of mathematical relations and functions, can be used to explain something as nebulous and ill-defined as the intuitive concept of consciousness.<br />
<br />
There may be other hypotheses supporting the concept of consciousness as arising from basic physical principles, but they must all be subject to this criticism. Basic physical laws are mathematical. Consciousness is not defined in terms of mathematics. Therefore, it seems unlikely that we can account for consciousness in terms of fundamental physics.<br />
<br />
But there are plenty of things which do exist which are not defined in terms of fundamental physics, but which emerge from complex systems, built layer upon layer upon foundations which can be reduced to physical law only after much analysis.<br />
<br />
Life is a good example. Before science was able to grapple with the underlying mechanisms supporting life, a leading theory to explain it involved the positing of a substance called <a href="http://en.wikipedia.org/wiki/%C3%89lan_vital">élan vital</a> (vital impetus). It was supposed that this substance was responsible for life, and that it might even be possible to bring non-living things to life if the substance could be isolated.<br />
<br />
Like <a href="http://en.wikipedia.org/wiki/Phlogiston">phlogiston</a> and <a href="http://en.wikipedia.org/wiki/Luminiferous_aether">aether</a>, élan vital does not exist. Life is not explained as a direct consequence of fundamental physics, but as an indirect and emergent phenomenon arising from incredibly complex biological systems designed by evolution upon layers biological, material, chemical, atomic and physical.<br />
<br />
So there is no smallest physical unit of life. There is no clear distinction between living and non-living. Life is not a phenomenon which can be defined mathematically, and so it is not present at the level of basic interaction between fundamental particles.<br />
<br />
And yet it would seem perverse to claim that this implies that life does not exist at all. Rather, we recognise that many of the things which exist in the universe do so not as fundamental physical objects but as useful labels describing the high-level emergent phenomena we observe arising from the physical interactions taking place in complex systems.<br />
<br />
All phenomena which exist in the universe which cannot be sensibly studied at the level of fundamental physics are of this sort. Universities, market forces, evolutionary pressures, fashion trends, and even individual people are such entities. Consciousness is no different.<br />
<br />
Some day we might find an alien planet that has reproducing and evolving organisms. We would call this life (but not as we know it!), even if its chemistry had next to nothing in common with terrestrial biological chemistry. Like other such emergent phenomena, life is not tied to specific details of its low-level implementation.<br />
<br />
Consciousness is probably no different. But if this is so, the relationship between consciousness and fundamental physics is indirect. As such, it seems unreasonable to insist that consciousness has any crucial dependence on particular physical structures within the brain. Just as we can take an Xbox game and change the details of the low-level implementation so it will work on a PlayStation, so can we do the same for any high-level emergent phenomenon.<br />
<br />
There seems to be no reason in principle why we could not find consciousness supported by a physical substrate entirely different from a human brain. If a computer program had software structures analogous to our physical neurons and neurotransmitters, and if that program appeared to exhibit all the signs of consciousness we recognise in ourselves, there is no reason to suppose that there would be any fundamental qualitative difference between our conscious experiences.<br />
<br />
If consciousness is emergent, then the physical details don't matter.<br />
<br /></div>
Disagreeable Mehttp://www.blogger.com/profile/15258557849869963650noreply@blogger.com2