Philosophical thoughts from an unapologetic contrarian
Tuesday, 29 March 2022
Strong Emergence and Free Will
In this post I will discuss whether the idea of strong emergence can help us to make sense of free will and a few related issues. These ideas have been on my mind lately largely due to the above discussion, but I've been thinking about this for a while.
Background
(You may want to skim or skip this bit if you're familiar with the terms weak and strong emergence, libertarian free will, compatibilism, etc. Or read on if you want to double-check my understanding of same. There are also a couple of useful analogies about tornados and the boiling points of compounds that may be thought-provoking.)
Emergence is the general idea that we can describe the behaviour of a system at different levels, and we see different types of behaviour at different levels. If we think about a tornado, at a low level there are only individual air molecules bumping around. If we think at an intermediate level, we can see there are regions of high pressure and low pressure, and directions of air flow. At a very high level we can think of the tornado as a single object with a certain shape travelling at a certain velocity. We can say the high level tornado emerges from the low level collisions of air molecules.
It is often useful to use different laws of nature to describe the different levels of behaviour. At a low level, we might use Newtonian laws to model the collisions between particles, or if we want to be very precise for a small number of particles we might even go so far as to use the Schrödinger equation. At an intermediate level, we might use the Navier-Stokes equations to model fluid dynamics. At a high level we might have mathematical models of the ways tornados tend to move and behave.
For tornados at least, the assumption is usually that these higher level descriptions are instances of weak emergence. This means that the higher level behaviour is in fact fixed by the facts and the laws at a low level. We imagine that an infinite intelligence knowing the low-level laws and the low-level state would be able to use those laws to simulate the entire system and so predict not only the behaviour of individual particles but of the whole tornado. The reason we need the higher level descriptions in practice is then pragmatic and not fundamental -- it is only that high level descriptions are vastly more computationally efficient than low level descriptions, and not that they are required in principle to model high level behaviour.
However, until we actually derive the high level from the low level in a computation of some sort, this remains an assumption. I'm pretty sure, for instance, that nobody has been able to calculate the boiling point of water at standard pressure from first principles of particle physics, because it is absurdly computationally difficult to do so with current technology and analytic techniques. We have faith that it should be possible in principle, but we cannot be sure until we do it.
An alternative logical possibility is that there is just a fundamental law of nature governing boiling points. This law might give the result that water boils at 100°C at standard pressure, superseding anything we might be able to calculate from particle physics. This would be an example of strong emergence -- where high level descriptions and laws do not depend only on the laws of physics governing particles, but also on distinct high-level laws that in some way overrule or at least act in concert with the low-level laws to govern high-level behaviour.
We have little reason to believe that there are such strongly emergent laws governing the boiling points of chemical compounds. Where strong emergence perhaps has the most purchase is in discussion around free will. We all feel like we have free will and are responsible for our own decisions and behaviour. However, at least at first, this seems to be in tension with the idea that our behaviour is determined by the low-level physics of particle interactions. There are three common broad approaches to resolving this seeming conflict.
Compatibilism: We can reconcile the two pictures. Free will exists and we are responsible for our actions. Free will is a weakly emergent phenomenon ultimately arising from particle interactions.
Hard Determinism: There is no free will. We can divide this further into fatalism, where everything is pointless and reasons are fantasies, and something closer to compatibilism, where we might deny free will but still try to make sense of something like reasons, responsibility and decision-making.
Libertarianism: Our behaviour is not governed by low-level physics. We are responsible for our own actions in a way that cannot be reduced to lower levels of description.
Personally, I'm somewhere between compatibilism and hard determinism. I think our behaviour is weakly emergent from low-level physics, but even so we can make sense of concepts analogous to decision-making, personal responsibility and free will. In such a world, I think it's a matter of convention whether we describe it by saying that free will exists or that it doesn't, and I have no strong preference. It depends only on whether we mean "compatibilist free will" or "libertarian free will".
But it's the third option I want to discuss, and in particular how strong emergence does or does not help make sense of it.
Strong Emergence and Libertarianism
Perhaps a good place to start on this is the recent Mind Chat discussion at the top of this post between Philip Goff, Keith Frankish and their guest Helen Steward, who defends a thesis something like libertarianism (though she doesn't like the term, preferring to call herself an "agency incompatibilist"), making use of a strongly emergent idea of agent causation. I confess I have not yet had an opportunity to read her book, A Metaphysics for Freedom, but I have done my best to understand her viewpoint by following the aforementioned discussion and reading a little about it in reviews such as this. As such, I'm open to correction if I'm getting some of her approach wrong.
Steward's view can be regarded as an instance of strong emergentism, in that she argues that agents such as humans and animals are the causes of their behaviour in a way which cannot be reduced to low-level physics. On this view, the behaviour of agents can only be explained with reference to concepts such as reasons and desires, and we will not get a correct picture of reality by simulating the low-level laws of physics alone. She describes a world where you could perform such a low-level simulation successfully as mechanistic and Laplacian (after Laplace's demon), and says that we no longer have any reason to think that the universe is like that since Newtonian physics has been overturned.
As far as I can tell, the main motivation for denying that the universe is mechanistic is that we are obliged to believe in agent-level concepts such as free will, reasons, etc, while pointing at some difficulties in the compatibilist program for reconciling these vital concepts with a mechanistic universe.
We are obliged to believe in free will in part due to the intuitive difficulty in dispelling such notions, but more importantly because the structure of society depends on notions such as personal responsibility and agency.
The difficulties with compatibilism seem to be that overly simplistic models of decision-making in a compatibilist framework cannot make sense of various corner cases. For example, a compatibilist might say that we perform an action intentionally if the action is caused by our beliefs and desires. However, consider the following case proposed by Donald Davidson in 1973.
A climber might want to rid himself of the weight and danger of holding another man on a rope, and he might know that by loosening his hold on the rope he could rid himself of the weight and danger. This belief and want might so unnerve him as to cause him to loosen his hold, and yet it might be the case that he never chose to loosen his hold, nor did he do it intentionally.
According to Steward, given that we have such reasons to prefer something like strongly emergent agent causation, the burden of proof should be on those who say that the universe is mechanistic.
I don't want to get into these motivations too much, but I'll briefly say that the fact that society depends on a belief in agent causation doesn't really weigh in favour of its existence, any more than a society's dependence on a religious framework means that God exists. We may wish it existed, but believing something because we want it to be true is the informal fallacy of wishful thinking. Secondly, just because it is difficult to make sense of compatibilism in some corner cases doesn't mean that it is impossible to do so. Even if we cannot, it may still be the case that compatibilism works fine most of the time, which is perhaps all we need to build a functioning society. All in all, it seems to me that the reasons to believe in agent causation are extremely weak and we should disbelieve in it on grounds of parsimony alone. We have no need to postulate agent causation to explain any of our empirical observations, so we should not.
But setting all this aside, let's entertain the idea that agent causation does exist. Some interesting issues arise.
The Issues
Three interesting and related issues in particular have been raised in recent discussions I've followed or taken part in on this topic.
Is strong emergence compatible with naturalism?
Does strong emergence imply that our current understanding of physics is incorrect?
Does strong emergence provide a sound basis for libertarianism?
Strong Emergence and Naturalism
Proponents of libertarianism are often taken to be denying naturalism, the proposition that supernatural phenomena do not exist. This is particularly so if they talk of a role for a soul, a spirit or a non-material mind of some kind in decision-making. Many of them are happy to accept that they are proposing a supernatural world view, particularly if arguing from a theistic perspective.
This is not the case for Helen Steward. She believes that if agent causation exists, it is a natural phenomenon, and so libertarianism is compatible with naturalism in her view.
However I think we need some sort of criterion to distinguish what we mean by natural or supernatural. A believer in ghosts, prayer or telepathy could claim to be a naturalist by saying that these are natural phenomena. If we can call anything "natural", then "naturalism" seems to be an empty position. I've delved into this topic earlier and concluded that natural phenomena are those that can be said to be governed by, supervene on, or weakly emerge from laws of physics that are expressible mathematically. This means that, in my view, strong emergence is only compatible with naturalism if the laws on which it ultimately depends can be expressed mathematically. Even ghosts might be natural, but if so, they must be understood to arise somehow out of the laws of physics, whatever they may be. The hallmark of the supernatural is to suggest that some things are irreducibly mysterious and cannot be understood by the reductive methods of naturalistic science.
Note that I posit weak emergence as part of a naturalistic world view, but this doesn't rule out strong emergence. As long as the strongly emergent laws are expressible mathematically, they can act as a basis from which further higher-level phenomena can weakly emerge.
I think it is plausible for a mathematically robust strongly emergent law to govern the boiling of water. It's not too hard to imagine some absolutely objective criterion for what constitutes a molecule of water, and strongly emergent laws that describe how it interacts with other similar molecules and so fix the boiling point of water.
It's a little less easy to see how strongly emergent laws would apply to agents. A water molecule is pretty clear cut. There is relatively little ambiguity about what counts as a water molecule and what does not (though there is some, e.g. whether heavy water counts). But mathematically rigorous strongly emergent laws to describe the behaviour of agents would necessitate that there be objective criteria by which we could unambiguously say what counts as an agent and what does not. This needs to be absolutely clear, as the universe needs to "know" when to apply the laws for particles and when to apply the laws describing agents. But this seems implausible -- even Steward admits that it's not clear what should count as a mechanism and what should count as an agent, e.g. which category applies to bacteria.
Once you've identified what counts as an agent, you then need to have some mathematically precise laws to describe how the agent behaves. This might take the form of some sort of algorithm, for instance. This is perhaps less of a problem, but it does lead to the rather unintuitive picture where the high-level algorithm a system is running directly influences the low-level "hardware" running that same algorithm. This kind of top-down causation should seem thoroughly alien to a computer scientist such as myself, but it's hard to rule it out as a logical possibility.
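To make the shape of such a law concrete, here is a minimal toy sketch in Python. It is purely illustrative and not drawn from Steward or anyone else: the "agent pattern", the agent-level rule and the particle-level rule are arbitrary stand-ins. The point is only that both the criterion for agenthood and the agent-level algorithm would need to be specified this exactly for the law to be well-defined.

```python
# Toy sketch of strongly emergent agent causation (purely illustrative).
# The world is a ring of cells. Each cell normally evolves by a local
# "particle" rule, but wherever the (arbitrary, hypothetical) AGENT_PATTERN
# occurs, a distinct high-level rule overrides the low-level one -- so the
# pattern a region instantiates steers the very cells that realise it.

AGENT_PATTERN = (1, 0, 1)  # the precise criterion for "being an agent"

def particle_rule(left, cell, right):
    # ceteris paribus low-level law: flip if both neighbours disagree with you
    return 1 - cell if (left != cell and right != cell) else cell

def agent_rule(left, cell, right):
    # strongly emergent high-level law: hold the current value fixed
    return cell

def step(world):
    n = len(world)
    new = []
    for i in range(n):
        left, cell, right = world[i - 1], world[i], world[(i + 1) % n]
        rule = agent_rule if (left, cell, right) == AGENT_PATTERN else particle_rule
        new.append(rule(left, cell, right))
    return new

world = [0, 1, 0, 1, 1, 0, 1, 0]
for _ in range(3):
    world = step(world)
print(world)
```

The particular rules don't matter; what matters is that the dispatch between them is unambiguous, so the universe "knows" exactly when the agent law applies.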
So strong emergence itself does seem to be compatible with naturalism, but agent causation seems a bit of a stretch.
Strong Emergence and the Core Theory
This topic came up in an earlier Mind Chat with Sean Carroll. It was a long and interesting discussion, but much of it was taken up with a somewhat unproductive impasse between Goff and Carroll on whether strong emergence would be in conflict with the core theory of modern physics, composed of the standard model of particle physics taken together with general relativity. They are due for a rematch on this topic in a few months, which I look forward to greatly. Briefly, Goff thinks that strongly emergent laws could co-exist with the core theory without overturning it, but Carroll disagrees. Carroll says that if strongly emergent behaviour exists in the universe, then physics as we know it is simply wrong. That doesn't mean of course that strongly emergent behaviour is ruled out, but he does want to be clear about what that would mean for our understanding of physics.
I think this is perhaps a semantic disagreement on what it means for a theory of physics to be incorrect. We might want to say that a theory is correct but incomplete if it has a limited domain of applicability. Nancy Cartwright (if I understand her correctly) has proposed that nature may be a patchwork of different domains wherein different laws apply, each law being correct within its own domain. This picture might help to reconcile issues such as the famous incompatibility of quantum mechanics with general relativity, were it not for the fact that there seem to be situations where the two domains overlap (such as inside black holes or near the Big Bang). Agent causation would fit well with this patchwork view, as it proposes a new domain wherein the core theory no longer quite applies and we need a theory that accounts for agent causation instead. The point being that the core theory would still be correct in its own domain.
The problem with this view is that the core theory purports to completely describe what all particles do everywhere in the universe. It doesn't recognise any limits to its domain. If the world is a patchwork, then the core theory is incorrect for this reason alone.
Another tack may be that the core theory and strongly emergent laws work together somehow. On this view, the core theory is indeed acting on the particles in our brain, it's just that there are also influences from strongly emergent laws. On this view, again the core theory is correct, just insufficient. But again, this amounts to refuting the core theory, because the core theory purports to be an exhaustive account of the influences on particles.
I'm more in sympathy with Carroll here. If strong emergence exists, then the core theory is wrong, just because the core theory as interpreted by physicists purports to be complete. Even so, if we accept Goff's position that a rejection of the core theory's pretensions to completeness does not amount to a refutation of the core theory itself, then there is a sense in which the core theory could still be correct in certain domains even if strongly emergent phenomena do exist. I suspect Goff (and Cartwright) may be underestimating the difficulty of making such a patchwork coherent though.
Strong Emergence as a basis for Libertarianism
I think there's a clear problem with naturalistic and strongly emergent agent causation as a basis for libertarianism, which is that if it is naturalistic, then it is necessarily mechanistic. We may not be able to explain our behaviour by working up from the low-level physics of particles, but we will be able to do so by working up from the higher-level laws of agent causation, whatever they turn out to be. As far as I can see, all the same issues arise here as arise for agents weakly emerging from low-level physics. Both visions are ultimately equally mechanistic.
If behaviour weakly emerges from low-level physics, we can tell a story where this agent made a choice not because she was "bad" and "deserves to be punished" but because these particles interacted with those particles. If it emerges instead from laws of agent causation, then we can tell a story where the choice was because of the algorithm the agent was running. The exact same issues arise. Without agent causation, we have the problem that an agent is not ultimately responsible for the state or interactions of the particles driving her behaviour. With agent causation we can similarly say that the agent is not ultimately responsible for whatever the state is in the agent-causation picture. Just as with particle-level physics, the agent's behaviour is ultimately caused by mechanistic prior causes going back all the way to the Big Bang. Everything is still explained by mechanisms built upon the laws of physics. The only difference is what form the true description of the laws of physics takes, and whether those laws of physics recognise her as an agent.
In the Mind Chat episode (1:55:22), Steward is posed a question from an audience member who may or may not be yours truly.
Can we simulate or model top-down causation? If so, how is this not deterministic? If not, then it seems the rules or principles of top-down causation are not well defined.
Steward's response:
I'm not the sort of person who could write the program that's for sure! But I would imagine it could be simulated, and I'm a bit puzzled by the suggestion that that would make it deterministic because as I understand it simulation is often most valuable and useful in connection with systems that are thought to be indeterministic, or at least full of massive chaos
First of all, this is an unprepared, off-the-cuff response, so we should be charitable. It looks like she has not considered this suggestion before. I assume that if I had had more time to explain the issues I perceive, and she had had more time to explain her position, we might have got closer to the heart of the issue.
In particular, we should not take her to task too much for conflating chaos and indeterminism. I completely agree with Keith's follow-up explanation that chaos is not at all incompatible with determinism.
But I think the issue here is not really with determinism per se but with whether we can capture the behaviour of agents mechanistically. If we can indeed faithfully simulate it, even if we do so by introducing a little indeterminism in the form of pseudorandomness (or even real randomness coming from some sort of quantum measurement), then what we are doing is implementing agent behaviour mechanistically.
Perhaps Steward would want to say that though we could simulate agent causation, such a simulation would not in fact give rise to a true agent, but just a simulacrum of one which may be good enough for many purposes even if not perfect. If so, we're back to the same dilemma. Either we can make an absolutely perfect simulation of an agent, in which case agent causation is mechanistic, or we cannot, in which case agent causation is not well-defined (and starts to look supernatural by my lights).
Perhaps Steward would say that a simulated agent would have behaviour indistinguishable from a true agent, but would nevertheless not be a true agent. In that case, it is possible to get perfectly agent-like behaviour without true agents. This means that we have no reason to suppose that true agents exist -- because any evidence we could find to support the existence of true agents would also support the existence of pseudo-agents. In particular, there would be no way for evolution to select for true agency over pseudo-agency. Rather as in the philosophical zombie argument for phenomenal consciousness, the only reason we have for supposing that true agents exist at all is unevidenced personal conviction that we ourselves are true agents, which is rather unsatisfactory.
As such, it seems to me that we are forced to conclude that agent causation is either supernatural or redundant. Strongly emergent naturalistic agent causation does not work as a basis for libertarian free will.
Hi DM, I have to admit I haven't watched that discussion. (The videos are just too long. I do sometimes listen to them, when they eventually come out on audio.)
But I agree with just about everything you write here. I'm a bit more solidly in the compatibilist camp, but definitely think libertarian free will doesn't exist. Honestly, I'm not sure it's even coherent.
As you note, to exist, it can't operate according to any discoverable rules or principles. But that seems to imply something that would be completely random. (Otherwise we'd be able to discover some regularities in it, and hence rules.) Similar to trying to situate free will in quantum indeterminacy, it doesn't seem to provide anything praiseworthy or blameworthy.
I'm on leave now after moving back to the UK, so I have a bit more time on my hands to write posts and watch long videos! Back to work on Friday though, unfortunately.
We're probably on the same page, but this doesn't seem quite right to me.
"As you note, to exist, it can't operate according to any discoverable rules or principles"
I would say that to exist, and especially to do so naturalistically, it *must* operate according to discoverable rules or principles.
However to meet the goals of libertarianism, namely to free us from mechanistic explanations, then it seems it *can't* operate according to discoverable rules or principles. I think libertarians think they can get away with rules or principles expressed in high-level natural language, with talk of reasons and desires and so on, but I think these are too vague/ambiguous to be the basis of any fundamental laws. Reasons and desires must supervene on something more mechanistic in a naturalistic universe.
I’ve got to agree with Mike that the discussions that Goff and Frankish have are way too long, or at least for me. I suppose this is largely because I’m not sympathetic to idealism or illusionism. I awoke at 3am however and figured that listening in would either put me back to sleep or maybe I’d learn a bit more about why they’re so popular. Sleep never came however so I then listened to this post and comments on Speechify.
One explanation for their popularity might be that each of them seems quite charming and knowledgeable. Furthermore at one point it was implied that Goff’s idealism is founded in theism, which is of course popular given the prospect of an afterlife in Heaven. Then beyond the charm and credentials of Frankish, I suppose illusionism is reasonably popular too.
Regarding libertarian free will, I wouldn’t actually say that it’s incoherent. More to DM’s point I’d say that it’s quite coherent under the metaphysics of supernaturalism. Conversely I’m what might be referred to as a “hard naturalist”. This is to say I believe that all events, and even quantum events, are ontologically predetermined to occur exactly as they do. It’s just that we often remain too ignorant to grasp what’s going on, and certainly in the case of QM. I was happy to hear recently that Mike sees things this way too.
Given that position one might say, “Ah, so that means you can’t coherently believe in free will”. Well actually I can coherently do so, though it’s epistemic rather than ontological. Here’s the key point to grasp — I consider free will to exist as a function of the scope of one’s perspective. Thus the more that is understood about a given person’s situation, the less freedom that should be seen. The only reason that we see and even feel free will should be given our vast ignorance about what motivates ourselves and others. It always surprises me that various respected people in academia seem not to profess this specific position.
They don't discuss illusionism or panpsychism that much on Mind Chat as they're more interested in exploring their guest's position.
Goff isn't quite a theist but more of an agnostic. He's increasingly sympathetic to some sort of intermediate theistic position: the possibility that there is some godlike entity out there but not the omnipotent, omniscient, omnibenevolent God of the usual monotheistic religions. He will be writing about this idea in his next book.
If there is a difference between your take on free will and standard compatibilism then it seems pretty subtle! I guess it's this: that Dennett would say someone (Bob) has free will even if he (Dennett) understands Bob's situation completely, whereas you seem to be saying Dennett should say Bob has no free will if Dennett understands Bob's situation completely.
I'd agree with Dennett here. I agree with you (and so does Dennett), that the illusion of libertarian free will depends on incomplete information. But if someone has compatibilist free will then they have compatibilist free will no matter how much we know about them. They shouldn't suddenly become absolved of responsibility for their actions just because we can understand the processes in their brain that led to those actions coming about.
So Goff is more of a spiritualist or whatever than a theist? I suppose that’s more fitting in academia. And I see that you referred to him as a panpsychist when I was under the impression he was an idealist. I also see from a very brief look at Wikipedia that they imply he’s both. That’s fine.
It sounds like you’ve got my hard naturalism position straight DM, but let’s check into this further. From here everything that ever has or ever will happen may be considered predestined to occur exactly as it does, and this even references quantum and/or human function. So ultimately there is no free will here and thus no good/evil behavior. Instead behavior may ontologically be considered good/bad (rather than good/evil) in respect to a given set of sentient interests. This would be a value dynamic, or all I consider to be valuable to anything, anywhere. And I wouldn’t quite say that Dennett or any human could perfectly comprehend human behavior simply by means of perfect information, but rather some sort of omniscient god might do so given the underlying causal dynamics (not that I believe in gods, but for illustration purposes).
When we conceptually leap from ontological existence to epistemic understandings however, that’s where I think blame and thus good/evil apply. Even though it may have been predestined that Hitler was going to do what he did (or any other “evil” person), from our relatively small perspectives we may still effectively blame him for his deeds, and even if we presume that he ultimately had no choice. The more that we understand about what caused him to do what he did, theoretically the less blame that we should see. Then from a perfect perspective Hitler may be considered no more blameworthy than a rock rolling down a hill given associated causality.
I wonder if you see it this way as well Mike? Or at least beyond the value observations that I’ve made?
(Note that subscribing to the comments doesn’t seem to work for me though I will check in for a while.)
I wouldn't describe him as a spiritualist. He's an agnostic. He doesn't know whether God exists or what he might be like if he does. He doesn't believe in the traditional omni-God but is open to the existence of a weaker or more fallible or less perfect godlike being.
He is definitely a panpsychist and wrote a book on the subject, Galileo's Error. He is not an idealist, if we define idealism as the position that the stuff of the world is fundamentally mental and physical stuff only supervenes on top of this, as if some ultimate mind is dreaming the physical world. Rather he believes that the intrinsic nature of physical stuff is consciousness. This is a subtle difference but it is one he perceives.
Alex Popescu here. To my shame I've only recently discovered your blog for some reason. I haven't had a chance to go through many of your previous threads, but I'll definitely make sure to follow your blog for any future postings.
Anyways, apologies for the somewhat late comment on this thread. I thought it might be useful to clarify whether there even exists any meaningful distinction between weak and strong emergence, or whether, if there does exist a meaningful difference, that difference would be experimentally verifiable/falsifiable.
Consider your boiling water thought experiment. Imagine we lived in that world, call it Bw, where our (current) core theory is incapable of accurately describing the boiling mechanisms of macro-structures of water. In that world, the strongly emergent laws which govern the boiling mechanism would manifest at the particle level (because of ontological reducibility of water), what is termed 'top-down causation', all the way down to the fundamental level of quarks/strings etcetera (because water molecules are made of such stuff).
So, the sole difference between Bw and our world is that in Bw, the fundamental particles in Bw (at least fermionic matter) don't obey the core theory, they obey the core theory + new rules which add new properties. Therefore, in Bw, physics has to be explained with a new 'core theory', call it CRT2. But the physicists in Bw would argue that the laws that govern boiling water are purely explicable in a weak emergentist sense, after all they fall out of CRT2.
We can do a similar thought experiment in our world by subtracting some vital component from our own core theory, without which water cannot boil. Now imagine that there exists a world similar to ours which lacks this vital component of the core theory. Call this world, 'subtraction world' or Sw, and call their core theory of physics "core theory minus' or CRT-. From the perspective of the inhabitants of Sw, our CRT is nothing but 'CRT-' + 'strongly emergent laws' which govern boiling water mechanics.
We have thus demonstrated that the strong emergentist/weak emergentist dichotomy is purely semantic. There appears to be no meaningful difference between the SE and WE picture of the world. We couldn't even say that our core theory is incompatible with strongly emergent laws that govern free will or boiling water, since if such laws existed, they would have been 'incorporated' into our core theory to begin with.
Granted, this emergentist reductionism implies a mathematical reductionism. I am assuming that any two mathematical formalisms which separately described 'CRT-' and 'strong emergent laws' can be reduced to a unitary mathematical formalism (our CRT). But there's no reason to think that this can't be done.
I agree that there's no reason to think that all our science can't be reduced to a unitary mathematical formalism in the real world, but I guess SE is just the idea that it's logically possible that there could be a discontinuity such that you couldn't, depending on what you mean by a unitary mathematical formalism.
The way I envisage it, SE means that the microphysics obeys certain ceteris paribus laws. Only when certain high level structures are present, new strongly emergent laws override the microphysical ceteris paribus laws.
So, for the boiling example, the way electrons and protons and neutrons, and perhaps hydrogen and oxygen atoms, and perhaps even small aggregations of water molecules behave is determined entirely accurately by the core theory. But if Strong Emergence for boiling is true, as soon as you get a significant aggregation of water molecules in a liquid state, then the microphysical laws of the core theory no longer describe what is happening. The laws governing what the microphysics are doing change because of the presence of high level structure.
Thus there is no way to accurately characterise the microphysical behaviour without reference to high-level structure. This is quite different from weak emergence or any current model of scientific/physical law.
I'm saying that there's no reason to think that "strongly emergent laws overriding the microphysical ceteris paribus laws when high level structures are present" entails that there must be some discontinuity in our core theory of the microphysical, in which case the notion of strong emergence is meaningless.
Or, even if it does entail a discontinuity in our specific case, this discontinuity can still be captured under our core theory of the microphysical, in which case strong emergent laws might be meaningful, but not experimentally verifiable or falsifiable (ignoring logical positivism).
In the first case, it's worth asking whether the new laws emerge at some sharp discrete boundary, or as a continuous process. In the former example, the new laws are epistemically and ontologically relegated to certain spatiotemporal regions (where the high-level structures are present). In the latter example, the new laws are epistemically constrained to those spatiotemporal regions, but not ontologically constrained. Meaning the new laws do have an actual effect on all microphysical particles outside those regions, but such effects are simply not noticeable. We can draw an analogy in the latter example (continuity) with special relativistic effects, which in principle affects all particles in all speed regimes but is not epistemically noticeable outside of high-speed relativistic regimes.
I assume we are in agreement that any such continuous process is capturable by a fundamental theory of the microphysical. However, even if our strongly emergent laws did not ontologically affect all microphysical particles outside of some discrete spatiotemporal boundary (within the high-level structure), there's no reason to think this couldn't be captured by a unitary mathematical theory of the microphysical. For example, discrete processes like particle creation/annihilation are modeled by an underlying quantum field theory. Similarly, we can imagine some mathematical formalism of all particles, which describes different, weakly emergent, behavior in the presence of high-level structure.
Also, the idea of a discrete spatiotemporal boundary is not very plausible to begin with, since macro-physical structures like tables, brains or 'significant aggregations' of water etc. are not precisifications, but inherently vague concepts. For example, it's unlikely that your brain is just a set of a certain number of quarks, such that if we took one quark away, that would no longer be your brain. It's also hard to believe that micro-physical particles would spontaneously shift their collective behavior depending on the absence/presence of one quark/string. So, if strong emergent laws existed, they would likely be continuous to begin with.
Finally, even if it was impossible to incorporate the emergent behavior of the microphysical in the presence of high-level structure with our original microphysical particle behavior into one mathematical theory, that doesn't mean we still couldn't have a unitary core theory. For example, it might be true that general relativity is fundamentally irreconcilable with quantum field theory (i.e. quantum gravity is impossible), but we still consider both theories as part of our core theories. So even in the second non-reductionist case, we can see that there's no experimental difference between the strong emergence and weak emergence examples, it's really just a difference of semantics.
"Finally, even if it was impossible to... that doesn't mean we still couldn't have a unitary core theory. "
Oops, I meant to say that even if emergent behavior was mathematically irreducible to our theory of the microphysical, we might still consider the mathematical modeling of that emergent behavior to be a part of our "core theories". Just as we consider GR and QFT to be part of our core theories even though they are potentially irreducible to a third fundamental theory. Physicists still have faith in reducibility regardless.
1. So, even if strong emergence was real (meaning that strongly emergent phenomena were mathematically irreducible to non-SE phenomena), we wouldn't be able to tell the difference (it's unfalsifiable/unverifiable). That's because a SE world is epistemically indistinguishable from a WE world where we just happen to lack the fundamental theory that would reconcile our different theories.
2. But in any case, as I already mentioned, there's no reason to think that:
A: [strongly emergent laws overriding the microphysical ceteris paribus laws when high level structures are present]
entails that
B: [there must be some irreducibility in our core theory of the microphysical]
A is totally tangential to B. In other words, we could have true irreducibilities in our core theory which are not in virtue of higher order structure (there are just gaps in the microphysical), and we can have phenomena that are in virtue of higher order structure, but which are nonetheless mathematically describable as weakly emergent phenomena.
Ergo, we either define strong emergence as A, in which case there is no meaningful difference between strong emergence and weak emergence. Or we can define strong emergence as A & B, in which case there is a meaningful difference between SE and WE (because WE doesn't entail B), but this difference is experimentally unobservable (due to the reasoning in 1).
You can have higher-order phenomena weakly emerge from one mathematical theory of the microphysical, or a plurality of such theories which are collectively irreducible.
Just to be clear, I think that Strong Emergence almost certainly doesn't exist, and I'm sympathetic to the idea that there are serious problems with it. But I don't think it is the same concept as weak emergence, so I'm focused on pointing out how I see them as different if we can suspend our disbelief and imagine that Strong Emergence exists.
So I'm just going to help myself to all the assumptions that help to make this distinction clearer.
> it's worth asking whether the new laws emerge at some sharp discrete boundary, or as a continuous process.
Let's suppose it's a discrete boundary.
> For example, discrete processes like particle creation/annihilation are modeled by an underlying quantum field theory.
So, this shows that sometimes discrete discontinuous processes emerge from an underlying microphysical theory. It doesn't establish that this is always the case. I could write a toy model world as a simulation where some rules apply in some circumstances, and other rules apply in other circumstances, with no underlying unifying logic other than this sort of discontinuous if/else.
For example, in Minecraft, there are "microphysical" rules that describe how blocks behave and interact. Blocks can only be attached to other blocks. Sand falls straight down, rock does not, water flows, lava causes wooden blocks to burn, etc. You can build all sorts of interesting mechanisms by arranging these blocks and letting them run according to these rules. But if you place four iron blocks in a T shape and put a pumpkin on top, it suddenly becomes an Iron Golem.
Someone living in a Minecraft world might wonder if the behaviour of the Golem emerges from the microphysics of iron blocks and pumpkins, or if it is strongly emergent, completely new rules arising because of the presence of some higher level pattern. In the case of Minecraft, the answer is the latter. It seems to me that it is logically possible that strong emergence could obtain in the real world also.
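What makes the golem example work, incidentally, is that the trigger condition is perfectly precise. Here's a rough sketch of what such a trigger predicate might look like (the recipe is only my approximation of Minecraft's actual rules, and the helper names are made up; the precision, not the detail, is the point):

```python
# Hypothetical "golem predicate": an unambiguous criterion deciding whether the
# high-level rules kick in at a given spot. The block layout is approximate.

IRON, PUMPKIN, AIR = "iron", "pumpkin", "air"

def golem_pattern_at(world, x, y, z):
    """world maps (x, y, z) coordinates to block types; returns True iff a
    T of iron blocks with a pumpkin on top is centred at (x, y, z)."""
    get = lambda dx, dy, dz: world.get((x + dx, y + dy, z + dz), AIR)
    return (get(0, 0, 0) == IRON                                  # body
            and all(get(dx, 1, 0) == IRON for dx in (-1, 0, 1))   # top of the T
            and get(0, 2, 0) == PUMPKIN)                          # head

world = {(0, 0, 0): IRON, (-1, 1, 0): IRON, (0, 1, 0): IRON,
         (1, 1, 0): IRON, (0, 2, 0): PUMPKIN}
print(golem_pattern_at(world, 0, 0, 0))  # True
```

Contrast this with "a significant aggregation of water" or "an agent": it's hard to see how to write the analogous predicate without making arbitrary choices.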
> Also, the idea of a discrete spatiotemporal boundary is not very plausible to begin with, since macro-physical structures like tables, brains or 'significant aggregations' of water etc. are not precisifications, but inherently vague concepts.
I agree. This is a big problem for strongly emergent theories. But it's not clear that this rules out more precisely defined higher level patterns from having strongly emergent effects, like the Iron Golem. It's logically possible that some part of brains (microtubules!) or brain processes could be instantiating some such precise but yet unknown pattern. A significant aggregation of water, for example, might be 2000 water molecules within a volume of a few cubic nanometers, or more water molecules in a convex volume at least as dense. This is totally arbitrary and implausible but it's reasonably precise.
You go on to say that perhaps GR and QM really are irreconcilable. That seems to have its own problems, since there are circumstances where both apply. But that aside, if there is this kind of radical discontinuity in physics, this doesn't so much show that SE is the same as WE, rather it shows that SE is more like the real world than we supposed. We can take this emergent behaviour and add it to the core theory, but then the core theory includes strong emergence. Strong emergence is just the idea that there can be top-down causation -- high level structure impinging on low-level dynamics in a way that cannot be predicted from looking at low-level interactions alone. It's not really about what's in the core theory.
On your numbered points.
1. OK, so maybe we cannot be sure that a theory is SE. That doesn't mean that SE is the same as WE.
2. I'm not sure I follow the argument really, but your conclusion is basically just point 1 again. My point is to distinguish SE from WE as theoretical concepts. It doesn't matter to me if we can never prove empirically that some phenomenon is SE rather than WE.
"Just to be clear, I think that Strong Emergence almost certainly doesn't exist,"
Yes, I gathered as much from your article :)
Also to be clear, I am saying that the standard conceptions of SE are downright incoherent. We can modify our definition of SE to retain coherency, but then SE loses any explanatory usefulness and becomes unverifiable (and maybe unfalsifiable).
"You go on to say that perhaps GE and QM really are irreconcilable. That seems to have its own problems, since there are circumstances where both apply"
That's because they are continuous and not ontologically relegated to certain spatiotemporal domains. So, I don't think this potential irreconcilability would imply "SE is more like the real world than we supposed" since presumably your definition of SE (being discrete) is meant to be spatiotemporally relegated to certain domains (those where the higher-order structures are present).
What this shows is that mathematical irreducibility is not sufficient to establish (your definition of) SE; we can live in a WE world which is irreducible.
"Strong emergence is just the idea that there can be top-down causation -- high level structure impinging on low-level dynamics in a way that cannot be predicted from looking at low-level interactions alone."
That's the standard conception yes, but we just showed this was incoherent. A Laplacian demon looking at the low-level interactions of the water atoms/quarks in those spatiotemporal regions where the higher order structures are present would in fact be able to perfectly predict the behavior of such particles. Once you admit that SE affects the motion of particles within the higher-order structures, you must admit that lower-level interactions within those structures will predict emergent behavior.
We can change our definition so that the standard conception is false, meaning that prediction of higher-level order based on lower-level interactions is not just a hallmark of WE, but is also manifested by SE. However, doing so means SE almost becomes meaningless/useless.
Under your idiosyncratic definition of SE:
A) Mathematical irreducibility is insufficient to establish strong emergence
B) Predictable emergent behavior based on micro-physical behavior is a necessary condition of SE (!)
C) It is true that "maybe we cannot be sure that a theory is SE. That doesn't mean that SE is the same as WE." But of course, strong emergence is meant to be experimentally verifiable under the standard conception, otherwise it wouldn't be useful.
So, what is the point of making the WE-SE distinction then? It has become completely divorced from the standard framework; we have basically sacrificed everything important that was meant to be originally captured under the 'strongly emergent' concept.
To be clear, the potential irreducibility of GM and QFT is explainable as being about different properties (gravity vs other stuff) of particles across all spatiotemporal space. Meaning there is no theory that can reconcile the different properties of these particles.
Whereas in your case of SE, it seems like you want irreducibility to be specifically focused on certain spatiotemporal regions.
"A Laplacian demon looking at the low-level interactions of the water atoms/quarks... would in fact be able to perfectly predict the behavior of such particles"
*would in fact be able to perfectly predict the emergent behavior of the higher-level structures*
I was going to make another correction, but I actually think this was appropriate. You see, one of life's biggest mysteries is explaining how General Motors can consistently make such crappy quality functioning cars without violating the laws of physics....
I'm not particularly interested in the verifiability or falsifiability of SE for present purposes. I'm not sure I understand or buy your argument that it is unverifiable/unfalsifiable, but even if I did I see this as beside the point. The usefulness of SE as a concept resides in explanations of consciousness and free will. The proponents of SE seem to think that it is necessary to explain these phenomena for philosophical reasons. My point in this blog post was to grant the coherence and hypothetical existence of SE and argue that it doesn't really help with these explanations after all even so.
> What this shows is that mathematical irreducibility is not sufficient to establish (your definition of) SE; we can live in a WE world which is irreducible.
I don't think you've shown this. I'm assuming that SE is a sharp discontinuity because this makes the distinction from WE clearest. But I don't grant that a continuous gradation from QFT regions to GR regions is WE. It might well count as something like SE, just not like the clearer cases I wanted to focus on just to establish that the idea makes sense.
In truth, I think this sort of thing is not really a case of either WE or SE. It's more like Nancy Cartwright's model of physics being a patchwork of different laws in different domains. SE is like a special case of this sort of thing, rather than this being an example of SE.
> A Laplacian demon looking at the low-level interactions of the water atoms/quarks in those spatiotemporal regions where the higher order structures are present would in fact be able to perfectly predict the behavior of such particles.
The Laplacian daemon is supposed to know the laws of physics. But what you're saying is ambiguous, because I'm not sure if the daemon is supposed to know only the low level microphysics or if it is also supposed to know the rules about strong emergence.
If the LD only knows microphysics, then it would not be able to predict the behaviour of those particles, because their behaviour is being influenced by the existence of high-level structure the daemon doesn't understand.
If on the other hand the daemon understands the SE laws, then the daemon can only predict the behaviour of the particles successfully by noting the presence of high-level structure and applying the modified physics. This contrasts with WE where the daemon does not need to note high-level structure.
One reply and then I have to go off to bed, sorry!
"I don't think you've shown this (GR/QFT discontinuity being WE)"
The GR/QFT discontinuity example is definitely a case of WE. The point is that GR and QFT still collectively account for all microphysical behavior, so in knowing them one would be able to predict all emergent behavior even if GR and QFT were collectively irreducible. Irreducibility in no way affects our predictive power, it just affects how complicated our math is.
"I'm assuming that SE is a sharp discontinuity because this makes the distinction from WE clearest...It might well count as something like SE"
If there is no spatiotemporal boundary (as there is in the discrete case) then SE laws must affect all particles in all spatiotemporal regions (the entire universe).
Here I'm not sure if you're claiming that:
1. All microphysical particles in the entire universe will be affected by the presence of SE laws due to emergent higher structure, or
2. Just those particles within the bounded spatiotemporal regions.
If 1, then the example of the boiling water means that as soon as sufficient numbers of water particles formed in the universe somewhere, then all particles everywhere would have become affected by this (to varying degrees of course).
About the Laplacian demon: "I'm not sure if the daemon is supposed to know only the low level microphysics or if it is also supposed to know the rules about strong emergence."
The Laplacian demon only observes and knows the microphysical behavior.
"If the LD only knows microphysics, then it would not be able to predict the behaviour of those particles"
By definition, to know the microphysics is to know the behavior of the particles! Perhaps what you meant to say is that the LD could not know the microphysics just from observing the microphysical behavior, since she would need to know the SE laws. But the point is that the SE phenomena are lawlike and they govern the microphysical, therefore the microphysical follow lawlike behavior, and any LD solely observing the microphysical (and ignoring higher order structure) will be able to predict the behavior of those particles. The only way to deny this is to deny that the SE behavior is lawlike.
Addendum: If 1 is true, then by definition, the GR/QFT discontinuity could be a case of SE, but if it were a SE phenomenon, it would also have to be a WE phenomenon. Because by knowing GR & QFT, one could fully predict higher level behavior (assuming there were no laws in addition to that).
So, if you want to say that WE and SE are mutually exclusive, you have to stick to sharp boundary cases like in 2.
> The GR/QFT discontinuity example is definitely a case of WE.
It's not clear to me that it's a case of emergence at all. Emergence is about high level descriptions and low level descriptions being different. But for GR/QFT, we have different "peer" descriptions, i.e. one isn't higher level than the other.
> If there is no spatiotemporal boundary (as in the discrete case) then SE laws must affect all particles in all spatiotemporal regions (the entire universe).
The boundary could be fuzzy or quickly drop off to negligible effects. But let's assume we're working from an idea of a sharp boundary to see if that idea makes sense *without prejudice* for the question of whether there could be SE without sharp boundaries or whether the absence of sharp boundaries means WE.
So this means we're considering case 2 -- just those particles within the bounded spatiotemporal regions.
> By definition, to know the microphysics is to know the behavior of the particles!
By microphysics, I mean the ceteris paribus behaviour of the physical particles in small numbers where there is no relevant large scale structure affecting them in a SE sense.
You seem to want the LD to know how the particles behave in all circumstances. But if SE obtains, this means that the particles behave differently depending on what macrostates obtain in various spatiotemporal regions. So to predict the behaviour of the particles, the LD must know what macrostates obtain in their spatiotemporal regions. This is quite different from what the LD needs to know if there is no SE. In the latter case, the LD needs to know nothing about macrostates. It can predict the behaviour of particles from local conditions only.
"It's not clear to me that it's a case of emergence at all."
The emergence wasn't supposed to be 'between' GR and QFT. The idea is that GR and QFT are still our core theories of the microphysical, but that they can be irreducible and still describe higher order structure behavior. This is weak emergence. The point is just to show that weak emergence can still happen even if there are mathematical discontinuities/irreducibilities in the behavior of the microphysical.
About the LD:
"that the particles behave differently depending on what macrostates obtain in various spatiotemporal regions" doesn't entail that "to predict the behavior of the particles, the LD must know what macrostates obtain in their spatiotemporal regions"
As an example, particles behave differently in black holes and stars when compared to empty space, but the LD doesn't need to 'know' the macrostructure presence of black holes, stars etc...
The only difference in the above example is that the different particle behavior is presumably captured by one mathematical formalism (the fundamental law that underpins QFT/GR). But now suppose that QFT and GR are irreducible, it follows that the LD can't predict the different behaviors with just one theory, it needs to know both theories. But this doesn't in any way entail that the LD can't know the behavior of the microphysical. The QFT and GR scenarios are just examples, you can substitute or add "SE laws" to one of the above, and you get the same results. It doesn't matter what the theory is, so long as the behavior is lawlike.
So clearly, it's possible for the LD to predict higher order behavior from knowing the microphysics in case 1. As for case 2 (the discrete boundary case), there's no reason to think a boundary case would make any difference; both theories would still collectively describe the microphysical behavior.
*Just to hammer the point home in case you want to deny that we can substitute SE laws for GR or something. Take our irreducible scenario of GR and QFT, and now assume that GR exists in virtue of higher order structure. In other words, the GR laws only obtain in virtue of some gravitational structure like black holes existing in our universe; if black holes ceased to exist, then the GR laws in our entire universe would go with them. By definition, the GR law has become an example of a case 1 strong emergence phenomenon.
But nothing has changed in the LD scenario, the LD still knows GR and QFT (they are part of our core theories), and collectively GR and QFT can predict the macrostate behavior.
I think we need to focus on the daemon in the context of SE because this is the key point to me. I don't see the talk of QFT and GR as leading anywhere as it's not clear to me what is supposed to emerge from what here, and they seem not to be the sharp discontinuity I want to talk about. Also, you seem to me to take too much for granted about how the LD can apply QFT and GR and make predictions successfully. We don't know how to reconcile these two, so it's not clear how this would work. So I just don't want to talk about QFT and GR any more in this context as for me at least it's leading to more confusion than clarification.
So, suppose we have theory A by default, in empty or ordinary space say, and theory B within very special spatiotemporal regions of high level structures giving rise to SE.
Let's suppose also we're only concerned with case 2. Let's suppose for the sake of argument that the presence of microtubules can be precisely defined and that the presence of microtubules changes the laws of physics within those microtubules such that theory B applies within.
In order for the LD to make predictions in all situations, the LD needs to know both theory A and theory B and also needs to know where the microtubules are. The LD can perhaps derive from low-level descriptions where the microtubules are without being told explicitly, but the LD still needs to take this step -- looking at the low level description and seeking the microtubules within it.
The point is not that the LD cannot make predictions. The point is that the LD needs to first parse the low level description to try to find high-level structures before proceeding to apply theory A or B as appropriate. This is no real problem; it's just a marked difference from WE, where the LD needs to know nothing about high level structure and can make predictions successfully by applying one theory to local interactions.
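To make the contrast concrete, here is a minimal sketch in Python of what I have in mind, with entirely invented names (`theory_A`, `theory_B`, `contains_microtubule`); it doesn't model any real physics, just the shape of the LD's task under discrete-boundary SE: classify each region by its macrostructure first, then apply the appropriate law.

```python
# Purely illustrative sketch: a Laplacian-daemon-style update step under
# discrete-boundary SE. All names (theory_A, theory_B, contains_microtubule)
# are invented; nothing here models real physics.

def contains_microtubule(region):
    """Placeholder predicate: does this region instantiate the special
    high-level structure? Under SE this parsing step cannot be skipped."""
    return region.get("has_special_structure", False)

def theory_A(particle):
    """Default (ceteris paribus) micro-law: drift by velocity."""
    return {**particle, "x": particle["x"] + particle["v"]}

def theory_B(particle):
    """Modified law that holds only inside the special structure
    (here, arbitrarily, the drift is reversed)."""
    return {**particle, "x": particle["x"] - particle["v"]}

def ld_step(regions):
    """One prediction step. Note the extra stage: the daemon must classify
    each region by its macrostructure before it knows which micro-law to
    apply. Under WE this stage would not exist."""
    updated = []
    for region in regions:
        law = theory_B if contains_microtubule(region) else theory_A
        updated.append({**region, "particles": [law(p) for p in region["particles"]]})
    return updated

regions = [
    {"has_special_structure": False, "particles": [{"x": 0.0, "v": 1.0}]},
    {"has_special_structure": True,  "particles": [{"x": 0.0, "v": 1.0}]},
]
print(ld_step(regions))
```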
You say:
> As an example, particles behave differently in black holes and stars when compared to empty space, but the LD doesn't need to 'know' about the presence of macrostructures such as black holes, stars, etc...
But this seems plainly wrong to me. If you are talking about strong emergence, and physics works differently inside black holes, then the LD needs to know if it is inside a black hole before proceeding to apply the appropriate theory in order to make predictions.
Let's assume that theories A and B are not reducible to some fundamental theory C, so we are in agreement that LD needs to know theories A and B to make microphysical predictions. I will define "knowing the microphysics" as knowing theories A and B. The question is, can the LD know the microphysics without knowing that there exists higher order structure?
Well, it depends on what you mean by 'knowing' higher order structure! Typically, weak emergence is defined as the ability to fully and accurately describe a system without reference to higher level structure. For example, in our own universe without SE (we assume), an LD can accurately describe the physics inside a star, like nuclear fusion, without mentioning the concept of fusion. Instead, the LD just knows/describes things like the temperature and density configurations of the particles inside the star, and the end description based on these microphysical properties is the same.
Now take your microtubule example (case 2) of strong emergence. The strongly emergent laws which apply to the microtubules must, because of ontological reductionism, also apply to the microstates within the microtubules. Hence, we can simply translate talk of:
"If microtubule A is in position x, then the microtubule modifies property y"
to
"If the collective particles in set {a,b,c...} are in collective positions {x1,x2,x3...} then the particles will modify their micro-properties of {y1,y2,y3...}"
This is simply a consequence of ontological reductionism. For any description of the higher order structure/microtubules which satisfies the strongly emergent laws, the LD will be able to give a different description of the microstates which also satisfies them. By definition, that is weak emergence.
Weak emergence is simply the notion that low-level descriptions are sufficient to capture all behavior, fundamental and macro. That's the standard conception. Also in the standard conception, strong emergence is typically defined as the denial of a supervenience relation between the higher order structures and their constituent bases. In other words, according to SE, the higher order structure behavior does not supervene on the constituent base behavior.
But you've just admitted that the appropriate micro-level descriptions 'just are' descriptions of the higher-level structures, so you are not denying the supervenience relation.
As I said in the beginning, you can deny the standard conception of strong emergence, but this comes at the cost of your idiosyncratic definition just becoming a subset type of weak emergence.
Typically, in the standard conception, strong emergentists will try to appeal to the fact that all lower-level descriptions must fundamentally underdetermine higher level descriptions, because of vagueness. But either such vagueness is epistemic, in which case there is no underdetermination in reality, or it is ontological (whatever that means), in which case you can't hold to ontological reductionism (as many also want to do). So, I'm just saying that the standard conception is incoherent.
By contrast, I think Chalmers's conception of SE, which doesn't require 'top-down' causation, is much more coherent.
> Weak emergence is simply the notion that low-level descriptions are sufficient to capture all behavior, fundamental and macro.
OK, sort of, on a certain interpretation. It depends on what you think counts as a low-level description. Once your theory depends on the existence of macroscopic features, whether or not described in terms of microphysical features, then this is no longer weak emergence. Your combined theory (A+B) in fact requires you to treat these high-level features as first-rate entities, just because they change which rules (A or B) apply. Again, this is nothing like weak emergence, which is entirely blind to high level features.
I don't think this is just my idiosyncratic interpretation. I think this is me clarifying a point you find confusing from other descriptions of strong emergence.
In particular, Chalmers does describe a version of strong emergence which involves top down causation (as well as one which doesn't) in this paper: http://consc.net/papers/emergence.pdf
Thanks for the paper link. I’m familiar with Chalmers's conception of downwards causation emergence; it’s actually what I originally had in mind when I was criticizing strong emergence from the beginning. :)
I think we unfortunately got a little bit sidetracked after I dropped my GR/QFT points. We kind of ended up migrating into a debate about whether the standard conception of strong emergence is coherent, until you brought us back to where we started by bringing up Chalmers. I think if we want to tackle this second variant of SE (the downwards causation kind) which Chalmers proposes, we will have to circle back to my original points. I will endeavour to make more sense this time around though. Apologies for the long posts ahead.
To start, I should say that when I refer to your 'idiosyncratic' definition, I'm just referring to the one we established that you were assuming (e.g. the discrete boundary case with the microtubule example). I would also describe the standard conception of strong emergence as the view that low level descriptions of facts don't fully describe the system behavior. This is the sense in which Chalmers refers to consciousness since low-level microphysical facts do not describe consciousness (according to Chalmers). I think this standard conception is coherent on epiphenomenalism (no "downwards causation") but not really so on interactionism.
We can contrast this with the more 'intermediate' kind of strong emergence which utilizes downwards causation (what I was originally attacking), and which Chalmers describes in the paper you linked. Let’s define intermediate strong emergence like so:
ISE: The microphysical facts fully determine the system behavior, but the microphysical laws are insufficient to determine the macro-state and system behavior.
To elaborate, we can distinguish between two kinds of ISE.
1. Our current known microphysical laws are insufficient to determine macro-state system behavior
2. No possible microphysical law(s) would be sufficient to determine macro-state system behavior.
1 is of course conceivable, but also ends up being a purely epistemic distinction. If ISE just means 1, then what we call strong emergence is just our epistemic limitation. In reality, the right kinds of microphysical laws could exist, but we just might not know it. So, 1 is an ontologically meaningless distinction.
The idea behind ISE2 is that the microphysical laws which are solely about microphysical properties won’t be able to describe strongly emergent processes. The problem with this is that it blurs 1 and 2. What constitutes a microphysical property is determined by our microphysical theories!
For example, electrons having spin is a consequence of quantum theory. So, I think Chalmers is inherently conceiving of ISE as 2, but then is thinking of microphysical properties through the lens of our core theory. However, there’s no reason to think that we have to use our core theory to determine microphysical laws (unless we define ISE using version 1).
Nothing stops us from adding new properties to the microphysical, which collectively can account for the discrete strongly emergent behavior. Just as a hypothetical example, imagine my coffee cup spontaneously vanishes when it reaches a certain volume threshold. This strongly emergent behavior can be fully accounted for by adding a hypothetical property to micro-physical atoms. Just add a mathematical vanishing operator that every quark has.
This vanishing operator is hypothetical and doesn’t manifest unless it crosses a certain threshold. The vanishing operator outputs zero for every degree of freedom of the quark past this threshold (meaning the quark vanishes) and makes no impact below it. We can also tie the operator’s function in such a way that it only reaches the threshold if the quark happens to be in the coffee cup at the right certain point (when the coffee cup actually vanishes).
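To illustrate, here is a toy numerical sketch of the kind of vanishing operator I mean; the threshold value and the way the trigger condition is summarized (by a single made-up "surrounding liquid volume" number) are purely hypothetical simplifications, not part of any real physics.

```python
# Toy sketch of the hypothetical vanishing operator. The threshold and the
# way the trigger is summarized (a single "volume of surrounding liquid"
# number) are invented simplifications for illustration only.

VANISH_THRESHOLD = 1.0          # arbitrary
CRITICAL_VOLUME_LITRES = 0.33   # arbitrary volume at which the cup vanishes

def vanishing_operator(quark_state, surrounding_liquid_litres):
    """Scalar per-quark property: grows with the trigger condition and
    reaches the threshold exactly when the critical volume is reached."""
    return surrounding_liquid_litres / CRITICAL_VOLUME_LITRES

def micro_step(quark_state, surrounding_liquid_litres):
    """Below the threshold the operator has no effect; at or above it,
    every degree of freedom is zeroed out (the quark 'vanishes')."""
    if vanishing_operator(quark_state, surrounding_liquid_litres) >= VANISH_THRESHOLD:
        return None  # vanished
    return quark_state

print(micro_step({"spin": 0.5}, 0.20))  # unchanged: {'spin': 0.5}
print(micro_step({"spin": 0.5}, 0.40))  # vanishes: None
```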
Can we do this for any hypothetical strongly emergent law? Yes. A simple consequence of the mathematical underdetermination of behavior is that any description of macro-physical behavior should be infinitely underdetermined. I should always be able to come up with a mathematical theory of the microphysical which fully accounts for any macro-level lawlike behavior (granted, it might be absurdly complex). Thus, any strongly emergent law can also be fully describable using weakly emergent microphysical laws.
So ISE2 doesn’t work. Is there any way to perhaps save it? Well, earlier I proposed that we might add an irreducibility criterion to our SE concept (this is when I brought up the GR/QFT stuff). Why do this? Because we might protest that the microphysical substitute laws (e.g. vanishing operator) serve no purpose except to explain the weird SE behavior.
So, we might draw a distinction between weak emergence and strong emergence by pointing out that in strong emergence, there is an irreducible mathematical component theory added to the core theory, whose only purpose is to explain the strongly emergent behavior.
Indeed, there is no reason to think that we can mathematically reduce our SE-substitute microphysical laws with our core theory (meaning that there is a third and simpler theory which subsumes both). So, maybe we can just add this irreducibility criterion to ISE2. Meaning if (no possible reducible microphysical law(s) are sufficient to determine system behavior), then SE is present.
The problem is the GR and QFT examples, which I earlier brought up. The idea is that we can imagine that in our WE universe, GR and QFT happen to be mathematically irreducible. Meaning that there is no third theory which can subsume GR and QFT. But according to our ISE2 + irreducibility criterion, this now means that the GR/QFT case counts as SE.
That’s because the microphysical facts still fully determine the system behavior, but now no reducible microphysical law(s) are sufficient to determine the system behavior, so by definition the GR/QFT case seems to be a case of SE. But then we would have to acknowledge that SE is derivable from our core theory of physics, which undermines the whole point.
So that’s where we last left off, before I dropped the GR and QFT points, and we kind of ended up in a little detour about what we think the appropriate semantic account of strong emergence is. I think the lesson here is that the straightforward conception of ISE2 is incomplete/incoherent. I think Chalmers simply failed to account for the fact that our microphysical properties are relative to our laws. It seems like he was implicitly using an ISE1 conception to formulate an ISE2 definition.
That said, it is certainly possible to make ISE2 coherent. We can try adding on more ad hoc criteria; for example, we can add in the clause (irreducible + irreducible component only explains strongly emergent behavior). But then we will need an account of what counts as strongly emergent behavior (which we were trying to solve in the first place!). We will have to add in more ad hoc criteria to solve this demarcation issue, such as: only behavior that is solely relegated to sharp spatiotemporal boundaries inside structures that we identify as higher-order structures gets to count, etc.
However, this now comes off as a bit arbitrary, since what we identify as higher-order structures is relative to our brains' categorization schemas. In any case, I concede that it could potentially be done, but we’ll just end up with a long list of ad hoc criteria, and our definition will be rather useless in the end.
The whole point behind strong emergence is to identify some possible objective phenomena in the universe which makes a clear difference, but we’ve just ended up with a totally ad-hoc account which seems anything but a natural kind.
I hope this helps clarify my earlier points and gets my view across more clearly.
Thanks, I think I'm closer to understanding where you're coming from.
I still think you're failing to really justify the point that my idiosyncratic example of SE (microtubules or your coffee cup) is actually WE.
> We can also tie the operator’s function in such a way that it only reaches the threshold if the quark happens to be in the coffee cup at the right certain point
This is what makes it strongly emergent. The threshold is only reached if the quark is in the coffee cup, which means that the coffee cup, a macroscopic object made of many many quarks, is a first-rate, indispensable entity in your physical theory. Your theory cannot be expressed without describing what a coffee cup is, whether you call it a coffee cup in so many words or just describe the detailed relationships of all the particles that make a coffee cup to each other without labelling them as a "coffee cup".
So how about this for a definition: SE is when the irreducible entities a complete theory must describe include both fundamental elements and particular patterns of those elements where special rules apply.
I think this definition gets around your suggestion that SE is always only relevant to some particular model of microphysics.
This definition bears little relation to what strong-emergentists talk about, but I hope it's clear that I'm just trying to "robustify" what they are vaguely hinting at. I think what I'm describing is the same thing they are, but I haven't seen any detailed analysis of how it is supposed to work so I can't be sure.
A nice example of this might be the moves of chess. The ceteris paribus microphysical laws of chess describe how individual particles (pieces) may move in most cases when special rules brought about by certain patterns don't apply, and they can all be explained with reference to one piece at a time. But certain strongly emergent laws require particular patterns of multiple pieces. Castling is one, pawns capturing diagonally/en passant are examples of additional SE possibilities. The illegality of placing oneself in check is an example of SE removing possibilities.
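As a rough illustration of the chess case (and nothing like a real chess engine), the two kinds of rule have very different shapes in code: the ordinary move rules look only at one piece and its local surroundings, while the castling rule is a predicate over a whole multi-piece pattern. Everything below is drastically simplified for the sake of the contrast.

```python
# Purely illustrative sketch (not a real chess engine): the point is only
# the shape of the two kinds of rule. Per-piece sliding moves depend on
# local conditions; castling legality depends on a pattern of several pieces.

def rook_slides_right(rank, start):
    """'Micro' rule: one piece at a time. Returns squares a rook on `start`
    can reach sliding right along an 8-square rank ('.' means empty)."""
    moves = []
    for sq in range(start + 1, 8):
        if rank[sq] != '.':
            break
        moves.append(sq)
    return moves

def kingside_castling_pattern(rank):
    """'SE-style' rule: legality is a fact about a multi-piece pattern
    (king on the e-file, rook on the h-file, squares between them empty).
    Checks for moved pieces / attacked squares are omitted here."""
    return rank[4] == 'K' and rank[7] == 'R' and rank[5] == '.' and rank[6] == '.'

back_rank = ['R', '.', '.', '.', 'K', '.', '.', 'R']
print(rook_slides_right(back_rank, 0))        # [1, 2, 3]
print(rook_slides_right(back_rank, 7))        # []
print(kingside_castling_pattern(back_rank))   # True
```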
Since we're just discussing whether the idea of SE is coherent and not its applicability to the real world, I would encourage you to engage more with toy examples having simple discrete models like chess, Minecraft or perhaps something based on Conway's Game of Life.
Of your proposed interpretations of this discussion, I don't think we're discussing either 1 or 2, since neither of us believes in SE. We're just discussing whether the concept is coherent or different from WE. I think SE proponents would go for 2, though. They are motivated by a belief that only a more holistic non-reductionist approach could account for whatever it is they think they need to explain. This is why I think they would endorse the idea that SE requires both microphysical entities (e.g. quarks and electrons, or strings, or quantum fields, or what have you) as well as higher level patterns of those microphysical entities giving rise to "holistic" behaviour.
That's great. I'm glad we've made a little bit of progress. If I understand your objection, it's that no microphysical lawlike model will be sufficient to describe the system, because there will still have to be a SE lawlike description somewhere in the microphysical model. We can try to 'sneak in' the SE law by describing it in microphysical terms, but there still has to be a radical discontinuity of scale at some point in the lawlike description.
For example, the law that "if (describe coffee cup in microphysical terms) exists and if (describe volume threshold in microphysical terms) is realized, then (describe the right kind of microphysical changes)" is still radically discontinuous with the other microphysical laws that the LD knows.
I think however that this is false. In the coffee example, there is no such substitutable SE lawlike description that the LD uses. When I said that we can describe the vanishing function "(in) such a way that it only reaches the threshold if the quark happens to be in the coffee cup at the right certain point" I just mean that as shorthand for the various types of microphysical laws that would stand in for the SE law. But crucially, these microphysical laws are not identical stand-ins or equivalent substitutes for the SE law.
This is way too complex to show in the coffee example (hence why I used the shorthand phrase in the first place), so maybe it's best to go to a toy model example as you say.
Okay, so imagine in our toy model universe we have one kind of atom in a three-dimensional space. There is only one kind of microphysical law (at first) in this universe: the law of gravity. In our universe we also have a ball made of this matter, which obeys the simple gravity rule. Now imagine that we introduce a new SE law, that at a certain amount of mass our ball inexplicably vanishes.
We can now add our vanishing operator to the atoms, just like before. When this vanishing operator crosses a certain threshold, the ball will vanish. How do we achieve this? We just make the atom's vanishing threshold correlate with a certain amount of force. When the atom experiences this value of force, it will vanish, according to the rules of the vanishing function. It just so happens that this force will be experienced (in our simple universe) iff the gravity threshold is reached.
Note that there is no law that the LD has to know which is microphysically substitutable one-to-one with the SE law. For the LD, which just keeps track of the quark's properties (including the hypothetical vanishing property), the entire vanishing process is continuously described. There is no discontinuous gap in the descriptions of the microphysical laws.
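Here is a crude numerical sketch of that toy universe; the gravitational constant, masses, force threshold and the nominal separation used to avoid dividing by zero are all made up, but it shows how the vanishing condition is stated purely per-atom.

```python
# Crude sketch of the one-force toy universe. The gravitational constant,
# masses, force threshold and nominal separation R are all made up; R just
# avoids dividing by zero for the effectively co-located point-like atoms.

G = 1.0
ATOM_MASS = 1.0
FORCE_THRESHOLD = 50.0   # per-atom force at which the vanishing operator fires
R = 1.0                  # nominal separation between atoms

def force_on_one_atom(n_atoms):
    """Net gravitational pull one atom feels from the other n-1 atoms."""
    return G * ATOM_MASS * ATOM_MASS * (n_atoms - 1) / R**2

def vanishing_operator_fires(n_atoms):
    """Stated purely per-atom: the operator fires when the force on the atom
    crosses the threshold -- which, in this toy universe, happens exactly
    when the ball's total mass crosses the 'SE' mass bound."""
    return force_on_one_atom(n_atoms) >= FORCE_THRESHOLD

for n in (10, 50, 60):
    print(n, "atoms ->", "ball vanishes" if vanishing_operator_fires(n) else "ball persists")
```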
It's much harder to imagine this with the coffee example, because there are so many microphysical properties to take into account when we are attempting to describe a coffee cup. Clearly, however, our physical coffee-cup 'type' has to have a uniquely describable set of properties in purely microphysical terms. For example, the quarks in my coffee cup each have a unique spatiotemporal location & worldline. So long as our vanishing operator's threshold was set to activate when the right kind of conjunction of microphysical states was reached (unique to the quarks in my coffee cup), we are set. Moreover, this lawlike process is entirely continuous from the LD's point of view. The threshold activation is a discrete process, but the value of the vanishing operator is continuously describable. So, there is no radical discontinuity of scale that would mark a clear-cut case of SE.
The only difference is that this super complex microphysical function is utilized solely to describe the SE process, whereas our current microphysical properties serve many different functions. An electron's charge, for example, is not just hypothesized to describe what happens in a cup of coffee. Yet this just goes back to my irreducibility criterion.
The bottom line is that it seems like SE is relative to one's microphysical model. And what we describe as SE is really just a subset of certain microphysical models (those that meet the ad hoc criteria of being irreducible & relegated to certain discrete spatiotemporal boundaries, among other things).
Furthermore because of mathematical underdetermination and the fact that any macro-scale object has unique microphysical features, it will always be possible to come up with a mathematical model that can fully describe the entire system, just by tracking those relevant microphysical features. Of course, such a model would be insanely complex, and reducibility is in no way guaranteed.
Oops, I should probably qualify my example by noting that the atoms/ball are point-like. No Pauli's exclusion principle or anything like that. All the atoms are in the same spot and should therefore experience the same gravitational force. Otherwise, the analogy won't work!
On your toy example, if there is really only gravity then there will be no balls. Assuming the particles have mass and attract each other gravitationally, they will simply fall into and orbit around each other forever. Same if you try to make a ball: the particles will just fall through each other and orbit around their center of mass. I don't think they'll even spiral into each other to form singularities if there are no other forces to dissipate energy. Maybe we could say that such an aggregation will disappear if it gets massive enough. This might look like SE but perhaps this is only because particles disappear when local spacetime curvature is sufficiently extreme. If so, I agree that this feels like WE. But this just means that it's not a very good example of SE, because we can define all the rules without reference to high level patterns.
I guess you have a harder case to prove, because to make your point, you have to show not only that all the particular examples I can come up with could be accounted for by WE, but that there can be no possible example that cannot be accounted for by WE. I don't know how you could demonstrate that. It at least seems conceivable to me that there are models which couldn't be given a WE-style reduction. The hypothesis that this could be the case is just the hypothesis that SE is coherent.
Take a toy model based on Minecraft instead. Everything I'm going to describe is true of Minecraft, but I'll leave out a lot of irrelevant detail. So let's say it is a world made of 3D cubic blocks, but also containing non-block agents. There are different types of blocks, but for our purposes most of these differences don't matter. Agents can move and interact with blocks (destroy, create, move, transmute) according to some algorithm but are not themselves composed of blocks.
So far I have described a sketch of the microphysics of this world. The SE law I want to introduce is that when certain types of blocks are arranged in a certain pattern, they disappear and are replaced by a certain type of agent.
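Here is a minimal sketch of such a world in Python, with the block types, the trigger pattern and the agent all simplified and invented (it deliberately does not reproduce the real game's recipe); the point is just that the replacement rule is defined over a multi-block pattern rather than over any single block's local behaviour.

```python
# Minimal sketch of the toy world just described (loosely Minecraft-like).
# Block types, the trigger pattern and the agent are simplified inventions.

from dataclasses import dataclass, field

@dataclass
class World:
    blocks: dict = field(default_factory=dict)   # (x, y, z) -> block type
    agents: list = field(default_factory=list)   # non-block entities

def golem_pattern_at(world, pos):
    """'SE' trigger: a vertical column IRON, IRON, PUMPKIN starting at `pos`.
    (Deliberately simpler than the real game's recipe.)"""
    x, y, z = pos
    return (world.blocks.get((x, y, z)) == "IRON"
            and world.blocks.get((x, y + 1, z)) == "IRON"
            and world.blocks.get((x, y + 2, z)) == "PUMPKIN")

def apply_emergent_rule(world):
    """Whenever the pattern obtains, the blocks vanish and an agent appears.
    Note the rule is stated in terms of a multi-block pattern, not in terms
    of any single block's local behaviour."""
    for pos in list(world.blocks):
        if golem_pattern_at(world, pos):
            x, y, z = pos
            for dy in range(3):
                del world.blocks[(x, y + dy, z)]
            world.agents.append({"type": "golem", "pos": pos})

w = World(blocks={(0, 0, 0): "IRON", (0, 1, 0): "IRON", (0, 2, 0): "PUMPKIN"})
apply_emergent_rule(w)
print(w.blocks, w.agents)   # {} [{'type': 'golem', 'pos': (0, 0, 0)}]
```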
We can just stipulate in the toy example that the initial conditions of the ball are such that all the atoms would collapse into a point-like region. We can also imagine a discrete spatiotemporal universe, so we don't get absurd math leading to singularities/infinities.
Anyways, you are right that I will have to show that every example of possible SE in our universe should be capable of being modeled in microphysical terms. I think the key point I earlier made to buttress this point was that every macro-physical object will have unique microphysical identifiers.
The reason the (correctly updated) toy model example works is that it just so happens that every atom in that universe experiences a unique microphysical feature (i.e. a certain amount of force) iff the strongly emergent law is in play. Well, we can also see that this must be true in my coffee example (and for any example), so long as we hold to ontological reductionism. Every quark in my cup of coffee has unique microphysical properties (for example a unique location) at the time of its vanishing, that are not replicated by any other quark in the universe.
In the coffee example, the vanishing threshold would be reached iff any quark meets certain criteria corresponding to a large disjunction of possible microphysical features, where each feature tracks a set of microphysical properties that is unique to a quark in my cup of coffee at the exact time of vanishing. The set of disjunctive features picks out all the possible states of all the quarks that exist in my cup of coffee at the exact moment of vanishing. And it's disjunctive because no quark could be in more than one hypothetical state.
Because no other quarks in the universe meet those exact properties, it would follow that if any quark happened to meet any feature (where each feature is a complex set of micro-properties) picked out from this disjunctive set, then it must be in the coffee cup at the exact time of vanishing.
Also, as I earlier mentioned, this process is continuous. As a particular quark in the universe happens to get closer and closer to matching some of those properties, the operator value will increase, but it won't reach the actual threshold value until the exact moment of the vanishing process. Granted, it would be super difficult to construct this model because we have a large disjunctive set, and each possible feature is different (so it's not obvious what should count as "approaching the threshold value"), but in theory there should be a function which would be able to assign appropriate weights to each value.
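Just to gesture at what such a function might look like, here is a toy version where the operator value is the closeness of a quark's state to the nearest member of a (tiny, invented) disjunctive set of target microstates, firing only on an exact match; the state vectors and the similarity measure are arbitrary stand-ins, nothing more.

```python
# Toy illustration only: the operator value is the closeness of a quark's
# state to the nearest member of a (tiny, invented) disjunctive set of
# target microstates, and it fires only on an exact match.

import math

# Each "feature" stands in for a microstate unique to one quark in the cup
# at the moment of vanishing (here reduced to a position triple).
DISJUNCTIVE_SET = [(1.0, 2.0, 3.0), (4.0, 5.0, 6.0)]

def operator_value(state):
    """Continuous: rises smoothly toward 1.0 as `state` approaches any
    target, reaching 1.0 only when it exactly matches one of them."""
    nearest = min(math.dist(state, target) for target in DISJUNCTIVE_SET)
    return 1.0 / (1.0 + nearest)

def vanishes(state):
    return operator_value(state) >= 1.0

print(operator_value((1.0, 2.0, 2.0)))  # close to a target, still below 1.0
print(vanishes((1.0, 2.0, 3.0)))        # exact match -> True
```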
Notice this will also be true for any macro-physical SE phenomena. As long as that phenomenon is expressed in microphysical terms, and as long as those microphysical features are unique (and not multiply realizable by different quarks/atoms not experiencing the phenomena), then we will be able to construct a model, purely focused on those unique microphysical features, which accurately replicates the SE feature.
In the coffee example, the model replicates the moment where my specific coffee cup vanishes, but we can also generalize this to a macro-physical rule that all coffee cups vanish past a certain volume threshold etc...
And all of this simply follows from the fact that every macroscale phenomenon will have unique microphysical identifiers. We can also get into your Minecraft example if you wish, but I see no need since the important idea is to recognize the general rule. Not sure if this post made much sense, let me know if you found any point confusing.
> Because no other quarks in the universe meet those exact properties, it would follow that if any quark happened to meet any feature (where each feature is a complex set of micro-properties) picked out from this disjunctive set, then it must be in the coffee cup at the exact time of vanishing.
I don't think I have a clear picture of how this would work or what sorts of features you are talking about. If all these microphysical properties are basically serving to "index" each quark, then it seems to me that you might be more or less "scripting" the disappearance, by doing something akin to saying "these specific quarks disappear at time t" and "these specific quarks disappear at time u", etc. But if so, this isn't really lawlike. It doesn't help you to make predictions about hypothetical or counterfactual situations. It's just a list of all the disappearances that have been observed to happen to date, with no explanation of how or why or when we should expect to see other disappearances.
I really would prefer if you'd engage with the Minecraft example, as it seems pretty clear to me, whereas I'm lost with how your coffee cup example is supposed to work.
Hi DM,
I have to admit I haven't watched that discussion. (The videos are just too long. I do sometimes listen to them, when they eventually come out on audio.)
But I agree with just about everything you write here. I'm a bit more solidly in the compatibilist camp, but definitely think libertarian free will doesn't exist. Honestly, I'm not sure it's even coherent.
As you note, to exist, it can't operate according to any discoverable rules or principles. But that seems to imply something that would be completely random. (Otherwise we'd be able to discover some regularities in it, and hence rules.) Similar to trying to situate free will in quantum indeterminacy, it doesn't seem to provide anything praiseworthy or blameworthy.
Thanks Mike,
I'm on leave now after moving back to the UK, so I have a bit more time on my hands to write posts and watch long videos! Back to work on Friday though, unfortunately.
We're probably on the same page, but this doesn't seem quite right to me.
"As you note, to exist, it can't operate according to any discoverable rules or principles"
I would say that to exist, and especially to do so naturalistically, it *must* operate according to discoverable rules or principles.
However, to meet the goals of libertarianism, namely to free us from mechanistic explanations, it seems it *can't* operate according to discoverable rules or principles. I think libertarians think they can get away with rules or principles expressed in high-level natural language, with talk of reasons and desires and so on, but I think these are too vague/ambiguous to be the basis of any fundamental laws. Reasons and desires must supervene on something more mechanistic in a naturalistic universe.
Thanks DM. I agree. I must have just worded something awkwardly.
Welcome back to the northern hemisphere!
I’ve got to agree with Mike that the discussions that Goff and Frankish have are way too long, or at least for me. I suppose this is largely because I’m not sympathetic to idealism or illusionism. I awoke at 3am however and figured that listening in would either put me back to sleep or maybe I’d learn a bit more about why they’re so popular. Sleep never came however so I then listened to this post and comments on Speechify.
One explanation for their popularity might be that each of them seems quite charming and knowledgeable. Furthermore, at one point it was implied that Goff’s idealism is founded in theism, which is of course popular given the prospect of an afterlife in Heaven. Then beyond the charm and credentials of Frankish, I suppose illusionism is reasonably popular too.
Regarding libertarian free will, I wouldn’t actually say that it’s incoherent. More to DM’s point I’d say that it’s quite coherent under the metaphysics of supernaturalism. Conversely I’m what might be referred to as a “hard naturalist”. This is to say I believe that all events, and even quantum events, are ontologically predetermined to occur exactly as they do. It’s just that we often remain too ignorant to grasp what’s going on, and certainly in the case of QM. I was happy to hear recently that Mike sees things this way too.
Given that position one might say, “Ah, so that means you can’t coherently believe in free will”. Well actually I can coherently do so, though it’s epistemic rather than ontological. Here’s the key point to grasp — I consider free will to exist as a function of the scope of one’s perspective. Thus the more that is understood about a given person’s situation, the less freedom that should be seen. The only reason that we see and even feel free will should be given our vast ignorance about what motivates ourselves and others. It always surprises me that various respected people in academia seem not to profess this specific position.
Hi Eric,
They don't discuss illusionism or panpsychism that much on Mind Chat as they're more interested in exploring their guest's position.
Goff isn't quite a theist but more of an agnostic. He's increasingly sympathetic to some sort of intermediate theistic position: the possibility that there is some godlike entity out there but not the omnipotent, omniscient, omnibenevolent God of the usual monotheistic religions. He will be writing about this idea in his next book.
If there is a difference between your take on free will and standard compatibilism then it seems pretty subtle! I guess it's this: that Dennett would say someone (Bob) has free will even if he (Dennett) understands Bob's situation completely, whereas you seem to be saying Dennett should say Bob has no free will if Dennett understands Bob's situation completely.
I'd agree with Dennett here. I agree with you (and so does Dennett), that the illusion of libertarian free will depends on incomplete information. But if someone has compatibilist free will then they have compatibilist free will no matter how much we know about them. They shouldn't suddenly become absolved of responsibility for their actions just because we can understand the processes in their brain that led to those actions coming about.
So Goff is more of a spiritualist or whatever than a theist? I suppose that’s more fitting in academia. And I see that you referred to him as a panpsychist when I was under the impression he was an idealist. I also see from a very brief look at Wikipedia that they imply he’s both. That’s fine.
It sounds like you’ve got my hard naturalism position straight DM, but let’s check into this further. From here everything that ever has or ever will happen may be considered predestined to occur exactly as it does, and this even references quantum and/or human function. So ultimately there is no free will here and thus no good/evil behavior. Instead behavior may ontologically be considered good/bad (rather than good/evil) in respect to a given set of sentient interests. This would be a value dynamic, or all I consider to be valuable to anything, anywhere. And I wouldn’t quite say that Dennett or any human could perfectly comprehend human behavior simply by means of perfect information, but rather some sort of omniscient god might do so given the underlying causal dynamics (not that I believe in gods, but for illustration purposes).
When we conceptually leap from ontological existence to epistemic understandings however, that’s where I think blame and thus good/evil apply. Even though it may have been predestined that Hitler was going to do what he did (or any other “evil” person), from our relatively small perspectives we may still effectively blame him for his deeds, and even if we presume that he ultimately had no choice. The more that we understand about what caused him to do what he did, theoretically the less blame that we should see. Then from a perfect perspective Hitler may be considered no more blameworthy than a rock rolling down a hill given associated causality.
I wonder if you see it this way as well Mike? Or at least beyond the value observations that I’ve made?
(Note that subscribing to the comments doesn’t seem to work for me though I will check in for a while.)
I wouldn't describe him as a spiritualist. He's an agnostic. He doesn't know whether God exists or what he might be like if he does. He doesn't believe in the traditional omni-God but is open to the existence of a weaker or more fallible or less perfect godlike being.
He is definitely a panpsychist and wrote a book on the subject, Galileo's Error. He is not an idealist, if we define idealism as the position that the stuff of the world is fundamentally mental and physical stuff only supervenes on top of this, as if some ultimate mind is dreaming the physical world. Rather he believes that the intrinsic nature of physical stuff is consciousness. This is a subtle difference but it is one he perceives.
Hey DM,
Alex Popescu here. To my shame I've only recently discovered your blog for some reason. I haven't had a chance to go through many of your previous threads, but I'll definitely make sure to follow your blog for any future postings.
Anyways, apologies for the somewhat late comment on this thread. I thought it might be useful to clarify whether there even exists any meaningful distinction between weak and strong emergence, or whether, if there does exist a meaningful difference, that difference would be experimentally verifiable/falsifiable.
Consider your boiling water thought experiment. Imagine we lived in that world, call it Bw, where our (current) core theory is incapable of accurately describing the boiling mechanisms of macro-structures of water. In that world, the strongly emergent laws which govern the boiling mechanism would manifest at the particle level (because of ontological reducibility of water), what is termed 'top-down causation', all the way down to the fundamental level of quarks/strings etcetera (because water molecules are made of such stuff).
So, the sole difference between Bw and our world is that in Bw, the fundamental particles in Bw (at least fermionic matter) don't obey the core theory, they obey the core theory + new rules which add new properties. Therefore, in Bw, physics has to be explained with a new 'core theory', call it CRT2. But the physicists in Bw would argue that the laws that govern boiling water are purely explicable in a weak emergentist sense, after all they fall out of CRT2.
We can do a similar thought experiment in our world by subtracting some vital component from our own core theory, without which water cannot boil. Now imagine that there exists a world similar to ours which lacks this vital component of the core theory. Call this world 'subtraction world' or Sw, and call their core theory of physics 'core theory minus' or CRT-. From the perspective of the inhabitants of Sw, our CRT is nothing but 'CRT-' + 'strongly emergent laws' which govern boiling water mechanics.
We have thus demonstrated that the strong emergentist/weak emergentist dichotomy is purely semantic. There appears to be no meaningful difference between the SE and WE picture of the world. We couldn't even say that our core theory is incompatible with strongly emergent laws that govern free will or boiling water, since if such laws existed, they would have been 'incorporated' into our core theory to begin with.
Granted, this emergentist reductionism implies a mathematical reductionism. I am assuming that any two mathematical formalisms which separately described 'CRT-' and 'strong emergent laws' can be reduced to a unitary mathematical formalism (our CRT). But there's no reason to think that this can't be done.
Hi Alex,
I agree that there's no reason to think that all our science can't be reduced to a unitary mathematical formalism in the real world, but I guess SE is just the idea that it's logically possible that there could be a discontinuity such that you couldn't, depending on what you mean by a unitary mathematical formalism.
The way I envisage it, SE means that the microphysics obeys certain ceteris paribus laws. Only when certain high-level structures are present do new strongly emergent laws override the microphysical ceteris paribus laws.
So, for the boiling example, the way electrons and protons and neutrons, and perhaps hydrogen and oxygen atoms, and perhaps even small aggregations of water molecules behave is determined entirely accurately by the core theory. But if Strong Emergence for boiling is true, as soon as you get a significant aggregation of water molecules in a liquid state, then the microphysical laws of the core theory no longer describe what is happening. The laws governing what the microphysics are doing change because of the presence of high level structure.
Thus there is no way to accurately characterise the microphysical behaviour without reference to high-level structure. This is quite different from weak emergence or any current model of scientific/physical law.
Hey DM,
I'm saying that there's no reason to think that "strongly emergent laws overriding the microphysical ceteris paribus laws when high level structures are present" entails that there must be some discontinuity in our core theory of the microphysical, in which case the notion of strong emergence is meaningless.
Or, even if it does entail a discontinuity in our specific case, this discontinuity can still be captured under our core theory of the microphysical, in which case strong emergent laws might be meaningful, but not experimentally verifiable or falsifiable (ignoring logical positivism).
In the first case, it's worth asking whether the new laws emerge at some sharp discrete boundary, or as a continuous process. In the former example, the new laws are epistemically and ontologically relegated to certain spatiotemporal regions (where the high-level structures are present). In the latter example, the new laws are epistemically constrained to those spatiotemporal regions, but not ontologically constrained. Meaning the new laws do have an actual effect on all microphysical particles outside those regions, but such effects are simply not noticeable. We can draw an analogy in the latter example (continuity) with special relativistic effects, which in principle affect all particles in all speed regimes but are not epistemically noticeable outside of high-speed relativistic regimes.
I assume we are in agreement that any such continuous process is capturable by a fundamental theory of the microphysical. However, even if our strongly emergent laws did not ontologically affect all microphysical particles outside of some discrete spatiotemporal boundary (within the high-level structure), there's no reason to think this couldn't be captured by a unitary mathematical theory of the microphysical. For example, discrete processes like particle creation/annihilation are modeled by an underlying quantum field theory. Similarly, we can imagine some mathematical formalism of all particles, which describes different, weakly emergent, behavior in the presence of high-level structure.
Also, the idea of a discrete spatiotemporal boundary is not very plausible to begin with, since macro-physical structures like tables, brains or 'significant aggregations' of water etc. are not precisifications, but inherently vague concepts. For example, it's unlikely that your brain is just a set of a certain number of quarks, such that if we took one quark away, that would no longer be your brain. It's also hard to believe that micro-physical particles would spontaneously shift their collective behavior depending on the absence/presence of one quark/string. So, if strong emergent laws existed, they would likely be continuous to begin with.
Finally, even if it was impossible to incorporate the emergent behavior of the microphysical in the presence of high-level structure with our original microphysical particle behavior into one mathematical theory, that doesn't mean we still couldn't have a unitary core theory. For example, it might be true that general relativity is fundamentally irreconcilable with quantum field theory (i.e. quantum gravity is impossible), but we still consider both theories as part of our core theories. So even in the second non-reductionist case, we can see that there's no experimental difference between the strong emergence and weak emergence examples; it's really just a difference of semantics.
Best,
Alex
"Finally, even if it was impossible to... that doesn't mean we still couldn't have a unitary core theory. "
Oops, I meant to say that even if emergent behavior was mathematically irreducible to our theory of the microphysical, we might still consider the mathematical modeling of that emergent behavior to be a part of our "core theories". Just as we consider GR and QFT to be part of our core theories even though they are potentially irreducible to a third fundamental theory. Physicists still have faith in reducibility regardless.
1. So, even if strong emergence was real (meaning that strongly emergent phenomena were mathematically irreducible to non-SE phenomena), we wouldn't be able to tell the difference (it's unfalsifiable/unverifiable). That's because a SE world is epistemically indistinguishable from a WE world where we just happen to lack the fundamental theory that would reconcile our different theories.
2. But in any case, as I already mentioned, there's no reason to think that:
A: [strongly emergent laws overriding the microphysical ceteris paribus laws when high level structures are present]
entails that
B: [there must be some irreducibility in our core theory of the microphysical]
A is totally tangential to B. In other words, we could have true irreducibilities in our core theory which are not in virtue of higher order structure (there are just gaps in the microphysical), and we can have phenomena that obtain in virtue of higher order structure but which are nonetheless mathematically describable as weakly emergent phenomena.
Ergo, we either define strong emergence as A, in which case there is no meaningful difference between strong emergence and weak emergence. Or we can define strong emergence as A & B, in which case there is a meaningful difference between SE and WE (because WE doesn't entail B), but this difference is experimentally unobservable (due to the reasoning in 1).
"because WE doesn't entail B"
Correction: *Doesn't necessarily entail B.
You can have higher-order phenomena weakly emerge from one mathematical theory of the microphysical, or a plurality of such theories which are collectively irreducible.
Hi Alex,
Just to be clear, I think that Strong Emergence almost certainly doesn't exist, and I'm sympathetic to the idea that there are serious problems with it. But I don't think it is the same concept as weak emergence, so I'm focused on pointing out how I see them as different if we can suspend our disbelief and imagine that Strong Emergence exists.
So I'm just going to help myself to all the assumptions that help to make this distinction clearer.
> it's worth asking whether the new laws emerge at some sharp discrete boundary, or as a continuous process.
Let's suppose it's a discrete boundary.
> For example, discrete processes like particle creation/annihilation are modeled by an underlying quantum field theory.
So, this shows that sometimes discrete discontinuous processes emerge from an underlying microphysical theory. It doesn't establish that this is always the case. I could write a toy model world as a simulation where some rules apply in some circumstances, and other rules apply in other circumstances, with no underlying unifying logic other than this sort of discontinuous if/else.
For example, in Minecraft, there are "microphysical" rules that describe how blocks behave and interact. Blocks can only be attached to other blocks. Sand falls straight down, rock does not, water flows, lava causes wooden blocks to burn, etc. You can build all sorts of interesting mechanisms by arranging these blocks and letting them run according to these rules. But if you place four iron blocks in a T shape and put a pumpkin on top, it suddenly becomes an Iron Golem.
Someone living in a Minecraft world might wonder if the behaviour of the Golem emerges from the microphysics of iron blocks and pumpkins, or if it is strongly emergent, completely new rules arising because of the presence of some higher level pattern. In the case of Minecraft, the answer is the latter. It seems to me that it is logically possible that strong emergence could obtain in the real world also.
> Also, the idea of a discrete spatiotemporal boundary is not very plausible to begin with, since macro-physical structures like tables, brains or 'significant aggregations' of water etc. are not precisifications, but inherently vague concepts.
I agree. This is a big problem for strongly emergent theories. But it's not clear that this rules out more precisely defined higher level patterns from having strongly emergent effects, like the Iron Golem. It's logically possible that some part of brains (microtubules!) or brain processes could be instantiating some such precise but yet unknown pattern. A significant aggregation of water, for example, might be 2000 water molecules within a volume of a few cubic nanometers, or more water molecules in a convex volume at least as dense. This is totally arbitrary and implausible but it's reasonably precise.
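Just to show that the arbitrary criterion really is precise, here is how it could be written down; the numbers are the ones from my made-up example ("a few" cubic nanometers taken as 5), the convexity clause is ignored for brevity, and nothing here is physically motivated.

```python
# Sketch of the deliberately arbitrary criterion above, made precise in code.
# The numbers come from the made-up example in the text; nothing here is
# physically motivated, and the convexity clause is ignored.

def is_significant_aggregation(n_molecules, volume_nm3):
    """True for at least 2000 water molecules within 5 nm^3, or for any
    larger volume that is at least as dense."""
    threshold_density = 2000 / 5.0   # molecules per cubic nanometer
    if volume_nm3 <= 5.0:
        return n_molecules >= 2000
    return n_molecules / volume_nm3 >= threshold_density

print(is_significant_aggregation(2500, 4.0))   # True
print(is_significant_aggregation(2500, 50.0))  # False: too dilute
```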
You go on to say that perhaps GR and QFT really are irreconcilable. That seems to have its own problems, since there are circumstances where both apply. But that aside, if there is this kind of radical discontinuity in physics, this doesn't so much show that SE is the same as WE; rather it shows that SE is more like the real world than we supposed. We can take this emergent behaviour and add it to the core theory, but then the core theory includes strong emergence. Strong emergence is just the idea that there can be top-down causation -- high level structure impinging on low-level dynamics in a way that cannot be predicted from looking at low-level interactions alone. It's not really about what's in the core theory.
On your numbered points.
1. OK, so maybe we cannot be sure that a theory is SE. That doesn't mean that SE is the same as WE.
2. I'm not sure I follow the argument really, but your conclusion is basically just point 1 again. My point is to distinguish SE from WE as theoretical concepts. It doesn't matter to me if we can never prove empirically that some phenomenon is SE rather than WE.
Hey DM,
Delete"Just to be clear, I think that Strong Emergence almost certainly doesn't exist,"
Yes, I gathered as much from your article :)
Also to be clear, I am saying that the standard conceptions of SE are downright incoherent. We can modify our definition of SE to retain coherency, but then SE loses any explanatory usefulness and becomes unverifiable (and maybe unfalsifiable).
"You go on to say that perhaps GE and QM really are irreconcilable. That seems to have its own problems, since there are circumstances where both apply"
That's because they are continuous and not ontologically relegated to certain spatiotemporal domains. So, I don't think this potential irreconcilability would imply "SE is more like the real world than we supposed" since presumably your definition of SE (being discrete) is meant to be spatiotemporally relegated to certain domains (those where the higher-order structures are present).
What this shows is that mathematical irreducibility is not sufficient to establish (your definition of) SE; we can live in a WE world which is irreducible.
"Strong emergence is just the idea that there can be top-down causation -- high level structure impinging on low-level dynamics in a way that cannot be predicted from looking at low-level interactions alone."
That's the standard conception yes, but we just showed this was incoherent. A Laplacian demon looking at the low-level interactions of the water atoms/quarks in those spatiotemporal regions where the higher order structures are present would in fact be able to perfectly predict the behavior of such particles. Once you admit that SE affects the motion of particles within the higher-order structures, you must admit that lower-level interactions within those structures will predict emergent behavior.
We can change our definition so that the standard conception is false, meaning that prediction of higher-level order based on lower-level interactions is not just a hallmark of WE, but is also manifested by SE. However, doing so means SE almost becomes meaningless/useless.
Under your idiosyncratic definition of SE:
A) Mathematical irreducibility is insufficient to establish strong emergence
B) Predictable emergent behavior based on micro-physical behavior is a necessary condition of SE (!)
C) It is true that "maybe we cannot be sure that a theory is SE. That doesn't mean that SE is the same as WE." But of course, strong emergence is meant to be experimentally verifiable under the standard conception, otherwise it wouldn't be useful.
So, what is the point of making the WE-SE distinction then? It has become completely divorced from the standard framework; we have basically sacrificed everything important that was meant to be originally captured under the 'strongly emergent' concept.
To be clear, the potential irreducibility of GM and QFT is explainable as being about different properties (gravity vs other stuff) of particles across all spatiotemporal space. Meaning there is no theory that can reconcile the different properties of these particles.
Whereas in your case of SE, it seems like you want irreducibility to be specifically focused on certain spatiotemporal regions.
Another correction, sorry!
Delete"A Laplacian demon looking at the low-level interactions of the water atoms/quarks... would in fact be able to perfectly predict the behavior of such particles"
*would in fact be able to perfectly predict the emergent behavior of the higher-level structures*
"the potential irreducibility of GM and QFT"
I was going to make another correction, but I actually think this was appropriate. You see, one of life's biggest mysteries is explaining how General Motors can consistently make such crappy quality functioning cars without violating the laws of physics....
Hi Alex,
DeleteI'm not particularly interested in the verifiability or falsifiability of SE for present purposes. I'm not sure I understand or buy your argument that it is unverifiable/unfalsifiable, but even if I did I see this as beside the point. The usefulness of SE as a concept resides in explanations of consciousness and free will. The proponents of SE seem to think that it is necessary to explain these phenomena for philosophical reasons. My point in this blog post was to grant the coherence and hypothetical existence of SE and argue that it doesn't really help with these explanations after all even so.
> What this shows is that mathematical irreducibility is not sufficient to establish (your definition of) SE; we can live in a WE world which is irreducible.
I don't think you've shown this. I'm assuming that SE is a sharp discontinuity because this makes the distinction from WE clearest. But I don't grant that a continuous gradation from QFT regions to GR regions is WE. It might well count as something like SE, just not like the clearer cases I wanted to focus on just to establish that the idea makes sense.
In truth, I think this sort of thing is not really a case of either WE or SE. It's more like Nancy Cartwright's model of physics being a patchwork of different laws in different domains. SE is like a special case of this sort of thing, rather than this being an example of SE.
> A Laplacian demon looking at the low-level interactions of the water atoms/quarks in those spatiotemporal regions where the higher order structures are present would in fact be able to perfectly predict the behavior of such particles.
The Laplacian daemon is supposed to know the laws of physics. But what you're saying is ambiguous, because I'm not sure if the daemon is supposed to know only the low level microphysics or if it is also supposed to know the rules about strong emergence.
If the LD only knows microphysics, then it would not be able to predict the behaviour of those particles, because their behaviour is being influenced by the existence of high-level structure the daemon doesn't understand.
If on the other hand the daemon understands the SE laws, then the daemon can only predict the behaviour of the particles successfully by noting the presence of high-level structure and applying the modified physics. This contrasts with WE where the daemon does not need to note high-level structure.
DM,
One reply and then I have to go off to bed, sorry!
"I don't think you've shown this (GR/QFT discontinuity being WE)"
The GR/QFT discontinuity example is definitely a case of WE. The point is that GR and QFT still collectively account for all microphysical behavior, so in knowing them one would be able to predict all emergent behavior even if GR and QFT were collectively irreducible. Irreducibility in no way affects our predictive power, it just affects how complicated our math is.
"I'm assuming that SE is a sharp discontinuity because this makes the distinction from WE clearest...It might well count as something like SE"
If there is no spatiotemporal boundary (unlike in the discrete case) then SE laws must affect all particles in all spatiotemporal regions (the entire universe).
Here I'm not sure if you're claiming that:
1. All microphysical particles in the entire universe will be affected by the presence of SE laws due to emergent higher structure, or
2. Just those particles within the bounded spatiotemporal regions.
If 1, then the example of the boiling water means that as soon as sufficient numbers of water particles formed in the universe somewhere, then all particles everywhere would have become affected by this (to varying degrees of course).
About the Laplacian demon:
"I'm not sure if the daemon is supposed to know only the low level microphysics or if it is also supposed to know the rules about strong emergence."
The Laplacian demon only observes and knows the microphysical behavior.
"If the LD only knows microphysics, then it would not be able to predict the behaviour of those particles"
By definition, to know the microphysics is to know the behavior of the particles! Perhaps what you meant to say is that the LD could not know the microphysics just from observing the microphysical behavior, since she would need to know the SE laws. But the point is that the SE phenomena are lawlike and they govern the microphysical, therefore the microphysical follow lawlike behavior, and any LD solely observing the microphysical (and ignoring higher order structure) will be able to predict the behavior of those particles. The only way to deny this is to deny that the SE behavior is lawlike.
Addendum:
If 1 is true, then by definition the GR/QFT discontinuity could be a case of SE, but if it were an SE phenomenon, it would also have to be a WE phenomenon, because by knowing GR & QFT one could fully predict higher level behavior (assuming there were no laws in addition to that).
So, if you want to say that WE and SE are mutually exclusive, you have to stick to sharp boundary cases like in 2.
Hi Alex,
> The GR/QFT discontinuity example is definitely a case of WE.
It's not clear to me that it's a case of emergence at all. Emergence is about high level descriptions and low level descriptions being different. But for GR/QFT, we have different "peer" descriptions, i.e. one isn't higher level than the other.
> If there is no spatiotemporal boundary (as in the discrete case) then SE laws must affect all particles in all spatiotemporal regions (the entire universe).
The boundary could be fuzzy or quickly drop off to negligible effects. But let's assume we're working from an idea of a sharp boundary to see if that idea makes sense, *without prejudice* to the question of whether there could be SE without sharp boundaries or whether the absence of sharp boundaries means WE.
So this means we're considering case 2 -- just those particles within the bounded spatiotemporal regions.
> By definition, to know the microphysics is to know the behavior of the particles!
By microphysics, I mean the ceteris paribus behaviour of the physical particles in small numbers where there is no relevant large scale structure affecting them in a SE sense.
You seem to want the LD to know how the particles behave in all circumstances. But if SE obtains, this means that the particles behave differently depending on what macrostates obtain in various spatiotemporal regions. So to predict the behaviour of the particles, the LD must know what macrostates obtain in their spatiotemporal regions. This is quite different from what the LD needs to know if there is no SE. In the latter case, the LD needs to know nothing about macrostates. It can predict the behaviour of particles from local conditions only.
"It's not clear to me that it's a case of emergence at all."
The emergence wasn't supposed to be 'between' GR and QFT. The idea is that GR and QFT are still our core theories of the microphysical, but that they can be irreducible and still describe higher order structure behavior. This is weak emergence. The point is just to show that weak emergence can still happen even if there are mathematical discontinuities/irreducibilities in the behavior of the microphysical.
About the LD:
"that the particles behave differently depending on what macrostates obtain in various spatiotemporal regions"
doesn't entail that
"to predict the behavior of the particles, the LD must know what macrostates obtain in their spatiotemporal regions"
As an example, particles behave differently in black holes and stars when compared to empty space, but the LD doesn't need to 'know' the macrostructure presence of black holes, stars etc...
The only difference in the above example is that the different particle behavior is presumably captured by one mathematical formalism (the fundamental law that underpins QFT/GR). But now suppose that QFT and GR are irreducible; it follows that the LD can't predict the different behaviors with just one theory, it needs to know both theories. But this doesn't in any way entail that the LD can't know the behavior of the microphysical. The QFT and GR scenarios are just examples: you can substitute or add "SE laws" for one of the above, and you get the same results. It doesn't matter what the theory is, so long as the behavior is lawlike.
So clearly, it's possible for the LD to predict higher order behavior from knowing the microphysics in case 1. As for case 2 (the discrete boundary case), there's no reason to think a boundary case would make any difference; both theories would still collectively describe the microphysical behavior.
*Just to hammer the point home in case you want to deny that we can substitute SE laws for GR or something. Take our irreducible scenario of GR and QFT, and now assume that GR exists in virtue of higher order structure. In other words, the GR laws only obtain in virtue of some gravitational structure like black holes existing in our universe; if black holes ceased to exist, then the GR laws in our entire universe would go with them. By definition, the GR law has become an example of a case 1 strong emergence phenomenon.
But nothing has changed in the LD scenario, the LD still knows GR and QFT (they are part of our core theories), and collectively GR and QFT can predict the macrostate behavior.
I think we need to focus on the daemon in the context of SE because this is the key point to me. I don't see the talk of QFT and GR as leading anywhere as it's not clear to me what is supposed to emerge from what here, and they seem not to be the sharp discontinuity I want to talk about. Also, you seem to me to take too much for granted about how the LD can apply QFT and GR and make predictions successfully. We don't know how to reconcile these two, so it's not clear how this would work. So I just don't want to talk about QFT and GR any more in this context as for me at least it's leading to more confusion than clarification.
So, suppose we have theory A by default, in empty or ordinary space say, and theory B within very special spatiotemporal regions of high level structures giving rise to SE.
Let's suppose also we're only concerned with case 2. Let's suppose for the sake of argument that the presence of microtubules can be precisely defined and that the presence of microtubules changes the laws of physics within those microtubules such that theory B applies within.
In order for the LD to make predictions in all situations, the LD needs to know both theory A and theory B and also needs to know where the microtubules are. The LD can perhaps derive from low-level descriptions where the microtubules are without being told explicitly, but the LD still needs to take this step -- looking at the low level description and seeking the microtubules within it.
The point is not that the LD cannot make predictions. The point is that the LD needs to first parse the low level description to try to find high-level structures before proceeding to apply theory A or B as appropriate. This is no problem really, it's just a marked difference from WE, where the LD needs to know nothing about high level structure and can make predictions successfully by applying one theory to local interactions.
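To make that contrast concrete, here is a minimal Python sketch of the two prediction procedures. This is entirely my own toy code: the theory callables, the detect_microtubule_regions detector and the neighbours helper are hypothetical stand-ins, not anything from a real physical model.

```python
# Toy sketch only: what the Laplacian daemon must do under WE versus under SE.

def neighbours(particles, p, radius=1.0):
    # Hypothetical helper: the particles locally interacting with p.
    return [q for q in particles if q is not p and abs(q["x"] - p["x"]) <= radius]

def predict_we(particles, theory_a):
    # Weak emergence: one theory applied to local interactions only.
    # The daemon never has to recognise any high-level structure.
    return [theory_a(p, neighbours(particles, p)) for p in particles]

def predict_se(particles, theory_a, theory_b, detect_microtubule_regions):
    # Strong emergence (case 2): before applying any law, the daemon must first
    # parse the low-level state to locate the special high-level structures...
    special_regions = detect_microtubule_regions(particles)
    predictions = []
    for p in particles:
        # ...and only then decide which theory governs each particle.
        theory = theory_b if any(p in region for region in special_regions) else theory_a
        predictions.append(theory(p, neighbours(particles, p)))
    return predictions
```

The extra step in predict_se, seeking out the high-level structures before any law can be applied, is the difference I'm pointing at.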
You say:
> As an example, particles behave differently in black holes and stars when compared to empty space, but the LD doesn't need to 'know' the macrostructure presence of black holes, stars etc...
But this seems plainly wrong to me. If you are talking about strong emergence, and physics works differently inside black holes, then the LD needs to know if it is inside a black hole before proceeding to apply the appropriate theory in order to make predictions.
Hey DM,
No worries, we can keep strictly to case 2.
Let's assume that theories A and B are not reducible to some fundamental theory C, so we are in agreement that LD needs to know theories A and B to make microphysical predictions. I will define "knowing the microphysics" as knowing theories A and B. The question is, can the LD know the microphysics without knowing that there exists higher order structure?
Well, it depends on what you mean by 'knowing' higher order structure! Typically, weak emergence is defined as the ability to fully and accurately describe a system without reference to higher level structure. For example, in our own universe without SE (we assume), an LD can accurately describe the physics inside a star, like nuclear fusion, without mentioning the concept of fusion. Instead, the LD just knows/describes things like the temperature and density configurations of the particles inside the star, and the end description based on these microphysical properties is the same.
Now take your microtubule example (case 2) of strong emergence. The strongly emergent laws which apply to the microtubules must, because of ontological reductionism, also apply to the microstates within the microtubules. Hence, we can simply translate talk of:
"If microtubule A is in position x, then the microtubule modifies property y"
to
"If the collective particles in set {a,b,c...} are in collective positions {x1,x2,x3...} then the particles will modify their micro-properties of {y1,y2,y3...}"
This is simply a consequence of ontological reductionism. For any description of the higher order structure/microtubules which satisfies the strongly emergent laws, the LD will be able to give a different description of the microstates which also satisfies them. By definition, that is weak emergence.
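As a toy paraphrase of that translation (the positions and properties below are invented placeholders, purely to show the form of the two statements):

```python
# Hypothetical macro-level statement of the SE law.
def se_law_macro(microtubule, target_position):
    if microtubule["position"] == target_position:
        microtubule["property_y"] = "modified"

# The same law restated purely over the constituent particles, as ontological
# reductionism is taken to permit: a conjunction of micro-level conditions
# triggering a set of micro-level modifications.
def se_law_micro(particles, target_positions, target_properties):
    if all(p["position"] == x for p, x in zip(particles, target_positions)):
        for p, y in zip(particles, target_properties):
            p[y] = "modified"
```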
Hi Alex,
Of course you can describe high level structures in terms of low level entities and understand when to apply theory A or theory B in that way.
But such descriptions just are high level structures. You don't need to do anything like this in WE. Hence SE is different from WE.
Hey DM,
Weak emergence is simply the notion that low-level descriptions are sufficient to capture all behavior, fundamental and macro. That's the standard conception. Also in the standard conception, strong emergence is typically defined as the denial of a supervenience relation between the higher order structures and their constituent bases. In other words, according to SE, the higher order structure behavior does not supervene on the constituent base behavior.
But you've just admitted that the appropriate micro-level descriptions 'just are' descriptions of the higher-level structures, so you are not denying the supervenience relation.
As I said in the beginning, you can deny the standard conception of strong emergence, but this comes at the cost of your idiosyncratic definition just becoming a subset type of weak emergence.
Typically, in the standard conception, strong emergentists will try to appeal to the fact that all lower-level descriptions must fundamentally underdetermine higher level descriptions, because of vagueness. But either such vagueness is epistemic, in which case there is no underdetermination in reality, or it is ontological (whatever that means), in which case you can't hold to ontological reductionism (as many also want to do). So, I'm just saying that the standard conception is incoherent.
By contrast, I think Chalmers's conception of SE, which doesn't require 'top-down' causation, is much more coherent.
> Weak emergence is simply the notion that low-level descriptions are sufficient to capture all behavior, fundamental and macro.
OK, sort of, on a certain interpretation. It depends on what you think counts as a low-level description. Once your theory depends on the existence of macroscopic features, whether or not described in terms of microphysical features, then this is no longer weak emergence. Your combined theory (A+B) in fact requires you to treat these high-level features as first-rate entities, just because they change which rules (A or B) apply. Again, this is nothing like weak emergence, which is entirely blind to high level features.
I don't think this is just my idiosyncratic interpretation. I think this is me clarifying a point you find confusing from other descriptions of strong emergence.
In particular, Chalmers does describe a version of strong emergence which involves top down causation (as well as one which doesn't) in this paper: http://consc.net/papers/emergence.pdf
Hey DM,
Thanks for the paper link. I'm familiar with Chalmers's conception of downwards causation emergence; it's actually what I originally had in mind when I was criticizing strong emergence from the beginning.
:)
I think we unfortunately got a little bit sidetracked after I dropped my GR/QFT points. We kind of ended up migrating into a debate about whether the standard conception of strong emergence is coherent, until you brought us back to where we started by bringing up Chalmers. I think if we want to tackle this second variant of SE (the downwards causation kind) which Chalmers proposes, we will have to circle back to my original points. I will endeavour to make more sense this time around though. Apologies for the long posts ahead.
To start, I should say that when I refer to your 'idiosyncratic' definition, I'm just referring to the one we established that you were assuming (e.g. the discrete boundary case with the microtubule example). I would also describe the standard conception of strong emergence to be the view that low level descriptions of facts don't fully describe the system behavior. This is the sense in which Chalmers refers to consciousness since low-level microphysical facts do not describe consciousness (according to Chalmers). I think this standard conception is coherent on epiphenomenalism (no "downwards causation") but not really so on interactionism.
We can contrast this with the more 'intermediate' kind of strong emergence which utilizes downwards causation (what I was originally attacking), and which Chalmers describes in the paper you linked. Let’s define intermediate strong emergence like so:
ISE: The microphysical facts fully determine the system behavior, but the microphysical laws are insufficient to determine the macro-state and system behavior.
To elaborate, we can distinguish between two kinds of ISE.
1. Our currently known microphysical laws are insufficient to determine macro-state system behavior
2. No possible microphysical law(s) would be sufficient to determine macro-state system behavior.
1 is of course conceivable, but also ends up being a purely epistemic distinction. If ISE just means 1, then what we call strong emergence is just our epistemic limitation. In reality, the right kinds of microphysical laws could exist, but we just might not know it. So, 1 is an ontologically meaningless distinction.
The idea behind ISE2 is that the microphysical laws which are solely about microphysical properties won't be able to describe strongly emergent processes. The problem with this is that it blurs 1 and 2. What constitutes a microphysical property is determined by our microphysical theories!
For example, electrons having spin is a consequence of quantum theory. So, I think Chalmers is inherently conceiving of ISE as 2, but then is thinking of microphysical properties through the lens of our core theory. However, there’s no reason to think that we have to use our core theory to determine microphysical laws (unless we define ISE using version 1).
Nothing stops us from adding new properties to the microphysical, which collectively can account for the discrete strongly emergent behavior. Just as a hypothetical example, imagine my coffee cup spontaneously vanishes when it reaches a certain volume threshold. This strongly emergent behavior can be fully accounted for by adding a hypothetical property to micro-physical atoms. Just add a mathematical vanishing operator that every quark has.
This vanishing operator is hypothetical and doesn't manifest unless it crosses a certain threshold. The vanishing operator outputs zero for every degree of freedom of the quark past this threshold (meaning the quark vanishes) and makes no impact below it. We can also tie the operator's function in such a way that it only reaches the threshold if the quark happens to be in the coffee cup at the right moment (when the coffee cup actually vanishes).
Can we do this for any hypothetical strongly emergent law? Yes. A simple consequence of the mathematical underdetermination of behavior is that any description of macro-physical behavior should be infinitely underdetermined. I should always be able to come up with a mathematical theory of the microphysical which fully accounts for any macro-level lawlike behavior (granted, it might be absurdly complex). Thus, any strongly emergent law can also be fully describable using weakly emergent microphysical laws.
So ISE2 doesn’t work. Is there any way to perhaps save it? Well, earlier I proposed that we might add an irreducibility criterion to our SE concept (this is when I brought up the GR/QFT stuff). Why do this? Because we might protest that the microphysical substitute laws (e.g. vanishing operator) serve no purpose except to explain the weird SE behavior.
So, we might draw a distinction between weak emergence and strong emergence by pointing out that in strong emergence, there is an irreducible mathematical component theory added to the core theory, whose only purpose is to explain the strongly emergent behavior.
Indeed, there is no reason to think that we can mathematically unify our SE-substitute microphysical laws with our core theory (meaning that there is a third and simpler theory which subsumes both). So, maybe we can just add this irreducibility criterion to ISE2. That is: if no possible reducible microphysical law(s) are sufficient to determine system behavior, then SE is present.
The problem is the GR and QFT examples, which I earlier brought up. The idea is that we can imagine that in our WE universe, GR and QFT happen to be mathematically irreducible. Meaning that there is no third theory which can subsume GR and QFT. But according to our ISE2 + irreducibility criterion, this now means that the GR/QFT case counts as SE.
That's because the microphysical facts still fully determine the system behavior, but the microphysical laws which govern that behavior are now irreducible, so by definition GR and QFT seem to be a case of SE. But then we would have to acknowledge that SE is derivable from our core theory of physics, which undermines the whole point.
So that's where we last left off, before I dropped the GR and QFT points, and we kind of ended up in a little detour about what we think the appropriate semantic account of strong emergence is. I think the lesson here is that the straightforward conception of ISE2 is incomplete/incoherent. I think Chalmers simply failed to account for the fact that our microphysical properties are relative to our laws. It seems like he was implicitly using an ISE1 conception to formulate an ISE2 definition.
That said, it is certainly possible to make ISE2 coherent. We can try adding on more ad hoc criteria, for example we can add in the clause (irreducible + irreducible component only explains strongly emergent behavior). But then we will need an account of what counts as strongly emergent behavior (which we were trying to solve in the first place!). We will have to add in more ad-hoc criteria to solve this demarcation issue. Such as, only behavior that is solely relegated to sharp spatial-temporal boundaries inside structures that we identify as higher-order structures gets to count etc.
However, this now comes off as a bit arbitrary, since what we identify as higher-order structures depends on our brains' categorization schemas. In any case, I concede that it could potentially be done, but we'll just end up with a long list of ad-hoc criteria, and our definition will be rather useless in the end.
The whole point behind strong emergence is to identify some possible objective phenomena in the universe which makes a clear difference, but we’ve just ended up with a totally ad-hoc account which seems anything but a natural kind.
I hope this helps clarify my earlier points and gets my view across more clearly.
Best,
Alex
Hi Alex,
Thanks, I think I'm closer to understanding where you're coming from.
I still think you're failing to really justify the point that my idiosyncratic example of SE (microtubules or your coffee cup) is actually WE.
> We can also tie the operator's function in such a way that it only reaches the threshold if the quark happens to be in the coffee cup at the right moment
This is what makes it strongly emergent. The threshold is only reached if the quark is in the coffee cup, which means that the coffee cup, a macroscopic object made of many many quarks, is a first-rate, indispensable entity in your physical theory. Your theory cannot be expressed without describing what a coffee cup is, whether you call it a coffee cup in so many words or just describe the detailed relationships of all the particles that make a coffee cup to each other without labelling them as a "coffee cup".
So how about this for a definition: SE is when the irreducible entities a complete theory must describe include both fundamental elements and particular patterns of those elements where special rules apply.
I think this definition gets around your suggestion that SE is always only relevant to some particular model of microphysics.
This definition bears little relation to what strong-emergentists talk about, but I hope it's clear that I'm just trying to "robustify" what they are vaguely hinting at. I think what I'm describing is the same thing they are, but I haven't seen any detailed analysis of how it is supposed to work so I can't be sure.
A nice example of this might be the moves of chess. The ceteris paribus microphysical laws of chess describe how individual particles (pieces) may move in most cases when special rules brought about by certain patterns don't apply, and they can all be explained with reference to one piece at a time. But certain strongly emergent laws require particular patterns of multiple pieces. Castling is one; pawns capturing diagonally/en passant are examples of additional SE possibilities. The illegality of placing oneself in check is an example of SE removing possibilities.
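To make the analogy concrete, here's a toy sketch (my own code, not a real chess engine; the castling check is deliberately simplified and ignores things like prior moves and check):

```python
# "Microphysics" of chess: each piece's ordinary move pattern, statable with
# reference to one piece at a time.
BASIC_MOVES = {
    "rook":   lambda dr, dc: dr == 0 or dc == 0,
    "bishop": lambda dr, dc: abs(dr) == abs(dc),
    "king":   lambda dr, dc: max(abs(dr), abs(dc)) == 1,
}

def legal_basic_move(piece, dr, dc):
    return BASIC_MOVES[piece](dr, dc)

def legal_castling(board):
    # A pattern-dependent rule: legality depends on a particular configuration
    # of several pieces at once, not on any single piece's local properties.
    return (board.get("e1") == "king" and board.get("h1") == "rook"
            and board.get("f1") is None and board.get("g1") is None)

# The basic rule says a king moving two squares sideways is illegal...
print(legal_basic_move("king", 0, 2))                # False
# ...but the pattern-level rule can license it when the right pattern obtains.
print(legal_castling({"e1": "king", "h1": "rook"}))  # True
```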
Since we're just discussing whether the idea of SE is coherent and not its applicability to the real world, I would encourage you to engage more with toy examples having simple discrete models like chess, Minecraft or perhaps something based on Conway's Game of Life.
Of your proposed interpretations of this discussion, I don't think we're discussing either 1 or 2, since neither of us believes in SE. We're just discussing whether the concept is coherent or different from WE. I think SE proponents would go for 2, though. They are motivated by a belief that only a more holistic non-reductionist approach could account for whatever it is they think they need to explain. This is why I think they would endorse the idea that SE requires both microphysical entities (e.g. quarks and electrons, or strings, or quantum fields, or what have you) as well as higher level patterns of those microphysical entities giving rise to "holistic" behaviour.
Hey DM,
That's great. I'm glad we've made a little bit of progress. If I understand your objection, it's that no microphysical lawlike model will be sufficient to describe the system, because there will still have to be an SE lawlike description somewhere in the microphysical model. We can try to 'sneak in' the SE law by describing it in microphysical terms, but there still has to be a radical discontinuity of scale at some point in the lawlike description.
For example, the law that "if (describe coffee cup in microphysical terms) exists and if (describe volume threshold in microphysical terms) is realized, then (describe the right kind of microphysical changes)" is still radically discontinuous with the other microphysical laws that the LD knows.
I think however that this is false. In the coffee example, there is no such substitutable SE lawlike description that the LD uses. When I said that we can describe the vanishing function "(in) such a way that it only reaches the threshold if the quark happens to be in the coffee cup at the right moment" I just mean that as shorthand for the various types of microphysical laws that would stand in for the SE law. But crucially, these microphysical laws are not identical stand-ins or equivalent substitutes for the SE law.
This is way too complex to show in the coffee example (hence why I used the shorthand phrase in the first place), so maybe it's best to go to a toy model example as you say.
Okay, so imagine in our toy model universe we have one kind of atom in a three-dimensional space. There is only one kind of microphysical law (at first) in this universe, the law of gravity. In our universe we also have a ball made of this matter, which obeys the simple gravity rule. Now imagine that we introduce a new SE law, that at a certain amount of mass our ball inexplicably vanishes.
We can now add our vanishing operator to the atoms, just like before. When this vanishing operator crosses a certain threshold, the ball will vanish. How do we achieve this? We just make the atom's vanishing threshold correlate with a certain amount of force. When the atom experiences this value of force, it will vanish, according to the rules of the vanishing function. It just so happens that this force will be experienced (in our simple universe) iff the gravity threshold is reached.
Note that there is no law that the LD has to know which is microphysically substitutable one-to-one with the SE law. For the LD, which just keeps track of the atoms' properties (including the hypothetical vanishing property), the entire vanishing process is continuously described. There is no discontinuous gap in the descriptions of the microphysical laws.
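Here is a rough sketch of that toy universe (my own illustrative code; the force law, masses and threshold are invented assumptions). The point it is meant to display is that no rule ever mentions "the ball": the LD only tracks per-atom quantities, and the vanishing operator is keyed to the force each atom experiences.

```python
FORCE_THRESHOLD = 50.0  # hypothetical threshold at which the operator activates

def vanishing_operator(force_on_atom):
    # Continuous in the force the atom experiences; activates only at the threshold.
    return 0.0 if force_on_atom >= FORCE_THRESHOLD else 1.0

def step(atoms, force_on):
    # force_on(i, atoms) is whatever local force law the microphysics supplies.
    # Atoms whose operator activates are removed; nothing refers to macrostructure.
    return [a for i, a in enumerate(atoms) if vanishing_operator(force_on(i, atoms)) != 0.0]

# Toy force law: the pull on each (point-like, co-located) atom grows with the
# total mass of the others, so the operator trips once enough mass accumulates.
toy_force = lambda i, atoms: sum(a["m"] for j, a in enumerate(atoms) if j != i)

print(len(step([{"m": 30.0}, {"m": 30.0}], toy_force)))               # both atoms survive: 2
print(len(step([{"m": 30.0}, {"m": 30.0}, {"m": 30.0}], toy_force)))  # the "ball" vanishes: 0
```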
It's much harder to imagine this with the coffee example, because there are so many microphysical properties to take into account when we are attempting to describe a coffee cup. Clearly, however, our physical coffee 'type' has to have a uniquely describable set of properties in purely microphysical terms. For example, the quarks in my coffee cup each have a unique spatiotemporal location & worldline. So long as our vanishing operator's threshold was set to activate when the right kind of conjunction of microphysical states was reached (unique to the quarks in my coffee cup), we are set. Moreover, this lawlike process is entirely continuous from the LD's point of view. The threshold activation is a discrete process, but the value of the vanishing operator is continuously describable. So, there is no radical discontinuity of scale that would mark a clear-cut case of SE.
The only difference is that this super complex microphysical function is utilized solely to describe the SE process, whereas our current microphysical properties serve many functions. An electron's charge, for example, is not hypothesized just to describe what happens in a cup of coffee. Yet this just goes back to my irreducibility criterion.
The bottom line is that it seems like SE is relative to one's microphysical model. And what we describe as SE is really just a subset of certain microphysical models (those that meet the ad hoc criteria of being irreducible & relegated to certain discrete spatiotemporal boundaries, among other things).
Furthermore because of mathematical underdetermination and the fact that any macro-scale object has unique microphysical features, it will always be possible to come up with a mathematical model that can fully describe the entire system, just by tracking those relevant microphysical features. Of course, such a model would be insanely complex, and reducibility is in no way guaranteed.
Oops, I should probably qualify my example by noting that the atoms/ball are point-like. No Pauli's exclusion principle or anything like that. All the atoms are in the same spot and should therefore experience the same gravitational force. Otherwise, the analogy won't work!
Hi Alex,
On your toy example, if there is really only gravity then there will be no balls. Assuming the particles have mass and attract each other gravitationally, they will simply fall into and orbit around each other forever. The same goes if you try to make a ball: the particles will just fall through each other and orbit around their center of mass. I don't think they'll even spiral into each other to form singularities if there are no other forces to dissipate energy. Maybe we could say that such an aggregation will disappear if it gets massive enough. This might look like SE but perhaps this is only because particles disappear when local spacetime curvature is sufficiently extreme. If so, I agree that this feels like WE. But this just means that it's not a very good example of SE, because we can define all the rules without reference to high level patterns.
I guess you have a harder case to prove, because to make your point, you have to show not only that all the particular examples I can come up with could be accounted for by WE, but that there can be no example that could not be accounted for by WE. I don't know how you could demonstrate that. It at least seems conceivable to me that there are models which couldn't be given a WE-style reduction. The hypothesis that this could be the case is just the hypothesis that SE is coherent.
Take a toy model based on Minecraft instead. Everything I'm going to describe is true of Minecraft, but I'll leave out a lot of irrelevant detail. So let's say it is a world made of 3D cubic blocks, but also containing non-block agents. There are different types of blocks, but for our purposes most of these differences don't matter. Agents can move and interact with blocks (destroy, create, move, transmute) according to some algorithm but are not themselves composed of blocks.
So far I have described a sketch of the microphysics of this world. The SE law I want to introduce is that when certain types of blocks are arranged in a certain pattern, they disappear and are replaced by a certain type of agent.
How could you account for this with WE?
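For concreteness, here is a minimal sketch of the SE rule I just described (my own toy code, only loosely inspired by Minecraft; the block names and the pattern itself are invented simplifications):

```python
# The special pattern: offsets from an anchor position mapped to required block types.
PATTERN = {(0, 0, 0): "iron", (0, 1, 0): "iron", (0, 2, 0): "pumpkin"}

def find_pattern(world, pattern):
    # The rule cannot even be stated without scanning for the high-level pattern:
    # try every block position as a candidate anchor for the arrangement.
    for (x, y, z) in list(world):
        if all(world.get((x + dx, y + dy, z + dz)) == block
               for (dx, dy, dz), block in pattern.items()):
            return (x, y, z)
    return None

def apply_se_rule(world, agents):
    anchor = find_pattern(world, PATTERN)
    if anchor is not None:
        # The constituent blocks disappear and a non-block agent appears in their place.
        for (dx, dy, dz) in PATTERN:
            del world[(anchor[0] + dx, anchor[1] + dy, anchor[2] + dz)]
        agents.append({"type": "golem", "position": anchor})
    return world, agents
```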
Hey DM,
We can just stipulate in the toy example that the initial conditions of the ball are such that all the atoms would collapse into a point-like region. We can also imagine a discrete spatiotemporal universe, so we don't get absurd math leading to singularities/infinities.
Anyways, you are right that I will have to show that every example of possible SE in our universe should be capable of being modeled in microphysical terms. I think the key point I made earlier to buttress this was that every macro-physical object will have unique microphysical identifiers.
The reason the (correctly updated) toy model example works is that every atom in that universe experiences a unique microphysical feature (i.e. a certain amount of force) iff the strongly emergent law is in play. Well, we can also see that this must be true in my coffee example (and for any example), so long as we hold to ontological reductionism. Every quark in my cup of coffee has unique microphysical properties (for example, a unique location) at the time of its vanishing that are not replicated by any other quark in the universe.
In the coffee example, the vanishing threshold would be reached iff any quark meets certain criteria corresponding to a large disjunction of possible microphysical features, where each feature tracks a set of microphysical properties that is unique to a quark in my cup of coffee at the exact time of vanishing. The set of disjunctive features picks out all the possible states of all the quarks that exist in my cup of coffee at the exact moment of vanishing. And it's disjunctive because no quark could be in more than one hypothetical state.
Because no other quarks in the universe meet those exact properties, it would follow that if any quark happened to meet any feature (where each feature is a complex set of micro-properties) picked out from this disjunctive set, then it must be in the coffee cup at the exact time of vanishing.
Also, as I earlier mentioned, this process is continuous. As a particular quark in the universe happens to get closer and closer to matching some of those properties, the operator value will increase, but it won't reach the actual threshold value until the exact moment of the vanishing process. Granted, it would be super difficult to construct this model because we have a large disjunctive set, and each possible feature is different (so it's not obvious what should count as "approaching the threshold value"), but in theory there should be a function which would be able to assign appropriate weights to each value.
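A rough sketch of the kind of continuously valued, disjunctive operator I mean (a made-up illustration: the similarity measure and the threshold are arbitrary, and a realistic version would be vastly more complex):

```python
def similarity(state, feature):
    # How closely a quark's current micro-properties match one target feature,
    # scored between 0 and 1; a crude inverse-distance measure stands in here.
    diffs = [abs(state[k] - feature[k]) for k in feature]
    return 1.0 / (1.0 + sum(diffs))

def operator_value(state, disjunctive_features):
    # The operator tracks the best match across the whole disjunctive set, so it
    # rises continuously as the quark approaches any one of the target micro-states.
    return max(similarity(state, f) for f in disjunctive_features)

def vanishes(state, disjunctive_features, threshold=1.0):
    # The threshold is hit only when the quark exactly realizes one of the
    # target features, i.e. only at the exact moment of vanishing.
    return operator_value(state, disjunctive_features) >= threshold
```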
Notice this will also be true for any macro-physical SE phenomena. As long as that phenomenon is expressed in microphysical terms, and as long as those microphysical features are unique (and not multiply realizable by different quarks/atoms not experiencing the phenomena), then we will be able to construct a model, purely focused on those unique microphysical features, which accurately replicates the SE feature.
In the coffee example, the model replicates the moment where my specific coffee cup vanishes, but we can also generalize this to a macro-physical rule that all coffee cups vanish past a certain volume threshold etc...
And all of this simply follows from the fact that every macroscale phenomenon will have unique microphysical identifiers. We can also get into your Minecraft example if you wish, but I see no need since the important idea is to recognize the general rule. Not sure if this post made much sense, let me know if you found any point confusing.
Hi Alex,
> Because no other quarks in the universe meet those exact properties, it would follow that if any quark happened to meet any feature (where each feature is a complex set of micro-properties) picked out from this disjunctive set, then it must be in the coffee cup at the exact time of vanishing.
I don't think I have a clear picture of how this would work or what sorts of features you are talking about. If all these microphysical properties are basically serving to "index" each quark, then it seems to me that you might be more or less "scripting" the disappearance, by doing something akin to saying "these specific quarks disappear at time t" and "these specific quarks disappear at time u", etc. But if so, this isn't really lawlike. It doesn't help you to make predictions about hypothetical or counterfactual situations. It's just a list of all the disappearances that have been observed to happen to date, with no explanation of how or why or when we should expect to see other disappearances.
I really would prefer if you'd engage with the Minecraft example, as it seems pretty clear to me, whereas I'm lost with how your coffee cup example is supposed to work.