Science, determinism and free will

Introduction

This short text was written for Nature, and published on their discussion forum. We have re-posted it here because philosophical issues, aside from being interesting in their own right, carry ever more real-world implications as we come to know more. We know, for example, that a great deal in the brain is constructed, although we do not (yet) know that everything is of this nature. If there is a non-material 'interface' - that is, a place where the soul (necessarily) physically interacts - we have yet to find it, or to find the locality of a brain lesion that removes it from play. However, it is also true that we do not have a vestige of a notion as to what generates a sense of awareness, or even how to talk about the problem which this represents.

We know quite a lot about the brain, and hugely more with every passing year. We have mapped in some detail the pathways in the brain that handle decision-taking and learning, moral choice and play. We have a pretty good idea of how categories are generated, and of how a percept is located within the space which our previously-learned percepts generate. That is: if we see something that lights up the percept categories "round", "moderately heavy" and "edible", we are pretty sure that the field of cells that in the same way represents "fruit" will be excited as a result. There is an approachable mathematical way of describing such a mapping. Why this feels like anything, however, or why and how there is anything there to do the feeling, remains an abiding mystery. It is a mystery that lacks a language even to talk about the problem.
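
The mapping itself, though, is straightforward to sketch. The toy Python fragment below - the feature names and prototype weights are invented for illustration, not drawn from any real neural model - treats a percept as a feature vector and excites each stored category in proportion to its overlap with that vector.

    import numpy as np

    # Toy feature space: each percept is a vector over these dimensions.
    FEATURES = ["round", "moderately_heavy", "edible", "furry", "loud"]

    # Hypothetical previously-learned categories, as prototype vectors.
    CATEGORIES = {
        "fruit":  np.array([1.0, 0.6, 1.0, 0.0, 0.0]),
        "animal": np.array([0.3, 0.8, 0.2, 1.0, 0.7]),
        "stone":  np.array([0.9, 1.0, 0.0, 0.0, 0.0]),
    }

    def excitation(percept, prototype):
        """Cosine similarity: how strongly a percept excites a category field."""
        return percept @ prototype / (np.linalg.norm(percept) * np.linalg.norm(prototype))

    # A percept that is round, moderately heavy and edible...
    percept = np.array([1.0, 0.7, 1.0, 0.0, 0.0])

    for name, prototype in CATEGORIES.items():
        print(f"{name:>7}: {excitation(percept, prototype):.2f}")
    # ...excites "fruit" most strongly, as the paragraph above describes.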

We are strongly determined in our choices by what we are, by what we have learned and by the interpretation or narrative that we apply to our circumstances. However, we do not know whether we are determined in the sense of Laplace's clockwork, whereby the universe is a gigantic machine in which everything is determined and in which we are just a part, as determined as the rest. This note argues that this is not the case.

If it were the case, however, it would have a number of deep implications. Those of an ethical nature are self-evident: we do not choose, but rattle along preordained paths. Religions, the law and day-to-day pragmatism all militate against this interpretation. However, the implications for physics are far deeper. The prevailing cosmological model is essentially timeless: that is, everything is a four-dimensional solid in which one of the axes is time-like, and every particle is a thread that runs, frozen, through the solid, following this axis. Fields and field lines behave in a similar way. However, the prevailing particle model is not at all compatible with this, for particles are supposed to be innately indeterminate until 'renormalised' by interactions with many other particles. The present is the nexus of renormalisation, the past presumably frozen, the future indeterminate.

These two views are, plainly, incompatible unless one allows for a multiverse. There are at least three separate multiverse theories. The earliest came from Everett in the 1950s: he suggested that every time a quantum particle was renormalised, the universe split, and an infinity of new versions calved themselves off, in each of which a slightly different version of the renormalisation took place.

A later version of this has us existing among a less extravagant, but still immensely numerous, set of universes which are 'voted' around a centre of weight by the forces of renormalisation. However, 'now' is a nexus that overlaps the closest of these, and a given renormalisation can drift us this way or that. Presumably some clusters of universes will have drifted very far from our consensus reality, but still be connected in ways that allow the apparently instantaneous and distance-independent collapse of entanglement that we observe. ("Entanglement" is the term applied to pairs of particles that have established a shared dependency on a quantum observable: for example, both have aligned spins. Renormalising one such spin - which could be in any direction - appears instantly to force the other into a correspondingly determined state. That both particles have instances of themselves in an infinity of universes, some close to our reality, offers the connectivity that could let this happen. Another approach is to have space itself constructed from quantum observables - such as the four axes of graviton spin, or via the more fundamental views of loop quantum gravity - but that is another topic.)
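
The correlation itself is easy to exhibit numerically. The minimal Monte Carlo sketch below reproduces the measurement statistics - not any mechanism - for the aligned-spin pair described above, for which quantum mechanics predicts that detectors set an angle theta apart agree with probability cos^2(theta/2), giving a correlation of cos(theta) that is independent of the distance between the detectors.

    import math, random

    def correlation(theta, trials=100_000):
        """Sample joint outcomes for an aligned-spin pair measured along
        axes separated by angle theta. The outcomes agree with
        probability cos^2(theta/2), so E = cos(theta)."""
        agree = 0
        for _ in range(trials):
            a = random.choice([+1, -1])                      # first detector: 50/50
            same = random.random() < math.cos(theta / 2) ** 2
            b = a if same else -a                            # second detector, correlated
            agree += (a == b)
        return 2 * agree / trials - 1                        # estimate of E = cos(theta)

    for degrees in (0, 45, 90, 180):
        theta = math.radians(degrees)
        print(f"{degrees:>3} deg: simulated {correlation(theta):+.3f}, "
              f"predicted {math.cos(theta):+.3f}")
    # At 0 degrees the outcomes always agree, however far apart the detectors.

This sampler reproduces the statistics only; whether multiverse connectivity is what underwrites them is the speculation of the paragraph above.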

Time-like events in the blurred multiverse are constructed from transactions between this overlapping - that is, occasionally indistinguishable - family of universes. Time is directional because travel in one direction requires much more co-ordination than movement in the other: it is less probable, and on balance, things run downhill towards a more dispersed and diffused set of universes rather than towards a situation in which there is just one such universe, with everything piled up in an improbable heap.
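
The 'downhill' point can be made concrete with a toy model. The sketch below - an Ehrenfest urn, standing in for the dispersal the paragraph describes - starts with every particle piled into one of two boxes and lets one randomly chosen particle hop per step: the count drifts to an even split and fluctuates there, because dispersed arrangements vastly outnumber piled-up ones.

    import random

    def relax(n_particles=100, steps=2001):
        """Ehrenfest urn: each step, one randomly chosen particle hops
        between box A and box B. We start with everything piled in A."""
        in_a = n_particles
        trace = []
        for t in range(steps):
            # A particle in A is picked with probability in_a / n_particles.
            if random.random() < in_a / n_particles:
                in_a -= 1          # it hops out of A
            else:
                in_a += 1          # a particle hops into A
            if t % 500 == 0:
                trace.append(in_a)
        return trace

    print(relax())
    # e.g. [99, 53, 48, 55, 47]: downhill to the diffuse, even split -
    # the reverse journey, back to the improbable heap, never happens.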

The third flavour of the multiverse idea is irrelevant to this issue, but I will mention it simply for clarity. It holds that the Big Bang - or a succession of Big Bangs - generates an infinite number of universes, each with different physical constants. We just happen to live in one of them. Smolin speculates that the formation of a black hole might trigger the pinching off of space so as to create a new big bang universe, and that if some degree of inheritance is assumed to pass through this process, then universes will evolve serially to generate conditions that are ideal for the generation of the maximum number of black holes, and thus progeny universes. Fun, but not here relevant.

Against determinism

We can think of systems as being 'weakly determined' or 'strongly determined'. Strongly determined systems are Laplacian, in that everything about them is known, from the rules that govern their parts to the exact state of those parts. Strongly determined systems must be known at their most fundamental possible level. Weakly determined systems are known in terms of phenomenological relationships - accepted as laws if they meet certain criteria - and of the more or less exact state space of the system.

Aside from the infinite precision that non-linearity demands and Heisenberg forbids, strongly determined systems have other conceptual problems. One of these is the nature of the "laws that govern them", a glib phrase that implies fundamental knowledge. It raises an issue for physics when it considers a theory of everything: from what are these fundamental laws constructed? If they describe the deepest things that can be, then are they too made from these deepest things? Thus the recursion into mathematics, with the vague hope that it will all boil down to groups, primes or something else of which Kant would not approve.
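
The opening point - the infinite precision that non-linearity demands - takes only a few lines to demonstrate. The sketch below iterates the logistic map (with r = 4, a textbook chaotic regime) from two states that differ only at the twelfth decimal place: the difference roughly doubles at each step, and within about forty iterations the two trajectories have nothing to do with one another.

    def logistic(x, r=4.0):
        """One step of the logistic map, a minimal non-linear system."""
        return r * x * (1 - x)

    x, y = 0.4, 0.4 + 1e-12        # identical to eleven decimal places
    for step in range(1, 61):
        x, y = logistic(x), logistic(y)
        if step % 10 == 0:
            print(f"step {step:>2}: |x - y| = {abs(x - y):.3e}")
    # The gap grows roughly as 2**step: a Laplacian forecast of even this
    # trivial system already demands unbounded precision in its state.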

Weakly determined systems obey phenomenological laws: we do not say why this or that works; we merely note that it does. One gas molecule cannot display heat or temperature; many together do. The laws emerge from the statistical properties of the system - another arm-wave - much as prices emerge from a market. That is - and this is a non-trivial observation - regularities exist only in retrospect, when they have been negotiated into existence by huge numbers of agents, be they buyers and sellers in a market, organisms in an ecology or renormalised particles in a lump of material. If these transactions do not occur, then the regularities - prices, pressures - are not realised but latent in the dynamic of events: what I think Heidegger meant by Dasein - the dynamics of being - and Seinkönnen, the potential for being. Regularities such as the gas laws are not, therefore, intrinsic things that stand outside of the business of being, but the consequences of endless interactions and negotiations which, because they are conducted by simple and universally identical entities, always give the same outcome. Thus we see a law, but it is a law cooked up by the things to which it applies. This is not an artifact of our description, but how reality assembles itself.
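
The gas example can be run directly. In the toy sketch below, 'temperature' is nothing but the mean kinetic energy that a population of molecules jointly presents: for one molecule the number is a different accident every time, while for ten thousand the same stable value is renegotiated afresh from every fresh draw.

    import random, statistics

    def mean_kinetic_energy(n_molecules):
        """'Temperature' here is just the average of v^2 over the
        population: it exists only as a property of the many."""
        speeds = [random.gauss(0.0, 1.0) for _ in range(n_molecules)]
        return statistics.mean(v * v for v in speeds)

    for n in (1, 10, 10_000):
        runs = [mean_kinetic_energy(n) for _ in range(5)]
        print(n, [f"{value:.2f}" for value in runs])
    # n = 1:     a different accident every run - no temperature at all.
    # n = 10000: ~1.00 every run - the regularity negotiated into existence.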

Let us introduce the related, if blurred, concept of emergence. At its most basic, it represents the common observation that simple things, when connected together, give rise to more complex properties: much as we have seen for phenomenological laws above. One can see this best in our descriptions of things, so I will use an example which could be criticised as being only about our way of seeing events. However, some thought will show that this is not the case.

Consider a perfectly described ant: we have it modelled, and we have the model in perfect homology with the physical ant in its physical environment, poised and ready to run. Being a model, it is perfectly determined. Being a perfect model, it sees what the ant sees, and does what the ant does. The two run in parallel, with the model perhaps a tick ahead of the physical ant. The ant has never seen another ant before, and so neither has the model. Our ant, however, encounters another; and between them they quickly cook up 'ant social behaviour'. The same thing happens in the model: ever assuming that ant social behaviour is emergent in silico as well as on silica, the model is changed, made more complex, and explores a broader state space than that for which it was designed. It has a bunch of new rules that were not there before. These are regular in retrospect, but did not exist in prospect, save as what Martin Heidegger called Seinkönnen. That is, the system is weakly determined only after the event. Before the event, it was also weakly determined, but by a lesser state space: at the event, it made a transition which could not be determined, because the rules to determine it had not yet been constructed.
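
A cartoon of the ant story, with all its richness stripped out, still shows the shape of the argument. In the Python sketch below - the rules are invented for illustration - a solitary agent has only a random walk; a second agent makes an 'approach whatever you sense' rule live, and a regularity (the pair clusters) appears that the solo system's state space simply did not contain.

    import random

    def step(pos, other=None, sight=5):
        """One move. Alone: a random walk. Within sight of another
        agent, a social rule fires that a solo run can never express."""
        if other is not None and abs(pos - other) <= sight and pos != other:
            return pos + (1 if other > pos else -1)   # approach the other
        return pos + random.choice([-1, 1])           # solitary wandering

    solo = 0
    for _ in range(2000):
        solo = step(solo)                             # just a drifting dot

    a, b = -10, 10
    for _ in range(2000):
        a, b = step(a, b), step(b, a)                 # simultaneous update
    print("solo position:", solo, "| pair separation:", abs(a - b))
    # The pair typically ends up a step or two apart: 'ant social
    # behaviour', a rule that existed only as potential before contact.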

Evolution has precisely this form: before something evolved to exhibit certain properties, those properties were not exhibited. The regularities associated with the new degrees of freedom that they introduce into the system are, therefore, new rules. Apply, please, the same thoughts to information processing or to cognitive structures. Prior to learning, they are bound by one set of rules; after learning, by another. However, the new rules are not predictable from the old, as they transcend their state space.

Is this refutable in terms of strongly determined systems? As already noted, there are deeply dodgy assumptions built into the idea of a strongly determined system that make such a preparation essentially impossible to generate. Science pins down degrees of freedom, but it cannot totally know a system. However, as a thought experiment built on a quaking bog of impracticality, does strong determinism refute emergence? I think not. Electrons are being pushed around in the processor of the machine on which I am typing this. The immediate reasons are readily scrutable - finger, button, IT magic, charges - but they have to be cast in a wider and wider set of 'whys' to encompass a total explanation of the electron's experience. Why was that machine delivered to my door? Why did my cognitive history and architecture lead me to respond; but also, why did it rain on Sunday and make me stay indoors and read the article to which I am responding; and so on? Robbed of grand generalities, strong determinism fades into a blur of contributory factors that have to evoke the entire light cone of the planet and everything impinging on it. I type this because of how certain particles collapsed out of the false vacuum of the Big Bang? Indeed, everything is the way that it is because - by accident or design - it was created thus ab initio? Oh, please - pass the parsimony.
