“The huge gulf that separates physics and biology - the realm of atoms and molecules from that of living things - is unbridgeable without fundamentally new concepts.”
The quote that opens this article is one of the main theses that Paul Davies defends in his 2019 book “The Demon in the Machine”.
There is no need to introduce Paul Davies, as he is well known for his research and popular science books in areas such as quantum mechanics, cosmology and astrobiology. As he himself says, he likes to tackle the big questions and try to answer the fundamental ones about the origin of the Universe and of Life.
Davies’ interest in crossing the barriers between scientific fields and taking a more holistic view led him to write this interdisciplinary book, in which physics, chemistry, biology, computing, and information and network theory intersect.
If I had to synthesize the content and objective of the book, I would do so with these two basic ideas:
- He rejects the (extreme) reductionist approach to explaining the emergence of life and the functioning of living beings.
- He argues that Information plays a fundamental role in providing answers to these questions.
On the first point, Davies argues that known physics alone can’t account for biology. Although the components of life comply with the laws that atoms and molecules blindly follow, these principles are insufficient to explain the emergence of the kind of causation we observe in the world of living beings.
Nor does the modern synthesis of Darwinian evolutionary theory, genetics and molecular biology fully explain how the goals or purposes displayed by the constituent elements of life emerge. He believes that additional principles are required that we do not yet know.
Of course, these questions about what life is, how it emerges, and what controls the functioning of living beings and their constituent elements have been raised many times throughout history in the fields of religion, philosophy and science. In fact, he cites the physicist Erwin Schrödinger and his influential lectures and book “What is Life?”, which inspired the later discovery of the genetic information mechanism of DNA by Watson and Crick.
There have been different kinds of responses: divine origin (religions), vital force or impulse (vitalism), universal consciousness (panpsychism), a vision of the whole as greater than the parts (holism, organicism), or physical-mechanical explanation (reductionism), to name the most relevant.
Davies looks for answers within Science, even if those principles or natural laws are still undiscovered, leaving aside ideas of a supernatural, spiritual or proto/pseudo-scientific nature.
He explores a line he sees as promising: the role that information can play as a high-level, contextual, causal principle that takes control of the development and functioning of the living being.
What does this mean? That information, a logical-abstract concept instantiated in living organisms in physical-chemical components and material phenomena (nitrogenous bases of DNA, electrical gradients …), acts as an ‘agent’ capable of directing and manipulating the material substrate on which it is based, introducing rules according to the state or context in which it is operating.
Does it still sound convoluted? Then let’s use a familiar analogy: life is the software that controls the hardware of living beings, giving instructions to the machine on which it runs. Just as a program behaves according to the inputs it receives (from a user, from another program, from a sensor…), so does the living system according to the wide range of physical-chemical signals it can receive at different levels.
In this regard, I am reminded of the statement by physicist Seth Lloyd which I find very enlightening:
“Information and energy play complementary roles in the Universe: energy makes physical systems do things. Information tells them what to do”
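To make the software analogy a bit more concrete, here is a deliberately naive Python sketch of my own (the names, signals and thresholds are invented for illustration and are not taken from the book): the same little “program” issues different instructions depending on its internal state and the signals it receives.

```python
# Toy illustration of the software/hardware analogy (invented example, not from the book):
# the same "program" issues different instructions depending on its state and its inputs.

def cell_controller(state: str, nutrient_level: float, toxin_detected: bool) -> str:
    """Return the next 'instruction' given the current state and incoming signals."""
    if toxin_detected:
        return "activate_stress_response"    # the context overrides everything else
    if state == "resting" and nutrient_level > 0.8:
        return "start_division_cycle"        # plenty of resources: grow
    if state == "dividing" and nutrient_level < 0.2:
        return "pause_and_conserve_energy"   # resources ran out mid-cycle
    return "keep_current_program"

# The same chemical signal produces different responses depending on the state:
print(cell_controller("resting", 0.9, False))   # -> start_division_cycle
print(cell_controller("dividing", 0.1, False))  # -> pause_and_conserve_energy
print(cell_controller("dividing", 0.9, True))   # -> activate_stress_response
```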
How does Davies explain this role of information? He doesn’t detail it, but puts forward arguments, examples and reflections that shape his speculative hypothesis.
On the one hand, he highlights how many basic components of life work as a “Maxwell’s demon”. This is a thought experiment that the great physicist J.C. Maxwell proposed in 1867, in which a fictitious being could go against the second law of thermodynamics by separating molecules according to their speed into two compartments of a box. This would generate a temperature difference from which “free” work could be produced.
This little “demon” could therefore decrease the entropy of the system without consuming energy, going from a more probable, disordered state to a more improbable, ordered one by playing with the information of the system (the molecules’ positions and speeds).
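As a purely illustrative toy (my own sketch, not something from the book), a few lines of Python can mimic the demon: molecules start with random speeds, the demon sends the faster-than-average ones to one side and the slower ones to the other, and the two halves end up with different mean kinetic energies, that is, different temperatures, without any work being done on the gas. The catch, as explained below, lies in the information the demon has to handle.

```python
# Toy Maxwell's demon: sort molecules by speed into two compartments and compare the
# mean kinetic energy (a proxy for temperature) of each side. Purely illustrative;
# it ignores the thermodynamic cost of the demon's measurements and memory.
import random

random.seed(42)
m = 1.0                                                         # arbitrary molecular mass
speeds = [abs(random.gauss(0.0, 1.0)) for _ in range(10_000)]   # initial speed distribution
mean_speed = sum(speeds) / len(speeds)

fast_side = [v for v in speeds if v > mean_speed]    # the demon sends fast molecules right
slow_side = [v for v in speeds if v <= mean_speed]   # and slow molecules left

def mean_kinetic_energy(vs):
    return sum(0.5 * m * v * v for v in vs) / len(vs)

print("right (hot) side :", mean_kinetic_energy(fast_side))
print("left (cold) side :", mean_kinetic_energy(slow_side))
# A temperature difference appears "for free"... except for the bits the demon had to
# measure and later erase, which is where the entropy bill is finally paid.
```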
Note: the explanations in the book are clear, but if you don’t quite understand entropy and its relationship to information, you can try Arieh Ben-Naim’s book “Entropy Demystified”.
As a curiosity, it was Lord Kelvin who later referred to a “demon”, where Maxwell had spoken only of a “being”. Kelvin used that term because of the intentionality the being showed in causing a change in the normal behaviour of the system.
This paradox was resolved in the 20th century with successive contributions from Szilard, Landauer and Bennett, and it was finally concluded that there is indeed a dissipation of energy and a generation of entropy. The argument is very subtle and counter-intuitive: it has to do with the act of erasing/resetting the information in the memory of that fictitious being, which can’t cost less than the Landauer limit of about 3×10⁻²¹ joules (at room temperature) to erase 1 bit of information. This is the theoretical limit of consumption of an ideal computer.
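For reference, that figure comes directly from Landauer’s formula, E_min = k_B·T·ln 2. A quick check in Python, assuming a room temperature of about 300 K:

```python
# Landauer limit: minimum energy to erase one bit, E_min = k_B * T * ln(2)
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K
T = 300.0            # room temperature in kelvin (assumed)

E_min = k_B * T * math.log(2)
print(f"{E_min:.2e} J per bit")   # ~2.87e-21 J, i.e. roughly the 3e-21 J quoted in the book
```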
Note: for those interested who are not familiar with this subject, apart from the original articles by Bennett that can be found on the Internet, I liked how it is explained in “Minds, Machines and the Multiverse” (2002) by Julian Brown, which includes Bennett’s example of DNA transcription by RNA polymerase.
He then introduces information engines, which are based on Maxwell’s ‘paradox’: they can turn information into work by taking advantage of the small thermal fluctuations in a heat reservoir, and are known as Brownian motors.
In order to force the interaction of the thermal agitation with the system to act in only one direction, a blocking mechanism is included as a rectifier (think of a diode that only lets current pass in one direction); in this way work can be done in exchange for the increase in entropy due to the transformation of the information.
In the book he includes a conceptual example by C. Jarzynski in which a set of paddles, confined in place by rods (equivalent to a tape of 0’s), can, after striking a tab through thermal agitation, end up in any position (equivalent to registering a series of 0’s and 1’s as a result). The way the system is assembled forces the movement of the tab, and of the ring on which it is mounted, to be on average counterclockwise (there is a video of this Brownian engine model).
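To get a feel for the rectification idea (again a toy of my own, not Jarzynski’s actual model), one can simulate a random walk in which symmetric thermal kicks are filtered by a pawl that only lets one direction through; the net drift plays the role of useful motion, and the price is the information the pawl has to handle, at a cost of at least k_B·T·ln 2 per bit, as above.

```python
# Toy Brownian ratchet: symmetric thermal kicks plus a pawl that blocks backward steps.
# This only illustrates rectification; a real device pays for it in entropy/information,
# since the pawl's "memory" must be reset at a cost of at least k_B*T*ln(2) per bit.
import random

random.seed(0)
position = 0
bits_handled = 0

for _ in range(100_000):
    kick = random.choice([-1, +1])   # thermal agitation: no preferred direction
    if kick == +1:
        position += 1                # forward kicks get past the pawl
    bits_handled += 1                # the pawl "decides" once per kick: one bit of information

print("net displacement:", position)        # ~ +50,000: motion in one direction on average
print("bits handled by the pawl:", bits_handled)
```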
Well, living organisms are full of molecular nanomachines that function as information engines, always a little above that Landauer limit, but with far lower energy consumption than our state-of-the-art computers.
Davies presents several examples of these biological nanomachines, as well as nanotechnology experiments that have gone in this direction to better understand how they work. I think all the examples are much better understood if you also watch the videos the book refers to (you can find additional examples by searching the Internet).
But Davies tells us that living organisms not only store information (DNA) or create these information engines; they also process information like a computer.
He tells us about information patterns that behave as if they were coherent, independent entities in themselves, not conditioned by the topology of the networks where they are expressed. They are apparently governed by their own natural laws and have special characteristics in terms of information flow control that determine the functioning of that system.
To return to the computer analogy: we see the biomolecular components (the hardware) and the ‘dance’ of signals and intricate interactions that occur between them (the execution of the program), but we cannot see the software itself, the informational organization, the logic of life, which must follow principles we have not yet discovered.
Note: in theory, DNA would be at least a part of that set of instructions we are looking for, although Davies reminds us that protein synthesis, with its whole network of elements and gene inhibition/activation factors, is a much more complex information system.
He presents an example based on modelling the cell cycle of the yeast Schizosaccharomyces pombe (also called “fission yeast” because of its cell division mechanism, which produces two cells of the same size).
They designed a very simplified network where the nodes represent genes (or, strictly speaking, the proteins the genes encode) and the connecting lines of the network are the chemical pathways that link the genes. The activation of one gene (or the expression of a protein) causes the activation or deactivation of others.
They broke down the cycle into steps, assigning each gene in the model the value “1” (on) when it was activated and “0” (off) when it was inactive.
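A minimal sketch of what such a Boolean network looks like in code (the genes, wiring and rules below are invented placeholders; the real fission-yeast model has around ten nodes with carefully chosen interactions):

```python
# Minimal Boolean gene-network sketch: each node is 1 (on) or 0 (off) and is updated
# from the weighted sum of its inputs. Genes, wiring and weights here are invented
# placeholders, not the actual fission-yeast network.

# activating (+1) and inhibiting (-1) links: gene -> {input_gene: weight}
WIRING = {
    "geneA": {"geneC": -1},
    "geneB": {"geneA": +1},
    "geneC": {"geneA": +1, "geneB": -1},
}

def step(state: dict) -> dict:
    """One synchronous update: a gene turns on if its net input is positive."""
    new_state = {}
    for gene, inputs in WIRING.items():
        total = sum(weight * state[src] for src, weight in inputs.items())
        if total > 0:
            new_state[gene] = 1
        elif total < 0:
            new_state[gene] = 0
        else:
            new_state[gene] = state[gene]   # no net input: keep the previous value
    return new_state

state = {"geneA": 1, "geneB": 0, "geneC": 0}
for t in range(6):
    print(t, state)
    state = step(state)
```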
Analyzing the data, they saw that there was a control kernel of several genes that seemed to orchestrate the functioning of the rest of the network, and that it was possible to infer the value one gene would take from another with a greater probability than if the network were random; that is, more information was transferred than theoretically expected. The curious thing is that this transfer of information did not always take place between ‘connected’ nodes: 40% of the pairs were correlated without having a physical connection, and 35% of those that do have a physical connection were not correlated. There was no obvious relationship between the information patterns and the circuit topology.
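The kind of analysis described can be approximated by measuring, for every pair of genes, how well the state of one at step t predicts the state of the other at step t+1, and then comparing those statistical links with the wiring diagram. A rough sketch of my own, with made-up time series (a crude stand-in for the information-transfer measures the text alludes to):

```python
# Rough sketch of the kind of analysis described: how much does the state of one gene
# at step t tell us about another gene at step t+1? (A crude stand-in for the
# information-transfer measures alluded to in the text.) The time series are made up.
from collections import Counter
from math import log2

def lagged_mutual_information(x, y):
    """Mutual information (in bits) between x[t] and y[t+1] for binary time series."""
    pairs = list(zip(x[:-1], y[1:]))
    n = len(pairs)
    p_xy = {k: c / n for k, c in Counter(pairs).items()}
    p_x = {k: c / n for k, c in Counter(x[:-1]).items()}
    p_y = {k: c / n for k, c in Counter(y[1:]).items()}
    return sum(p * log2(p / (p_x[a] * p_y[b])) for (a, b), p in p_xy.items())

# Invented binary trajectories for three "genes" over 8 steps:
geneA = [1, 0, 1, 0, 1, 0, 1, 0]
geneB = [0, 1, 0, 1, 0, 1, 0, 1]   # follows geneA with one step of delay
geneC = [1, 1, 0, 0, 1, 1, 0, 0]

print("A -> B:", lagged_mutual_information(geneA, geneB))   # high: B is predictable from A
print("A -> C:", lagged_mutual_information(geneA, geneC))   # much lower: weak statistical link
# The surprise reported in the book is that such statistical links do not always
# coincide with the physical connections drawn in the network.
```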
He also points to emergent effects that arise in network topologies with a high degree of integration and interaction (for example, as a possible explanation for the emergence of consciousness in neural networks). These networks generate distributed information that goes beyond the information generated by their subelements, i.e., the whole is greater than the sum of its parts.
Is there anything else in the book? Yes, much more.
For example, a lot of questions that lead us to search for more explanations:
- What is life or consciousness and how can they arise from inorganic elements that are not endowed with any intentionality? Only by the law of Darwinian evolution?
- How can a system as sophisticated as the genetic one be created: a coded storage and inheritance mechanism with an intricate regulation of its expression?
- How do you explain morphogenesis in detail? From a genome sequence we cannot predict what the organism will look like (in a “direct” way: DNA is a set of instructions, not a blueprint).
Davies also asks whether nature/life uses purely quantum phenomena such as tunnelling or entanglement, which would open up many more possibilities for information processing, although he leaves the question undeveloped.
He gives three examples, related to the olfactory system, the orientation of birds, and the photosynthesis of a bacterium, which I find extraordinary if the operational hypotheses behind them are confirmed.
As I said, there is much more content: discussions of “replication models” like Conway’s Game of Life or von Neumann’s Universal Constructor, or wonderful details like the functioning of neurons and the transmission of the signal along the axon, not as a small electric current, as I thought, but as a potential wave in which a change of polarity propagates along the membrane from one section of the axon to the next.
He also devotes a section to one of his current lines of research: the possible origin of cancer as an atavistic phenotype. In this hypothesis, cells trigger a primitive survival program that makes them reproduce uncontrollably in the face of what they perceive as a threat or an abnormal situation in their environment.
Anyway, I hope these notes are useful in giving an idea of the book as a whole.
Other works by the author
Broadening the focus on this topic, this is not the first work in which Davies addresses these issues of Information, Matter and Life. Apart from being the author of various articles and papers, alone or with other authors (for example, “The Algorithmic Origins of Life” with Sara Imari Walker), he has been one of the editors of two books that compile essays addressing this issue from various perspectives (scientific, philosophical and even theological): “Information and the Nature of Reality” (2010) (Davies, Gregersen) and “From Matter to Life” (2017) (Walker, Davies, Ellis).
In the first, in the chapter he contributes as author, “Universe from bit” (a reference to John A. Wheeler’s famous 1989 essay in which he coined the expression “It from Bit”, theorizing, based on an interpretation of quantum mechanics, about the possibility that information instantiates matter), he already spoke of a paradigm shift, in which the scheme of explanation:
Mathematics –> Physics –> Information
would become:
Information –> Physical laws –> Matter
with Information being the first and fundamental entity from which physical reality is built.
What do I think of the book?
In conclusion, I liked the book very much, although because of its speculative character, pointing to basic principles yet to be discovered (which it suggests and outlines but doesn’t completely define, even at the level of a tentative hypothesis), some questions remain somewhat vague, as if the arguments and explanations were not fully developed.
Sometimes it reads more like a tour presenting, in a very attractive way, the newest ideas and research in this approach where information plays a primordial role. When you close the book it leaves you with the sensation of “wow, they are onto something here, this line of thought/research sounds very interesting”, but without being able to pin down in more detail what it is about, as I said before.
In any case, all the information it provides, the original approach it takes to certain questions, the way it makes us think about issues unexplained at our current level of knowledge, and the great examples presented make it a very interesting book.
Without some prior knowledge, some parts may be difficult for the reader, but I think that although it’s a demanding read for a popular science book, it’s very entertaining because Davies tries to explain things to as general an audience as possible.