Here you have two pictures that I took in the north of Ireland and the north of Scotland. On the left, there is what is called a 'fairy ring' in children's stories. These rings are formed by mushrooms, and one may be surprised that they organise themselves into an almost perfect circle. How do they communicate the complicated information needed to place themselves in the right spots? And, on your right, we have lava stone hexagons. How do they form? They look like highly structured things that one might have thought could not occur in nature, because they would seem to need a designer telling the mushrooms, on one side, to distribute themselves in circles, and the stones to form these hexagons. So... how can we distinguish something that was artificially designed from something that was naturally made? And, my proposal is - what if we think of this as the result of some computation? I will get back to this later. There are two different approaches to the question of - what is life? One is a top-down approach, consisting in modifying living organisms. One can think of eliminating functions in an organism to find the minimal requirements for life to exist. For example, this could be done by finding the smallest set of genes that can produce a cell capable of reproducing and transferring information. These, and many other questions approached by modifying living systems, belong to the so-called 'synthetic biology' field. The other approach is a bottom-up approach that starts from nonliving things and tries to build up life from scratch. In this approach, the most fundamental, basic properties of life are studied - such as self-replication and self-organisation - and this area is sometimes called 'artificial life'. Self-replication is a topic that has interested many computer scientists since the beginning of the field. Nils Barricelli and John von Neumann were among the first to explore the question and conduct these kinds of experiments.
John von Neumann, just like Richard Feynman - whom I mentioned before - was one of these extraordinary scientists who illuminated every field they touched... John von Neumann was also in the Manhattan Project. He was also a physicist who made seminal contributions to quantum mechanics. And, most interestingly for us, he was one of the founders of computer science, and one of the first builders of actual electronic computers - along with pioneers like Konrad Zuse in Germany. In fact, most computers today are designed according to von Neumann's first computer blueprint, and they are [said] today to have a 'von Neumann architecture'. Von Neumann was interested in the idea of a machine capable of building an exact copy of itself. Here is a picture of this idea - a robotic arm that takes raw material from a shelf and builds another arm of exactly the same kind, which in turn is capable of building another one, with no need of any other kind of machine in the loop. And, von Neumann, together with Stanislaw Ulam - another Manhattan Project scientist - abstracted the idea of self-replication and came up with the concept of a cellular automaton. Cellular automata are a sort of machine - very similar to a Turing machine - but instead of updating one cell of the tape at a time, a cellular automaton updates many cells at once and can work on two-dimensional grids instead of linear tapes. And, they are an interesting type of machine for studying the concept of self-replication. This is an example of a one-dimensional cellular automaton. Starting from the first row of cells, every cell is updated according to the rule on the top. So, if one starts from two white cells surrounding a black cell - as you can see on the top - the next middle cell will be a black one. And, following this very simple rule, one can generate evolutions like this one.
This is called the 'Rule 30' cellular automaton in Wolfram's enumeration, and the rule number comes simply from the binary representation of the rule icon. Fixing the order of the eight possible neighbourhoods in the first row of the icon, the output cells in the second row can be read as a binary number that - in decimal, for this case - is simply the number 30. And, this is the same cellular automaton [with] more steps. One can see the type of rich behaviour this simple computer program can generate. This cellular automaton in particular was found by Stephen Wolfram who, performing an exhaustive experiment, found all sorts of computer programs with many amazing properties - despite their extreme simplicity. By most standards, for example, this Rule 30 automaton is the smallest computer program producing apparent randomness - all this complexity that you can see on your screen. One can ask whether these simple rules occur in nature. And, by looking at seashells that develop their shells in layers - very much like a cellular automaton - one can find astonishingly similar patterns. So, von Neumann actually constructed one of these cellular automata, and this is a two-dimensional cellular automaton that was not only capable of replicating itself, but was robust enough to carry and replicate mutations. So, for example, here is the original cellular automaton that is capable of self-replication. And, when a mutation is introduced, it reproduces the cellular automaton - with the mutation - from that time on. You can see this from this little flower that is reproduced with the code on the tape. This was, of course, a milestone in computer science and artificial life - a fully artificial system, capable of reproducing a fundamental property of life in a computer simulation.
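The rule-number encoding just described can be made concrete in a few lines of code. This is a minimal sketch of my own (not code from the lecture): each three-cell neighbourhood is read as a binary number from 0 to 7, and the corresponding bit of the rule number - 30 is 00011110 in binary - gives the new colour of the middle cell.

```python
# Minimal sketch of a one-dimensional (elementary) cellular automaton.
# The rule number encodes the update table: a neighbourhood (left, centre,
# right) read as a binary number n maps to bit n of the rule number.
# Rule 30 = 0b00011110.

def step(row, rule=30):
    """Apply one update, treating cells beyond the borders as white (0)."""
    padded = [0] + row + [0]
    new_row = []
    for i in range(len(row)):
        neighbourhood = padded[i] * 4 + padded[i + 1] * 2 + padded[i + 2]
        new_row.append((rule >> neighbourhood) & 1)
    return new_row

def run(width=31, steps=15, rule=30):
    """Evolve from a single black cell in the middle of a white row."""
    row = [0] * width
    row[width // 2] = 1
    history = [row]
    for _ in range(steps):
        row = step(row, rule)
        history.append(row)
    return history

if __name__ == "__main__":
    for row in run():
        print("".join("#" if cell else "." for cell in row))
```

Printing the history row by row reproduces the familiar nested-triangle pattern of Rule 30, with the apparent randomness on its right-hand side.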
Some researchers have implemented von Neumann's self-replication concept for fun, and have made these 3-D printing machines that can print themselves, like the ones in this slide - you can see a parent and a child - from these researchers in the UK. And, one of the most popular cellular automata is Conway's 'Game of Life'. This is a two-dimensional cellular automaton that has only four very simple rules of the kind one can find all over nature, determining the way organisms coexist. And the rules are: if there is any life, or black cell, with fewer than two black neighbours - then the cell dies, as if by isolation. If a black cell has more than three black neighbours, then it dies, as if by overcrowding. If a dead cell has exactly three black neighbours, then it becomes a black, or life, cell - as if by reproduction. And finally, if there is a black cell with two or three black neighbours - then it lives on to the next generation. With these four rules, this cellular automaton is capable of incredible complexity, showing many properties that we attribute to living systems - like movement, persistence and evolution. You may be seeing on the screen a video of the 'Game of Life'. It turned out that the 'Game of Life' is also capable of self-replication. And, even more, it was found to be Turing universal. This is a property I told you about: the property of being capable of carrying out any possible computation - like your own personal computer. So, in principle, you could run Microsoft 'Windows' or Mac OS on this extremely simple computer, based on rules that mimic rules in the natural world of living systems. It would be very inefficient, but you could, in principle, run any program on these kinds of systems. Cellular automata can now help us model and understand how things like fairy rings - which we saw at the beginning - are formed.
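The four rules just listed translate almost directly into code. Here is a minimal sketch of mine (not code from the lecture), assuming a small fixed grid whose out-of-bounds cells are simply treated as dead:

```python
# A minimal sketch of Conway's 'Game of Life', implementing the four rules
# above: death by isolation (<2 neighbours), death by overcrowding
# (>3 neighbours), birth by reproduction (exactly 3 neighbours), and
# survival with 2 or 3 neighbours. Cells outside the grid count as dead.

def life_step(grid):
    rows, cols = len(grid), len(grid[0])

    def neighbours(r, c):
        total = 0
        for dr in (-1, 0, 1):
            for dc in (-1, 0, 1):
                if dr == 0 and dc == 0:
                    continue
                rr, cc = r + dr, c + dc
                if 0 <= rr < rows and 0 <= cc < cols:
                    total += grid[rr][cc]
        return total

    new = [[0] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            n = neighbours(r, c)
            if grid[r][c] == 1:
                new[r][c] = 1 if n in (2, 3) else 0  # survival, else death
            else:
                new[r][c] = 1 if n == 3 else 0       # birth by reproduction
    return new

# A 'blinker': three cells in a row oscillate between horizontal and vertical.
blinker = [[0, 0, 0],
           [1, 1, 1],
           [0, 0, 0]]
```

Applying `life_step` to the blinker twice returns the original pattern - a simple instance of the persistence the lecture mentions.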
Mushrooms that start growing and reproducing eventually die, and can only seed new ones away from the crowded centre - which eventually builds these circles. When running these simulations, they look very much like the 'Game of Life' because, eventually, the main circle spreads out and seeds the development of new circles - the pattern reproducing itself as the mushrooms reproduce. You may remember the picture I showed you with the fairy rings. So, this is the way to explain it - with cellular automata. And, von Neumann thought that for self-replication, a system would need to have the power of a universal computer. So, here is a question: Can life be defined by universal computation? The answer is not obvious. I personally think that a strong case can be made for connecting life and computation. But, Langton's loop - another cellular automaton that you are seeing on your screen - shows that it is possible to self-replicate without Turing universality. This cellular automaton, called Langton's loop, can self-replicate - but it is computationally very simple: you cannot compute much with it. Another thing to consider is that other clearly nonliving systems self-replicate - such as crystals. So, self-replication is likely to be a necessary condition for life - but not a sufficient one. So, what are the requirements for life, if not only self-replication? Is it having a metabolism that transforms energy from the environment? Or anything else? There have been a few interesting attempts to define life in the most abstract terms. For example, this author, Joyce, would consider a system alive if it contains more bits than the number that were required to initiate its operation. This definition is interesting, but it has some difficulties, because one needs precise definitions of information, and of adding information, to make sense of it.
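The fairy-ring mechanism described above - mushrooms dying in the crowded centre and reproducing only outwards - can be sketched as a simple three-state cellular automaton. This is my own illustrative toy model, not the exact simulation from the lecture:

```python
# A simplified fairy-ring model with three cell states:
#   0 = empty soil, 1 = living mycelium, 2 = exhausted soil.
# Mycelium spreads into adjacent empty soil, then dies, leaving exhausted
# soil behind that is never recolonised - so a single colony expands
# outwards as a growing ring.

EMPTY, ALIVE, EXHAUSTED = 0, 1, 2

def ring_step(grid):
    rows, cols = len(grid), len(grid[0])
    new = [row[:] for row in grid]
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] == ALIVE:
                new[r][c] = EXHAUSTED  # the crowded centre dies off
            elif grid[r][c] == EMPTY:
                # empty soil next to living mycelium is colonised
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                    rr, cc = r + dr, c + dc
                    if 0 <= rr < rows and 0 <= cc < cols and grid[rr][cc] == ALIVE:
                        new[r][c] = ALIVE
                        break
    return new
```

Starting from a single living cell, each step moves the living frontier one cell outwards while the interior stays exhausted - the ring in the photograph, in miniature.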
Hopfield also thought that biological systems perform some kind of computation, and that computation is what makes biology different from physics. And more recently - something I find interesting - Leroy Cronin has proposed that life is anything that can undergo evolution - in the form of survival of the fittest - suggesting that matter that can evolve is alive, perhaps crystals and viruses included - against most biologists' own intuition. And, because I want to connect the question of life to the question of computation, I have to ask whether living and nonliving things can actually compute, or whether they compute anything at all. Here too, there have been interesting approaches - Stephen Wolfram, for example, suggests that there is a simple and computable rule for the universe. Seth Lloyd, on the other hand, has also suggested that the universe is a computer - but, along the lines of Feynman, Lloyd thinks that if the universe is a computer, it has to be a quantum computer. And, some criticism of these views has arisen, mainly against a related field called 'functionalism', which takes the human mind for some sort of computer. One of the main criticisms is that there seems to be no clear-cut meaning of computation if we consider that all things compute - leading to absurd conclusions, like David Chalmers' mocking statement that a rock would implement every finite-state automaton. That is, that rocks are like computers - something that goes against anybody's intuition. My purpose here is to associate with things a measure of computation that assigns high computational capabilities to computers - like humans and electronic computers - but low computational capabilities to things such as rocks or the weather. [ end of audio ]