In this lecture we will see how algorithmic probability can also shed light on certain aspects of biological evolution, and how numerical results from the natural case can be translated into optimization methods for artificial evolution, in techniques such as genetic and evolutionary algorithms.

Central to modern biological evolutionary theory is the understanding that evolution is gradual and is explained by small genetic changes in large populations over time. Genetic variation is commonly thought to arise by chance, through mutation, with small changes accumulating into major evolutionary changes when they provide some evolutionary advantage. Of particular interest in connection with the possible links between the theory of biological evolution and the theory of information is the place and role of randomness in the process that provides the variety necessary for organisms to change and adapt.

Natural selection explains how life has evolved over millions of years from more primitive forms. The speed at which this happens, however, has sometimes defied formal explanation when modelled with random, uniformly distributed mutations. The common creationist argument, for example, is that there is no way to achieve the level of design of an organism by the accumulation of random modifications. But if one considers that mutations are perturbations from the environment, then it is clear that they need not be uniformly distributed, nor completely random, so not all mutations would be equally probable. In fact, this is already known in biology: not all DNA segments or regions of the genome suffer the same rate of mutation, and even within a person's lifetime mutation rates vary widely. Exploring mutation distributions other than the uniform distribution should therefore come as no surprise.

It is not difficult to see how organisms interacting with each other constitute a process that may be related to the universal distribution, if DNA is thought of as living in software space, with mutations acting as algorithmic perturbations, something previously suggested by Gregory Chaitin in his metabiology, and also by Stephen Wolfram. What if we assumed that mutations are not uniformly distributed but distributed with a strong algorithmic simplicity bias in this software space? What would happen if we studied evolution from that point of view?

Here is the way in which we performed an experiment, following a pipeline in which we introduced the universal distribution into the way mutation changes software such as DNA. Before applying a mutation, we simulate every possible single-point mutation and find which one would be the simplest according to algorithmic probability, as measured by BDM; that is the mutation we keep for the next generation. Of course, in natural evolution mutations would not be applied this way; but because mutations would not be drawn from a uniform distribution, but rather from something related to the universal distribution, they would not have to be simulated and tested beforehand: the simple ones would be naturally favoured. Results both on synthetic and on small biological examples indicate an accelerated rate of evolutionary convergence when mutations are not statistically uniform but algorithmically uniform, that is, distributed according to the universal distribution.
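To make the pipeline just described concrete, here is a minimal sketch in Python of the selection step: enumerate every single-point mutation of a genome and keep the simplest mutant. This is only an illustration under stated assumptions, not the actual experimental code; in particular, since BDM is not reproduced here, the sketch uses zlib compressed length as a crude stand-in complexity estimate, and the function names and DNA alphabet are illustrative choices.

```python
import random
import zlib

def complexity(genome: str) -> int:
    """Crude stand-in for BDM: zlib-compressed length in bytes.
    (The lecture's experiments use BDM, not zlib.)"""
    return len(zlib.compress(genome.encode()))

def simplest_single_point_mutation(genome: str, alphabet: str = "ACGT") -> str:
    """Simulate every possible single-point mutation and return the
    mutant with the lowest complexity estimate, i.e. the most
    algorithmically probable one under a simplicity bias."""
    candidates = []
    for i, current in enumerate(genome):
        for symbol in alphabet:
            if symbol != current:
                mutant = genome[:i] + symbol + genome[i + 1:]
                # Random tiebreaker so equally simple mutants are
                # chosen among uniformly.
                candidates.append((complexity(mutant), random.random(), mutant))
    return min(candidates)[2]

genome = "ACGTACGTACGTACGA"
print(simplest_single_point_mutation(genome))
```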
We have also shown that algorithmic distributions can evolve modularity and genetic memory, because some structures of low complexity are preserved and carried over time, creating patches of lower randomness. This sometimes leads to an accelerated production of diversity, but also to population extinctions, because under the universal distribution low-complexity modules may be unable to adapt fast enough. This could naturally explain phenomena such as diversity explosions, as happened for example in the Cambrian period, and massive extinctions, such as the end-Triassic event, whose origins are currently a matter of debate.

Here is an example of this speed-up phenomenon with mutations following a universal distribution, when evolving a graph that is chasing a non-random growing graph such as the ZK graph we saw in the last unit and several before, but also simple structures such as k-ary trees and stars. Introducing a simplicity bias thus translates into faster evolutionary convergence: graphs require far fewer steps to reach those targets when applying algorithmic mutations than when applying uniformly distributed mutations. Furthermore, introducing this bias makes it possible to find differentiated evolutionary pathways, with differing weights among genes, which turn out to be commonly associated with cancer in a well-known biological network. This is because not all evolutionary paths now carry the same weight or the same probability: some will be more likely under universal-distribution mutations than others.

So the approach we have introduced appears to be a better approximation to what we observe in biological evolution than models based exclusively on random uniform mutations, for various reasons. It is also natural, because those random mutations are known not to be completely random but the effect of different causes which, if they are thought to be algorithmic in nature, should obey algorithmic probability and be related to the universal distribution. The results also validate suggestions in the direction that computation may be an equally important driver of evolution.

We also showed that, by introducing the method into problems of optimization such as genetic algorithms, our approach has the potential to accelerate convergence in artificial evolutionary algorithms. Finally, to show that these results can be translated back into the way we solve problems by algorithmic means, we tested whether we could speed up evolutionary programming in specific examples, for instance a genetic algorithm, using the same idea: drawing mutations from the universal distribution instead of from a uniform distribution. What we found is that, for classical benchmark experiments, convergence using genetic algorithms was significantly sped up, thereby strengthening what we also think underlies nature itself: that it does not draw mutations from uniform distributions.

The concept behind this is that if you are trying to approximate something that is far from random, which organisms in their environments are, then mutations biased toward algorithmic simplicity will give you an edge. It is very likely that this mechanism operates, for natural algorithmic reasons, in the way evolution drives biological creativity. A toy version of the genetic-algorithm comparison is sketched below.
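The following toy Python benchmark is a hedged illustration of that comparison, not the group's reported experiments: it evolves a highly regular binary target string and contrasts uniform single-point mutations against mutations weighted toward low-complexity mutants. Again zlib compressed length stands in for BDM, and weights of the form 2^-complexity are a rough proxy for the universal distribution; all names, parameters, and the benchmark itself are illustrative assumptions. Because zlib is a weak complexity estimator at these string lengths, any speed-up in this toy will be modest and noisy.

```python
import random
import zlib

def complexity(bits: str) -> int:
    """Crude stand-in for BDM: zlib-compressed length in bytes."""
    return len(zlib.compress(bits.encode()))

def fitness(bits: str, target: str) -> int:
    """Number of positions matching the target."""
    return sum(a == b for a, b in zip(bits, target))

def flip(bits: str, i: int) -> str:
    return bits[:i] + ("1" if bits[i] == "0" else "0") + bits[i + 1:]

def mutate_uniform(bits: str) -> str:
    """Classical GA mutation: flip one position chosen uniformly."""
    return flip(bits, random.randrange(len(bits)))

def mutate_biased(bits: str) -> str:
    """Flip one position, weighting each candidate mutant by
    2**(-complexity), a rough proxy for the universal distribution."""
    mutants = [flip(bits, i) for i in range(len(bits))]
    weights = [2.0 ** -complexity(m) for m in mutants]
    return random.choices(mutants, weights=weights)[0]

def evolve(mutate, target, pop_size=20, max_gens=2000):
    """Elitist GA: keep the fitter half, refill with mutants.
    Returns the generation at which the target was reached."""
    pop = ["".join(random.choice("01") for _ in target) for _ in range(pop_size)]
    for gen in range(max_gens):
        pop.sort(key=lambda b: fitness(b, target), reverse=True)
        if pop[0] == target:
            return gen
        survivors = pop[:pop_size // 2]
        pop = survivors + [mutate(random.choice(survivors)) for _ in survivors]
    return max_gens

target = "01" * 16  # a highly regular, low-complexity target
print("uniform mutations :", evolve(mutate_uniform, target))
print("biased mutations  :", evolve(mutate_biased, target))
```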
Our results also provide a formal framework for, and a version of, open-ended evolution based on these ideas. Open-ended evolution is, roughly, the idea that evolution never ends and has no fixed objective, so it keeps changing over time as necessary. Here is some bibliography on this subject from our group. I am sure you would never have guessed how, for example, the Busy Beaver, which yields such astronomically large numbers, could be connected in any way to biological evolution, but this is just a taste of how we have connected these apparently disparate ideas into a very beautiful and elegant theory. In the next unit we will see an application of logical depth to image classification.