So in this lecture we will see how we can take advantage of the power of our causal interventionist calculus based on algorithmic information dynamics, not only to describe and study objects and find out causes, but also to steer and manipulate systems. Because we are using computable models and algorithmic techniques, we call this type of manipulation ‘reprogramming a system’. One of the greatest challenges in science is that we are always at a disadvantage as observers: we can only have a partial view, at a single scale and away from isolation. For this reason we always have to deal with what we call noise and with partial representations. Representations come in all flavours in science; we often like to model phenomena with systems such as differential equations, but sometimes also as networks and discrete systems. In fact, for almost any representation one can come up with a set of equivalent representations fully describing the same system but allowing us to see different features of the same phenomenon. We have seen how using measures that look at particular features and representations would bias our results, and we have also seen how to attempt to move away from language-dependent measures towards more objective means of analysing data to infer models. We have analysed several dynamical systems and we have seen how to characterise networks, so it is now time to move a step forward and see in what ways finding out possible causes for observations can help us steer and manipulate those same systems. By doing this we definitely leave behind more descriptive methods such as traditional statistics, which, not being able to find models by themselves, leave us with little chance of finding handles with which to manipulate systems. However, once we know the causes, and what produces what, we can start manipulating models and systems to behave in different ways, in particular in ways that we may find useful for our purposes. So in this unit we introduce the concept of reprogrammability from an abstract and mostly theoretical point of view, but in the next unit you will see how this can be applied to real-world systems, in particular biological cells, for different purposes.

Here we have some of our most recent results in this area. We show how to reprogram a cellular automaton to emulate any other cellular automaton. The way we did this was by way of what is called a compiler in computer science. A compiler is a small computer program that does nothing but translate between two different languages or systems. It is a real-time interpreter that allows you to transfer data from one system to another in such a way that both systems can communicate with each other. So what we did was to take a random cellular automaton, and we would find a compiler, based on exhaustive exploration and hacking tricks, in order to find the right initial condition for that cellular automaton to simulate some other cellular automaton (a small sketch of this idea is given below). By performing this exhaustively we were able to break some boundaries imposed by Wolfram’s classes. You may remember from previous lectures that Wolfram’s classes divide systems, in particular Elementary Cellular Automata, into four classes of different qualitative behaviour: classes 1 and 2, for example, are classes of systems that display very simple behaviour, class 3 systems produce random-looking behaviour, and class 4 systems display structure, a combination of simple features and random-looking ones.
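To make the idea of such a compiler concrete, here is a minimal sketch in Python under stated assumptions: it brute-forces a cell-wise block encoding under which one Elementary Cellular Automaton reproduces the space-time evolution of another. The function names (eca_step, search_compiler and so on), the block-encoding scheme and the sampling-based test are illustrative choices only, not the exhaustive procedure used to obtain the published results.

```python
# Minimal sketch (illustrative assumptions only): brute-force search for a block
# encoding ('compiler') under which one ECA emulates another.
from itertools import product
import random

def eca_step(config, rule):
    """One synchronous step of an Elementary CA with periodic boundaries."""
    table = [(rule >> i) & 1 for i in range(8)]
    n = len(config)
    return [table[(config[(i - 1) % n] << 2) | (config[i] << 1) | config[(i + 1) % n]]
            for i in range(n)]

def encode(config, block0, block1):
    """Cell-wise block encoding: each 0 becomes block0, each 1 becomes block1."""
    out = []
    for c in config:
        out.extend(block1 if c else block0)
    return out

def emulates(rule_a, rule_b, block0, block1, t, trials=20, width=8, steps=4):
    """Sample-based test: does rule_a, run t steps per step of rule_b,
    reproduce rule_b's evolution under the given block code?"""
    for _ in range(trials):
        cfg_b = [random.randint(0, 1) for _ in range(width)]
        cfg_a = encode(cfg_b, block0, block1)
        for _ in range(steps):
            cfg_b = eca_step(cfg_b, rule_b)
            for _ in range(t):
                cfg_a = eca_step(cfg_a, rule_a)
            if cfg_a != encode(cfg_b, block0, block1):
                return False
    return True

def search_compiler(rule_a, rule_b, max_block=3):
    """Brute-force search over small block codes and time factors."""
    for b in range(1, max_block + 1):
        for block0 in product((0, 1), repeat=b):
            for block1 in product((0, 1), repeat=b):
                if block0 == block1:
                    continue
                for t in range(1, b + 1):
                    if emulates(rule_a, rule_b, list(block0), list(block1), t):
                        return {'block0': block0, 'block1': block1, 'time_factor': t}
    return None

# Sanity check: any rule trivially emulates itself under the identity encoding.
print(search_compiler(rule_a=90, rule_b=90))
```

A returned encoding plays the role of the compiler: it is applied once to the initial condition, after which the emulating rule runs on its own.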
What we were able to show, however, is that we could reprogram instances from every one of the Wolfram classes and make them behave as cellular automata from all the other Wolfram classes. Here we have an example of a cellular automaton emulating another by using a compiler. The compiler is only used once in every computation, at the beginning, mediating between an initial condition and the normal computation of the emulating cellular automaton. The compiler transforms that initial condition into the instructions for the emulating cellular automaton to emulate another one. On the right you have examples of emulating and emulated cellular automata, illustrating how we were able to reprogram cellular automata to behave as any other, providing evidence that computer programs are incredibly reprogrammable. With all this information we were able to reconstruct emulation networks, in which two cellular automata are connected by a directed edge if one was able to emulate the other (a small sketch of this construction is given below). We found many interesting properties in such emulation networks, including long chains of emulations and rule hubs that were more reprogrammable than others, or more prone to be emulated by others, in some way re-establishing the Wolfram classes, but in a pragmatic rather than a fundamental way. More striking is the number of cellular automata able to emulate other cellular automata of very different qualitative computational behaviour. What we found is that the more cellular automata we explored, the more reprogrammable they turned out to be, in terms of how many other cellular automata of their own size (in number of colours) they were able to simulate, compared to those that were not reprogrammable at all. This kind of reprogrammability is actually a stronger version of Turing universality, called intrinsic universality, so in finding that more and more computer programs are reprogrammable in this way, our results seem to suggest that Turing universality is pervasive. Equipped with all these tools we were actually able to find several new universal cellular automata, some of them quite interesting. These examples illustrate two cellular automata that are the result of combining several Elementary Cellular Automata. In these figures each of these cellular automata is running a rule 110 computation, and we have shown that they are able to emulate rule 110 and are therefore Turing-universal themselves, because rule 110 is Turing-universal. These are possibly the only new universality results involving Elementary Cellular Automata to have been found in almost the last two decades. Remember that Turing universality means that a system can carry out any computable function, which means, for example, that these extremely simple cellular automata can emulate things like your entire Microsoft Windows computer, even if very slowly, because they have very few resources at any given time and would need all sorts of tricks to re-encode the computation, but they would nevertheless be able to do so. So one fundamental question to answer in connection with algorithmic information dynamics is how inducing perturbations in a system, in terms of their contribution to its underlying models, would actually change the dynamics of that system.
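Before turning to perturbations, here is a minimal sketch of how such an emulation network could be assembled from pairwise emulation tests. It reuses the hypothetical search_compiler from the previous sketch; the helper names and the degree-based reading of a "hub" are illustrative assumptions only.

```python
# Minimal sketch (illustrative only): an emulation network has a directed edge
# A -> B whenever rule A was found to emulate rule B; hubs are read off the degrees.
from collections import defaultdict

def build_emulation_network(rules, can_emulate):
    """can_emulate(a, b) -> bool is any pairwise emulation test."""
    edges = defaultdict(set)
    for a in rules:
        for b in rules:
            if a != b and can_emulate(a, b):
                edges[a].add(b)
    return edges

def degree_summary(edges, rules):
    """Out-degree: how many rules a rule emulates; in-degree: by how many it is emulated."""
    out_deg = {r: len(edges.get(r, ())) for r in rules}
    in_deg = {r: sum(r in targets for targets in edges.values()) for r in rules}
    return out_deg, in_deg

# Usage with the hypothetical search_compiler from the previous sketch:
# rules = range(256)
# edges = build_emulation_network(rules, lambda a, b: search_compiler(a, b) is not None)
# out_deg, in_deg = degree_summary(edges, rules)
# most_reprogrammable = max(out_deg, key=out_deg.get)   # emulates the most other rules
# most_emulated = max(in_deg, key=in_deg.get)           # emulated by the most other rules
```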
Performing perturbations at the level of the system to change the possible generating model is equivalent to simulating a change in the original model, which can then be run for longer to compare its output with the output of the original system and see the differences. So we did a first simple experiment: we took all possible random Boolean networks with up to 5 nodes, which, as you know, are many, because their number grows super-exponentially, yet because they are small we could explore all of them and extract statistically significant results. We calculated which perturbations would be positive, negative and neutral, as defined in the previous lessons, that is, which perturbations would have an effect on the original system model that would force it to incorporate new elements to account for the new changes at the system’s level (a small sketch of such an experiment is given below). And what we found was something that we were expecting and that can turn out to be of extreme help: moving systems in the algorithmic complexity landscape has a predictive effect on the dynamical landscape of the system, that is, on the phase space of the dynamical system. From the random Boolean network experiments we were able to produce the results in figure (d). The difference between cases is small but significant, because the number of attractors in such small graphs is tightly bounded, but given the number of cases and the average difference the results are consistent with what we also found in larger networks. In figures (e), (f) and (g), for example, numerical calculations of the change in the number of attractors in simple directed complete graphs, Erdős-Rényi and scale-free Boolean networks show very consistent results. In a complete graph there are only a few possible attractors, because everything is connected and the dynamics only depend on the direction of the edges and the node functions; here only negative perturbations produced more attractors, while positive and neutral perturbations preserved their number, as we had expected. In the aggregated results from 20 Erdős-Rényi random networks the separation was even clearer and consistent with the exhaustive experiment on small Boolean networks. Negative perturbations, that is, those that move the network towards greater algorithmic randomness, increase the number of attractors, while positive perturbations reduce them and neutral ones preserve them. Scale-free networks, like regular networks, were found to be more resilient to perturbations, but again only negative perturbations were able to increase the number of attractors. Notice that there are very few alternatives to doing something like this in order to control and manipulate the dynamics of networks. There is an area of active research on network controllability, but such approaches involve assuming distributions, having access to the precise node functions, which most of the time we do not have, as in the case of neurons and genes, and they also tend to assume that systems are linear or near equilibrium, which they likely are not in the real world; none of these assumptions are necessary in this approach connecting the algorithmic information space and the dynamical phase space.
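The following is a minimal sketch of this kind of experiment on a single toy Boolean network, under two loudly stated assumptions: algorithmic complexity is approximated here with a crude zlib-compression proxy (the actual experiments use CTM/BDM estimates, which behave far better on such small objects), and the network, its node functions and the perturbation are arbitrary toy choices. It shows the two ingredients discussed above: exhaustive attractor counting and the classification of a perturbation as negative, positive or neutral by the sign of the complexity change.

```python
# Minimal sketch (toy example; complexity is approximated with a zlib proxy rather
# than the CTM/BDM estimates used in the actual experiments).
import zlib
from itertools import product

def step(state, functions):
    """Synchronous update of a Boolean network; functions[i](state) gives node i's next bit."""
    return tuple(f(state) for f in functions)

def count_attractors(functions, n):
    """Exhaustively enumerate all 2^n states and count the distinct cycles they fall into."""
    attractors = set()
    for s in product((0, 1), repeat=n):
        seen = {}
        while s not in seen:
            seen[s] = len(seen)
            s = step(s, functions)
        cycle_start = seen[s]
        trajectory = [x for x, _ in sorted(seen.items(), key=lambda kv: kv[1])]
        cycle = trajectory[cycle_start:]
        # canonical rotation so the same cycle is only counted once
        attractors.add(min(tuple(cycle[i:] + cycle[:i]) for i in range(len(cycle))))
    return len(attractors)

def complexity(adjacency):
    """Crude proxy for algorithmic complexity: compressed length of the adjacency matrix."""
    flat = ''.join(str(b) for row in adjacency for b in row)
    return len(zlib.compress(flat.encode()))

def classify_perturbation(adjacency, perturbed):
    """Negative = moves the graph towards randomness (complexity goes up),
    positive = moves it away, neutral = no change, as in the lecture."""
    delta = complexity(perturbed) - complexity(adjacency)
    return 'negative' if delta > 0 else 'positive' if delta < 0 else 'neutral'

# Example usage on an arbitrary 3-node toy network:
fs = [lambda s: s[1] ^ s[2], lambda s: s[0] & s[2], lambda s: 1 - s[1]]
adj = [[0, 1, 1], [1, 0, 1], [0, 1, 0]]
perturbed = [row[:] for row in adj]
perturbed[0][1] = 0                      # delete a single edge as the perturbation
print(count_attractors(fs, 3), classify_perturbation(adj, perturbed))
```

In a full experiment one would also re-run the perturbed network to count its attractors again, and compare the change in attractor number against the sign of the perturbation.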
From these first principles, whereby systems that are far from random, displaying an inherent regular structure, have relatively deeper attractors and are thus more robust in the face of random perturbations, we can derive a (re)programmability index according to which algorithmic causal perturbations of network elements, pushing the system towards or away from algorithmic randomness, reveal qualitative changes in the attractor landscape even in the absence of a dynamical model of the system. For example, depending on the original complexity of a network, negative perturbations will increase the number of attractors and also make them more shallow, significantly changing the dynamics of the system. A network is thus more (re)programmable if its elements can freely move it towards and away from randomness, thereby also changing its dynamics. Notice also the asymmetry of the programmability curve: under random edge perturbations, simple networks are more sensitive than random networks, which is a sort of thermodynamic-like phenomenon connecting computation back to statistical mechanics in a very interesting way. This reprogrammability index assigns low values to simple and random systems and high values only to systems with non-trivial structure, and thus constitutes what is known as a measure of sophistication, one that tells apart random and simple cases from highly structured ones, in this case quantifying the algorithmic plasticity and resilience of a system in the face of perturbations. This plot shows what we call the (re)programmability space, defined by the Cartesian product of two programmability measures. One is the relative (re)programmability Pr(G), which takes into account the sign of the different segments of the information signature of G, telling us how many elements can send the network towards or away from randomness and how much of the signature lies below or above zero. The other measure, the absolute (re)programmability PA(G), accounts for the shape of the signature σ(G). Both of these indices contribute information about the (re)programmability capabilities of a system such as a network. The combined version is, effectively, a weighted index between the two (re)programmability measures that maximizes certainty, measured by the magnitude of the vectors: the closer to (1,1), that is, to a (re)programmability vector of magnitude √2, the greater the certainty of G being (re)programmable (a minimal sketch of these two measures is given below). So in the next unit we will see how we can apply all the ideas from this unit in behavioural, evolutionary and molecular biology, in particular to cognition and genetics.
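Finally, here is a minimal sketch of how the two (re)programmability measures mentioned above could be computed from an information signature. The exact published definitions and normalisations are not reproduced here; the formulas below are illustrative proxies that only preserve the qualitative idea from the lecture (Pr(G) from the sign balance of the signature, PA(G) from its shape), and complexity is again approximated with a zlib proxy instead of BDM.

```python
# Minimal sketch with illustrative proxies for Pr(G) and PA(G); not the published
# definitions, and complexity is again a zlib proxy instead of BDM.
import math
import zlib

def complexity(adjacency):
    flat = ''.join(str(b) for row in adjacency for b in row)
    return len(zlib.compress(flat.encode()))

def information_signature(adjacency):
    """sigma(G): contribution C(G) - C(G minus e) of every edge e, sorted largest to smallest."""
    base = complexity(adjacency)
    contributions = []
    for i, row in enumerate(adjacency):
        for j, bit in enumerate(row):
            if bit:
                reduced = [r[:] for r in adjacency]
                reduced[i][j] = 0
                contributions.append(base - complexity(reduced))
    return sorted(contributions, reverse=True)

def relative_programmability(signature):
    """Proxy for Pr(G): balance between the signature mass above and below zero."""
    pos = sum(x for x in signature if x > 0)
    neg = -sum(x for x in signature if x < 0)
    total = pos + neg
    return 0.0 if total == 0 else 1 - abs(pos - neg) / total

def absolute_programmability(signature):
    """Proxy for PA(G): how evenly the signature's total variation is spread (its shape)."""
    jumps = [abs(a - b) for a, b in zip(signature, signature[1:])]
    if not jumps or sum(jumps) == 0:
        return 0.0
    mean = sum(jumps) / len(jumps)
    spread = sum(abs(j - mean) for j in jumps) / len(jumps)
    return max(0.0, 1 - spread / mean)

def combined_programmability(pr, pa):
    """Magnitude of the vector (Pr, PA); sqrt(2) corresponds to the corner (1, 1)."""
    return math.sqrt(pr ** 2 + pa ** 2)

# Example usage on a hypothetical toy adjacency matrix:
# adj = [[0, 1, 1, 0], [0, 0, 1, 1], [1, 0, 0, 1], [1, 1, 0, 0]]
# sig = information_signature(adj)
# print(relative_programmability(sig), absolute_programmability(sig))
```

In this reading, a network whose elements are split between moving it towards and away from randomness, with a smoothly varying signature, lands near (1,1) in the (re)programmability space, which is where the lecture places the most (re)programmable systems.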