A theory can be very useful because it introduces the necessary objectivity while keeping some necessary subjectivity for my purposes. The theory in question is the theory of algorithmic information, also known as "Kolmogorov complexity." Basically, the Kolmogorov complexity of a string is the length in bits of the smallest computer program that produces the string. So, if someone asks you which of these strings on your screen looks more or less random, then with traditional probability theory you would not be justified in saying that the string at letter "b" is the least random, because any of the three strings has exactly the same probability of being produced by a uniformly random process generating strings of the same length. But with Kolmogorov complexity, you can find a short program that produces the non-random-looking string and thereby justify its non-random character. For example, there is a program that produces the string of alternating zeros and ones, and it can produce an arbitrarily long string with the same pattern without any growth in the size of the program itself (a minimal sketch of such a program follows at the end of this passage). This means that the string of alternating zeros and ones is of low algorithmic complexity. In other words, the program is the compressed version of an arbitrarily long string containing this alternating pattern.

Before coming back to Kolmogorov complexity, let me introduce Wolfram's behavioral classes. Stephen Wolfram found that cellular automata and other systems can basically be classified into four behavioral classes. Class I systems, for example, evolve into homogeneous states or simple behavior. Class II systems develop into periodic states, such as fractals and crystal-like objects. Class III systems are the random-looking ones. Class IV systems display persistent structures over time, with some parts looking random and other parts looking simple.

Here are some examples of cellular automata that display different complexity. You can see the evolution of each cellular automaton for some number of steps, with its compressed and uncompressed lengths plotted next to the evolution. The evolution of Class III cellular automata, like the Rule 30 you can see on your screen, looks very much random and is very hard to compress, while simple rules are very easy to compress (the second sketch below reproduces this comparison). Notice, however, that if something is hard to compress with a particular algorithm, it does not mean that it cannot be compressed at all. This is why the compressibility approach - based on Kolmogorov complexity - retains some subjectivity that I find useful for classifying systems.

The subjectivity of Kolmogorov complexity actually comes from the fact that the measure is, to use a fancy name, semicomputable. That means that no Turing machine can compute the length of the shortest program for a given string in general; only approximations - or upper bounds, to be more precise - are possible. But this is a desirable property of a measure of complexity: it is precisely because of its power that the measure turns out to be semicomputable, and it is this semicomputability that makes compression a sufficient test for nonrandomness. That means that if something is compressible, then for sure it has low complexity, but not the other way around. And this slide shows that by looking at the compressed lengths of their evolutions, one can classify the behavior of these cellular automata. This is very similar, from my perspective, to another important contribution of Alan Turing to science - Turing's imitation game, also known as the "Turing test."
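Here is the promised sketch, in Python, of a constant-size program for the alternating string; the length 64 is an arbitrary choice for illustration, since the same few lines can print the pattern at any length.

```python
def alternating(n: int) -> str:
    """Return the first n characters of the pattern 0101...

    The size of this program stays fixed no matter how large n gets,
    so it acts as the compressed version of an arbitrarily long
    alternating string - the mark of low Kolmogorov complexity.
    """
    return ("01" * n)[:n]

print(alternating(64))  # 64 characters of 0101... from a few fixed lines
```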
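And here is a minimal sketch of the compression-based comparison itself, assuming Python's standard zlib as an (imperfect) stand-in for the uncomputable ideal compressor. Rule 30 is the Class III rule from the slide; Rule 250 stands in for a simple rule, and the width and step counts are arbitrary illustration choices.

```python
import zlib

def evolve(rule: int, width: int = 513, steps: int = 256) -> str:
    """Evolve an elementary cellular automaton from a single black cell
    and return its whole space-time evolution as a text block."""
    # Wolfram's numbering: bit i of `rule` is the output for the
    # neighborhood whose (left, center, right) cells encode i in binary.
    table = {tuple(int(b) for b in f"{i:03b}"): (rule >> i) & 1
             for i in range(8)}
    row = [0] * width
    row[width // 2] = 1  # single black cell in the middle
    rows = []
    for _ in range(steps):
        rows.append("".join(map(str, row)))
        row = [table[(row[(i - 1) % width], row[i], row[(i + 1) % width])]
               for i in range(width)]
    return "\n".join(rows)

for rule in (250, 30):
    data = evolve(rule).encode()
    ratio = len(zlib.compress(data, 9)) / len(data)
    print(f"Rule {rule:3d}: compressed/uncompressed ratio = {ratio:.3f}")
```

The simple rule's evolution should compress to a small fraction of its size, while Rule 30's should not - the same classification signal plotted next to the evolutions on the slide.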
In the original Turing test, the test consists in determining whether, behind a screen, it is a human or a computer answering the questions. Turing used this approach to give an answer to the question of whether computers could be intelligent - the question that actually gave birth to the field of artificial intelligence. Turing replaced the question with a test and claimed that if a computer should pass the Turing test - that is, if it could fool a human into believing that at the other end there is a human instead of a computer - then the computer should be declared intelligent.

In my approach to computation, the compression algorithm can be seen as an interrogator of the programming capabilities of a system, where the questions are initial conditions given to the programs and the answers are the compressed lengths of the system's responses to these external stimuli. If the source is somehow programmable, then one should be able to declare that the source computes and is some sort of computer. But unlike in Turing's test, where things are black or white - intelligent or not - I would like a graded measure of computation: a measure indicating whether I can actually program something to behave like a computer program. So I came up with a measure based on this compressibility, which can be consulted in any of my papers on this subject on my webpage (a toy illustration of the idea follows below).

This approach can deal with situations like Chalmers' rock: I can assign a very low computing degree to a rock, indicating, in agreement with our intuition, that a rock is not like a computer, because it is very difficult, if not impossible, to program a rock to carry out any computation - if it makes any sense at all to think of programming a rock. And systems such as the Game of Life, or this Wolfram Rule 110 that you are looking at on your screen, which have been proven to be Turing universal, have large programmability scores, in agreement with the knowledge that we have about these cellular automata - unlike this other cellular automaton, which behaves very simply and is actually very difficult to reprogram, or to use for any computation other than very simple things. And the measure can be applied to entities other than computers, by measuring the programmability index of physical and biological systems.

Here is a sketch of how this would work, sorting various systems by variability and controllability, which are the two orthogonal properties that I think are fundamental for computing. For example, weather and Brownian motion have great variability but are very hard to control. At the other extreme, rocks have low variability and are therefore only trivially controllable, so they do not lie on the programmability diagonal either. On the diagonal you see living organisms and other programmable systems. In this way, I think the question of life - and of how to recognize natural and artificial intelligence, and also natural and artificial life - is deeply connected to the question of how to recognize computation, and whether things can be reprogrammed or not.

After all this, you may ask what I am doing at one of the best medical universities - the Karolinska Institutet. (I am a computer scientist, by the way.) Well, one can use and exploit all these ideas to reprogram living systems, to make them do things at will. In the end, this is the whole idea of programming something.
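Before the molecular example, here is the toy illustration promised above. It is only a sketch of the interrogation idea - the actual measures are the ones defined in my papers - with hypothetical stand-ins for a rock, a pure noise source, and a programmable system, and a deliberately crude score: the relative spread of compressed output lengths across inputs.

```python
import random
import statistics
import zlib

def compressed_length(s: str) -> int:
    return len(zlib.compress(s.encode(), 9))

def programmability_score(system, inputs) -> float:
    """Toy score: how much the compressed 'answers' vary with the 'questions'.

    Each input plays the role of an initial condition, each output the role
    of the system's behavior; the compressor is the interrogator.
    """
    lengths = [compressed_length(system(x)) for x in inputs]
    return statistics.pstdev(lengths) / statistics.mean(lengths)

rock = lambda seed: "0" * 4096  # inert: the same answer to every question

def noise(seed: int) -> str:    # highly variable but uncontrollable
    rng = random.Random(seed)
    return "".join(rng.choice("01") for _ in range(4096))

def toy_programmable(seed: int) -> str:
    """The input selects qualitatively different regimes of behavior."""
    if seed % 2:
        return "01" * 2048  # a simple, highly compressible regime
    rng = random.Random(seed)
    return "".join(rng.choice("01") for _ in range(4096))  # a complex regime

for name, system in [("rock", rock), ("noise", noise),
                     ("toy programmable", toy_programmable)]:
    print(f"{name:16s} score = {programmability_score(system, range(16)):.3f}")
```

The rock scores zero (every answer identical), the noise source scores near zero (every answer equally incompressible), and only the toy programmable system, whose regime can be selected through the input, scores high.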
Here is an example of finding the right input for porphyrin molecules to make them behave in different ways. That is what it means to program something: to find the inputs for the desired output. It is all about mapping the parameter space to the output space - in this case, what we call the "conformational space" of these molecules (a generic sketch of this inverse mapping closes this section). The conformational space is just the space of possible shapes that these molecules can take, and that is interesting because in biology, shape is function. This is a very powerful idea, because one can program molecules to do things like carrying a payload to be released when some conditions are met - releasing, for example, a chemical that may be used as a biological marker or even to fight disease.

As a proof of concept, we have applied these programmability measures to a large set of biological simulations from a well-known database of biological models called "BioModels." We have found that the programmability measures are sensitive to the robustness of different models, apparently distinguishing between models related to cellular versus metabolic processes. Here are some plots of the programmability trajectories of different models, some showing low variability while others have high variability but different degrees of controllability.
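To close, here is a generic sketch of programming as inverse mapping; the toy system and the target condition are hypothetical stand-ins, since in the porphyrin case the outputs would be molecular conformations rather than lists of numbers.

```python
def program(system, parameter_space, is_desired):
    """Programming as inverse mapping: search the parameter space for the
    inputs whose outputs land in the desired region of the output space."""
    return [p for p in parameter_space if is_desired(system(p))]

# Toy stand-in for a molecule: the output is a small "shape" computed from
# the input parameter; we search for every input that yields a target shape.
toy_system = lambda p: [(p * k) % 7 for k in range(5)]
hits = program(toy_system, range(100), lambda shape: shape[1] == 3)
print(hits)  # all parameters whose "conformation" has the desired feature
```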