Complexity Explorer Santa Fe Institute

Lecture: Epistemological emergence

Lead instructor: Miguel Fuentes

1.1 Lecture » English Transcript

My name is Miguel Fuentes and today I want to talk to you about emergence.

I will try to connect philosophical ideas about emergence with ideas that come from complexity science. In particular, I am going to connect emergence to metrics from complexity science using concepts that come from the Santa Fe Institute, principally Murray Gell-Mann’s Effective Complexity.

To begin, I would like to cite the first relatively modern work that mentions the concept of emergence: the work of Lewes from 1875. There are, of course, earlier works that came well before his, but it was Lewes who coined the term “emergence” in his work, Problems of Life and Mind.

From that point on, there is a rather heated discussion about emergent processes, which essentially refer to the appearance of a phenomenon that cannot, in principle, be connected to the basic processes from which it arises. Remember that this discussion takes place before theories like quantum mechanics.

It was clear that such processes existed, and there was also notable progress in nonlinear science in the early and mid 1900s. So all this was happening at a time when there were processes that could not be explained in a fully accurate way in terms of what are called first-principles theories.

The emergent phenomena to which I am referring are enormous in number. We can refer to patterns: the occurrence of patterns in nature in general. You can think about the patterns you can find on the skin of animals, patterns of animal movement, flocks of birds, which nowadays are quite famous, and also social processes. Today, we can also refer to global phenomena such as climate change. Remember that these are phenomena that we cannot connect in a one-to-one way with any basic process where the phenomena occur.

So, the discussion of emergent processes began in philosophy in a very important way, as I mentioned with Lewes’ work, and of course continued afterward. We can distinguish several discussions regarding emergence: strong emergence, weak emergence, ontological emergence, epistemological emergence, and so on.

What I am going to discuss today, and what interests me in particular, is epistemological emergence. When I refer to epistemology, and specifically to epistemological emergence, I am referring to what connects the observer with the natural world, and to how difficult it is for the observer to give an account of the natural world, or to know it, using tools; in particular, from this point of view, scientific tools. Of course, the discussion of epistemology goes beyond this, but what I want to present today is framed more or less in those terms.

The epistemological process, then, is the search to understand a phenomenon and not only to understand it, but to be able to describe the different interactions the phenomenon may have with its environment and to anticipate new solutions that the system may present. That is to say, in the end, to predict.

What we are going to try to tackle is how we can try to understand a phenomenon from a quantitative point of view. And to do that we are going to start by first recalling certain metrics that have come from Complexity Science. 

And in particular, one very, very important metric comes from Kolmogorov in his work, Three Approaches to the Quantitative Definition of Information, published in 1965, in which he generates a metric of complexity based on an algorithm. Recall that Kolmogorov’s work comes years after Shannon’s, which provides a statistical method for measuring information. Shannon’s work is from 1948, if I am not mistaken, and it is titled A Mathematical Theory of Communication.
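For reference, and this is the standard textbook form rather than anything written out in the lecture, Shannon’s statistical measure assigns to a source with symbol probabilities p_i the entropy

H(X) = -\sum_i p_i \log_2 p_i

measured in bits, which quantifies the average information per symbol.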

So how can we approach the problem of emergence from a quantitative point of view? Now we know, we can use metrics from complexity science. 

The interesting thing about Kolmogorov’s measure, and I will repeat it because it is quite interesting, is that his idea is to generate a measure of information by means of an algorithm. Let’s discuss a bit more what I mean by that. I am going to give some not too technical definitions that I hope will convey Kolmogorov’s fundamental ideas.

So, given an object or a phenomenon, I can create a vector of data that describes it, and then ask for its Kolmogorov complexity: the length of the shortest, most compact string of code that can reproduce the object or phenomenon. In this way, you can see that if the object I want to reproduce or describe is, and I am going to use the word, very complex, meaning it has many parts I want to describe, or subtleties I want to capture, then the string of code will be quite long. So Kolmogorov would assign a high level of complexity to this object.
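Stated a bit more formally, and again this is the standard textbook form rather than notation used in the lecture, the Kolmogorov complexity of an object encoded as a string x is

K_U(x) = \min \{ |p| : U(p) = x \}

the length in bits of the shortest program p that makes a fixed universal computer U output x.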

The problem with Kolmogorov’s measure is that in some examples this idea of complexity falls apart. One of the most common examples is to imagine a vector of completely random numbers. If I wanted to describe that vector using the shortest possible string of code, I would find that the shortest description is essentially the random string itself; it cannot be compressed. So, for a vector that is totally random, and in some sense even trivial, the Kolmogorov measure is going to be very high.

Clearly, the Kolmogorov measure is useful for some phenomena or entities that I wish to describe. But in other cases, like this one, as trivial as a vector of random numbers, the measure no longer carries the intuitive meaning of the word complexity.
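Kolmogorov complexity itself is not computable, but a rough and commonly used proxy is the length of a losslessly compressed description. The following is a minimal sketch, my own illustration rather than anything from the lecture, using Python’s zlib to show how a random vector resists compression while a highly regular one does not:

```python
import zlib
import random

def compressed_size(data: bytes) -> int:
    """Length of a lossless (zlib) compression of the data.

    This is only a crude stand-in for Kolmogorov complexity,
    which is not computable in general.
    """
    return len(zlib.compress(data, 9))

random.seed(0)

# A highly regular "vector": the same byte repeated many times.
regular = bytes([7] * 10_000)

# A "vector" of completely random bytes.
noisy = bytes(random.randrange(256) for _ in range(10_000))

print("regular:", compressed_size(regular), "bytes after compression")
print("random: ", compressed_size(noisy), "bytes after compression")
# The random vector barely compresses, so this proxy assigns it a very
# high "complexity" -- exactly the counterintuitive behavior described above.
```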

Note that for the description of the phenomenon, I am going to use certain words in a way that is not so rigid, because there are discussions about them. I will use the words theory and model interchangeably, although we could cite debates on the subject; there are debates, even now, about the distinction between a theory and a model.

So, we were with Kolmogorov and we commented on the problem with Kolmogorov’s measure. 

Note also, and this is why I was referring to the problem of the theory or model being used, that to describe the phenomenon using a string of code there has to be some kind of theory that supports the algorithm being used to describe it. So there we begin to enter the epistemology of the problem around emergent phenomena.

Murray Gell-Mann, around 1994, wrote a book for the general public in which he discussed some ideas that led him to define, in a nontechnical manner in that text, Effective Complexity. What Murray Gell-Mann did was a very interesting and, I would say, quite innovative step, because again note the timing between Shannon’s work, then Kolmogorov’s work, and finally Gell-Mann’s: many years went by, decades. Gell-Mann made a very strategic move in the sense of getting rid of the problems that Kolmogorov’s metric had.

So, let’s go back to Kolmogorov’s metric a bit and recount how Gell-Mann got around the problem in a very intelligent and interesting way. In Kolmogorov’s measure, we had to describe the entire vector of the phenomenon or entity under discussion. What Gell-Mann did was to focus on the regularities of the phenomenon.

This word is very important because it will connect to something else that we will continue discussing in another video soon about the way in which science takes a phenomenon and tries to describe it. 

So, Gell-Mann says: if I have a phenomenon or an entity that I want to describe, I can use a vector to describe it, along with the data that comes from it. What I am going to describe, then, are the regularities of that phenomenon. The fluctuations, the noise, I am not interested in describing; I am going to leave them aside for now, and later we will discuss what can be done with them.

So, suppose we have a phenomenon in the physical-chemical domain, for example, which has some pattern. I am speaking in a not so technical way, but you understand what a pattern is: something that has some kind of regularity and, of course, can have some type of fluctuation.

What Gell-Mann tells us is: let’s take into account the regularities and describe only the regularities. In this way, Gell-Mann gets around Kolmogorov’s problem. He leaves aside all the fluctuations, everything we believe to be totally incidental, a distinction Gell-Mann also introduces, and describes only the package of regularities.

And this is what Gell-Mann calls Effective Complexity: making a distinction between the regular and the accidental. Gell-Mann gives us, in some way, a metric for describing phenomena that now allows us to say that some are more complex than others.
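As a rough illustration of this split between the regular and the accidental, here is a minimal sketch, my own toy example rather than anything from the lecture, again using compressed length as a crude proxy for description length: the observed data is a sinusoid plus noise, and the regularities are captured by a tiny fitted model.

```python
import zlib
import numpy as np

rng = np.random.default_rng(0)
t = np.linspace(0, 4 * np.pi, 2000)

# Observed data = a regular pattern (a sinusoid) plus incidental noise.
regularity = np.sin(t)
observation = regularity + 0.3 * rng.standard_normal(t.size)

# A Kolmogorov-style description has to account for every noisy value.
# Lossless compression of the raw samples is a crude proxy for that cost.
raw_bytes = observation.astype(np.float32).tobytes()
print("compressed size of the full noisy record:",
      len(zlib.compress(raw_bytes, 9)), "bytes")

# The effective-complexity move: describe only the regularities.
# Here the regular part is captured by a tiny model, a sinusoid fitted
# by least squares, so just two coefficients (weights of sin and cos).
design = np.column_stack([np.sin(t), np.cos(t)])
coeffs, *_ = np.linalg.lstsq(design, observation, rcond=None)
print("numbers needed to describe the regularities:", coeffs.size)
# The fluctuations around the fit are treated as accidental and left aside.
```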

How does this work?

Because when the regularities we want to describe are richer, the compact string of code that describes them will be longer, and the effective complexity will be higher. So now, having this whole set of tools, how can we connect it back to the problem we began with?

We can say that it began with a heated discussion in the philosophical field, which in some way also reached, of course, the field of natural sciences with these metrics that come from complexity science. 

The idea that I propose, which clearly comes from the ideas of Effective Complexity, is the following: if we want to describe a phenomenon and attribute to it the characteristics of emergence, we first have to state again that we cannot make a one-to-one connection between what happens at the macroscopic level, let’s say the level where we are seeing the phenomenon, and the basal or microscopic level, where the interactions of certain elements generate it.

Notice that we now have a metric that lets us say the following: if we have a compact algorithm that can account for the phenomenon from first principles, even though we observe the phenomenon at a higher level, then we will be able to describe, in some way, how emergent the phenomenon is.

To give a simple example, suppose we have a pattern that comes from Turing-type equations, which we will put in the references. A Turing bifurcation happens, basically, when we model a process with two diffusing chemicals, one called the activator and one called the inhibitor. The two diffusion rates are different, and that interaction naturally produces Turing patterns.
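As a concrete sketch, and this is my own illustration rather than the model used in the lecture, here is a minimal reaction-diffusion simulation from the Gray-Scott family, a convenient stand-in for Turing-type activator-inhibitor dynamics; the parameter values are standard illustrative choices and other regimes would need different values. The point to notice, which anticipates the argument below, is that the very same short program covers both the unpatterned and the patterned regime; only the parameters change.

```python
import numpy as np

def laplacian(z: np.ndarray) -> np.ndarray:
    """Five-point Laplacian with periodic boundaries."""
    return (np.roll(z, 1, 0) + np.roll(z, -1, 0) +
            np.roll(z, 1, 1) + np.roll(z, -1, 1) - 4.0 * z)

def gray_scott(n=128, steps=10_000, Du=0.16, Dv=0.08, f=0.060, k=0.062, dt=1.0):
    """Integrate the Gray-Scott reaction-diffusion system on an n x n grid.

    u is a substrate and v a self-amplifying species; for some (f, k) the
    uniform state persists, for others spots and stripes appear. The same
    few lines of code describe both regimes.
    """
    u = np.ones((n, n))
    v = np.zeros((n, n))
    # Seed a small square of v in the middle, slightly perturbed.
    s = slice(n // 2 - 8, n // 2 + 8)
    u[s, s], v[s, s] = 0.50, 0.25
    v += 0.02 * np.random.default_rng(0).random((n, n))

    for _ in range(steps):
        uvv = u * v * v
        u += dt * (Du * laplacian(u) - uvv + f * (1.0 - u))
        v += dt * (Dv * laplacian(v) + uvv - (f + k) * v)
    return u, v

u, v = gray_scott()
print("pattern contrast (std of v):", float(v.std()))  # near 0 means no pattern
```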

What I am going to argue is that in these types of cases epistemological emergence does not exist, because what I have is the same model describing the phenomenon before the Turing bifurcation, where there are no patterns, and after the Turing bifurcation, when there are patterns. It is from this point of view that, basically, very little margin is left to say that the process is emergent.

From my point of view, and I want to make clear that this is my point of view, there are still some places where this discussion remains open, generally associated with philosophy of mind and the work done in that field.

On the other hand, interestingly, I also think there could be an equally important discussion at present about the emergent properties that occur in artificial intelligence processes, where the decisions of artificially intelligent systems are difficult for us to understand, and quite a bit of work is still needed to understand what is producing a given solution.

So, to recapitulate and not make the talk too long: we have an interesting, historically rich discussion of emergent processes, and again, today’s talk is about epistemological emergence. On the other hand, we have very convincing work from the middle of the last century, such as Shannon’s and, of course, Turing’s. And then, I would say, contributions crucial to the later discussion: Kolmogorov’s work, where he defines Kolmogorov Complexity in an algorithmic way, and then Murray Gell-Mann’s.

Then comes the fundamental step of Murray Gell-Mann, which I find very subtle, and the subtlety is very beautiful: he is able to unravel the problems of Kolmogorov’s measure. And then we see how these two worlds, which seem somehow quite far apart, connect in a natural way.

When one speaks of epistemological emergence and uses Gell-Mann’s tools, one can bring everything to a place where epistemological emergence today, from this point of view, would not have much importance, given the scope of the theories and models that we use. Note, then, that when we speak of epistemological emergence in these terms, we have markedly reduced its field of action, since we are using continuous models, as in the case of the Turing equations, discrete models, or even algorithms, as in the case of agent-based simulation.

We are able to describe a phenomenon using a rather compressed encoding of information, passing through regions of the phase space of the system where there is no emergent property to regions where the emergent property does exist. This happens in a very natural way and without algorithmic cost.

The point I want to stress is that there are some spaces left, in my view, where the philosophical discussion of epistemological emergence can be quite interesting.

In particular, I see two places where this can occur: one in philosophy of mind, and the other in contemporary topics related to artificial intelligence that are very interesting to work on, where decision-making in artificial intelligence systems can lead to a somewhat deeper, more robust discussion about aspects of epistemological emergence, and about whether we can ever get around the problem of reaching a complete understanding of the decision-making process in artificial intelligence.

I want to emphasize that epistemological emergence will also be connected in future talks with the process of scientific progress. Why is this the case? Basically, my understanding is that logically the scientific frontier is always moving and new challenges continually appear for scientists, which are often difficult to deal with using the theories in use at the time.

So, notice that there is a rather interesting connection, perhaps subtle, between what we have just discussed and the evolution of scientific theories. To use a measure, or the concept of a measure, like Kolmogorov’s or Murray Gell-Mann’s Effective Complexity, we need to have a theory or a model that provides support for our code, our algorithmic way of understanding the new phenomenon.

So, clearly there is going to be a tension regarding how much I add to the models I am using: what I will be doing is adding information to this code. Each time, I increase the complexity of the theory, of the model, that I am using to explain the phenomenon, and I try to take in new information that I might have discarded as accidental in the past. This is, without a doubt, the connection between what we have been discussing concerning measures of complexity and the evolution, or the decision-making process, by which a new theory is accepted or not in the scientific community.

Basically, what I am going to argue, along with Murray Gell-Mann, who had already stated this in his book from 1994, is that in this decision-making process there is a tension between how much I have to add to the model to take care of the new challenges appearing in science and how much, of course, I treat as accidental. Notice, then, that there is a connection here: a thread in the narrative that takes us back to the late 1800s, with Lewes’ work, and connects it with work on information metrics.
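One rough way to write down this trade-off, in the spirit of Gell-Mann and Lloyd’s notion of total information, and this is my paraphrase rather than a formula from the lecture, is

\Sigma \approx \mathcal{E} + S

where \mathcal{E} is the effective complexity, the length of the shortest description of the regularities, and S is the entropy of whatever is treated as accidental. Enlarging the model shifts information from S into \mathcal{E}; a good theory keeps the sum small while still accounting for the data.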

We have discussed the importance of epistemological emergence, where we have made it clear that, using these metrics, epistemological emergence may have to be redefined and may be left in a weakened position. We then go one step further and connect these ideas with the evolution of scientific theories.

And we can understand, perhaps, and this is my point of view, that in moments of crisis and of change, to use Kuhn’s classic idea of paradigm shifts, the way a new theory can reduce complexity is very important. Note that at a time of paradigm shift, what usually happens is that the old theory cannot cope with the new empirical data, so to adjust the theory I need to make many tweaks. A rather interesting case occurred with the addition of epicycles in early cosmological theories.

The idea is to somehow connect this with having low complexity when a new theory appears: one that reduces the code and is able to reproduce all the data, the old data and the new data that keep appearing. It is this type of argument, with its particularly interesting historical thread, that motivates me to continue working and researching in this field.

I hope you have enjoyed this discussion. And of course, we will continue working and researching, and your thoughts are very welcome.