In this section, we will briefly explain how causality, and the inference of functions and computer programs, is related to complexity. Imagine you see an object with part of it blocked by another object, in this case a black square. Typically, if you ask people what this object may be, they will complete the picture and guess that it is a full circle. In some way, it seems that our minds are hardwired to complete the picture with the simplest shape; we are deeply biased towards simple forms. If there were something else behind the black square, one would certainly be a little surprised. Remember that, before getting into the technical details of the concept of entropy as we will do later, entropy as defined by Shannon is traditionally taken as a measure of surprise. Now, something like classical information theory may be able to describe, but not explain, this bias towards simple forms, by establishing that we tend to favor configurations that surprise us the least, that is, configurations with low entropy. But why is this so? My collaborators and I have suggested that such hardwiring for simple things comes from living in a world removed from randomness, and thus our minds have evolved with a high content of algorithmic structure. We will discuss this in more detail, but this example is meant to show how inference is related to complexity, or rather to simplicity as opposed to randomness, and even to some form of subjectivity that may be cognitive or perhaps even more fundamental. We will also see that the type of randomness we are talking about is not statistical in nature, but algorithmic.

So, what is complex? Perhaps a useful way to see how something counts as complex, as opposed to simple, is the way in which one may classify certain human diseases, because towards the last module we will be applying all these ideas, concepts, and tools to areas of molecular biology and genetics that are deeply related to human disease and the human condition.
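As a small aside, the sense in which Shannon entropy measures surprise can be sketched in a few lines of Python. This illustration and its function names are ours, not part of the course material:

```python
from collections import Counter
from math import log2

def shannon_entropy(s):
    """Shannon entropy, in bits per symbol, of the empirical symbol
    distribution of the string s: H = -sum_x p(x) * log2(p(x))."""
    counts = Counter(s)
    n = len(s)
    return sum(-(c / n) * log2(c / n) for c in counts.values())

print(shannon_entropy("0000000000"))  # 0.0 bits: fully predictable, no surprise
print(shannon_entropy("0110100010"))  # higher entropy: each symbol is less predictable
```

Note, however, that a string such as "0101010101" has maximal entropy per symbol under this symbol-by-symbol model while being algorithmically trivial (a two-line loop prints it), which is one way of seeing why the randomness discussed later is algorithmic rather than statistical.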
Most diseases are complex, with scientists facing challenges related to the observer, the quality and quantity of measurements, apparent noise, and highly interacting systems with multiple, intertwined causes. Some diseases, such as multiple sclerosis, Alzheimer's, Parkinson's, and most cancers, are very complex in that they can be produced by multiple factors rather than only one. They depend on many variables, both genetic and environmental, and they are highly unpredictable. In contrast, simple diseases have single, or easily identifiable and isolated, causes. They may come from point genetic mutations, as in a type of breast cancer. The outcome of simple diseases is much easier to predict in that they have well-defined effects. Examples of simple diseases and conditions under this specific definition include cystic fibrosis, Down syndrome, and Huntington's disease.

How may a classification of this type, contrasting simple and complex, help us to, for example, treat these diseases and conditions? Well, one first thing this classification may imply is that one-size-fits-all drugs can only work well for simple diseases: having multiple causes and factors, complex diseases will manifest for different reasons in different people. So the current pharmaceutical approach and business model of medicine will fail, and new paradigms are necessary. One such paradigm is known as Personalized Medicine, and the idea is that one should be able to manufacture a specific drug for a specific person. So what scientists aim, or should aim, at is understanding the causes, to have a better chance of producing new drugs that steer the way in which a disease develops instead of controlling only its effects. This kind of simple versus random behavior can also be modeled mathematically, to study ways of understanding it better, even if the model will often be oversimplified. We will later see in detail how, but just as an example we can use cellular automata again.
If we look at the response of the elementary cellular automaton with rule number 10, we will find that no matter what the input is, the system transports the input signals along clean stripes that do not interact with each other, and it always produces the same qualitative behavior. So it is, in a very simplified way, modeling an aspect of a simple disease. In all four cases the same rule is applied and the same behavior is obtained for very different initial conditions. But if you take the behavior of more complicated rules, such as rule 22 depicted here, you will find that even small changes in the initial conditions, or small perturbations along the way, will have a high impact on the final outcome. This behavior is, for example, more similar to the way in which tumors may evolve, as they often grow and spread in unpredictable ways. In all four cases the same rule (rule 22) was applied but for different initial conditions, and the result is highly unpredictable. One of the topics this course will cover is how to characterize these behavioral differences, especially when the rule is unknown, because we want to understand the system from a mechanistic perspective, that is, asking for the generating mechanism behind a behavior when we have no access to a source code. In artificial examples like this one, but also in real-world systems such as biological cells, the goal will be not only to attempt to crack the code behind natural phenomena but also to try to manipulate and reprogram the way in which these systems evolve and behave. And for that, we will go back and forth between artificial and natural systems, with applications in the biological and cognitive sciences.
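The contrast between rule 10 and rule 22 is easy to reproduce. Below is a minimal sketch of an elementary cellular automaton in Python, assuming periodic (wrap-around) boundaries and the standard Wolfram rule numbering; the function names are our own, chosen for illustration:

```python
def eca_step(cells, rule):
    """One ECA step: each (left, center, right) triple, read as a 3-bit
    number, selects the corresponding bit of the rule number."""
    n = len(cells)
    return [
        (rule >> ((cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

def evolve(cells, rule, steps):
    """Rows of the space-time diagram, starting from the row `cells`."""
    history = [list(cells)]
    for _ in range(steps):
        history.append(eca_step(history[-1], rule))
    return history

start = [0] * 64
start[32] = 1  # a single live cell

# Rule 10: the lone signal simply shifts left, one cell per step; stripes
# never interact, and the qualitative behavior is the same for any input.
print([row.index(1) for row in evolve(start, 10, 5)])  # [32, 31, 30, 29, 28, 27]

# Rule 22: the same input produces an intricate pattern whose population
# fluctuates irregularly; small changes to `start` reshape the whole diagram.
print([sum(row) for row in evolve(start, 22, 5)])      # [1, 3, 2, 6, 2, 6]
```

Characterizing when an unknown rule behaves like rule 10 and when it behaves like rule 22 is the kind of question taken up later in the course.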