When humans started thinking about the world, they could not explain most of it and would attribute everything to magic or the will of moody gods. This is exactly the kind of explanation that is not mechanistic. Non-mechanistic models can come in many forms. Many cultures have attached importance to astronomical events in connection with human and terrestrial affairs, and in some way we continue doing so. Every time we face something unexplainable, some people still attribute it to extraterrestrial intelligence or paranormal phenomena. But the first recorded progress towards finding tools to separate fact from fiction and understand the natural world in a way that supports modern science started with Greek philosophy and the development of some form of logic in arithmetic and geometry, which is still today at the very foundations of modern science. Aristotle was the first to deal with the principles of formal logic in a systematic way. The history of logic is a history of valid inference. In fact, "logos" means reason in Greek, among other meanings. Logic, as a formal way of reasoning, revived in the mid 19th century at the beginning of a revolutionary period when the subject developed into a rigorous and formal discipline which took as its paradigm the method of proof used in Greek mathematics.

In my personal collection, I have the fortune of owning a book that was a foundation stone of the field of geometry. This copy of Euclid's Elements was published in 1573, but the work itself was written around 300 BC, well before the Bible itself. The concept of mathematical proof reached maturity at this point and was systematically used thereafter.

The 13th century witnessed the recovery of Greek philosophy in medieval Europe and, coming back to mechanistic models, one philosopher named William of Occam, born not that far from where I am in England, and believed to have studied right here at the University of Oxford, kept thinking about the problem of finding arguments to rule out explanations of the Rube Goldberg machine type, that is, over-complicated explanations for even the simplest phenomena. We know today his argument as Occam's Razor, also known as the law of parsimony. Occam's Razor establishes that when presented with competing models, one should select the one that makes the fewest assumptions. So in the case of the self-operating napkin drawn by Rube Goldberg, if we had not seen the mechanisms behind it operating at first hand, we would simply discard them and favor simpler models as possible explanations, for example, that the person, or something else, is operating the napkin and not an overly complicated machine. In science, Occam's Razor is used as a guiding principle in the development and selection of theoretical models, and we will see that there is good evidence in its favor. One of my own contributions to the topic is to show that Occam's argument is not only formalized by a concept central to this course, the concept of algorithmic complexity, but also that numerical approximations to algorithmic complexity provide strong evidence in favor of Occam's Razor. We will see this later formally and in detail, and we will see how algorithmic complexity is connected with the kind of simplicity advocated by Occam. This type of logic is equivalent to a type of algebra known as Boolean logic, due to George Boole.
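Before moving on to Boolean logic, here is a very rough illustration of the parsimony idea just described. It is only a sketch: compressed length is a crude stand-in for algorithmic complexity, not one of the numerical methods discussed later in the course, and the observation and the two competing descriptions below are invented for illustration.

```python
import zlib

# The same sequence of observations (invented for illustration): 100 alternating bits.
observation = "01" * 50

# Hypothesis 1: a simple generating rule.
hypothesis_simple = "repeat '01' fifty times"

# Hypothesis 2: an over-complicated, Rube Goldberg-style account that
# gives every bit its own ad hoc mechanism.
hypothesis_verbose = "; ".join(
    f"at position {i} a hidden mechanism number {i} outputs {b}"
    for i, b in enumerate(observation)
)

# Compressed length in bytes as a crude proxy for description length:
# the description with fewer assumptions compresses to far fewer bytes.
for name, h in [("simple", hypothesis_simple), ("verbose", hypothesis_verbose)]:
    print(name, len(zlib.compress(h.encode())))
```

The only point of the sketch is that "fewest assumptions" can be read as "shortest description", which is the intuition that algorithmic complexity will later make precise.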
Boolean logic turned out to be especially important for computer science because it fits nicely with a binary system, in which each bit has a value of either 1 or 0, just as in true or false. Boolean logic will be important later in this course because we will be using a type of system called a Boolean network. Boolean networks are networks whose nodes are governed by propositional formulae, such as the ones in a truth table, and which process information according to the way the network is connected. In the truth table shown, for example, there is a formula written P ∧ Q, read "P and Q", where P and Q are propositions that can be false or true, represented by the letter F or T, which can also simply be a 0 or a 1. The connective "and" is called a Boolean operator; it assigns a truth value to the combination of the two propositions, telling us whether they hold together. For example, for the operator "and", only when both propositions P and Q are true is the whole formula true. Say that proposition P says that the lights are on and proposition Q says that there are empty seats. Each statement may be true or false, but if I say that the lights are on and there are empty seats, such an assertion can only be true if both P and Q are true at the same time. This is different from the Boolean operator "or": if we claim that the lights are on or there are empty seats, then the proposition denoted by P ∨ Q is true if either of the two propositions is true, even if the other is false, for example, if the lights are off but there are empty seats. Notice that formulae of this kind can be arbitrarily long, and so they become less trivial and can capture much more sophisticated situations when other operators are combined.

It can now be seen how this simple logic is deeply connected to causality. For example, the formula P ∨ Q can only be true if either P, Q, or both are true, and thus the observation that P ∨ Q is true or false tells us something about the truth values of the causes, that is, the truth values of P and Q individually as causes of the formula's resulting truth value. However, in general, math and logic are more interested in truth, which in turn is connected to causality, but only indirectly, because a proof is not a generating mechanism and it need not be constructive. That is, it may or may not be possible to write, for example, a computer program that carries out the details of a mathematical proof, as in the case of Cantor's famous diagonalization method, which is non-constructive. But in the 20th century, truth and proof in logic were broken apart and separated in a very fundamental way, and this somehow broke the direct connection to causality. Nevertheless, as we will see later in the course, when we cover some aspects of the theory of computation, this rupture was essential to understanding some fundamental properties of the ways in which we can reach truth and causation by means of pure reasoning, logic, mechanism, and algorithm. Indeed, the fact that logic and proof were disconnected did not mean that we went back to the times of magic and astrology to explain complex phenomena. On the contrary, the development of these areas led to new methods of calculation and to the way in which we approach science today, in particular through the lens of computation, and, even more important, to the concept of universal computation that underlies the modules we will explore in detail.
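To make the truth table and the idea of a Boolean network concrete, here is a minimal Python sketch. The three-node network, its wiring, and its update rules are hypothetical examples invented for illustration, not taken from the course material. The sketch first prints the truth tables of the operators "and" and "or", and then repeatedly updates a tiny Boolean network in which each node's next state is a Boolean formula of the current states of the nodes it is connected to.

```python
from itertools import product

def AND(p, q):  # the Boolean operator "and"
    return p and q

def OR(p, q):   # the Boolean operator "or"
    return p or q

# Truth tables for P AND Q and P OR Q.
print("P      Q      P AND Q  P OR Q")
for p, q in product([True, False], repeat=2):
    print(f"{str(p):6} {str(q):6} {str(AND(p, q)):8} {str(OR(p, q))}")

# A hypothetical three-node Boolean network: each node's next state
# is a Boolean formula over the current states of its neighbors.
rules = {
    "A": lambda s: s["B"] and s["C"],   # A <- B AND C
    "B": lambda s: s["A"] or s["C"],    # B <- A OR C
    "C": lambda s: not s["A"],          # C <- NOT A
}

def step(state):
    """Synchronously update every node according to its rule."""
    return {node: rule(state) for node, rule in rules.items()}

state = {"A": True, "B": False, "C": True}
for t in range(4):
    print(t, state)
    state = step(state)
```

The point of the sketch is only that the same connectives that appear in the truth table, once wired into a network, determine how each node's state is caused by the states of the nodes connected to it.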
But what is important to understand is that causality is actually something very difficult to calculate, and this is related precisely to the way in which truth and proof were broken apart by formal logic.