One may ask how fundamental these measures really are; perhaps one could come up with a better measure of randomness. One of the most surprising results in algorithmic complexity is what is known in mathematics as the "convergence in definitions". A similar phenomenon occurred with the notion of an algorithm in the 1930s, when Kurt Gödel, Alonzo Church, Alan Turing, Emil Post, Stephen Kleene and Rózsa Péter, among many others, tried to characterise the notion of an algorithm through different independent approaches, all of which turned out to be equivalent in computational power. This gave the sense that the concept of algorithm had been mathematically captured by all these characterisations, leading to what is known today as the Church-Turing thesis: the strong belief that any practical definition of algorithm will collapse into one equivalent to those that Turing and Church provided.

Something similar has happened with algorithmic randomness, leading to what Jean-Paul Delahaye, my former PhD thesis advisor, calls the Martin-Löf-Chaitin thesis, in analogy with the Church-Turing thesis: the thesis that every definition of randomness will be equivalent to one of the previous characterisations. And indeed, when Per Martin-Löf, Greg Chaitin, Andrei Kolmogorov, Leonid Levin and Claus-Peter Schnorr, among many others, independently proposed characterisations of randomness, they found that all these definitions were essentially the same, being equivalent to one another once the Church-Turing thesis is assumed, as almost everybody in the field does, not least because the definitions of computation have proven very robust. Not only was each of these definitions able to capture intuitive notions of randomness such as compression, predictability and typicality, as discussed before, but each does so in a very general and comprehensive way.

Notice, however, that statistical randomness is not on the list of equivalent definitions of randomness, because, surprisingly, despite its pervasive use, misuse and abuse in science, statistical randomness and measures such as Shannon entropy do not provide the accepted mathematical characterisation of randomness.

We have seen how, according to algorithmic complexity, a random object is impossible to compress. We have also seen how compressibility is a sufficient test for non-randomness: if you find a short computer program that reproduces some data, then you know that the data is not algorithmically random (a short sketch after this passage illustrates such a test). On the other hand, we also briefly mentioned the lack of any particular or special property, which we called 'typicality': we do not call something random if it is atypical, because it can then be described by appealing to that very lack of typicality. This intuitive concept is thus also related to the other intuitive properties of randomness; in particular, atypicality can be exploited to compress an object. The basic idea is that a non-typical feature gives you a handle with which to pick the object out from among more typical objects, which contradicts the intuitive idea that it is random and relates typicality to the concept of compression.
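As a minimal illustration of compressibility as a one-sided test, the following Python sketch (standard library only; the 0.9 threshold is an arbitrary choice made here for illustration) certifies a regular string as non-random, while deliberately returning no verdict on data it cannot compress:

```python
import os
import zlib

def compression_test(data: bytes, threshold: float = 0.9) -> str:
    """One-sided test: a compressed size well below the original is a short
    description of the data, certifying non-randomness; failure to compress
    proves nothing, since no compressor finds every regularity."""
    ratio = len(zlib.compress(data, level=9)) / len(data)
    return f"ratio={ratio:.3f} -> " + (
        "not algorithmically random" if ratio < threshold else "no verdict")

print("regular:", compression_test(b"01" * 5000))        # highly structured
print("noise:  ", compression_test(os.urandom(10_000)))  # incompressible
```

Note the asymmetry: the test can only ever prove non-randomness, never randomness, in line with the fact that algorithmic complexity is uncomputable and can only be bounded from above.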
One can also devise statistical tests for these kinds of properties, but a first step is to formalise the kinds of properties allowed, such as the so-called recursive properties: properties that can be characterised by a computer program. This is a generalisation of the properties that traditional statistics can characterise, given that computer programs can easily capture any statistical regularity, even with low computational power such as that of regular languages, whereas statistics cannot characterise recursive properties. It was Per Martin-Löf, the Swedish mathematician and a student of Kolmogorov himself, who devised a universal test for any recursive or computable property, thereby technically achieving another formal characterisation of randomness. Examples of recursive properties are whether a sequence has an even number of 1s, or whether the digits of a sequence are the digits of a mathematical constant such as pi, which can be produced by a short computer program implementing one of the many formulas that generate its digits (a sketch of one such program appears after the summary below). Random sequences can then be characterised as those failing to meet any property that a computer program can encode.

Finally, another characterisation, with which we started in the list of intuitive properties of randomness, is the unpredictability of a random sequence, similar in spirit to the idea behind Shannon entropy. What Claus-Peter Schnorr and others mathematically proved is that it is impossible to make money by guessing the next digits of a truly random sequence using a recursive or computable betting strategy. That much is to be expected, but Shannon entropy fails this test in practice: one can produce a sequence with no statistical patterns using a pseudo-random generator, so that every digit is predictable, and yet Shannon entropy would suggest it is random (a second sketch below makes this concrete). It is only when there are no predictable patterns, statistical or computable, that a sequence can truly be deemed random.

This convergence in the definitions of mathematical randomness, a convergence from which Shannon entropy is excluded, means that each definition assigns exactly the same randomness as every other. In other words, the extension of each definition is the same; the sets they characterise contain exactly the same objects, thereby strongly suggesting that the definition captures something mathematically fundamental. We can write this elegant result compactly as a chain of equivalences:

incompressibility ⇔ unpredictability ⇔ typicality

A series of universal results, both in the sense of being general and in the sense of Turing-universality, leads to the conclusion that the definition of algorithmic randomness is mathematically objective. In summary:

- Martin-Löf proved that there is a universal statistical test that can test for all computable properties of an object, although the test itself is uncomputable (it is only semi-computable). His definition of randomness is therefore general enough to encompass all effective tests for randomness.
- Schnorr showed that a predictability approach based on betting strategies leads to another characterisation of randomness, which in turn is equivalent to Martin-Löf randomness.
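To make the pi example above concrete, here is a sketch of Gibbons' unbounded spigot algorithm, a standard published construction (not taken from the text above): a complete generator of the decimal digits of pi in a handful of lines. Its very brevity is the compressibility certificate, since any prefix of pi's digits can be reproduced from this fixed short program plus a digit count.

```python
from itertools import islice

def pi_digits():
    """Gibbons' unbounded spigot algorithm: streams the decimal digits
    of pi one at a time, using only integer arithmetic."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4*q + r - t < n*t:
            yield n  # next digit is safe to emit
            q, r, n = 10*q, 10*(r - n*t), (10*(3*q + r)) // t - 10*n
        else:
            q, r, t, k, n, l = (q*k, (2*q + r)*l, t*l, k + 1,
                                (q*(7*k + 2) + r*l) // (t*l), l + 2)

print("".join(map(str, islice(pi_digits(), 20))))  # 31415926535897932384
```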
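Schnorr's betting characterisation can likewise be sketched in a few lines. In the toy example below (assuming a glibc-style linear congruential generator whose constants are chosen purely for illustration), the bit stream has empirical Shannon entropy close to the 1 bit per symbol of a fair coin, yet a computable gambler who simulates the recurrence predicts every bit and grows its capital exponentially, which is exactly what cannot happen against a truly random sequence.

```python
import math
from collections import Counter

M, A, C = 2**31, 1103515245, 12345  # glibc-style LCG constants (illustrative)

def lcg_bits(seed: int, n: int):
    """Statistically balanced but fully predictable bit stream."""
    x = seed
    for _ in range(n):
        x = (A * x + C) % M
        yield (x >> 15) & 1  # take a middle bit

bits = list(lcg_bits(seed=42, n=1000))

# Empirical Shannon entropy: close to 1 bit/symbol, i.e. "looks random".
counts = Counter(bits)
entropy = -sum(c / len(bits) * math.log2(c / len(bits)) for c in counts.values())

# Computable betting strategy: simulate the generator and bet everything on
# the predicted bit; certainty makes the all-in bet safe, doubling capital.
capital, x = 1.0, 42
for b in bits:
    x = (A * x + C) % M
    capital = capital * 2 if ((x >> 15) & 1) == b else 0.0

print(f"entropy ≈ {entropy:.4f} bits/symbol; capital after 1000 bets ≈ {capital:.3g}")
```

Against a genuinely random sequence, no computable strategy of this kind can achieve unbounded gains; that is precisely Schnorr's characterisation.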