So far we have mostly talked about finite sequences or strings, but let's now momentarily move to the subfield of algorithmic complexity that studies the algorithmic randomness of infinite objects. One such object is the so-called halting probability (Chaitin, 1975), also known as Chaitin's Ω (omega) number, and also, often wrongly, called a constant. It is defined as follows:

$$\Omega_U = \sum_{p \,:\, U(p)\ \text{halts}} 2^{-|p|}$$

where p is a computer program running on a universal Turing machine U, and the sum, which lies between 0 and 1, is the probability that a computer program running on U halts. So clearly calling Ω a constant is misleading, because it depends on the choice of universal Turing machine U, and for different choices there will be a different Ω number.

However, the Turing machine has to be somehow special: a type of universal Turing machine that is often called a prefix-free Turing machine. Let me explain what a prefix-free machine is. It is a special type of computer that has to be used for objects such as Chaitin's Ω and that is also sometimes used to define algorithmic complexity. The need for it stems from the fact that you don't want to trivially count the same program more than once, and the problem is that it is very easy to generate an infinite number of programs as extensions of a given program. Take the program that prints s and then prints an extra 1 at the end, only to delete it before halting. The new program prints s, but it is only a spurious variation of a more compact program, and the number of such spurious programs producing the same output is infinite. For a measure to be called a probability, the sum of the probabilities cannot exceed 1; it would be mathematically meaningless to say that something has a greater than 100% chance of happening.

To circumvent this problem, Leonid Levin in 1974 and then Gregory Chaitin in 1977 devised a way to consider only significant programs according to some rule. The rule is that no program may be an initial segment (a prefix) of any other program. Sets of this type are called "prefix-free domains". A classic example is the set of telephone numbers. The only way to reliably reach a person by dialling their telephone number is if no telephone number is part of any other telephone number. Imagine that Alice's number is 12345 and Bob's number is 123; then no one could ever reach Alice, because as soon as you finished dialling 123 you would be connected to Bob. Different ways to avoid this are possible. For example, telephone numbers within a region are all of the same length, so any string of digits of a different length is not a valid telephone number. Another way is to choose a special character as an indication that a number has terminated; for example, some online banking systems ask customers to use the # sign to indicate that they have finished entering their account numbers.

Prefix codes are guaranteed to exist for any countable set, and the so-called Kraft, or Kraft-Chaitin, inequality guarantees that the sum of the series converges to a value no greater than 1, which is the necessary condition for a probability measure. Algorithmic complexity, or Kolmogorov-Chaitin complexity, comes in versions defined both on regular universal Turing machines and on prefix-free Turing machines; the two versions do not differ much from each other, and the choice does not dramatically affect the definition of algorithmic complexity. For this probability measure, however, the choice matters more fundamentally.
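To make the prefix-free condition and the Kraft inequality concrete, here is a minimal Python sketch (not part of the original formal development; the function names is_prefix_free and kraft_sum are mine) that checks whether a set of codewords is prefix-free and computes its Kraft sum:

```python
from itertools import permutations

def is_prefix_free(codewords):
    """True iff no codeword is a proper prefix of another codeword."""
    return not any(a != b and b.startswith(a)
                   for a, b in permutations(codewords, 2))

def kraft_sum(codewords):
    """Sum of 2**-len(w) over all binary codewords w.
    The Kraft inequality says this sum is <= 1 for any prefix-free set."""
    return sum(2.0 ** -len(w) for w in codewords)

# The telephone example from the text: 123 is a prefix of 12345,
# so as soon as 123 is dialled, Bob "shadows" Alice.
print(is_prefix_free(["123", "12345"]))          # False
# Fixed-length codes are trivially prefix-free, with Kraft sum <= 1.
fixed = ["000", "001", "010", "011"]
print(is_prefix_free(fixed), kraft_sum(fixed))   # True 0.5
# A terminator symbol (like the banking '#') also yields a prefix-free set.
print(is_prefix_free(["4512#", "45#", "7#"]))    # True
```

The two workarounds mentioned above, fixed lengths and a terminator symbol, both pass the same check, which is the point: any convention that makes the set prefix-free keeps the total weight bounded by 1.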
Coming back to the definition of Chaitin's Ω: every time a computer program p halts, it contributes 2^{-|p|} to the value of Ω, and the accumulated sum determines a binary sequence that can be seen as the binary expansion of a real number between 0 and 1, because, remember, this is a probability. For example, in 2007, one of my former PhD co-advisors, Cristian Calude, computed the first 43 bits of a Chaitin Ω for a certain universal Turing machine. The first digits of this Ω look as follows:

0.00010000000100001010011101110000111110101

The longer the program length |p|, the smaller its contribution to Ω, so the sequence is not only very difficult to calculate for increasing program lengths, it is also uncomputable: because of the Halting problem, one can never really know which computer programs will halt. This is why this Ω number is also called the halting probability. From the formula it can be seen that short programs carry the greatest weight, because 2^{-|p|} = 1/2^{|p|} and the smaller the denominator, the larger the value, so the shortest computer programs contribute the most significant digits of Ω. But just like algorithmic complexity, Ω is semi-computable, meaning that one can approximate it from below, for example by fixing a programming language framework and running random programs, just as Calude did, and as I myself later did, also in 2007.
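To give a feel for how such an approximation from below works, here is a toy Python sketch. Everything in it is invented for illustration: the "machine" (whose valid programs are the unary codewords 1…10, a prefix-free set, and which halts, by fiat, exactly on those with an even number of 1s) is not the universal machine Calude used, and the names toy_halts_within and omega_lower_bound are hypothetical. The dovetailing structure, running every program up to a given length for a bounded number of steps and crediting 2^{-|p|} for each observed halt, is the part that mirrors the real procedure.

```python
from fractions import Fraction

def toy_halts_within(program: str, steps: int) -> bool:
    """Step-bounded interpreter for an invented toy machine.
    Valid programs have the form '1'*n + '0' (the unary code, a
    prefix-free set); by fiat, such a program halts after n steps
    iff n is even. Every other binary string runs forever."""
    if program.count("0") != 1 or not program.endswith("0"):
        return False                      # invalid program: never halts
    n = len(program) - 1
    return n % 2 == 0 and n <= steps      # halts only if even and given enough steps

def omega_lower_bound(max_len: int, max_steps: int) -> Fraction:
    """Dovetail: run every program of length <= max_len for max_steps
    steps and add 2**-|p| for each program observed to halt. This only
    ever underestimates Omega, mirroring its lower semi-computability."""
    total = Fraction(0)
    for length in range(1, max_len + 1):
        for k in range(2 ** length):
            p = format(k, "b").zfill(length)
            if toy_halts_within(p, max_steps):
                total += Fraction(1, 2 ** length)
    return total

for max_len in (2, 6, 12):
    approx = omega_lower_bound(max_len, 1000)
    # For this toy machine the limit is 1/2 + 1/8 + 1/32 + ... = 2/3,
    # i.e. 0.101010... in binary; the approximations climb toward it.
    print(max_len, approx, float(approx))
```

The crucial difference from the real thing is that this toy machine's halting problem is decidable, so we can watch the approximation converge to 2/3. For a genuinely universal machine the same dovetailing procedure runs, but no bit of the approximation ever provably stabilizes, which is exactly the uncomputability discussed above.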