In this lecture, we will see how algorithmic information dynamics can be applied to integer sequences and to behavioural sequences, and we will introduce what we call 'algorithmic cognition'.

If we want to work with integer sequences, one obvious experiment is to see whether we can correctly characterise the hundreds of thousands of sequences in a very popular database, the 'On-Line Encyclopedia of Integer Sequences' (OEIS), which is populated by people and built around recursive algorithms that generate the sequences. Many more experiments can be done, and we have many open projects that you can undertake to apply algorithmic information dynamics to these sequences, but here we carried out a couple of very interesting experiments with very positive results. I have myself contributed a few entries to the database - about seven sequences that were not already there, based on different algorithmic processes. One of them is precisely the progression of the number of Turing machines used for the CTM method. With so many sequences already in the database, it is nowadays somewhat difficult to produce new ones interesting enough to be accepted, but new sequences still make it in, and it is a great dataset for testing purposes. Each entry contains not only the sequence itself, but also code in one or more computer languages and a textual description of the sequence.

What we found was very interesting. As illustrated on the left, the length of the textual description extracted from the database correlates better with BDM than with any other measure, such as Shannon entropy or lossless compression. And, as shown on the right, BDM also performed best when comparing the length of the program that generates each sequence - also extracted from the database - against the BDM values of the integer sequences, again better than the other two measures. This is exactly what we were expecting, because it means that our estimations of algorithmic complexity - by way of algorithmic probability, estimated by BDM - are truly capturing something that other measures cannot. And what they capture points in exactly the right direction, because it is correlated with the length, in bits, of the algorithm that produces each sequence. A great source for understanding these experiments is, of course, the original articles.
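To give a concrete flavour of this kind of analysis, here is a minimal sketch in Python. It assumes the open-source pybdm package (an implementation of the Block Decomposition Method); the example sequences, their binary encoding and the comparison against Shannon entropy are illustrative choices of mine, not the exact pipeline of the original articles.

    import numpy as np
    from pybdm import BDM  # assumes the open-source 'pybdm' package is installed

    def shannon_entropy(seq):
        # Empirical Shannon entropy per symbol, in bits.
        _, counts = np.unique(seq, return_counts=True)
        p = counts / counts.sum()
        return float(-(p * np.log2(p)).sum())

    # Two illustrative binary-encoded sequences of the same length:
    # a periodic (algorithmically simple) one and a pseudo-random one.
    periodic = np.tile([0, 1], 24)                                 # 010101..., highly compressible
    pseudo_random = np.random.default_rng(0).integers(0, 2, size=48)

    bdm = BDM(ndim=1)  # one-dimensional BDM over binary strings

    for name, seq in [("periodic", periodic), ("pseudo-random", pseudo_random)]:
        print(name, "BDM:", round(bdm.bdm(seq), 2),
              "entropy/symbol:", round(shannon_entropy(seq), 2))

    # In the OEIS experiment, one would compute such values for many sequences
    # and correlate them with the lengths of the textual descriptions and of the
    # generating programs stored in the database (e.g. with a rank correlation).

The sketch only shows where BDM enters the comparison; in the actual study the heavy lifting lies in extracting and normalising the descriptions and programs from the database.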
But we can move on now to another type of real-world sequences - what are known as 'behavioural sequences' - and, in particular, to experiments that we conducted with sequences coming from different sources, ordered from cognitively simple to more cognitively sophisticated organisms.

The first is a landmark experiment - even if a sometimes controversial one - conducted with so-called red wood ants and blood-red ants by Russian researchers several decades ago, and repeated almost every decade since. The idea was to place food at the leaves of a binary tree that serves as a maze in which forager ants have to find that food. Once they find the food, they come back to communicate its location to other ants in the colony. All aspects of the experiment were controlled - for example, to avoid pheromone traces along the way. We won't spend time explaining the details, because you can go to the original sources - both their papers and ours.

The hypothesis to test was whether the forager ants would take longer to communicate information about the location of the food when the food was placed in locations requiring a more complicated series of turns, in contrast to placing it, for example, at the leftmost branch in every case. At every bifurcation the ant can choose left or right, which can be encoded in binary by a letter such as L or R, standing for left and right. The easiest path to the food could then be L-L-L-L, as in the example on the screen, which means 'always take the left branch and you will find the food' - a very short description. In contrast, something like L-R-L-L is slightly more complicated, though perhaps not as much as L-R-L-R. So, if the hypothesis is right, one could say that ants are somehow able to compress some of the instructions they transmit to other ants, because they find simple ways to describe something that is intrinsically simple.

The researchers measured the times the ants took to accomplish this communication task, and they noticed that there was indeed a correlation between string simplicity and communication time, but they were unable to quantify it precisely. It was not until we applied our methods - which can deal with short sequences and are both sensitive and specific enough - that this suspicion could be quantified. Here we can see that, using CTM - BDM is not necessary here, because the string length is less than 13, so we can use CTM directly - the only measures able to quantify and confirm the hypothesis of the original paper were the approximations of algorithmic complexity and of logical depth based on CTM.
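To see why a measure such as CTM is needed for strings this short, here is a minimal Python sketch of the encoding step. The example paths and the entropy function are illustrative choices of mine, not the original analysis.

    from collections import Counter
    from math import log2

    def encode(path):
        # Encode an ant path as a binary string: L -> 0, R -> 1.
        return "".join("0" if turn == "L" else "1" for turn in path)

    def entropy_per_symbol(bits):
        # Empirical Shannon entropy of the string, in bits per symbol.
        counts = Counter(bits)
        n = len(bits)
        return -sum(c / n * log2(c / n) for c in counts.values())

    # Paths sharing the same symbol frequencies but differing in structure.
    for path in ["LRLR", "LLRR", "LRRL"]:
        bits = encode(path)
        print(path, "->", bits, "entropy per symbol:", entropy_per_symbol(bits))

    # All three paths contain two Ls and two Rs, so their Shannon entropy is
    # identical: entropy only sees symbol frequencies, not structure, and
    # lossless compressors need much longer inputs to be useful.  CTM instead
    # assigns each individual short string its own algorithmic-probability
    # estimate, obtained from exhaustively running small Turing machines; in
    # practice those values are looked up in a precomputed table (for example
    # through the Online Algorithmic Complexity Calculator) rather than recomputed.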
Another, more recent experiment involves perhaps one of the insects with the simplest behaviour - a fruit fly. This experiment is much more sophisticated, and to understand it in detail you must again read the original paper, but let me try to explain briefly how we were able to test CTM and BDM on this dataset. It has long been believed that flies are very simple - almost like robots - in that they only react automatically to external stimuli without engaging in much cognitive processing. The experiment consisted in placing a fruit fly inside a cylinder. The fly is attached to a torque meter that detects its turning intentions - the fly does not actually move at all, and it is connected to a brain-signal detector through diodes in its head. The cylinder emulates three situations: one in which there are no patterns at all, no matter how the fly turns; one in which there is a single vertical stripe, giving the fly some feedback about its turning behaviour; and finally, one in which the pattern inside is uniform with some texture. In the last two cases there is external information reaching the fly's brain, but in the so-called 'open loop' environment there are no cues or external information given to the fly. The experiment is thus designed to measure whether the fly performs any strategy, or engages in some cognitive process, even in the absence of any input - which would debunk the idea that flies have no internal experience and are just blind sensors reacting to external input. Without having to repeat the experiment, we applied BDM to the behavioural sequences extracted from that paper: sequences of left-right turns recording the fly's intention to change direction.

We were able to confirm that flies do not turn randomly in the absence of information, but implement a strategy of first heading in a single direction - probably hoping to find some feature indicating their whereabouts. They also performed computational work according to our approximations of logical depth: despite the simplicity of their movements, these were as sophisticated as those performed in the uniform-texture and single-stripe conditions. In the original experiment, by contrast, the researchers had to record brain activity directly, through the diodes connected to the flies, to determine how much cognitive activity was taking place - findings in line with what we saw with our measures.

In another experiment, this time involving rats - animals of apparently much greater intelligence than insects such as flies or individual ants - the rats were presented with a computer program that challenged their cognitive abilities by asking them to predict an outcome: a signal shown in one of two holes. If they got it right, they were rewarded with sugary water. The program showed a sequence produced by an algorithm of increasing sophistication and length; there were three such increasingly sophisticated algorithms trying to fool the rats. The first one was very simple - the simplest of the three - and completely repetitive, showing the signal in a cycle of period two: first one hole, then the other, and so on, completely predictably. After some attempts, the rats would crack it and collect 100% of the rewards. Clearly the rats were up to the task of outsmarting a very simple algorithm like this one. The hypothesis to test was what strategy the rats would adopt when outsmarted by a more sophisticated algorithm - whether they would still engage cognitively in some way, or just give up and behave randomly as a result.

Here again, our results were in line with those of the original experiment. They showed that rats faced with algorithms that outsmart them - making them fail to predict in which hole the signal would appear - embraced pseudo-randomness, meaning that they purposely tried to behave randomly as a strategy. This may mean that trying to behave randomly, as opposed to simply behaving randomly, is itself a kind of strategy - one probably adopted by sophisticated animals, such as rats, when they are outsmarted by competition or by a predator. One can see in the figures how randomness increases from algorithm one to three: even though the standard deviation for algorithm three is small, the median of the rat behaviour is larger - a perfectly monotonic progression. We can also see in our results how the sophistication of the algorithms increases from one to three. But when outsmarted by algorithm three, the logical depth of the animal's behaviour does not drop; it remains as high as for algorithm two, which was already a cognitive challenge for the rat - in contrast to the complexity of algorithms one and two. Moreover, one can also see how the behaviour matches the reward for algorithms two and three, indicating that the rat was almost as successful embracing a random strategy against algorithm three as it was against algorithm two.
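To make this kind of analysis concrete, here is a minimal Python sketch of how the algorithmic randomness of a choice sequence can be tracked over time with a sliding window of BDM values. It assumes the open-source pybdm package, and the choice sequence below is synthetic, generated only for illustration - it is not data from the experiment.

    import numpy as np
    from pybdm import BDM  # assumes the open-source 'pybdm' package

    rng = np.random.default_rng(1)

    # Synthetic left/right choice sequence (0 = left, 1 = right) standing in for
    # a rat's trial-by-trial predictions: a periodic phase (a cracked period-2
    # competitor) followed by a phase of random-looking choices.
    choices = np.concatenate([
        np.tile([0, 1], 60),           # periodic phase
        rng.integers(0, 2, size=120),  # pseudo-random phase
    ])

    bdm = BDM(ndim=1)
    window = 24  # BDM's one-dimensional blocks are 12 bits, so use a multiple of 12

    profile = [bdm.bdm(choices[i:i + window])
               for i in range(len(choices) - window + 1)]

    # A sustained rise in this profile marks the point at which the behaviour
    # becomes more algorithmically random over the course of the trials.
    print([round(v, 1) for v in profile[::12]])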
We were also able to pinpoint the exact moment in time at which different rats cracked the different algorithms - with algorithm three, labelled 'Competitor Three' in these figures, remaining mostly unbreakable: the complexity of the rat's behaviour never comes down and remains as high as that of the algorithm itself, producing pseudo-random behaviour. For the other two, simpler algorithms, however, the rat eventually sees through them and diverges from the algorithm's behaviour. To make sense of these plots, take into consideration the scale of the y-axis.

Finally, let's move to an example with humans. This time it was us who performed the experiment from start to finish, with the participation of about 3,500 people. And that has its rewards, because the world's media got very excited about our results and wrote articles about them. The experiment consisted of five tasks asking people to produce randomness in different ways. In the first task, they were asked to produce a sequence of heads and tails as if they were tossing a fair coin. Then they were asked to picture a randomly ordered pile of cards and guess which one was on top. They were also asked to produce a sequence as if they were throwing a die. And finally, they were asked to place dots at random and to fill in a grid so that it would look random to them.

The results were very interesting. We found that people at age 25 produced the highest algorithmic randomness, as measured by CTM and BDM. This was consistent across all five tasks and was independent of all the other variables that were controlled for - people were asked for their gender, educational background and level, language spoken and even paranormal beliefs. No variable but age produced a difference, meaning that, for example, there is no difference in randomness production between men and women. The results are in perfect alignment with many other studies suggesting that cognitive abilities peak at about age 25 before declining. But do not freak out! The variation is minimal and barely noticeable; it looks large here because that is what the experiment and these plots are designed to bring out. It does not mean that we will all become dumb, or that we were dumb as kids. There are also ways to keep the mind sharp, and some traits can compensate for others, because one may be able to offset a decline with knowledge accumulated over time. Indeed, perhaps it is the accumulation of that very knowledge that makes us produce lower randomness after 25. But notice how interesting this experiment is, because it is, in some sense, a reversed Turing test, in which people - even if they didn't know it - were competing against computer programs to see who could produce greater algorithmic randomness. People, without knowing it, were being asked to behave like the longest computer programs, the ones producing the sequences of highest algorithmic complexity. And people of around 25 years of age had the best chances of beating a random computer program and producing greater algorithmic randomness.
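As a closing illustration of this 'reversed Turing test', here is a minimal Python sketch of how a single answer to the coin-tossing task could be scored against pseudo-random sequences. It again assumes the pybdm package; the example answer, its length and the baseline of 1,000 pseudo-random sequences are arbitrary choices of mine, not the protocol of the actual study.

    import numpy as np
    from pybdm import BDM  # assumes the open-source 'pybdm' package

    bdm = BDM(ndim=1)

    def score(heads_tails):
        # BDM of a heads/tails string such as 'HTHHTT...' (H -> 0, T -> 1).
        bits = np.array([0 if c == "H" else 1 for c in heads_tails], dtype=int)
        return bdm.bdm(bits)

    # A made-up human answer to the 'toss a fair coin' task (length 24 is an
    # arbitrary choice for this sketch).
    human = "HTHTTHHTHTHTHHTTHTHTHTHT"

    # The 'reversed Turing test': compare the human answer against many
    # pseudo-random sequences of the same length.
    rng = np.random.default_rng(2)
    machine = np.array([bdm.bdm(rng.integers(0, 2, size=len(human)))
                        for _ in range(1000)])
    beats = np.mean(score(human) > machine)
    print(f"human BDM: {score(human):.2f}; beats {beats:.0%} of pseudo-random sequences")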
So, in this unit we saw how we can apply algorithmic information dynamics to the behaviour and cognition of humans and animals. Many questions and challenges remain open, and we have written several other articles that may be of interest to you. Some independent groups are now using our measures in all sorts of specialised areas - in particular in psychometrics, to test the cognitive abilities of the human mind. Here on the screen are some references, and you can see how fruitful it has been. We call this line of research 'algorithmic cognition'. In the next unit, we will see how CTM, BDM and algorithmic information dynamics can also contribute to discovering some interesting facts about biological and artificial evolution.