The empirical approach to cognition has traditionally been dominated by classical probability theory. These approaches, however, have been unable to explain some cognitive processes. For example, the skill of remembering consecutive digits cannot be quantified by classical statistical tools such as Shannon entropy, because such tools cannot recognize the algorithmic nature of a sequence that the mind can easily recognize and recall. What was needed was a much more powerful theoretical framework, and the introduction of new tools based upon the theory of algorithmic probability: the ultimate theory of induction and inference, formally connecting complexity and probability by way of universal computation. For example, if a sequence of events can be encoded by a short algorithm, then a mind can understand and compress that sequence even if it appears random to traditional statistical tools (a minimal illustration is sketched at the end of this section).

This means that statistical tools such as Shannon entropy, applied to inputs, outputs and internal representations, are not suitable for describing or explaining sophisticated mental processes. The theory of algorithmic probability applied to human and animal cognition, however, provides the theoretical means to study these processes, and even explains what was traditionally seen as positive or negative biases in the choices that a mind makes. For example, humans tend to find patterns in random data, which the traditional view mostly accounts for as biased decisions shaped by past experience, rather than as a consequence of an algorithmic mind trying to find sense and logic even where there is none. We have developed these tools based on the principle that a mind is better explained not by statistical processes biased by past experiences, but as an algorithmic probability estimator that has evolved to learn and exploit the algorithmic nature of the world. Paradoxically, this means the mind may behave less often as an irrational being biased by personal experience, and more like a general, sophisticated, predictable algorithm.

Indeed, early cognition research found analogies with computation, but soon judged such analogies overly simplistic. Today, however, the bridge we have built between the mind and computers is more subtle and more sophisticated. It consists of a model of the brain that conceives the mind as an idealized universal algorithm, capable of generating models. Our conceptual framework unveils evidence supporting these mechanisms of cognition.

We have shown that landmark experiments on animals can be confirmed simply by quantifying the complexity of behavioural time series from different studies. For example, ants maximize communication efficiency as a function of sequence complexity; fruit flies do not behave in the simplistic ways previously thought; rodents harness the power of randomness when outsmarted by artificial intelligence; and humans are most creative at generating randomness around the age of 25. These kinds of discoveries were impossible to make with traditional tools and classical measures, and they herald algorithmic cognition as an exciting new field of research for tackling some of the pressing issues related to mental disorders and neurodegenerative diseases.
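The distinction between statistical and algorithmic regularity can be made concrete with a minimal sketch. The snippet below compares Shannon entropy with compressed length for two binary strings: one produced by a short program (the Thue-Morse sequence, chosen here purely for illustration) and one produced by a pseudorandom generator. The use of zlib compression as a crude stand-in for algorithmic complexity is an assumption of this sketch, not the estimator used in the studies described above; it simply shows that entropy assigns both strings essentially the same value, while compression exposes the structure of the algorithmically generated one.

```python
import math
import random
import zlib
from collections import Counter

def shannon_entropy(s):
    """Shannon entropy in bits per symbol, computed from symbol frequencies."""
    counts = Counter(s)
    n = len(s)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

def compressed_size(s):
    """Compressed length in bytes: a crude upper bound on algorithmic complexity."""
    return len(zlib.compress(s.encode("ascii"), 9))

def thue_morse(n):
    """First n symbols of the Thue-Morse sequence: a short program, balanced symbols."""
    return "".join(str(bin(i).count("1") % 2) for i in range(n))

n = 4096
structured = thue_morse(n)                              # generated by a short algorithm
random.seed(0)
noise = "".join(random.choice("01") for _ in range(n))  # statistically similar, but incompressible

for name, s in [("thue-morse", structured), ("pseudorandom", noise)]:
    print(f"{name:>12}: entropy = {shannon_entropy(s):.3f} bits/symbol, "
          f"compressed = {compressed_size(s)} bytes")
```

In the spirit of the coding theorem, which relates algorithmic probability m(x) to Kolmogorov complexity via K(x) ≈ −log₂ m(x), a shorter description implies a higher algorithmic probability; this is the sense in which a mind acting as an algorithmic probability estimator would judge the first string far less random than the second, even though both look alike to a frequency-based measure.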