A fundamental problem in science is the characterization of objects such as sequences, images, networks and proteins. Complexity measures allow scientists to perform some of these characterizations in a systematic way, to better understand and even manipulate these objects. However, most measures are not independent of the different ways in which an object can be described. A network, for example, can be described by its adjacency matrix or by its degree sequence. Only universal measures of complexity, based on the notion of a computer program, can characterize an object independently of the way in which it is described. These universal and powerful measures, introduced by mathematicians, do exist, but they are very hard to estimate numerically.

The coding theorem method, based on the work of Kolmogorov and Chaitin, is a method we introduced to characterize the complexity of an object using these universal algorithmic measures. The idea is that if an object can be generated by a short computer program, it is more likely to be produced by chance than an object that can only be generated by a long computer program. Consider, for example, the mathematical constant pi. If you wanted to produce the first 1000 digits of pi by typing them at random, the chances of succeeding are very low; but the chances of producing, not the digits themselves, but a formula that generates any number of digits of pi, are much higher, because pi has many short formulas that generate its digits. What we do, then, is run a huge number of small computer programs and count the number of times each output is produced. From these counts we estimate how likely an object is to be produced by a random computer program, building an output distribution. This distribution helps us characterize objects and devise applications that could not have been conceived before.

The block decomposition method extends the power of the coding theorem method to approximate the algorithmic complexity of larger objects, such as sequences, images and networks. It is based on Solomonoff and Levin's theory of algorithmic probability, which relates the frequency with which an object is produced to its intrinsic properties. What the block decomposition method does is decompose an object into smaller pieces, which we then cleverly stick together with a formula. Each of these pieces is produced by a small computer program, so we know that a sequence of small computer programs produces the larger object. Because the decomposition does not always produce pieces of the same type, there are several ways to stick the pieces together, such as overlapping the pieces, embedding the original object in a topological torus, or assigning similar algorithmic complexity to pieces that have similar statistical properties. One of the most interesting properties of the block decomposition method is that it behaves like Shannon entropy in the long range, while providing local estimates of algorithmic complexity in the short range. This hybrid measure of complexity helps scientists characterize objects in more complete ways, leading to enhanced understanding and new discoveries.
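
To make the counting idea behind the coding theorem method concrete, here is a minimal Python sketch under toy assumptions: it enumerates every 2-state, 2-symbol Turing machine (a very small program space; the published method uses far larger machine spaces and run times), runs each one from a blank tape with a step bound, counts the outputs of the machines that halt, and converts output frequencies into complexity estimates via minus the base-2 logarithm of the frequency. The function names and the halting convention are choices made for this illustration, not part of the method's definition.

```python
import itertools
from collections import Counter
from math import log2

def run_tm(rules, n_states, max_steps=100):
    """Run a (n_states, 2-symbol) Turing machine from a blank (all-0) tape.
    Returns the written tape contents as a string if the machine halts,
    or None if it does not halt within max_steps."""
    tape = {}                      # sparse tape: position -> symbol, blank = 0
    pos, state = 0, 0
    for _ in range(max_steps):
        sym = tape.get(pos, 0)
        write, move, nxt = rules[(state, sym)]
        tape[pos] = write
        pos += move
        if nxt == n_states:        # convention: state index n_states means halt
            lo, hi = min(tape), max(tape)
            return "".join(str(tape.get(i, 0)) for i in range(lo, hi + 1))
        state = nxt
    return None                    # did not halt within the step bound

def ctm_distribution(n_states=2, max_steps=100):
    """Enumerate all (n_states, 2) machines, run each one, count the outputs,
    and turn frequencies into complexity estimates via the coding theorem:
    CTM(s) = -log2(frequency of s among the halting machines)."""
    keys = [(s, b) for s in range(n_states) for b in (0, 1)]
    entries = [(w, m, n) for w in (0, 1) for m in (-1, 1)
               for n in range(n_states + 1)]
    counts = Counter()
    for combo in itertools.product(entries, repeat=len(keys)):
        out = run_tm(dict(zip(keys, combo)), n_states, max_steps)
        if out is not None:
            counts[out] += 1
    total = sum(counts.values())
    return {s: -log2(c / total) for s, c in counts.items()}

if __name__ == "__main__":
    table = ctm_distribution()     # roughly 20,000 tiny machines
    # The most frequently produced outputs receive the lowest complexity.
    for s, k in sorted(table.items(), key=lambda kv: kv[1])[:10]:
        print(f"{s:>8}  CTM ~ {k:.2f} bits")
```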
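
The aggregation step of the block decomposition method can be sketched in the same spirit. The function below splits a binary string into fixed-size, non-overlapping blocks, looks up each distinct block in a CTM table such as the one produced above, and sums CTM(block) + log2(multiplicity) over the distinct blocks, so that a block repeated many times contributes only logarithmically rather than its full complexity each time. The fallback value for blocks missing from the toy table and the numbers in the usage example are placeholders chosen for this sketch.

```python
from collections import Counter
from math import log2

def bdm(bits, ctm_table, block_size=4):
    """Toy block decomposition of a binary string.

    Splits the string into non-overlapping blocks, looks up the CTM value of
    each distinct block, and aggregates as
        BDM = sum over distinct blocks of [CTM(block) + log2(multiplicity)].
    """
    blocks = [bits[i:i + block_size] for i in range(0, len(bits), block_size)]
    counts = Counter(blocks)
    # Blocks absent from the small toy table get a crude penalty here; in
    # practice much larger precomputed CTM tables are used.
    fallback = max(ctm_table.values()) + 1.0
    return sum(ctm_table.get(b, fallback) + log2(n) for b, n in counts.items())

if __name__ == "__main__":
    # Placeholder CTM values for illustration only; in practice the table
    # would come from an enumeration like ctm_distribution() above.
    toy_table = {"0101": 3.2, "0110": 4.1, "1110": 4.5, "0010": 4.0}
    print(bdm("0101" * 8, toy_table))           # repetitive string: lower BDM
    print(bdm("0110111000101110", toy_table))   # mixed blocks: higher BDM
```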