I'm going to try to abstract some common principles among the different self-organizing systems we've looked at so far. The way I'm going to do that is by using the unifying framework of information processing, and asking about these systems as information-processing systems. The first question to ask is: do self-organizing systems, particularly in biology, process information? Well, in the last few decades it's become more and more common to view such systems as information processors. We can see that by taking a quick tour of some samples of the literature. In this literature we see books called "Information Processing in Social Insects", and papers in journals such as Nature with titles like "Getting the Behavior of Social Insects to Compute". As far as the brain is concerned, we see books on neural information processing systems and the biophysics of computation. Bacteria play a role, too: people are now talking about information processing in bacteria, particularly with respect to the phenomenon called "quorum sensing", which we didn't cover here. The immune system is another example of a system people are looking at as doing information processing. Genetic circuits, cells, and tissues are all now seen as doing computation of some kind. Even plants are now seen as a venue for studying emergent distributed computation. Perhaps even more far out is looking at slime molds as a kind of computer, as is done in this recent book.

For me, as a computer scientist, it's very useful to think about these issues by comparing information processing in computer science and in biology, so I'm going to take a few minutes to do that. To start with, we can ask what plays the role of information in each kind of system. In computer science, that is, in the kinds of computers that sit on your desk, information is digital. It's static and, in a sense, passive: it sits in memory while the central processing unit reads zeros and ones from some kind of storage. In biology, however, information is neither digital nor static; it's active. And it's analogue, in the sense that it's not necessarily made up of ones and zeros. It's distributed in space and time over the system's components. Information in biology is represented via patterns of individuals and their products. For example, you can think of ants and their trails as representing information about food sources and their locations, or the flocking of birds or the schooling of fish as the kinds of patterns that represent information in those systems.

Information is gathered via local statistical sampling from these patterns. For instance, an individual ant cannot see all the information contained in the system, which might consist of all the pheromone trails laid by the colony. But it can gather some statistics by sampling the local concentration of pheromone right where it is. Or, when ants are trying to decide what task to adopt, they gather statistics about what other ants have been doing. (I'll give a small code sketch of this kind of local sampling in a moment.)

How is information processed? In computer science, information is typically processed by deterministic programs that are serial (that is, one step at a time) and error-free (programmers put a lot of effort into debugging their programs), with centralized rules, in a central processing unit, for reading, moving, and writing information. Biology does information processing in a very different way.
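To make the contrast concrete, here's a minimal sketch, in Python, of what that local statistical sampling might look like. Everything in it (the grid of pheromone concentrations, the function names, the explore_rate parameter) is invented for illustration; it's not taken from any real ant-colony model:

    import random

    def sense_local_pheromone(grid, cells):
        # An ant samples pheromone concentration only at the few cells
        # around it; it never sees the colony's whole trail network.
        return [grid.get(cell, 0.0) for cell in cells]

    def choose_next_cell(grid, neighbor_cells, explore_rate=0.1):
        # A stochastic decision rule: mostly follow stronger trails, but
        # sometimes wander at random, so randomness is built into the
        # rule rather than stamped out.
        concentrations = sense_local_pheromone(grid, neighbor_cells)
        if random.random() < explore_rate or sum(concentrations) == 0:
            return random.choice(neighbor_cells)
        # Each direction's probability is proportional to the locally
        # sampled concentration (a "roulette wheel" choice).
        return random.choices(neighbor_cells, weights=concentrations)[0]

Here grid is just a dictionary mapping cells, say (x, y) pairs, to pheromone concentrations. Notice that nothing here is serial, centralized, or deterministic: each ant runs its own copy of this rule on its own local sample.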
Biology does it via decentralized, local, very fine-grained, and stochastic (that is, probabilistic) actions: actions that involve randomness. In these self-organizing systems we saw an interplay of positive and negative feedback. Positive feedback involves mechanisms such as recruitment (an ant laying a pheromone trail recruits other ants to follow that trail) and reinforcement (ants reinforce the trails they see; fireflies reinforce the flashes they see). There's also negative feedback, such as competition or density limitations, that keeps the positive feedback from running out of control. When birds flock, there's positive feedback in birds trying to get closer to one another, but also negative feedback forcing them to separate when they get too close. In the same way, ant foraging has negative feedback in that pheromone trails evaporate if they're not reinforced. (I'll sketch this reinforce-and-evaporate loop in code at the end.)

We also see that randomness is ubiquitous. Unlike in computer science, where randomness is typically stamped out, in biology randomness is everywhere, and the system uses it to its advantage. Finally, people studying information processing in biology increasingly find that the language of dynamical systems may be more useful than the language of computation.

Now, an important question: how does the information being processed in these systems acquire some kind of function or purpose, or even meaning? Well, we know that in computer science, information processing is done for our own purposes: we run programs to compute things that help us do our jobs, entertain us, or serve whatever other purpose we have. In biology, by contrast, there's no meaning or purpose imposed from an external source. Rather, it's natural selection for adaptive function that gives rise to the meaning of the information. That really is the meaning of the information, say, for an ant colony creating foraging trails: natural selection has shaped ants to have the mechanisms that create that information and that interpret that information, because doing so has adaptive function for individuals and for the species.

Meanwhile, computer scientists are getting more and more interested in taking inspiration from self-organized systems to create self-organized computing systems. The desire is to have life-like computing systems whose behavior emerges from simple rules, rather than the rather non-life-like, brittle, complicated computing systems we have today. And we've seen some of these. Darwinian evolution has inspired genetic algorithms. Ant foraging has inspired a family of algorithms called "ant colony optimization" that have been used for things like telecommunications routing. Firefly synchronization, and related synchronization mechanisms in biology, have inspired distributed synchronization in computers and networks. Brains have inspired neural networks. Immune systems have inspired algorithms for computer and network security based on ideas about how the immune system protects the body. And, as we saw earlier in these slides, slime molds have recently been inspiring computer scientists to develop new kinds of search algorithms based on the way slime molds self-organize from individual cells into a unified whole. As we understand more and more about how self-organized systems work in biology, I think computer systems will take more and more inspiration from them and become more and more lifelike.
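Coming back to the ants for a moment, here's the reinforce-and-evaporate sketch I promised, continuing the toy Python example from earlier. The deposit amount and evaporation_rate are, again, invented parameters for illustration, not values from any published model:

    def deposit_pheromone(grid, cell, amount=1.0):
        # Positive feedback: every ant that uses a cell strengthens its
        # trail, which recruits more ants, which strengthen it further.
        grid[cell] = grid.get(cell, 0.0) + amount

    def evaporate(grid, evaporation_rate=0.05):
        # Negative feedback: all trails decay a little each time step,
        # so trails that aren't reinforced fade away and the positive
        # feedback can't run out of control.
        for cell in grid:
            grid[cell] *= 1.0 - evaporation_rate

    def colony_step(grid, ant_cells, neighbors_of):
        # One time step: each ant moves by the local sampling rule from
        # the earlier sketch (choose_next_cell) and deposits pheromone
        # as it goes; then the whole grid evaporates slightly.
        # neighbors_of is assumed to return the cells adjacent to a cell.
        new_cells = []
        for cell in ant_cells:
            nxt = choose_next_cell(grid, neighbors_of(cell))
            deposit_pheromone(grid, nxt)
            new_cells.append(nxt)
        evaporate(grid)
        return new_cells

Iterate that loop and heavily used trails get reinforced faster than they evaporate, while abandoned trails fade: the pattern that persists is the colony's distributed record of where the food is.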
At the same time, as we apply ideas from computation and information processing to understanding biological systems, I think we'll increasingly see a kind of unified framework under which we can group all of these systems in terms of how they self-organize by processing information.