Hi, I am Uri Wilensky from Northwestern University and I am the author of NetLogo. I am delighted to be here with you today. You are fortunate to be in the capable hands of Bill Rand. Bill worked with me at Northwestern, and we wrote a textbook on agent-based modeling with NetLogo. I am going to say a few words of introduction to agent-based modeling and NetLogo.

I am going to situate this introduction in a somewhat unusual example. I want us to go back to around the turn of the first millennium, around 1000 AD, when Hindu-Arabic numerals first came to Europe. Until then, Europeans had used Roman numerals. Roman numerals do not have a place-value, or positional, notation, so written numbers could grow very, very big, and doing multiplication, and especially division, was very hard. So although scientists recognized early on that the Hindu-Arabic system was a superior way of doing arithmetic, it took a long time for these ideas and practices to spread widely throughout Europe. This phenomenon of a change in the encoding of the knowledge of a domain is something that my colleague Seymour Papert and I call a restructuration. A structuration is an encoding of the knowledge of a domain in terms of a representational infrastructure, and a restructuration is a change from one such encoding to another. One of the powerful things about the Roman-to-Hindu-Arabic restructuration was that before it, only a rare few could do multiplication and division: you had to take your problem to some important person, who would work on it for a long time and give you back the answer. After this restructuration, almost everybody could learn the basic algorithms of arithmetic, and science could progress in major ways because people could work with large numbers in positional notation.

Having introduced that, we can ask: what are the hard, important things that people struggle with today? One important area is the difficulty we have in making sense of complex systems, and it is an area that is ripe for this kind of restructuration. By complex systems I simply mean systems that are composed of many different parts where there are interactions between those parts, and where, without any centralized control or designer, global patterns emerge from the local interactions and decisions in the system. Typical examples include ecosystems, economic systems, immune systems, and the stock market. All of these are examples where the interactions of parts create some global pattern. And I use the word 'emergent': an emergent phenomenon is the organized pattern that emerges, that results, from the interactions of many different, distributed agents or objects. These can be notoriously hard to understand. Emergence is hard in two distinct ways: even if you know the micro-level behavior of the elements and how they interact, it is still very difficult to predict what the global pattern will look like; and conversely, if you know the global pattern, it is very difficult to find the micro-level structure that generates it, the rules, if you will, by which the agents or parts interact.
In fact, you can think of that second challenge as a big part of the project of science: we observe macro-level regularities in the world and then try to understand what elements come together to enable those patterns to arise. Technology can help us by creating new representations, just as the Hindu-Arabic representation helped people solve complex mathematical and scientific problems. Similarly, computational technology allows us to create new representations of these complex systems and make sense of them. We do that now by actually simulating these complex systems: creating the individual elements, or agents, giving them rules of interaction, and letting those interactions unfold.

Agent-based modeling is exactly that. An agent is an autonomous, individual element of a simulation that has properties, actions, and characteristic behavior. The activity of agent-based modeling is the activity of looking at a phenomenon in the world, trying to dissect it into these elements or agents, and giving those agents the rules that will actually generate the phenomenon of interest. Agent-based modeling is now used widely throughout the natural sciences, the social sciences, and the engineering professions. About 20 years ago I created the language NetLogo as a way to do agent-based modeling, and I was guided by two principles. One is low threshold: it ought to be very accessible, so that people can start to do modeling very, very quickly, and I guess you will be the judge of whether I succeeded at that. The other is high ceiling: we ought to be able to do really complex, difficult, cutting-edge science with this agent-based modeling environment.

So I am going to show you three examples of how you can model complex systems and emergent patterns with NetLogo. The first one is a forest fire. I have set up this model now; it is a model of a forest fire in NetLogo. You can see all these green pixels, which are meant to represent trees, and there is a slider here, called density, that is set to 57%, which says that roughly 57% of the space is filled with trees. If you look at the left edge, that is the leading edge of a fire. This model is set up with very, very simple rules: every tree looks to its north, its east, its south, and its west, and if it sees fire in any of those places, it lights up and burns; otherwise it does nothing. That is the entire rule set of this model, except for a little bit of coloring. When I press the "go" button, the model runs. As you can see, the fire spreads a little bit, and now it has burned out, and it did not actually burn that much of the forest. We can try again at the same density; each run will be a little different because the trees and the fire are placed differently each time, but you can see that in both cases, at 57% density, not much of the forest burned. If I move the density up to, say, 64%, and let the fire burn, we get a much more dramatic, much more complete burn of the forest. This is perhaps a little surprising, because we are used to thinking that a little more x leads to a little more y, so a little more density should lead to a little more burn; we are not used to this very dramatic change. But in complex systems this is a very common phenomenon.
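The rule I just described translates almost directly into NetLogo code. Here is a minimal, patch-only sketch of that rule, not the exact Fire model from the Models Library; it assumes the density slider mentioned above, and the patch variable and procedure names are just illustrative:

```netlogo
patches-own [ on-fire? ]

to setup
  clear-all
  ask patches [
    set on-fire? false
    if random 100 < density [ set pcolor green ]   ;; plant a tree with probability "density"
  ]
  ;; light the column of trees along the left edge
  ask patches with [ pxcor = min-pxcor and pcolor = green ] [ ignite ]
  reset-ticks
end

to go
  ;; every unburned tree looks to its four neighbors (N, E, S, W);
  ;; if any of them is on fire, it catches fire too
  ask patches with [ pcolor = green ] [
    if any? neighbors4 with [ on-fire? ] [ ignite ]
  ]
  tick
end

to ignite  ;; patch procedure
  set on-fire? true
  set pcolor red
end
```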
This is the notion of a threshold or critical point, sometimes popularly known as a tipping point, where just a little bit of extra density leads to a dramatically and qualitatively different burn of the forest.

The next example I am going to show you is called Wolf Sheep Predation. Here you have wolves and sheep interacting in an ecosystem, and the rules of this model are just a little more complicated. Each wolf and each sheep starts out with a small store of energy, and moving costs a little energy. The simplification in this model is that the wolves and the sheep move randomly, and if a wolf runs into a sheep, it eats it. When a wolf eats a sheep it obviously gains energy, and if a wolf's energy dips below zero, it dies. Let's see what happens when we run this model. We can see, both in the graph on the lower left and in the main view, that we are getting some kind of cycling of the wolves and sheep, at least at first. But now all the wolves have died out: the sheep population got low, and there was not enough food to sustain the wolves. And now, because the sheep are unconstrained, they are going to inherit the earth and fill up the model. If I run it again, it might or might not go the same way. It could be that the wolves eat all the sheep and only wolves remain, but then they would have nothing to eat, so they too would go extinct. Those are the two attractor states of this kind of model. It looks like in this particular case the sheep are winning again. But you understand the idea here: you can give individual rules to individual wolves and sheep and then see the population-level outcome; a sketch of those rules in NetLogo appears below.

My last example is from the social sciences, and it is originally due to an economist named Thomas Schelling, who was at Harvard in the 1960s and wrote a famous book called Micromotives and Macrobehavior, in which he looked at exactly this kind of issue of how individual interactions lead to population-level outcomes. In this case he was interested in the phenomenon of housing segregation, and he asked himself a question: suppose there were two types of agents, reds and greens, who were willing to live together but had a tolerance threshold; if their neighborhood tipped too far the other way, a red surrounded by too many greens or a green surrounded by too many reds, they would become unhappy and move from where they are. He was interested to find out what would happen. Here the slider "%-similar-wanted" is set to 30%, which means reds and greens are content to be in neighborhoods that are up to 70% unlike them, but when a neighborhood gets more than 70% unlike them, they start to move. If we run that model, we see that we get very segregated neighborhoods. Now, when Schelling originally did this work, he did it with checkerboards and nickels and dimes and quarters, and it took many, many months to achieve these results. With agent-based modeling we can see how quickly we get this kind of segregation; a sketch of the rule appears below as well.
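Here is a minimal sketch of the wolf and sheep rules described above, not the full Wolf Sheep Predation model from the Models Library (which also adds reproduction and, in one version, grass). The breed names follow the standard model, but the initial counts, the starting energy values, and the wolf-gain-from-food slider are illustrative assumptions:

```netlogo
breed [ wolves wolf ]
breed [ sheep a-sheep ]
turtles-own [ energy ]

to setup
  clear-all
  create-sheep 100 [ setxy random-xcor random-ycor set color white set energy 5 ]
  create-wolves 30  [ setxy random-xcor random-ycor set color gray  set energy 10 ]
  reset-ticks
end

to go
  ask turtles [
    rt random 50
    lt random 50
    fd 1                                   ;; wander randomly
    set energy energy - 1                  ;; moving costs a little energy
  ]
  ask wolves [
    let prey one-of sheep-here             ;; a wolf that runs into a sheep eats it
    if prey != nobody [
      ask prey [ die ]
      set energy energy + wolf-gain-from-food   ;; wolf-gain-from-food is assumed to be a slider
    ]
  ]
  ask turtles with [ energy < 0 ] [ die ]  ;; die when energy dips below zero
  tick
end
```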
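And here is a similarly minimal sketch of the Schelling rule, using the %-similar-wanted slider mentioned above. The eight-patch definition of a neighborhood, the red and green colors, the roughly 70% occupancy, and moving to a random empty patch are simplifying assumptions for this sketch, not necessarily what the Segregation model in the Models Library does:

```netlogo
to setup
  clear-all
  ask patches [
    if random 100 < 70 [                          ;; fill roughly 70% of patches with agents
      sprout 1 [ set color one-of [ red green ] ]
    ]
  ]
  reset-ticks
end

to go
  ask turtles [
    let my-neighbors turtles-on neighbors         ;; agents on the eight surrounding patches
    if any? my-neighbors [
      let similar count my-neighbors with [ color = [color] of myself ]
      ;; unhappy if fewer than %-similar-wanted percent of my neighbors share my color
      if (similar / count my-neighbors) * 100 < %-similar-wanted [
        let empty-spot one-of patches with [ not any? turtles-here ]
        if empty-spot != nobody [ move-to empty-spot ]
      ]
    ]
  ]
  tick
end
```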
Now, this kind of use of agent-based modeling in social science used to be quite controversial, and perhaps still is a little bit. Some people might argue: well, people are not like ants, they don't follow simple rules, and in this case, they might say, prejudice is surely the important factor driving segregation. But Schelling's point was: if our social goal is to remove, to get rid of, housing segregation, then it will not be sufficient to use prejudice alone as a lever, because as long as there is still a little bit of preference for being with your own kind, the segregation will still ensue.

So those are three examples of using NetLogo to represent and model somewhat complex phenomena. I want to point out that there are complex equations that usually govern the way these things are modeled. Fire spread, for example, is traditionally modeled with a fluid-flow equation and a heat equation, which are partial differential equations, yet in NetLogo it can be represented with simple rules and simple code. Similarly, the predator-prey case is traditionally described by a pair of differential equations, and again it can be represented with simple code. So, to conclude, the agent-based modeling perspective is that these large-scale macro patterns in nature and society are usually, perhaps always, the result of the interaction and accumulation of the behavior of large numbers of elements, each of which has its own rules of action and interaction. To understand many of the phenomena in the world, we can model and simulate them as elements obeying a few simple rules. So, thank you very much, and I know that you will enjoy Bill's class.