Our guest spot for Unit 2 is
Dr. Liz Bradley, who is professor of computer science
and also of electrical and computer engineering at the University of Colorado
Boulder. She is a long-term external
faculty member at the Santa Fe Institute. And her research is on nonlinear
dynamics and artificial intelligence and some combinations of the two.
So Liz, I have a couple of questions for you. In our course
we’ve been learning a little bit about nonlinear dynamics and chaos
and I wanted to get your take on a couple of things.
So first could you give an example of how you’ve used tools
from nonlinear dynamics and chaos in your own research
to help understand complex systems?
Sure, Melanie. One of the systems that
most of us use most frequently and never think of as nonlinear
dynamical, let alone chaotic, is a computer.
So you guys are all using computers right now to watch these lectures. Inside
the computer that you’re using are a whole bunch
of transistors and other kinds of things like that, most of which are nonlinear.
And they are certainly dynamical, because the computer is not just sitting
in there completely static. So things in the computer are moving
around. There are electrons moving around through metal and silicon and
it is a nonlinear dynamical system.
Computers many years ago were very simple.
And very predictable, in the sense that the designers would
do something and it would have the desired effect. But that stopped
about ten years ago. The systems
got to be so complicated, or complex I guess, that
a design innovation that was obviously
going to work, quote-unquote, had bad effects.
And they had to recall a whole bunch of chips and that is very expensive.
So we got interested in how to think about that and came up with the notion that
well, a computer is a nonlinear dynamical system, which was kind of a heresy
in the computer performance community, because they think of them -
they model them with stochastic processes.
They think of them as random systems, and the mathematics that they
use to model them implicitly makes the assumption of linearity and time
invariance. And the system is neither linear nor time invariant.
What is inside of them changes over time. We thought about this and
decided that it would be a good idea to use the tools from nonlinear dynamics
to understand computers. And we did that
and it worked out quite well. We were able to show
for example, the Lyapunov exponent - have your students had that yet?
No. Okay, so the rate at which sensitive -- at which small perturbations
grow -- The Lyapunov exponent is a
quantity that parameterizes sensitive
dependence on initial conditions, and a positive Lyapunov exponent means that
small changes grow. We were able to measure the
Lyapunov exponent of computer programs running on computer hardware
and show, for example, that if you run the same program on two different computers,
on one, the performance is chaotic
and on the other, the performance is periodic. Now this is not to say
that the results are chaotic. You get the same results
each time. It’s the performance. So for those of you who know more about computers:
the way that the computer is using its memory
and the way that it is using its processing units changes
depending on how they’re built. And for those of you who are engineers
that’s pretty obvious, but no one’s ever really thought about it using
the tools of nonlinear dynamics, and that really helped, I think, the computer community
come to a better understanding. Now there is
another problem going back to that word heresy. People in
this community again are used to using these linear time invariant
tools which are easy to use. That’s a good reason to use them. They’re easy.
But if the system that you’re wanting to analyze is not
amenable to that kind of analysis, things get much more complicated. So
we found ourselves in the position of coming into
another community and saying: what you’re doing is wrong. Which is never
welcome. And moreover,
the mathematics that we’re offering you is very
hard to learn and doesn’t always work. So
we haven’t been able to clone that work into the
computer systems community literature as much as I wanted.
But that’s a problem with doing work at the divide between fields.
Yeah, so do you see -- when you say that the memory
usage, for instance, is chaotic
do you mean that specifically it’s sensitive
to initial conditions? Yes. If you run the program twice
and you watch a time series plot of how busy
the memory is over the course of time
it will look very very different from run to run. And what changes in the initial conditions?
Oh, it’s -- if you think about a computer, the state variables
of a computer are the contents of every register in the computer, every memory
location and there are other things going on in your computer
so right now, as you’re watching this lecture, you probably
have a browser running, but you probably have other stuff running too and some applications
in the background that are changing some of those memory locations. So those are
the small changes. The butterflies.
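For readers following along in code: the "butterflies" here, small perturbations that grow, can be seen directly in the course's logistic map. A minimal Python sketch (the starting point and the size of the perturbation are arbitrary choices for illustration):

```python
# Two trajectories of the logistic map x -> r*x*(1-x) in the chaotic
# regime (r = 4.0), started a tiny perturbation apart.
r = 4.0

def logistic_trajectory(x0, n):
    """Iterate the logistic map n times starting from x0."""
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_trajectory(0.2, 50)
b = logistic_trajectory(0.2 + 1e-10, 50)

# The separation starts at 1e-10 and grows until it is as large
# as the attractor itself -- sensitive dependence on initial conditions.
for i in (0, 10, 20, 30, 40):
    print(i, abs(a[i] - b[i]))
```

The same qualitative behavior is what the two runs of a program on chaotic hardware show: identical starting code, tiny background differences, wildly different performance traces.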
And were you able to prove that what you’re seeing is chaotic?
Oh yes. Absolutely. Okay, and how do you do that?
We measured the Lyapunov exponent. We calculated it
from time series data and it was positive. And then we did
all sorts of other checks - when lawyers do what they call due diligence, you’ve got
to pound on your case from all directions to make sure that it’s airtight.
We did the equivalent of that with nonlinear dynamics, so I actually believe
the results. Okay, and -- But there’s no proof. There’s no proof
in chaos at all. This is experimental data. Maybe as soon as we quit looking
it went periodic. In our class we looked at the logistic map
and we saw period doubling route to chaos. Do you see something like that in your data?
We have not explored the bifurcations. It’s hard enough
just to characterize the thing at just one parameter value.
But to draw the parallel,
the bifurcation parameter for us is the code.
So if you change the code, you run a different application, that’s what causes the
bifurcations. So if we have an Intel Core i7 blah blah blah
and we run one program, it’s periodic, and we run another program, it’s chaotic.
The bifurcation parameter is the code that you’re running and the hardware that you’re running it on.
But there’s no way to think about changing it smoothly, like you can
change the r of a logistic map smoothly. Hmm. Right. That’s interesting.
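Liz measures the exponent from experimental time series; for the logistic map just mentioned, where the equation is known, the Lyapunov exponent can be computed directly as the average of ln|f′(x)| = ln|r(1 − 2x)| along an orbit. A minimal sketch (the parameter values 3.2 and 4.0 are illustrative choices for a periodic and a chaotic regime):

```python
import math

def lyapunov_logistic(r, x0=0.3, transient=1000, n=10_000):
    """Average of ln|f'(x)| = ln|r*(1-2x)| along a logistic-map orbit."""
    x = x0
    for _ in range(transient):        # discard transient behavior
        x = r * x * (1 - x)
    total = 0.0
    for _ in range(n):
        x = r * x * (1 - x)
        total += math.log(abs(r * (1 - 2 * x)))
    return total / n

# Negative exponent: periodic regime. Positive: chaotic
# (for r = 4 the true value is ln 2, about 0.693).
print(lyapunov_logistic(3.2))
print(lyapunov_logistic(4.0))
```

The sign of the result plays the role Liz describes: positive means small perturbations grow, which is the operational signature of chaos in measured data.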
Yeah, it’s very different. Okay, well let me
switch to another question, which is, what do you
think of as the exciting current directions for the field
of dynamics? What are the open questions?
There’s lots. But one of the really interesting ones lately
has been understanding the formation and
role of what are called Lagrangian coherent structures.
So Tom Peacock at MIT works on
these and he says that -- here’s the analogy he uses to describe them:
Imagine a crowd at a railway station.
Some people will be arriving and some will be leaving. And
they’re kind of going back and forth in between different platforms. The result is chaos
but there’s structure. So if you had a stop
action photo of the Tokyo subway, you’d see this buildup of people and then
they all leave. So there’s a shift during -- it’s an emergent thing
it’s a shifting pattern of borders between groups of people
and of people with different goals. And those borders
those borders of the groups of people are what are called Lagrangian coherent
structures. And he says they’re intangible,
they’re immaterial; they would be undetectable if the passengers stopped moving.
But they are real enough to be treated mathematically.
Could understanding this
affect any policies about --
Yes, absolutely. For example, Tom and others have done work on Lagrangian coherent
structures in Monterey Bay. And if these things are boundaries
between groups of stuff flowing around, they have implications
for the movement of pollutants. So a Lagrangian coherent structure
is a ridge that separates two different parts of water
in the Monterey Bay, and pollutants can’t cross it. And some
of them are beautiful. Tom was down in Australia looking at something called the Morning Glory
Cloud, which I highly recommend that you Google, if you’ve never seen it. It’s this gorgeous
Lagrangian coherent structure that forms in the clouds
over, I think, the middle of Australia someplace.
A complex system is a system with lots of state variables
and they’re coupled, or else things would be
pretty boring. Mm-hmm. Often they’re coupled nonlinearly, sometimes
they’re coupled adaptively, which means the way that they’re coupled changes over time.
And
that’s kind of the setup. And then the thing that makes them at least interesting to me
is this: you could certainly have everything going off in all directions, everything random,
but the thing that really is interesting to me, and what I think turns on
a lot of complexity scientists, is when
the behavior of such a thing only occupies a
subset of the state space. We call that emergence,
for example. We could call that dimensional reduction.
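The dimensional reduction Liz goes on to describe can be made concrete with a toy example: a trajectory recorded in many coordinates that really lives in a low-dimensional subspace, exposed here via the singular values of the data matrix. The 50 observed coordinates and 3 hidden degrees of freedom below are arbitrary illustration numbers, not the dimensions found in the computer study:

```python
import numpy as np

rng = np.random.default_rng(0)

# A "hidden" 3-dimensional trajectory...
t = np.linspace(0, 20, 500)
hidden = np.column_stack([np.sin(t), np.cos(2 * t), np.sin(3 * t)])

# ...observed through 50 coordinates via a random linear mixing,
# the way a computer's many state variables might reflect only a
# few effective degrees of freedom.
mixing = rng.normal(size=(3, 50))
observed = hidden @ mixing          # shape (500, 50)

# Singular values of the centered data: only 3 are non-negligible,
# so the 50-dimensional recording occupies a 3-dimensional subspace.
centered = observed - observed.mean(axis=0)
s = np.linalg.svd(centered, compute_uv=False)
effective_dim = int(np.sum(s > 1e-8 * s[0]))
print(effective_dim)
```

Real systems are rarely confined to an exactly linear subspace, so in practice nonlinear dimension estimates are used, but the principle is the same: count how many directions the behavior actually explores.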
So imagine, going back to the computer: it has
ten to the ninth state variables, or something like that, some tremendous number of state variables.
But it does not
travel around in that ten to the ninth dimensional space. What we found is that it only travels
around in about a twelve dimensional subspace of that. Hmm. And that
is kind of amazing. The same thing happens if you think about bird flocking.
You could think of it that way: any bird could be anywhere. But they don’t do that.
They travel around together. So there are some dimensions, some information, that is gone,
that they’ve kind of packed together. And if you measured it, if you
did an information theory kind of thing -- I don’t know if you’re going to do that in your course -- We are, yeah --
Okay, so the Shannon information theory. You can think about
how information in the system is -- what you need to know to specify
what’s going on. And if each bird could be anywhere,
you would need more information to specify where the flock was
than if they’re all kind of in a V. There’s an information theoretic
way to think about this, too. So all of that is
part and parcel of what I think of as complexity.
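The information-theoretic point, that birds flying in formation take fewer bits to describe than birds scattered anywhere, can be sketched with Shannon entropy over a discretized space. The grid size and flock spread below are arbitrary assumptions for illustration:

```python
import math
import random
from collections import Counter

random.seed(1)

def entropy_bits(positions):
    """Shannon entropy (in bits) of the empirical distribution of positions."""
    counts = Counter(positions)
    n = len(positions)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

cells = 1000   # discretized 1-D space
birds = 200

# Scattered: each bird independently anywhere in the space.
scattered = [random.randrange(cells) for _ in range(birds)]

# Flocked: every bird within a few cells of a common leader.
leader = random.randrange(cells - 10)
flocked = [leader + random.randrange(10) for _ in range(birds)]

# Specifying a scattered flock takes far more bits per bird than
# specifying one that has "packed together."
print(entropy_bits(scattered), entropy_bits(flocked))
```

The flocked entropy is bounded by log2 of the flock's spread (here log2(10), about 3.3 bits), no matter how large the space is, which is the sense in which coherent collective behavior throws information away.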