So in the first section you had a brief introduction to cellular automata. There's a lot more to say about them, but I at least got a chance to give you a taste of the different kinds of patterns that the different kinds of rules can produce, and even a little bit of a sense of the magic of Rule 110, which will come back a little bit later. For now, though, what I'm going to talk about is how we can think about coarse-graining a cellular automaton.
When we go from the Markov chains of the previous section to the cellular automata here, there's going to be one important change, and that change affects how we think about the renormalization process. Go back to the finite state machines, to the Markov chain models. If you have a system that you observe every, let's say, one-second time step, and then you decimate that system down, you coarse-grain it, you now observe the system every two time steps. In the original model, to get from t+1 to t+2 you needed to know your position among the N states, you needed to know which state you were in, and once you knew that you could figure out which state you'd be in at time t+2. Similarly, to go from time t to t+1 you just had to know which state you were in at time t. It's a Markov chain, and what that means is that you have a finite and fixed number of states over time.
When we came to coarse-grain that model, we could still keep the same relationship: in order to know where you'd be at time t+2, all you had to do was know your position among the N states of the new, updated model at time t. What that meant was that we stayed within the same model class when we dropped some of the time information. If we had an N-state Markov chain at the finer-grained timescale, we could produce an N-state Markov chain that skipped every other step.
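To make that concrete, here's a minimal sketch in Python (the transition matrix is a made-up example): squaring the one-step transition matrix gives the two-step transition matrix, and the result is still a stochastic matrix of the same size, so the decimated model is still an N-state Markov chain.

```python
import numpy as np

# A made-up 3-state Markov chain: T[i, j] = P(state j at t+1 | state i at t).
T = np.array([[0.7, 0.2, 0.1],
              [0.1, 0.8, 0.1],
              [0.3, 0.3, 0.4]])

# Observing only every other time step amounts to using T @ T.
# T2 is again a 3x3 stochastic matrix, so the coarse-grained model
# is still a 3-state Markov chain: we stay in the same model class.
T2 = T @ T
print(T2.sum(axis=1))  # each row still sums to 1
```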
This changes completely when we come to the cellular automata. If we take the grid evolution algorithm that we introduced you to, what you saw was that to know the state of the system here at time t+1, you needed to know the state of the three grid squares just above it. And then to know the state of that point at time t+2, again, you only needed three pieces of information. But imagine that we dropped, we coarse-grained, we decimated out the t+1 data. What do you have to know about the state of the system here at time t in order to know the value of this point here at time t+2?
As ???, one of the people I learned this technique from, once said, this is sort of like what a physicist would call a light cone. As you go further back in time, in order to know the state of this point here, you need to know the state of this point here; but in order to know the state at that point, you have to know the state of the system at these three points here. And similarly, to know the state of the system at this point here, you have to know the state of the system here and here as well. And so as you go further and further back in time, as you drop this chunk of data here, if you were to naively try the same trick we did with the Markov chain, which is to come up with a model that works just as well to go from t to t+2, you would find that instead of having a rule that looked like f(x, y, z), your rule would now have to include a, b, c, d and e. You'd in fact have to have five inputs to your function.
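Here's a minimal sketch of that widening light cone, using Rule 110 as the example (the helper names are my own): composing the three-cell rule with itself gives the two-step update, and that composed rule genuinely depends on five cells at time t.

```python
from itertools import product

def rule_table(n):
    """Elementary CA rule n as a lookup from (left, center, right) to the next state."""
    return {(a, b, c): (n >> ((a << 2) | (b << 1) | c)) & 1
            for a, b, c in product([0, 1], repeat=3)}

f = rule_table(110)

def f2(a, b, c, d, e):
    """The two-step update: the state at t+2 as a function of FIVE cells at time t."""
    return f[(f[(a, b, c)], f[(b, c, d)], f[(c, d, e)])]

# For Rule 110, f2 really does depend on the outermost cell,
# so no three-input rule can reproduce the t -> t+2 map.
print(any(f2(0, b, c, d, e) != f2(1, b, c, d, e)
          for b, c, d, e in product([0, 1], repeat=4)))  # True
```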
When you coarse-grain a cellular automaton along the time dimension only, you no longer stay within the same model class. If you said, okay, produce a model that can get you from here to here, that model would not fall within the 256 elementary cellular automata that we introduced you to in the first section of this module.
So, what do we do? Well, there's a trick that was introduced by a pair of scientists, Israeli and Goldenfeld, and I then learned how to make it work in person from ????? and ????? here at the Santa Fe Institute. Here's one story about how to solve the problem that, when you coarse-grain in time, you don't stay within the same model class. What Goldenfeld and Israeli did was ask you to think not about individual grid points at this later time point, but about pairs of grid points, and then to look at the backwards light cone, the space of influence that this pair is subject to.
So, one step back, it's pretty easy: I just have to add one more point here. This is the set of grid points I have to know in order to predict the state of this pair of grid points at time t+2. And if I zoom one step further back, what you can see is that I just have to include an additional grid point here on the far left edge. So these two grid points at time t+2 are influenced by all six of these grid points at time t. What Goldenfeld and Israeli then asked us to do was to consider this pair here as a supercell, a unit, call it, let's say, a-hat, and to group these six cells here into three supercell units: x-hat, y-hat and z-hat.
If you write it this way, then the logic of the fine-grained simulation means that the state of the a-hat supercell is equal to some function, we'll call it f-hat, of x-hat, y-hat and z-hat. So the state of this pair here, whether they're both on, both off, or one is on and the other off, the state of that supercell depends upon the states of these three supercells up here. And this law here is given in the same form as the original law, with one change. The original evolution law for the cellular automaton takes binary variables, ones and zeros; this evolution law takes pairs of ones and zeros. Instead of there being two possible states for each cell, each supercell has four. And while the original rule outputs one or zero, f-hat outputs 00, 01, 10 or 11.
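To make f-hat concrete, here's a minimal sketch in Python, again using Rule 110 and helper names of my own choosing: lay out the six fine-grained cells from the three input supercells, run the ordinary rule for two time steps, and read off the output pair.

```python
from itertools import product

def rule_table(n):
    """Elementary CA rule n as a lookup from (left, center, right) to the next state."""
    return {(a, b, c): (n >> ((a << 2) | (b << 1) | c)) & 1
            for a, b, c in product([0, 1], repeat=3)}

f = rule_table(110)

def f_hat(x_hat, y_hat, z_hat):
    """The supercell rule: three pair-valued inputs in, one pair-valued output out."""
    c = [*x_hat, *y_hat, *z_hat]                           # six cells at time t
    d = [f[(c[i], c[i + 1], c[i + 2])] for i in range(4)]  # four cells at t+1
    return (f[(d[0], d[1], d[2])], f[(d[1], d[2], d[3])])  # the pair a-hat at t+2

# Each argument takes one of four states: (0,0), (0,1), (1,0) or (1,1).
print(f_hat((1, 1), (0, 1), (1, 0)))  # -> (0, 0) for Rule 110
```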
So we're part of the way there to coarse-graining the cellular automaton. But only part of the way: we have a function that depends upon three arguments and spits out one. That's an improvement over where we were before, with a function that takes in five arguments. The problem is that these arguments are no longer binary variables.
So, what do we do? Well, instead of coarse-graining just in time, we can also coarse-grain in space. And what Israeli and Goldenfeld introduced was a projection operator, p. What p does is take the state of a supercell and map it to a binary variable, 0 or 1. Explicitly: there are four possible states the supercell can be in, and the projection operator maps some of them to, let's say, the "1" state and the rest of them to the "0" state.
You can think of this projection operator as something very similar to what we did in the introduction to this module, where I showed you how you could coarse-grain an image, in that case the image of Alice and Dinah, her kitten. Here, for example, we could imagine doing a spatial decimation where we take only the value of the first entry in the supercell. That kind of spatial decimation would lead to a projection operator where both of the states whose first entry is 0, that is 00 and 01, are mapped to 0, and both of the states whose first entry is 1, that is 10 and 11, are mapped to 1.
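As a minimal sketch, here's that decimation projection written out (the name p follows the transcript; the rest is illustrative):

```python
# The spatial-decimation projection: keep only the first entry of the supercell.
def p(supercell):
    first, _second = supercell
    return first

# (0,0) and (0,1) map to 0; (1,0) and (1,1) map to 1.
for state in [(0, 0), (0, 1), (1, 0), (1, 1)]:
    print(state, "->", p(state))
```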