For Israeli and Goldenfeld
it's all about taking
one of two possible paths
and getting to the same place
One path says
Okay, look I have a fine-grained description
of the system
I'm going to evolve it forward
with my fine-grained rule and at the end
I'm going to simplify the answer
I'm gonna say look yes
I mean I kept track of all these details
but in fact
you know what?
I only need to know some of the output
I only need to know some of the final state of the system
So you can think of that as sort of walking along like this and then projecting up
Another way you can do it though
right, is you can say look, I've got this fine-grained description
but I know I don't really care so much about this I'm going to project that down.
Here's my fine-grained initial condition
I'm going to project that down to a simpler
description and then I'm going to use a new rule
that allows me to evolve
that simplified description forward.
So there are two paths
and if you've done it right,
they'll get you to the same place
evolve the fine-grained
system forward and project or
project and evolve the
coarse-grained description forward.
A mathematician would say
that these two operations
right, the operation of evolving forward
and the operation of projecting up commute
you can do one or the other
in either order,
and you'll get the same answer
A then B, projecting then evolving, is the same as B then A,
evolving then projecting.
So let's see how this plays out with a particular example.
Again, I'll just take one from their paper.
This is rule 105. Rule 105 is quite similar
to rule 150 which you've seen before.
It takes the XOR of the three pixels
above the pixel in question
and then inverts the result.
So the only difference between
rule 105 and rule 150 is that final inversion.
Another way to think of that is
equivalently, the output is black when
there's an even number of black cells above.
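To make that concrete, here's a minimal Python sketch (my own code, not from the paper) checking that all three descriptions of rule 105 agree: XOR-then-invert, reading bit (4l + 2c + r) of the number 105, and "black when an even number of the cells above are black":

```python
def rule_105(l, c, r):
    """Rule 105: XOR the three cells above, then invert the result."""
    return 1 - (l ^ c ^ r)

# The same rule read off as bit (4l + 2c + r) of the number 105,
# and as "output black (1) iff an even number of the cells above are black".
for l in (0, 1):
    for c in (0, 1):
        for r in (0, 1):
            assert rule_105(l, c, r) == (105 >> (4 * l + 2 * c + r)) & 1
            assert rule_105(l, c, r) == (1 if (l + c + r) % 2 == 0 else 0)
```

The bit-reading form is the standard Wolfram rule-numbering convention, which is why the final inversion turns the bit pattern of 150 into that of 105.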
alright
So now you know what we have to do, right?
The first thing is we're going to consider
not the final state of one pixel
but the final state of two pixels.
And we're going to ask what happens
not when you take one time step
but in fact when you take two time steps.
And that means that those two final pixels
will depend on a group of six pixels
two time steps previously.
And now we'll consider those pairs
of pixels to be the supercells.
So you have a big supercell here, which is two pixels
and takes four possible states
and you have three supercells up here.
So that's our f hat
What we have to do now is
find a combination, a projection
p that takes that supercell
and summarizes it,
simplifies it.
It maps each of those four possible states down to one of two possible states
We have a projection p and we want to
find an evolution operator g
that allows us to evolve forward
those projected down superstates.
So the p is what takes you from the
fine-grained descriptions up
to the coarse-grained description
and that g is what's going to take you
between two coarse-grained descriptions
at different times.
So you can either go
f, f, f, f, p or
p, g, g
right
For every two times you iterate f you're
of course only going to iterate g once,
and in this simple example
we'll just do the case where you skip one step
and so you have supercells
of size two
so
Fortunately, it turns out
it is possible to find a p and a g that enable that diagram to commute, and here it is in the case of rule 105.
Right, in this case the projection rule says: look, if the supercell has one cell
that's black and one cell that's white, make it white; if
both cells are white or both cells are black,
then make it black.
It's sort of like an XOR rule
itself, in fact.
Actually, it looks a little bit like an edge detector:
it says, look, if there's a difference within the supercell, mark it one way;
but if there's no difference, if the supercell
is homogeneous, mark it the other way.
If you use that projection operator, then it turns out that your g,
which of course now operates on binary values, because you've projected the four possible supercell states
down to a cell with only one of two possible states,
and that's what g operates on,
that g evolution operator actually turns out to be rule 150.
So what we've shown
is that it's possible to find a non-trivial coarse-graining,
an interesting one,
and an evolution operator that's still within the space of
cellular automata, an evolution operator that enables that diagram to commute.
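Here's how that commuting diagram can be checked by brute force, in a minimal Python sketch of my own (assuming periodic boundaries and an 8-cell ring; `step` and `project` are names I've introduced, not the paper's):

```python
import itertools

def step(state, rule):
    """One synchronous update of an elementary CA on a periodic ring."""
    n = len(state)
    return tuple((rule >> (4 * state[i - 1] + 2 * state[i] + state[(i + 1) % n])) & 1
                 for i in range(n))

def project(state):
    """The projection p: XNOR each pair, so mixed pairs -> 0, uniform pairs -> 1."""
    return tuple(1 - (state[i] ^ state[i + 1]) for i in range(0, len(state), 2))

# The two paths around the diagram: evolve f twice then project,
# versus project then evolve g once.
for bits in itertools.product((0, 1), repeat=8):
    assert project(step(step(bits, 105), 105)) == step(project(bits), 150)
print("p after two steps of rule 105 equals one step of rule 150 after p")
```

The check passes for every one of the 256 initial conditions on this small ring, which is exactly the statement that the diagram commutes.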
And so now just as we were able to talk about different kinds of
Markov chains coarse-graining into each other
we're now able to talk about how
rules coarse-grain into each other.
And in fact, for a non-trivial projection operator, rule 105
coarse-grains into rule 150.
Here's what it looks like.
In the top you can see the
fine-grained level description
and at the bottom you can see
the coarse-grained level description.
At the top there you can see
that we have the smaller pixels
and those smaller pixels are both
small in the x direction along this axis here
and smaller in the time direction
than in the coarse-grained case.
The coarse-grained case of course
lumps pairs of states into one, and then in fact the jumps are now larger:
there are two time steps instead of one.
And by looking at the comparison
between these two
you can sort of see what's going on, right.
First of all of course
the coarse-grained description is
capturing something interesting
about the fine-grained description, right.
We still have this idea that
these triangles are sort of
these little perturbations that
we begin with that lead to these expanding waves,
right, we still get that kind of wave-like texture,
these sort of propagating spaces that
have kind of internal structure.
But you can also see that
we are missing things too, right?
So if you look at those two
triangles at the fine-grained level
one of them is sort of darker than the other
But in fact when we coarse-grain, the difference
between those two triangles goes away.
So somehow rule 150,
when we evolve it forward
operating on our coarse-grained descriptions,
has thrown out some interesting features of rule 105.
another obvious feature here that
distinguishes rule 105 from rule 150
is that we lose that kind of zebra
stripe pattern
and that's of course because if
a pair of squares is both white
or both black,
the projection operator maps them both
to a square that's black.
So we've lost some of the structure
both in the sort of places
where those expanding,
propagating triangle waves have
sort of reached,
and also within the triangles themselves.
This gives you a little bit of a better sense;
now, as we'll see, it's not always the case
that the picture you get
when you coarse-grain
looks
similar in some important respects
to the fine-grained description.
This is a particularly elegant example
of how we're able to capture something about the rule.
But of course not everything
we can't really capture everything of course
because that projection operator
is a lossy compression,
it throws out information.
And for rule 105 it really matters
whether everything is white or everything is black,
but in fact when
we do the projection,
it maps both of those cases to the same state.
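You can see that information loss directly in the projection's truth table (a tiny sketch; the function name is mine):

```python
def p(a, b):
    """Edge-detector projection: uniform pair -> black (1), mixed pair -> white (0)."""
    return 1 - (a ^ b)

assert p(0, 0) == p(1, 1) == 1  # all-white and all-black pairs collapse to the same state
assert p(0, 1) == p(1, 0) == 0  # mixed pairs map to white
```

Once both uniform pairs land on the same coarse state, no coarse evolution rule can recover the distinction between them.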
so
Israeli and Goldenfeld hacked and they
hacked and hacked and hacked
and they looked at all 256 rules.
And they tried to figure out
how, or the extent to which, one rule
could coarse-grain into another.
So these arrows here
show you where it's possible
to find a projection operator
and an evolution operator that
allow one rule set to map to another
upon that coarse-graining, and
in fact they consider not only
supercells of size 2
but also size 3 and size 4,
and the computation of this starts getting really hard
because there's so many different kinds
of projection operators you can use
and there's so many different possible
evolutions that you can pick
that you start to run out of time;
it gets exponentially hard to find a good projection operator,
exponentially hard to
search the space.
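For supercells of size 2 the search is still tractable, and a brute-force version might look like the following (a sketch under my own assumptions: periodic 8-cell rings and two fine steps per coarse step; passing this check on a small ring is a necessary condition, not the paper's full verification):

```python
import itertools

def step(state, rule):
    """One update of an elementary CA on a periodic ring."""
    n = len(state)
    return tuple((rule >> (4 * state[i - 1] + 2 * state[i] + state[(i + 1) % n])) & 1
                 for i in range(n))

def coarse_grainings(fine_rule, n=8):
    """Find (projection table, coarse rule) pairs for which projecting after
    two fine steps matches one coarse step, on every length-n initial condition."""
    states = list(itertools.product((0, 1), repeat=n))
    two_steps = {s: step(step(s, fine_rule), fine_rule) for s in states}
    found = []
    # A projection is a truth table over the 4 possible supercell states.
    for p_table in itertools.product((0, 1), repeat=4):
        if len(set(p_table)) == 1:
            continue  # constant projections commute trivially; skip them
        def project(s):
            return tuple(p_table[2 * s[i] + s[i + 1]] for i in range(0, len(s), 2))
        pairs = [(project(s), project(two_steps[s])) for s in states]
        for g in range(256):
            if all(step(coarse, g) == target for coarse, target in pairs):
                found.append((p_table, g))
    return found

# Rule 105 should coarse-grain to rule 150 under some non-trivial projection.
assert any(g == 150 for _, g in coarse_grainings(105))
```

Even here you can see the combinatorics the lecture describes: the candidate projections and candidate evolution rules multiply, and for larger supercells the projection tables alone grow exponentially.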
There's only a partial map, but what
you can see here is for example
in the bottom the result that we just
talked through a little bit laboriously
which is the fact that it's possible
to find a projection operator
that takes you from
evolution rule 105 to evolution rule 150.
By the way one of the things you
can notice from this graph
is that it's clear that
Israeli and Goldenfeld
haven't actually found every possible
coarse-graining relationship.
And that's because there's a feature
this network should have
that doesn't fully show up.
And that's transitivity: if A coarse-grains into B,
if A renormalizes into B,
that is, if it's possible to find a projection
and evolution operator that take A to B,
and B renormalizes into C,
if it's possible to find a projection that takes B into C,
then it should also be possible
to renormalize A into C.
Of course, now you're going
to be coarse-graining twice,
and it's harder, of course, for
Israeli and Goldenfeld to find those,
but if you look at this chart
what you should see
is for example the fact that not only
does rule 23 coarse-grain to rule 128
and not only does rule 128
coarse-grain to rule 0
but it also should be the case
that it's possible for rule 23
to coarse-grain all the way down to rule 0
just by doing two projections
and zooming out even further.
That said Israeli and Goldenfeld did a pretty good job
looking at an enormous number of possible
relationships between all of these possible rulesets
And I find these diagrams quite compelling.
They tell you something really complicated,
really interesting about how
deterministic rules
and deterministic projections
map into each other.
One of the things that you'll see
from that network
is that not only does rule 105
coarse-grain to rule 150,
but in fact rule 150 coarse-grains into itself.
So the pretentious way to say this
is that rule 150 is a fixed point
of the renormalization group.
With that projection operator
you actually take rule 150
into a zoomed-out version of itself.
You sort of skip a step,
you project down the supercells
and you recover the same rule.
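As a quick sketch, the edge-detector (XNOR) projection from the rule 105 example appears to work here too; assuming that projection and periodic boundaries (this is my own verification, not necessarily the paper's construction), two steps of rule 150 followed by projection match one coarse step of rule 150 itself:

```python
import itertools

def step(state, rule):
    """One update of an elementary CA on a periodic ring."""
    n = len(state)
    return tuple((rule >> (4 * state[i - 1] + 2 * state[i] + state[(i + 1) % n])) & 1
                 for i in range(n))

def project(state):
    """Edge-detector projection: uniform pair -> 1, mixed pair -> 0."""
    return tuple(1 - (state[i] ^ state[i + 1]) for i in range(0, len(state), 2))

# Fixed point: the coarse rule equals the fine rule.
for bits in itertools.product((0, 1), repeat=8):
    assert project(step(step(bits, 150), 150)) == step(project(bits), 150)
```

The loop completing without an assertion error is the fixed-point statement: rule 150 coarse-grains into itself under this projection.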
Now it's important to notice
there's a subtlety here, right.
It doesn't mean that
the image itself is self-similar;
it doesn't necessarily mean that rule 150
is kind of fractal in some interesting way.
Because the coarse-graining
may not be
the kind of coarse-graining
that just simply zooms out.
Consider for example the projection
we had going from rule 105 to 150.
That wasn't a simple decimation in the way
that we did on the Alice picture, for example,
at the beginning of this renormalization module.
In that case, right, we were renormalizing Alice.
We took her picture,
we looked at little packages of cells,
and we just took one of the values to define
the value of that larger grid cell,
the supercell, in the Alice case.
But if you remember, the
rule 105 to rule 150 projection
that worked in that case
was actually an edge detector:
if the supercell was all white
or all black,
it got mapped to black.
So it doesn't necessarily mean
that if you kind of fuzz
rule 150 it still looks like rule 150.
It really depends upon the details
of that projection operator.
That said, in fact, you might
think of it another way:
the rule 150 is a fixed point
of the renormalization group
with potentially a much more interesting projection
than a simple decimation.