The logistic equation with r equals 4 isn't just aperiodic;
it also has sensitive dependence on initial conditions, the butterfly effect.
In order to illustrate this, I'll need to use a program similar to the one we've already used
to plot logistic equation orbits.
There is a link to this program down here beneath me,
underneath the embedded YouTube video on the Complexity Explorer page.
As before, if you click on that, a new window should appear
and the program will be in it.
I've also made a separate page with links to the two programs
it's over here, somewhere on the navigation bar on the right.
I think it's section 3.7, it might be 3.8; it's called 'Logistic Equation Programs'.
There is only one item in this section, and it's a text page with links to these two programs.
So, this way you can find them more easily and you don't have to hunt
through the various video pages.
In any event, I will use the second program to illustrate the butterfly effect.
So, here is a program that will let us investigate sensitive dependence on initial conditions.
Before I consider the parameter 4, that is the one we are interested in,
let's go back and consider the value 2.4,
and for this value, as I think we have seen before, there is a stable attracting fixed point.
So, as in the other program, we get to choose the number of iterations displayed.
I'll choose 25.
And now, rather than one initial condition, we can choose 2 initial conditions.
And it will plot two different time series on the same axis.
So, let's choose two, say 0.2 and 0.45.
Two different initial conditions.
And I click here and make the time series plot.
And, so, let's see:
What I call the second initial condition, 0.45, that's in green,
and the first initial condition, 0.2, that's in purple.
And these two initial conditions start kinda far apart, 0.25 apart,
but very quickly they both get pulled to the same attracting fixed point.
That's what it means to be an attractor, it pulls in your nearby orbits.
OK. So, there is another plot that this makes, and we can see that if we scroll down
here, and, let me see if I can get both of these plots in at the same time.
Ok. So, the blue curve is the difference between the green and the purple curve.
So, it tells us how far apart the two orbits are.
The two orbits start 0.25 apart, or -0.25.
I guess it is set up to do purple minus green,
but the sign, if it is positive or negative, it doesn't really matter
for what I am trying to illustrate here.
So, they start 0.25 apart, and then very quickly they are just about 0 apart.
Because the purple and green orbits, or itineraries, are right on top of each other.
So, again, the blue curve is purple minus green.
It's the difference between the two, the separation between these two orbits.
And, because it's attracting, these two orbits get closer together, this blue curve gets closer to 0
in absolute value.
Lastly, as before, this program will also output some numbers.
So, I call the two time series x and y. Here x starts at 0.2, then 0.384 and so on,
and it goes to the attracting fixed point at 0.5833,
and y meets a similar fate, it ends up at 0.5833, and this is the difference between these two,
it's x minus y, starts off at -0.25, numbers go positive and negative, but they are getting
closer to zero.
And, by the time we get here,
at least with the precision of the computer, the two orbits are exactly the same.
So, this is just another way of seeing that we have an attracting fixed point.
It is another way of visualizing what it means to be an attractor.
Nearby orbits are pulled closer together, and that's what this blue curve is showing.
The blue curve is getting closer and closer to zero.
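What the program is doing here is easy to reproduce. Below is a minimal Python sketch of the same experiment (my own sketch, not the course's program; the variable names are mine): iterate the logistic map f(x) = r x (1 − x) from the two initial conditions and track the difference between the orbits.

```python
# Iterate the logistic map f(x) = r*x*(1-x) for two initial conditions
# and track the separation between the two orbits.

def logistic_orbit(x0, r, n):
    """Return the orbit [x0, f(x0), f(f(x0)), ...] of length n + 1."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(r * orbit[-1] * (1 - orbit[-1]))
    return orbit

r = 2.4
x = logistic_orbit(0.20, r, 25)       # first initial condition (purple)
y = logistic_orbit(0.45, r, 25)       # second initial condition (green)
diff = [a - b for a, b in zip(x, y)]  # the "blue curve": x minus y

for t in (0, 1, 5, 25):
    print(f"t={t:2d}  x={x[t]:.6f}  y={y[t]:.6f}  x-y={diff[t]: .2e}")
```

The nonzero fixed point of the logistic map is 1 − 1/r, so for r = 2.4 both orbits settle onto 1 − 1/2.4 ≈ 0.5833, matching the numbers in the program's output, and the difference shrinks toward zero.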
Let's now try a similar experiment, but for an r value of 4.0
Remember, this was the r value that gave aperiodic, or non-repeating behavior.
So, I'll go back to the top.
I'm gonna change this to 4.0
And now, I'll pick the two initial conditions quite close together:
0.2 and 0.21.
So, let's make the time series plots
and see what happens.
There we go, OK.
So, we have one initial condition, or one itinerary in green, and one in purple.
So, notice that the two itineraries, the two orbits, are pretty close for about 4 time steps.
By here they are at least 0.1 apart, though, and they start to get very different
by time step 5.
And by the time we are at 6, or 7, or 8,
they seem completely uncorrelated; they are each doing their own thing.
And who would have guessed that they started off so close together?
We can see a similar thing if I plot the blue curve, that's the difference between the two.
Let me shrink things down so they fit on the screen.
So, the blue curve is the difference between the purple and the green curves, it is how far apart
these two trajectories are, these two orbits are, as a function of time.
The difference starts off small, initially just 0.01, and it remains small, has a few little bumps,
but around 5 the difference spikes up.
It looks like it's actually greater than 0.25. And then the difference wiggles around.
Sometimes it's positive, that means that purple is larger than green; sometimes it's negative,
that means that green is larger than purple.
But the main thing is that it starts small, and then becomes
large, either positive or negative, quite quickly.
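The same kind of sketch shows the divergence at r = 4. This is again my own minimal version, not the course's program, with r set to 4.0 and the two initial conditions moved close together, to 0.2 and 0.21:

```python
# At r = 4 the logistic map is chaotic: two orbits that start only
# 0.01 apart separate within a handful of time steps.

def logistic_orbit(x0, r, n):
    """Return the orbit [x0, f(x0), f(f(x0)), ...] of length n + 1."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(r * orbit[-1] * (1 - orbit[-1]))
    return orbit

r = 4.0
x = logistic_orbit(0.20, r, 20)
y = logistic_orbit(0.21, r, 20)
diff = [a - b for a, b in zip(x, y)]  # the "blue curve"

for t, d in enumerate(diff):
    print(f"t={t:2d}  x-y={d: .4f}")
```

Printing the difference column shows the pattern described above: it starts at 0.01, stays fairly small for the first few steps, and then jumps to a large value, after which it wiggles between positive and negative.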
So, to make this example a little bit more concrete, let's go back to thinking about the logistic equation
as the model for population of rabbits on some island.
And let's imagine that the true population of rabbits on this island is 0.2.
Remember that just means that it's 20% of the way to the apocalypse number, the annihilation number.
And let's imagine, moreover, that the only thing that determines the future of the rabbits
is the logistic equation.
Again, nobody thinks that the logistic equation really controls real rabbit populations,
but, let's just, as a thought experiment, suppose that was the case.
And so, then, the first initial condition, I can interpret that as the reality, what's actually happening,
and what will happen.
And the second initial condition,
I will think of that as a measured value. So maybe we send some folks to this island, and they count
all the rabbits, it's not easy to do, and they get a value of 0.21
expressed as a fraction of the annihilation population.
So, this rabbit research team has overcounted the rabbits by a little bit, totally understandable,
rabbits kinda all look alike, they hop around, they have counted some of them a couple of times
instead of just once.
No big deal.
So, let's see, so now in this scenario, the first initial condition which is purple, it is the reality,
what actually happens on this island, this very simple island, and the green curve is our prediction.
And we've measured pretty well, we know the rule exactly that determines the rabbit behavior
so, the first couple of years our prediction is pretty good, but then the prediction gets bad here,
even worse by year five, by year five we drastically underestimate the number of rabbits.
In this context, the blue curve can be thought of as prediction error.
It's the difference between the actual behavior of the rabbits and our predicted behavior of the rabbits
based on the initial condition that we measured.
So, at this point I might ask, because this is kinda disappointing: how can we predict the rabbits better?
We would like to be able to say, with more certainty,
what they are doing past 5 or 6 years, or generations, or whatever.
And so, since we know the rule that determines them exactly,
the thing we need to do in order to predict better is to measure the number of rabbits
more accurately.
So, we send the team back and we say, alright,
be really really careful this time.
We need as precise a measurement of the rabbits
as possible, as close to the true value as possible.
And let's say, let's just imagine, that they come really really close.
So, they don't quite get 0.2 exactly; they get 0.2000001. I'll put one more zero there.
So the measurement is off by just a couple of parts in a million.
So, it's easy to just enter those numbers in the computer, but in reality doing a measurement
that accurate is almost impossible. Certainly, I think,
for any population it's impossible to do a census to one part in a million. Probably even for people,
most certainly for rabbits.
And it's hard to measure things this accurately
even in a physics, or chemistry lab.
It's possible, but it's not an easy thing.
OK, so our measurement now is about a million times more accurate.
So, presumably, we'll have better predictions; the predictions that we'll be able to make with
this initial condition in the model will be better.
So, let's see if that's indeed the case.
So I'll click on 'Make the time series plot'
and indeed, the two curves are right on top of each other.
The green curve covers up the purple curve, for quite some time.
We can go, maybe all the way out to almost 20, let's say.
By the time we get to 20, the two curves start to look quite different.
So, our prediction is great for about 18 to 20 generations, but it breaks down and is wrong after that.
Let me plot a few more iterations, I'll go until 40, see what that looks like.
So, the prediction is great, the green prediction is right on top of purple reality
until about year 20, at which point the two curves, prediction and reality, diverge and appear completely
unrelated to each other.
Let's look at the blue plot for this.
So, the blue plot is the prediction error.
It's the difference between the two curves,
it's the difference between our prediction and the reality.
So, the difference is very very small, it's near zero, and then around 16 or 17 it starts to grow
just a little bit.
And then it explodes near 19, 20, 21,
and becomes very large, and we have huge error.
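This run, too, is easy to sketch. Here is my own minimal version, using 0.2000001 as the measured value (with the extra zero mentioned above, the separation would just take a few more steps to appear):

```python
# At r = 4, shrink the initial difference to about one part in a million
# and watch how long the two orbits stay together.

def logistic_orbit(x0, r, n):
    """Return the orbit [x0, f(x0), f(f(x0)), ...] of length n + 1."""
    orbit = [x0]
    for _ in range(n):
        orbit.append(r * orbit[-1] * (1 - orbit[-1]))
    return orbit

r = 4.0
x = logistic_orbit(0.2, r, 40)        # "reality"
y = logistic_orbit(0.2000001, r, 40)  # the careful team's measurement
diff = [a - b for a, b in zip(x, y)]  # prediction error

# First time step at which the orbits are more than 0.1 apart.
t_cross = next(t for t, d in enumerate(diff) if abs(d) > 0.1)
print("orbits separate by more than 0.1 at step", t_cross)
```

Since |f'(x)| ≤ 4 on [0, 1], the separation can grow by at most a factor of 4 per step, so it is guaranteed to stay below 0.1 for at least the first 9 steps; in practice it takes roughly 20 steps to blow up, matching the plot.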
So, this is a way to visualize the phenomenon of sensitive dependence on initial conditions,
often abbreviated as SDIC.
This is also known as the butterfly effect.
The idea is that very small differences in initial conditions,
here a couple of parts in a million,
get magnified and become very large.
So, the blue curve starts at 0 and becomes large, or non-zero, positive or negative.
The green and purple curves start right on top of each other, and then they spread apart.
The logistic equation is deterministic.
It's just an iterated function, an action repeated
again and again and again.
So, it ought to be completely predictable.
And, in some sort of sense, it really is.
However, in order to make accurate predictions,
one needs an exceedingly accurate value for the initial condition.
So a small error, again as we've seen, a tiny, tiny difference, becomes very, very large very quickly.
So, it's not the case that it's unpredictable at all, but because it has this property of
sensitive dependence on initial conditions, the starting point matters so much that it's,
for all practical purposes, impossible to do long-term predictions.
Here's another way to perhaps think about this. Let me go back for a moment to this example.
So, here 0.2 and 0.21.
And the prediction that we are able to do here,
well, the prediction doesn't last very long;
let's be generous and say it lasts for
4 time steps, to the fourth iterate.
And after that, the prediction is so bad as to be
completely useless.
Actually that sort of sounds like a weather forecast,
not coincidentally.
In any event, we can predict out 4 timesteps,
4 days, let's say, or 4 generations.
So, if we measure more accurately, we should
be able to make better predictions.
And, indeed, that's the case.
So I will go back to this.
So, now, I will measure more accurately, and, again being generous,
let's say we can go out to 20.
And after 20 time steps,
the prediction becomes essentially worthless.
So, on the one hand, yes, more accurate measurements do lead to more accurate
and longer term predictions.
However, in order to improve our prediction time
by a factor of only 5 we had to improve our measurement by a factor of a million.
So, we have to work a million times as hard to get a result that is only five times better.
So, yes, it's predictable, but the amount of work we have to do to predict
increases exponentially with how long we want to do the predictions for.
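There is a standard way to quantify this trade-off. For r = 4 the logistic map's Lyapunov exponent is λ = ln 2, a classical result, which means a tiny initial error grows on average like δ₀·e^(λt): it roughly doubles every step. So the prediction horizon, the time until the error reaches some tolerance, grows only logarithmically in the measurement precision. A back-of-the-envelope sketch (the tolerance of 0.1 is my own assumption, chosen to match the plots):

```python
from math import log

def horizon(delta0, tolerance=0.1, lam=log(2)):
    """Time steps until an initial error delta0, growing on average
    like delta0 * exp(lam * t), reaches the tolerance."""
    return log(tolerance / delta0) / lam

print(horizon(0.01))  # measurement error 0.01 -> about 3.3 steps
print(horizon(1e-7))  # measurement error 1e-7 -> about 19.9 steps
```

This matches the two experiments: a measurement error of 0.01 buys only a few good time steps, while improving the measurement by a factor of roughly a million stretches the horizon to only about 20, because each extra factor of 10 in precision buys a fixed, small number of additional doubling times.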
So, this is another way of seeing or thinking about sensitive dependence on initial conditions.
I'll do another example, or talk through another scenario and then I'll define
sensitive dependence on initial conditions more carefully.