We've been talking about the Lyapunov exponent since
the very first unit of this course. And you remember the
idea. It's the exponent that parametrizes the exponential
growth of the separation between two points on a
chaotic attractor as time goes forward.
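To make that concrete, here is a minimal Python sketch - my own illustration, not part of the course materials - that drops two nearby points into a simple chaotic system, the logistic map at r = 4, whose largest Lyapunov exponent is known to be ln 2, and backs lambda out of how fast their separation grows:

```python
import numpy as np

# The logistic map at r = 4 is a convenient chaotic system whose
# largest Lyapunov exponent is known exactly: lambda = ln 2 ~= 0.693.
def logistic(x):
    return 4.0 * x * (1.0 - x)

rng = np.random.default_rng(0)
sep0, n_steps = 1e-10, 15
rates = []
for _ in range(1000):
    a = rng.uniform(0.01, 0.99)
    for _ in range(20):              # burn-in: land on the attractor
        a = logistic(a)
    b = a + sep0                     # the nearby second point
    for _ in range(n_steps):
        a, b = logistic(a), logistic(b)
    # If the separation grows like sep0 * e^(lambda * n),
    # this recovers lambda for one pair of points:
    rates.append(np.log(abs(b - a) / sep0) / n_steps)

lam = float(np.mean(rates))
print(lam)   # close to ln 2 ~= 0.693
```

Note that n_steps has to stay small enough that the separation remains tiny; once it approaches the size of the attractor, the growth saturates and the estimate degrades.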
Now, there are actually n Lyapunov exponents in
an n-dimensional dynamical system. The convention
is that we call the largest one in this ordering
lambda 1, the second largest lambda 2, and so on
down to lambda n. If the system is chaotic, at
least one of the Lyapunov exponents is positive.
If the system is dissipative, the sum of the Lyapunov
exponents is negative. Lyapunov exponents are
dynamical invariants. That is, you can take an
attractor and deform it, and bend it, and twist it
and as long as that deformation is smooth and
invertible - no tearing, no changing the topology
of the attractor - the Lyapunov exponents will be
preserved. And that's part of why all that state
space reconstruction stuff is so important.
Because it allows you to reconstruct the dynamics
up to diffeomorphism from a scalar time series
data set. And diffeomorphisms are those
transformations that bend and twist without
changing the topology. And that means that you can
compute the Lyapunov exponent of the reconstructed
dynamics and be fairly sure that if you did it right,
that lambda is true of the underlying dynamics as well.
By the way, it also makes sense to think about and
compute Lyapunov exponents of systems that
do not have attractors - non-dissipative systems.
That's outside of the scope of this course, but
you should be aware that it's not only systems
that have attractors that have Lyapunov exponents.
This segment is about how to compute those
exponents. If you know the system equations -
the differential equations - then you can compute
Lyapunov exponents using something called the
variational equations. If you're interested in that,
take a look at the web page for the semester-long
version of this
course that I teach at the University of Colorado
under the "Liz's Written Notes" section and you'll
find some notes on how to do that. The usual
situation, however, is not that you have the equations.
That's extremely rare. Usually you have time series
data measured from the system and you want to
compute the largest Lyapunov exponent. The first step
in the procedure is to perform a delay coordinate
embedding of that data to reconstruct the full dynamics.
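A minimal sketch of that first step looks like this; the embedding dimension m and the delay tau are placeholder choices here, whereas in practice they have to be chosen carefully:

```python
import numpy as np

def delay_embed(series, m, tau):
    """Build the m-dimensional delay-coordinate vectors
    [x(t), x(t + tau), ..., x(t + (m-1)*tau)] from a scalar series."""
    series = np.asarray(series)
    n = len(series) - (m - 1) * tau
    if n <= 0:
        raise ValueError("series too short for this m and tau")
    return np.column_stack([series[k * tau : k * tau + n] for k in range(m)])

# Example: embed a 100-point scalar series with m = 3, tau = 2.
x = np.sin(0.1 * np.arange(100))   # stand-in for measured data
emb = delay_embed(x, m=3, tau=2)
print(emb.shape)                   # (96, 3)
```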
Again, if you do that right, the results are
guaranteed to be diffeomorphic to the true dynamics
and the lambdas are the same. The second step
in the procedure of calculating Lyapunov exponents
from data, after the delay coordinate embedding,
is to operationalize this picture. This is a picture I've
drawn several times now. It's the notion
of the apple and the tennis ball in the eddy.
You drop two points in a chaotic attractor, you
watch where both of them go, and you track the
distance between them, and that distance grows as
e to the lambda t. Now, this is a real challenge when
you're working with data because the data are fixed.
You don't get to drop the points at will or let them
go as long as you want, but rather, you have to work
with what you've got. If, for example, all I had
was the video from the field trip in the first unit -
that is, if I could not go down to the creek and
drop in more apples, but was forced to use only the
information in that video - I could only track the
dynamics of the eddy where it was sampled by the
apple and the tennis ball. How to get traction on that
problem? There are tons of approaches in the
nonlinear dynamics literature: algorithms for
taking a trajectory from a system - that is, a finite
number of points from a system, maybe noisy -
and from that data, estimating the largest positive
Lyapunov exponent. The original one, which is
called Wolf's algorithm, was a direct operationalization
of this picture. It took a trajectory of a dynamical
system, chose a point on that trajectory, looked for
that point's nearest neighbor, watched where both
those points went, tracked the distance between them,
and watched how that distance grew with time.
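Here is a rough sketch of that direct idea - not Wolf's full algorithm, since there is no renormalization yet, just: embed the data, pick reference points, find each one's nearest neighbor, and watch the separations grow. The synthetic data and all the parameter values are my own stand-ins:

```python
import numpy as np

# Synthetic "measured" data: the logistic map at r = 4, whose
# largest Lyapunov exponent is known to be ln 2 ~= 0.693 per step.
N = 3000
x = np.empty(N); x[0] = 0.3
for n in range(N - 1):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

# Two-dimensional delay embedding with delay 1 (for simplicity).
emb = np.column_stack([x[:-1], x[1:]])
M = len(emb)

theiler, K = 10, 5      # exclude temporal neighbors; follow pairs K steps
curves = []
for i in range(0, M - K, 7):                # subsample reference points
    d = np.linalg.norm(emb - emb[i], axis=1)
    d[max(0, i - theiler):i + theiler + 1] = np.inf  # not its own segment
    d[M - K:] = np.inf                      # neighbor needs K future steps
    j = int(np.argmin(d))
    # Track the distance between the two trajectories for K steps.
    dk = np.linalg.norm(emb[i:i + K + 1] - emb[j:j + K + 1], axis=1)
    if np.all(dk > 0):
        curves.append(np.log(dk))           # log separation vs. time

mean_curve = np.mean(curves, axis=0)
lam, _ = np.polyfit(np.arange(K + 1), mean_curve, 1)
print(lam)   # positive, on the order of ln 2
```

The slope of the averaged log-separation curve is the estimate of lambda 1; a positive slope is the signature of chaos in the data.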
If you go to the webpage for the semester-long
version of this course that I teach at CU and scroll
down to the "Liz's Written Notes" section, there's
a set of written notes on that algorithm. Here's
the picture from that set of written notes. As you
can see from this schematic, Wolf's algorithm
tracks the distance between the points, not indefinitely
but only until that distance grows to a certain level
and then it does something called a renormalization
by looking for the nearest neighbor of the endpoint
and then repeating the whole operation. Here's
the algorithm from that and here's the formula
for backing the Lyapunov exponent out of the ratio
of those different lengths. Now, this picture gets
back to an issue that I raised a while back -
that business about how can you have exponential
growth in a bounded object? Back then, I waved
my hands about the answer. Now, you can actually
see that answer. Lambda 1, the largest positive
Lyapunov exponent, captures the average
long-term stretching as you move along the attractor.
That is - kind of the transverse stretchiness as you
walk along that original trajectory. There's a
complication here that arises from the fact that
there are multiple dimensions and as many
Lyapunov exponents as there are dimensions.
Remember though, that if you have a bunch of
exponentials, and you let t go to infinity,
the largest positive one will dominate.
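Here is a quick numerical illustration of that dominance, with made-up exponent values: the late-time slope of the log of a sum of exponentials is set by the largest exponent alone.

```python
import numpy as np

# Hypothetical Lyapunov exponents, largest first - illustrative values.
lams = np.array([0.9, 0.3, -0.5])

def log_sep(t):
    # Log of a separation with one exponential component per exponent.
    return np.log(np.sum(np.exp(lams * t)))

# Slope of the log-separation curve at late times:
slope = (log_sep(50.0) - log_sep(40.0)) / 10.0
print(slope)   # ~0.9: the largest exponent dominates
```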
And what I just said before was that lambda 1
captures the long-term average transverse stretching.
There's an underlying assumption here and in the
other algorithms for calculating Lyapunov exponents
from a time series that can be a little confusing.
Those two black points are both points on the same
trajectory. That is, this guy is where the system is
at some time, t, and this so-called nearest neighbor
is where the system is at some earlier or later time.
So this notion of following them both forward in
time is a little bit weird, but it's completely okay
if your system is autonomous. That is, if the
direction that is dynamically downhill at a given
point is always the same, regardless of when or
how the point got there. That assumption underlies
pretty much all of the methods for calculating
Lyapunov exponents from data, as I said. If it doesn't hold,
that is, if your system is non-autonomous such that
trajectories can go in different directions from the
same state space point at different times, the
Lyapunov definitions and algorithms don't apply.
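To tie the pieces of this segment together, here is a stripped-down sketch of the Wolf-style procedure on synthetic data: embed, find a nearest neighbor, follow the pair until the separation exceeds a threshold, renormalize by picking a fresh neighbor, and accumulate the logs of the growth ratios. It is a simplification - a real implementation would also constrain the replacement neighbor's direction and worry about noise - and the parameter values are my own choices:

```python
import numpy as np

# Synthetic data: logistic map at r = 4, true lambda 1 = ln 2 ~= 0.693.
N = 4000
x = np.empty(N); x[0] = 0.3
for n in range(N - 1):
    x[n + 1] = 4.0 * x[n] * (1.0 - x[n])

# Two-dimensional delay embedding (m = 2, tau = 1).
emb = np.column_stack([x[:-1], x[1:]])
M = len(emb)

theiler = 10       # exclude temporally close "neighbors"
thresh = 0.05      # renormalize when the separation exceeds this
total_log, total_steps = 0.0, 0
i = 0
while i < M - 2:
    d = np.linalg.norm(emb - emb[i], axis=1)
    d[max(0, i - theiler):i + theiler + 1] = np.inf
    d[M - 1:] = np.inf             # neighbor needs a future step
    j = int(np.argmin(d))
    L = d[j]                       # initial separation
    if not np.isfinite(L) or L == 0.0:
        break
    # Follow both points until the separation exceeds the threshold.
    k = 0
    while (i + k + 1 < M and j + k + 1 < M
           and np.linalg.norm(emb[i + k] - emb[j + k]) < thresh):
        k += 1
    if k == 0:
        i += 1
        continue
    Lp = np.linalg.norm(emb[i + k] - emb[j + k])   # final separation
    total_log += np.log(Lp / L)
    total_steps += k
    i += k    # simplified renormalization: re-search from the new position

lam = total_log / total_steps
print(lam)   # roughly ln 2
```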