I'm going to show you three models that we've developed to illustrate the
ideas that we've been learning about Shannon information content.
The first is our Shannon Information Content of Coin Flips. So, to set up, we
see our coin here, and we can either flip a fair coin or flip a biased coin, with
some probability of heads. So I flip a fair coin, the coin flips, it gives me
heads, and it keeps track of the number of heads and the number of tails.
I can flip it any number of times, and it's just flipping at random, and after
I get some collection of these things, I can then calculate the information
content and it shows me what information content it's gotten so far.
So, even though this is a fair coin, we see that we've still gotten five tails
and only two heads, because we've only done seven flips, so we haven't
yet gotten enough statistics to really see that heads and tails each have
a 50/50 chance of coming up. We can also set our biased coin to any
probability we want of heads, and then flip our biased coin and see how
that affects our information content.
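The calculation behind this model can be sketched in a few lines of Python. This is a minimal reconstruction under my own assumptions about what the model computes (the function name and counts are just for illustration): the information content is the Shannon entropy of the empirical heads/tails frequencies, measured in bits per flip.

```python
import math

def information_content(counts):
    """Shannon information content (entropy), in bits per outcome,
    of an empirical distribution given as a list of counts."""
    total = sum(counts)
    h = 0.0
    for c in counts:
        if c > 0:           # skip zero counts: 0 * log2(0) is taken as 0
            p = c / total
            h -= p * math.log2(p)
    return h

# The seven flips described above: 2 heads, 5 tails.
print(information_content([2, 5]))

# A perfectly even sample gives exactly 1 bit per flip.
print(information_content([1, 1]))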
The second information content model is one in which you can measure
the information content of a text, just like we showed briefly in the previous
video. So, I have copied a text from an online site, which gives the
entire "To be or not to be" speech from Hamlet, and if I click on "go", this
shows me how many words there were and the frequencies of the different
words. It only shows a small number of the most frequent words, but what
it's showing is the frequency distribution of those words, along with the
information content. So you can play
with this to see what is the measured information content of various texts
that you can paste in here, and we'll have some exercises on this in
the homework.
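If you'd like to replicate the text model's measurement on your own, a rough sketch in Python (my own hedged reconstruction, not the model's actual code) is to count word frequencies and compute the entropy of the resulting distribution, in bits per word:

```python
import math
import re
from collections import Counter

def text_information_content(text):
    """Estimate bits per word from empirical word frequencies."""
    # Lowercase and keep only word characters; real tokenization
    # choices (punctuation, apostrophes) will shift the numbers slightly.
    words = re.findall(r"[a-z']+", text.lower())
    n = len(words)
    counts = Counter(words)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

print(text_information_content("To be, or not to be: that is the question."))
```

For that opening line this comes out to about 2.92 bits per word: "to" and "be" each appear twice out of ten words, and the other six words once each. Pasting in the whole speech, as in the model, gives a distribution over many more words.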
Our final model looks at the information content in the symbolic dynamics
of the logistic map. Let me show you what I mean by that.
So you might remember our logistic map, from our earlier unit on dynamics
and chaos, and I can set my R to 3.51 and my x_0 to 0.2,
and we get a periodic attractor, and what this is doing is calculating the
symbolic dynamics, which means I've set a threshold of 0.5 here: every
time this dot goes above 0.5 on the y-axis, the system outputs a 1, and every
time it goes below 0.5 the system outputs a 0.
Now, I can look at the information content in that set of messages that
consist of 1's and 0's, and you can think of the message source as the
logistic map at a given value of R. And now, this shows us the information
content of that message source, given these symbolic dynamics.
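As a sketch of what's going on under the hood (my own reconstruction, assuming the model measures single-symbol frequencies; the function and parameter names are mine), you can iterate the logistic map, threshold at 0.5 to produce the 0/1 symbol stream, and compute the entropy of the symbol frequencies:

```python
import math

def symbolic_entropy(r, x0, n_transient=1000, n_symbols=10000):
    """Iterate the logistic map x -> r*x*(1-x), emit 1 when x > 0.5
    and 0 otherwise, and return the bits per symbol of the 0/1 stream."""
    x = x0
    for _ in range(n_transient):      # discard the transient
        x = r * x * (1 - x)
    ones = 0
    for _ in range(n_symbols):
        x = r * x * (1 - x)
        if x > 0.5:
            ones += 1
    p1 = ones / n_symbols
    h = 0.0
    for p in (p1, 1 - p1):
        if p > 0:
            h -= p * math.log2(p)
    return h

print(symbolic_entropy(3.51, 0.2))
```

At these settings this comes out to about 0.81 bits per symbol, because the period-4 attractor at r = 3.51 happens to put three of its four orbit points above the 0.5 threshold (one of them only barely, near 0.507). Note that single-symbol frequencies can't see that the stream is periodic, so this estimate is an upper bound on the true per-symbol information rate; entropies of longer symbol blocks would be needed to capture the temporal structure.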
So this is another model that you'll get to do some experiments with
in the homework. So the homework is optional, but I really urge you to do
at least the beginner level part, because that will give you the opportunity
to do some experiments with these various models, which I think will give
you a much better handle on the ideas I've been talking about, relating
to information and Shannon information content.