# Complexity Explorer, Santa Fe Institute

## Introduction to Renormalization

• Introduction
• Coarse graining Alice and Dinah
• Coarse graining part I - Clustering algorithms
• Coarse graining part II - Entropy
• Markov Chains
• Mathematics of coarse grained Markov chains
• Mathematics of Coarse grained Markov Chains: The General Case
• A puzzle: origin of the slippy counter
• Where we are so far
• Cellular Automata: Introduction
• Israeli and Goldenfeld; projection and commuting diagrams
• Networks of Renormalization
• Coarse-graining the Lattice
• From Quantum Electrodynamics to Plasma Physics
• Conclusion: Keeping the things that matter
• Homework

#### 2.3 Mathematics of Coarse grained Markov Chains: The General Case » Quiz Solution

1. What is the stationary distribution of a Markov Chain?

A. the set of distributions that don't change under time-based coarse-graining.

B. the probability of finding yourself in a state after waiting many many timesteps.

C. the set of distributions that are invariant under any Markov Chain.

D. all of the above.

2. Which best describes the renormalization group flow of (the vast majority of) Markov Chains with N states, under a time-based decimation coarse-graining?

A. models flow from somewhere in an N(N-1) dimensional space to somewhere in an (N-1) dimensional subspace.

B. models flow from somewhere in an ​ dimensional space to a single point.

C. models flow from somewhere in an dimensional space to a single point.

D. models flow from somewhere in an N(N-1) dimensional space to somewhere in an N dimensional subspace.

Answer: (A). An arbitrary Markov chain with N states is specified by N lists of numbers; each list corresponds to a state, and contains the N probabilities that, when the system is in that state, it will move to each of the N states. There are N(N-1) free parameters: for each state, the probabilities in its list must sum to unity, so only N-1 of its N numbers are independent. Meanwhile, as you coarse-grain the system, the Markov chain (usually) converges to a "super-uniform" chain that immediately maps any distribution to the stationary distribution. To define that super-uniform chain, we just need to know the stationary distribution over the N states, which is specified by N-1 parameters. In the lecture video, N=2, so we flow from a 2-d plane to a 1-d line.
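As a quick sanity check on this flow, here is a minimal pure-Python sketch for the N=2 case. The parameter values a and b are arbitrary choices for illustration, not taken from the lecture; decimation by a factor of two corresponds to squaring the transition matrix.

```python
# A generic 2-state Markov chain has N(N-1) = 2 free parameters, a and b.
# T[i][j] is the probability of moving from state i to state j.
a, b = 0.3, 0.8  # arbitrary illustrative values

T = [[1 - a, a],
     [b, 1 - b]]

def matmul2(X, Y):
    """Multiply two 2x2 matrices (one decimation step is T -> T @ T)."""
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# Time-based decimation: keep only every other timestep, repeatedly.
for _ in range(6):
    T = matmul2(T, T)

# Closed-form stationary distribution of this 2-state chain:
pi = [b / (a + b), a / (a + b)]

# Every row of the coarse-grained T has flowed (to high precision) to pi:
for row in T:
    print(row, pi)
```

After six decimations the chain is effectively super-uniform: both rows agree with pi, which lives on the 1-d line of distributions over two states, as the answer describes.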

A note for aficionados: there are a few "unusual" points in the N(N-1) dimensional space; consider, for example, the slippy counter with epsilon=0. These models do not converge to a corresponding super-uniform chain. But they're "a set of measure zero" in the N(N-1) dimensional space (less pretentiously: it's probability zero that you'll pick one of them by accident if you choose randomly among all chains). We're thus justified in talking about the flow properties in this loose fashion.
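To see why such points fail to converge, here is a sketch of the epsilon=0 case, reduced (an assumed simplification) to a deterministic 2-state cycle:

```python
# The slippy counter with epsilon = 0, reduced to a 2-state cycle:
# the chain deterministically swaps states at every timestep.
T = [[0.0, 1.0],
     [1.0, 0.0]]

def square(M):
    """One decimation step: replace the chain by its two-step version."""
    n = len(M)
    return [[sum(M[i][k] * M[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

# Decimation alternates forever between the identity and the swap matrix;
# the chain is periodic, so it never flows to a super-uniform fixed point.
T2 = square(T)    # the identity matrix
T4 = square(T2)   # the identity again
print(T2, T4)
```

The second eigenvalue of this matrix has modulus one, so its powers never decay; that is exactly what separates these measure-zero chains from the generic case above.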