Introduction to Renormalization
Lead instructor: Simon DeDeo
2.1 Markov Chains » Quiz Solution
1. What is renormalization?
A. a way to describe the effect of coarse-graining a system.
B. a way to compare two coarse-grainings.
C. a description of the relationship between a model that describes fine-grained data, and a model that describes the same data after it has been coarse-grained.
D. all of the above.
Answer: (D). While (C) is the basic definition of renormalization, you also learn a lot about what it means to coarse-grain a system by seeing what happens to a model of it (answer A), and you can learn a lot about the differences between two coarse-grainings from the different models they induce at the coarse-grained scale (answer B).
2. Which of these datasets would be (exactly) described by a Markov chain?
A. the sequence of heads or tails from uncorrelated coin tosses.
B. a pair of rock-paper-scissors playing robots whose moves depend on the prior move.
C. sentences produced by a native-language English speaker.
D. (A) and (B) but not (C)
Answer: (D). (A) is a very simple Markov chain; in the representation used in this lecture, it would be a two-state system, where one state is "Heads" and the other "Tails", and there's a 50% chance of staying in each state once you're there. (B) is a more complicated chain: at each time step there are 9 possible states: (Rock by Player One + Rock by Player Two), (Rock by Player One + Paper by Player Two), and so on. The rules that each robot follows determine the move at the next time step. Say Robot 2 responds to (RR) at the previous step with P, and Robot 1 responds to (RR) with S; then one of the transition rules of the Markov chain is (RR)->(SP) with 100% probability. (C) is a bit tricky: while it might seem obvious that English-language sentences have "long-range" rules from syntax that make a Markov model inadequate, it's hard to prove that in practice, and some very simple Markov chain models of the English language actually work remarkably well (see, for example, the Dissociated Press algorithm). For more on this problem, see "Collective Phenomena and Non-Finite State Computation in a Human Social System" http://journals.plos.org/plosone/article?id=10.1371/journal.pone.0075818
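Both of the Markov chains described above can be written out explicitly as transition matrices. Here is a short sketch (the specific rock-paper-scissors response rule, "play whatever beats the opponent's last move", is a made-up example, not the rule from the lecture; any deterministic rule works the same way and gives rows containing a single 1):

```python
import numpy as np

# Option A: a two-state chain for fair, uncorrelated coin flips.
# States: 0 = Heads, 1 = Tails; every transition probability is 0.5.
coin = np.array([[0.5, 0.5],
                 [0.5, 0.5]])

# Option B: deterministic rock-paper-scissors robots. The joint state
# is the pair (Robot 1's move, Robot 2's move), giving 9 states.
moves = ["R", "P", "S"]
beats = {"R": "P", "P": "S", "S": "R"}   # P beats R, S beats P, R beats S
states = [(a, b) for a in moves for b in moves]
index = {s: i for i, s in enumerate(states)}

T = np.zeros((9, 9))
for (m1, m2) in states:
    # Each robot responds to the other's previous move (a toy rule).
    nxt = (beats[m2], beats[m1])
    T[index[(m1, m2)], index[nxt]] = 1.0

# Both matrices are valid Markov chains: every row sums to 1.
assert np.allclose(coin.sum(axis=1), 1)
assert np.allclose(T.sum(axis=1), 1)
```

Because the robots are deterministic, each row of the 9x9 matrix has a single entry equal to 1, exactly like the (RR)->(SP) rule in the answer above.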
3. What are we talking about when we say the parameters "flow"?
A. how the parameters of a model change when you coarse-grain the data it's meant to describe.
B. a peak emotional experience characterized by concentration and energized focus on a challenging task, first defined by the psychologist Mihály Csíkszentmihályi.
C. how the clusters in a coarse-graining change when you alter your coarse-graining algorithm.
D. how a timeseries evolves at the coarse-grained level.
Answer: (A). The technical term for this is "renormalization group flow" (or even, for a mouthful, "renormalization group transformation flow"). Sadly, not yet (B), but perhaps you're feeling it at points in this tutorial.
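One way to watch parameters flow in the Markov-chain setting is to coarse-grain in time by decimation: keep only every other time step, so the effective one-step transition matrix becomes T @ T. The numbers below are arbitrary, chosen just to make the flow visible; this is a sketch, not the lecture's own example:

```python
import numpy as np

# A two-state Markov chain with some persistence in each state.
T = np.array([[0.9, 0.1],
              [0.2, 0.8]])

for step in range(5):
    print(f"after {step} coarse-grainings:\n{T}\n")
    T = T @ T   # decimation: the parameters "flow" to the two-step matrix
```

Under repeated coarse-graining, the entries drift toward a matrix whose rows are both equal to the stationary distribution (here [2/3, 1/3]): at very coarse time resolution, the chain looks like independent draws.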
4. What is a fixed point in renormalization?
A. what the data looks like after you repeatedly coarse-grain it.
B. a model that produces a uniform and unvarying output.
C. a model for a system that doesn't change when you coarse-grain the data.
D. a set of data that doesn't change under coarse-graining.
Answer: (C). A fixed point refers to a particular model (or subset of models within a bigger class of models, such as the set of transition matrices whose rows are all the same), so (A) and (D) are incorrect. But it's important to recognize that (B) is also incorrect: a fixed point in model space is often really interesting, with lots of cool dynamical structure.
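The identical-rows example mentioned above is easy to verify. Under the decimation coarse-graining (keep every other time step, so T becomes T @ T), a transition matrix whose rows are all the same maps to itself; the particular row [0.3, 0.7] is an arbitrary choice for illustration:

```python
import numpy as np

# All rows identical: the next state doesn't depend on the current one.
T = np.array([[0.3, 0.7],
              [0.3, 0.7]])

# Coarse-graining leaves the model unchanged, so T is a fixed point.
assert np.allclose(T @ T, T)
```

Any stochastic matrix with identical rows works the same way, which is why the fixed point is really a subset of model space rather than a single model.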