# Complexity Explorer, Santa Fe Institute

## Introduction to Renormalization

• Introduction
• Coarse graining Alice and Dinah
• Coarse graining part I - Clustering algorithms
• Coarse graining part II - Entropy
• Markov Chains
• Mathematics of coarse grained Markov chains
• Mathematics of Coarse grained Markov Chains: The General Case
• A puzzle: origin of the slippy counter
• Where we are so far
• Cellular Automata: Introduction
• Israeli and Goldenfeld; projection and commuting diagrams
• Networks of Renormalization
• Coarse-graining the Lattice
• From Quantum Electrodynamics to Plasma Physics
• Conclusion: Keeping the things that matter
• Homework

#### 3.1 Cellular Automata: Introduction » Quiz Solution

1. What do cellular automata and Markov Chains have in common?

A. the state of the system at time t depends only on the state of the system at time t-1.

B. both have deterministic update rules.

C. both have a finite number of states.

D. all of the above

Answer: (A). Both cellular automata and Markov Chains are "Markovian", or memoryless -- they forget the past. It's true, of course, that what happened two timesteps ago can be very predictive -- both Markov Chains and cellular automata can describe persistent patterns, which is why we care about them. But they are independent of the far past *conditional* on the recent past: there's nothing "hidden", and once you know everything about time t-1, you can forget t-2 and still make optimal predictions.

(B) is incorrect because Markov Chains can be probabilistic (remember the slippy counter from Unit #2). The standard cellular automata we study, by contrast, have purely deterministic rules. Note that this often means that cellular automata don't converge to "simple" models under very long-timescale coarse-grainings.

(C) is incorrect because cellular automata can become arbitrarily large. The rule is simple (your state depends only on the three cells just "above" you in the past), but the number of cells can be as large as you like -- just make the line longer to begin with. Of course, our computers have only finite memory and can't store an infinitely long line, but there's nothing in principle preventing the cellular automaton's line from growing. By contrast, once you fix the evolution rule of a Markov chain (by writing down its transition matrix), you've implicitly fixed the number of states (the dimensionality of the matrix).
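To make that last contrast concrete, here is a minimal sketch of a probabilistic Markov chain in the spirit of the slippy counter. The three-state transition matrix and its particular probabilities are made up for illustration; the point is that writing down the matrix fixes the number of states once and for all.

```python
import random

# A hypothetical 3-state "slippy counter"-style transition matrix.
# Row i gives the probabilities of moving from state i to states 0, 1, 2.
P = [
    [0.1, 0.9, 0.0],   # from state 0: usually advance to state 1
    [0.0, 0.1, 0.9],   # from state 1: usually advance to state 2
    [0.9, 0.1, 0.0],   # from state 2: usually wrap around to state 0
]

def markov_step(state, P, rng=random):
    """Probabilistic update: the next state depends only on the current one."""
    return rng.choices(range(len(P)), weights=P[state])[0]

# The number of states is implicit in the matrix's dimensionality.
num_states = len(P)
```

Unlike a cellular automaton, whose simple local rule can be applied to a line of any length, there is no way to "grow" this system without writing down a new, larger matrix.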

2. How many different rules are there for the 1-dimensional cellular automata we consider in this unit?

A. 8

B. 16

C. 256

D. infinite

Answer: (C). A cellular automaton is defined entirely by the update rule f(x,y,z), which can be thought of as a lookup table. Each argument of the function can take on one of two values (black or white; 1 or 0), so the lookup table has eight rows. A function f is then defined by a choice of output for each of those eight rows. That means that there are a total of 2^8, or 256, possible functions: each function picks one of two values for each of the eight rows.
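The counting argument above can be checked directly in code. This sketch builds the eight-row lookup table for any Wolfram-style rule number, using the conventional neighborhood ordering (111, 110, ..., 000); the helper name `rule_table` is ours, not from the lecture.

```python
from itertools import product

def rule_table(rule_number):
    """Lookup table f(x, y, z) for a 1-D, two-state, radius-1 CA.

    The eight neighborhoods are listed from (1,1,1) down to (0,0,0),
    and the rule number's eight binary digits give the output for each row.
    """
    bits = f"{rule_number:08b}"
    neighborhoods = list(product([1, 0], repeat=3))  # the eight rows
    return {nbhd: int(bit) for nbhd, bit in zip(neighborhoods, bits)}

# Each of the 8 rows independently picks one of 2 outputs: 2**8 = 256 rules.
table110 = rule_table(110)
```

Because each rule number between 0 and 255 yields a distinct table, the rule numbers exactly enumerate the 2^8 possible functions.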

While the update rules are very limited, a cellular automaton can produce a very wide range of patterns depending upon the length of the line it operates on, and the particular settings for the initial conditions. Rule 110, for example, is Turing complete: you can translate a line of Python code into a set of initial conditions, and run the simulation forward. Of course, the translation into initial conditions might require a very long line of states that the simulation operates on.
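As a small illustration of how such simple rules generate rich patterns, here is a sketch that runs Rule 110 forward on a short finite line. We assume periodic (wrap-around) boundary conditions, which is one common convention; the lecture's simulations may handle the edges differently.

```python
# Rule 110's lookup table: neighborhood (left, center, right) -> new cell.
RULE_110 = {
    (1, 1, 1): 0, (1, 1, 0): 1, (1, 0, 1): 1, (1, 0, 0): 0,
    (0, 1, 1): 1, (0, 1, 0): 1, (0, 0, 1): 1, (0, 0, 0): 0,
}

def step(line, table=RULE_110):
    """One synchronous update, with periodic boundary conditions."""
    n = len(line)
    return [table[(line[(i - 1) % n], line[i], line[(i + 1) % n])]
            for i in range(n)]

line = [0] * 15 + [1]   # a single black cell on a 16-cell line
for _ in range(5):
    line = step(line)
```

Even from this one-cell seed, the pattern spreads and develops structure within a few steps; on longer lines and with carefully chosen initial conditions, that structure is rich enough to support universal computation.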