Simon DeDeo's Homepage for this tutorial
For a crash course in information theory, see Information Theory for Intelligent People.
For measuring information-theoretic quantities on data, along with a crash course in coarse-graining and its relationship to those quantities, see Bootstrap Methods for the Empirical Study of Decision-Making and Information Flows in Social Systems.
A big-picture overview of coarse-graining can be found in Major Transitions in Political Order
This unit described how to coarse-grain Markov chains that are already known; but what if you want to infer a chain from a time series? In Conflict and Computation on Wikipedia: A Finite-State Machine Analysis of Editor Interactions we present a worked example, and introduce SFIHMM, code for the machine learning of time-series data, available at http://bit.ly/sfihmm.
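The simplest version of the inference problem — much simpler than what SFIHMM does, since SFIHMM fits hidden states rather than visible ones — is estimating a transition matrix directly from an observed symbol sequence by counting transitions. A minimal sketch (the function name and the toy sequence are illustrative, not from the paper):

```python
from collections import Counter

def estimate_markov_chain(sequence):
    """Estimate first-order Markov transition probabilities from an
    observed symbol sequence by counting adjacent-pair transitions."""
    counts = Counter(zip(sequence, sequence[1:]))  # (from, to) pair counts
    totals = Counter(sequence[:-1])                # outgoing counts per state
    states = sorted(set(sequence))
    return {a: {b: counts[a, b] / totals[a] for b in states}
            for a in totals}

T = estimate_markov_chain("ABABBABAAB")
# T["A"]["B"] == 0.8: four of the five transitions out of A go to B
```

Each row of the estimated matrix sums to one by construction; with hidden states, as in SFIHMM, one instead needs expectation-maximization or a similar iterative fit.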
If you successfully completed , you've built what we call a "joint machine": a representation of the emergent behavior that arises when decision-makers come together in a game. Group Minds and the Case of Wikipedia describes the properties of these representations in greater detail.
This entire unit is based on a series of papers by Navot Israeli and Nigel Goldenfeld -- in particular: Computational Irreducibility and the Predictability of Complex Physical Systems and Coarse-graining of cellular automata, emergence, and the predictability of complex systems.
We'll return to how renormalization can induce long-range couplings and interactions in the next unit, on the Ising model. If a theory extends in time as well as space, long-range couplings amount to a new form of memory: if I'm directly coupled to distant times, I remember them. For an unusual perspective on the relationship between renormalization and memory, see Origin Gaps and the Eternal Sunshine of the Second-Order Pendulum.
The classic reference for the particular renormalization transformation we do in this unit is Leo Kadanoff's book, Statistical Physics: Statics, Dynamics, and Renormalization -- Chapter 14 covers the clever decimation technique in which the new lattice is a 45-degree rotation of the old. It's a lovely introduction to the ways in which smart people flailed around at a problem they didn't quite yet know how to solve.
Ising Model Practical (click link to see)
The Ising model simulation is by Bernd Nottelmann (with some nice interface design by A. Peter Young)
Douglas Ashton's YouTube of life at the critical point is at https://www.youtube.com/watch?v=MxRddFrEnPc
We dealt with the Ising model on the 2D lattice, but what about the Ising model on arbitrary graphs? You can also take a look at a story about renormalization in "self-similar" (fractal) network structures: http://rsif.royalsocietypublishing.org/content/9/74/2131?sa=X&ved=0CDcQ9QEwEGoVChMIqsL9tpT1xgIVwu0UCh2bRwOT (and the many references therein) -- or, if you really want to dig down into the different ways of handling the Ising model on arbitrary graphs, consider the Linked-Cluster Expansion http://tuvalu.santafe.edu/~simon/wortis-1974.pdf
A great deal of work on the Ising model on arbitrary graphs is done by simulation (see the first link, the "Ising Model practical" for how this works).
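The standard simulation approach is Metropolis Monte Carlo: pick a spin at random, compute the energy cost of flipping it, and accept the flip with the Boltzmann probability. A minimal sketch for the 2D periodic lattice (the parameter defaults, including the near-critical beta, are illustrative choices, not taken from the practical itself):

```python
import math
import random

def metropolis_ising(L=16, beta=0.4407, steps=100_000, seed=0):
    """Metropolis Monte Carlo for the 2D Ising model on an L x L
    periodic lattice. beta = 1/kT; 0.4407 is near the critical point."""
    rng = random.Random(seed)
    spins = [[rng.choice((-1, 1)) for _ in range(L)] for _ in range(L)]
    for _ in range(steps):
        i, j = rng.randrange(L), rng.randrange(L)
        # Sum of the four nearest neighbours, with periodic boundaries.
        nn = (spins[(i + 1) % L][j] + spins[(i - 1) % L][j]
              + spins[i][(j + 1) % L] + spins[i][(j - 1) % L])
        dE = 2 * spins[i][j] * nn  # energy cost of flipping spin (i, j)
        if dE <= 0 or rng.random() < math.exp(-beta * dE):
            spins[i][j] *= -1
    return spins

lattice = metropolis_ising()
m = sum(map(sum, lattice)) / (16 * 16)  # magnetization per spin
```

To adapt this to an arbitrary graph, replace the four-neighbour sum with a sum over each node's adjacency list; everything else in the update rule stays the same. In practice one also discards an initial equilibration period before measuring anything.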
See Effective Theories for Circuits and Automata and references therein. See, in particular, Attila Egri-Nagy's papers with Nehaniv, as well as Oded Maler's "On the Krohn-Rhodes Cascaded Decomposition Theorem". For a wild ride, take a look at John Rhodes' Applications of Automata Theory and Algebra (which used to be passed around in mimeographed form as "the wild book").
The clearest description of the decomposition I've been able to find (other than Attila's papers) is Holcombe's book Algebraic Automata Theory from Cambridge University Press.
Visual Group Theory. A lovely way to understand groups through network diagrams---the full creature bestiary. Fundamental theorems in group theory are translated into statements about a subclass of networks.
A slow and clear version of the plasma story, with all the mathematical steps laid out, can be found at Richard Fitzpatrick's site at the University of Texas: http://farside.ph.utexas.edu/teaching/plasma/Plasmahtml/node7.html
"The renormalization group and effective field theories", by two philosophers of science, tells the story of renormalization in a high-level fashion http://link.springer.com/article/10.1007%2FBF01063904?LI=true ; you can also take a look at Michael E. Fisher's article "Renormalization group theory: its basis and formulation in statistical physics", which appears in Tian Yu Cao's edited volume, Conceptual Foundations of Quantum Field Theory
A conceptual account of the relationship between renormalization in physics, and in other fields, can be found in the review Major Transitions in Political Order; see also the references at the beginning of section 1.
Major Transitions in Political Order discusses the coarse-graining procedure in social and cognitive environments where we have to consider not just how to construct a model, but also which of many possible coarse-grainings we might want to select.
The article Optimal high-level descriptions of dynamical systems makes explicit the coarse-graining and renormalization story, showing how desires for accuracy and for efficient prediction and modeling can trade off each other.
Both of these articles can be found in the book From Matter to Life: Information and Causality, from Cambridge University Press, edited by Sara Imari Walker, Paul C. W. Davies and George F. R. Ellis.
In The evolution of lossy compression, Sarah Marzen and I dig into the rate-distortion story; see that article and references therein for an introduction to rate-distortion -- and an account of how different utility functions (distortions) lead to different dynamics for the evolution of coarse-graining systems. A lovely account of the utility function aspect of this theory, and how it connects to "live" human behavior can be found in an article by Chris R. Sims in the journal Cognition: Rate–distortion theory and human perception