- Santa Fe Institute / Complexity Explorer
- 16 Jan 2018
- Computation
Dr. Joshua Grochow is no slouch: his robust CV lists him as a former Santa Fe Institute Omidyar Fellow, a current Assistant Professor of Computer Science and Mathematics at CU Boulder, and an author or co-author of five papers published in 2017 alone! In the midst of this bonkers schedule he also carved out time to create a tutorial for you all, the upcoming Introduction to Computation Theory. Read on to find out why he loves studying computation, what he's currently researching, and his thoughts on the computation theory research field at large.
How did you get into Computation Theory?
As many people did, I first got into computers as a kid through playing computer games; I thought I wanted to make computer games when I was older, so I started to program. Later, in college, I thought I wanted to do math for a living. It wasn't until my junior year, when I took a theory of computation course, that I realized there was a whole branch of math about computing that is also a branch of computer science, and I never looked back.
What do you like about computation theory?
I like that it's a branch of math that's simultaneously very rich and deep mathematically, qualities mathematicians tend to like, but that also says something about the real world. Some branches of math sit many levels deep in abstraction and may someday tell us something about the real world that we can't see now; the theory of computation has that same depth but stays closer to things that are actually happening in the world.
What do you work on in your own research?
I spend a lot of time working on questions about absolute resource limitations on algorithms. For any given problem, what's the minimum amount of time you need to spend on it? What's the minimum amount of space or memory needed? My research tends to have a more algebraic flavor, in that I bring elements of pure mathematics to bear on these questions. It might seem like a negative endeavor, but there are at least two ways it's useful.
Absolute resource limitations are common but hard to prove. When you do establish one, it tells you to stop trying to solve that problem, because no amount of cleverness will get around the limitation. The other way studying resource limitations is useful is that if you can prove certain problems are hard to compute, you can often use them in cryptography. Cryptography relies on functions that are easy to compute but whose inverses are hard to compute, and proving that those inverses really are hard is the basis of modern cryptography.
It's a funny endeavor to look at limitations and what you can't do, but on the other hand there have been these surprising connections: when you can prove that a function is hard to compute, you can actually use that to compute other things.
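As a rough illustration of that easy-forward, hard-inverse asymmetry (a sketch of my own, not an example from the interview or the tutorial): modular exponentiation is fast to compute, while undoing it, the discrete logarithm, has no known efficient algorithm in general, and the naive approach is brute-force search.

```python
# Toy illustration (not secure cryptography): the forward direction is
# cheap, the inverse direction is a search over the whole group.

def forward(g: int, x: int, p: int) -> int:
    """Easy direction: compute g^x mod p with fast exponentiation."""
    return pow(g, x, p)

def invert_by_search(g: int, y: int, p: int) -> int:
    """Hard direction: brute-force search for x with g^x = y (mod p).
    Time grows roughly linearly in p; infeasible for large primes."""
    for x in range(p):
        if pow(g, x, p) == y:
            return x
    raise ValueError("no discrete log found")

p, g = 1_000_003, 5        # tiny demo parameters; real systems use ~2048-bit primes
x = 123_456                # the secret
y = forward(g, x, p)       # instant
print(invert_by_search(g, y, p))  # feasible only because p is tiny
```

With primes thousands of bits long, the forward direction stays fast while the search becomes astronomically expensive, which is exactly the asymmetry cryptography builds on.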
With the ubiquity of computing machines in our lives, what are some more contemporary areas of research in computation theory?
Computation theory began by asking what couldn't be computed in an absolute sense, that is, by finding problems that no algorithm could solve. The field began to transition in the 1960s and 70s, when people started asking a different question: this mathematical problem is formally computable, but can I actually do it on a computer? It may take so much time or so much memory that, even though it's technically computable, we could never do it in practice. The field became more about limitations on computing, which has been a big area in computation theory for the last 40 years. There's also a big push to study the resource limitations specific to cloud computing, and researchers are looking back at models that were popular about 20 years ago to see whether they might suit this purpose better than newer models.
Do you have a feel for where computation theory is moving?
Sometimes we can prove or conjecture that a problem is hard, meaning that there's some input that forces any algorithm for it to use, say, exponential time or some other enormous amount of a resource. However, in practice you very often don't see the hardest instances of that problem, and we routinely solve instances of problems that are intractable in the worst case. People are coming up with theories that explain this phenomenon: what is it about the instances we see in practice that makes them easier? There are tons of theoretical questions there, but we're starting to get a handle on what's happening.
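A small sketch of that worst-case-versus-practice gap (my own illustration, under simplifying assumptions): subset sum takes exponential time in the worst case as far as anyone knows, yet a backtracking solver with simple pruning often dispatches random instances quickly.

```python
import random

def subset_sum(nums, target):
    """Backtracking search for a subset summing to `target`.
    Worst case: exponential in len(nums). On typical random
    instances the pruning cuts the search tree dramatically."""
    nums = sorted(nums, reverse=True)
    # suffix[i] = sum of nums[i:], used to prune hopeless branches
    suffix = [0] * (len(nums) + 1)
    for i in range(len(nums) - 1, -1, -1):
        suffix[i] = suffix[i + 1] + nums[i]

    def search(i, remaining):
        if remaining == 0:
            return True
        if i == len(nums) or remaining < 0 or suffix[i] < remaining:
            return False  # dead branch: overshot, exhausted, or can't reach target
        # try including nums[i], then try excluding it
        return search(i + 1, remaining - nums[i]) or search(i + 1, remaining)

    return search(0, target)

random.seed(0)
nums = [random.randint(1, 10**6) for _ in range(40)]
target = sum(random.sample(nums, 20))
print(subset_sum(nums, target))  # 2^40 subsets exist, yet this usually returns fast
```

The pruning doesn't change the worst case at all; the observed speed on typical inputs is precisely the gap between worst-case theory and practice that these newer theories try to explain.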
How does computation theory apply in complexity studies?
I like to think of an algorithm as a complex system itself: it's a recipe for some dynamics happening inside a computer. Dynamical systems can be either simple or complex, and the behavior you see in algorithms can be essentially as complex as that of any system you might study. This is partly because you can often write an algorithm that simulates a complex system you're interested in, so the behavior of the algorithm essentially mimics the complex system.
In that sense, algorithms are a sort of universal class of complex system. The nice thing about the theory of computation is that, unlike in many other areas of complex systems, the model and the real system are in very close agreement. When modeling something like a brain or an economy, we know we are using crude approximations; with an algorithm, the formal model essentially is the system. That makes it a good area for developing mathematical theory, because the theory doesn't have to paper over a gap between the model and a real-world system.
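One concrete way to see an algorithm as a complex dynamical system (a sketch of my own, not from the interview): an elementary cellular automaton such as Rule 110 has a trivial three-cell update rule, yet its dynamics are rich enough to simulate any computation.

```python
RULE = 110  # binary 01101110: the new cell value for each 3-cell neighborhood

def step(cells):
    """One synchronous update of the automaton on a ring of cells."""
    n = len(cells)
    return [
        (RULE >> (4 * cells[(i - 1) % n] + 2 * cells[i] + cells[(i + 1) % n])) & 1
        for i in range(n)
    ]

cells = [0] * 63 + [1]  # a single live cell on a ring of 64
for _ in range(24):
    print("".join("#" if c else "." for c in cells))
    cells = step(cells)
```

Despite the one-line update rule, the printed space-time diagram shows interacting structures, the kind of emergent behavior complex-systems researchers study, and here the program is the system.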
What should a student know before watching your tutorials?
Algorithms are phrased in the language of mathematics and are typically about a mathematical object, so while you don't have to take a course in, say, graph theory to follow this tutorial, it helps to know what a graph is. At a couple of points I reference some linear algebra; if you're familiar with it, you'll have a better sense of what's happening in the tutorial. More generally, having taken a math course that involved any form of proof is helpful, because part of the point of the tutorial is learning how to do things formally. Without that, you often get caught in intuitive-seeming catch-22s that would make sense if you just wrote them down in formal, mathematical proof style. The ideas we really care about come out of that formalism.
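For anyone who hasn't met graphs before, here is a minimal sketch of what a graph looks like to an algorithm (my own illustration, not material from the tutorial): vertices, edges, and a search that walks them.

```python
# A graph is a set of vertices plus edges between them; algorithms
# typically store it as an adjacency list: vertex -> list of neighbors.
graph = {
    "a": ["b", "c"],
    "b": ["a"],
    "c": ["a", "d"],
    "d": ["c"],
}

def reachable(graph, start):
    """Which vertices can be reached from `start` by following edges?"""
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for w in graph[v]:
            if w not in seen:
                seen.add(w)
                stack.append(w)
    return seen

print(reachable(graph, "a"))  # {'a', 'b', 'c', 'd'}
```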
What will a student get out of this tutorial?
I tried to cover a selection of topics that gives you a feel for the theory of computation and computational complexity, with an eye towards things that might actually be useful even if you never become a researcher. I incorporated a lot of key examples where the formal mathematics of the theory of computation has really changed our intuitive picture of what the world looks like. At the same time, I tried to give students some tools for attacking hard problems in practice.
Where would someone who wants to pursue theory of computation go from here?
While we covered a fair array of topics that I think gives a good feel for the subject, it's still a tiny portion of what computation theory, algorithms, and computational complexity encompass. I tried to hit the highlights.
We didn't get to talk about algorithmic information theory, or Kolmogorov and Chaitin complexity, but that's another interesting area. Rather than resource limitations, it asks about the shortest program that describes a given system. It parallels information theory, but with a computational lens on it.
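Kolmogorov complexity itself is uncomputable, but a quick sketch of the idea (my own illustration) uses off-the-shelf compression as a crude, computable upper bound on description length.

```python
import os
import zlib

structured = b"0123456789" * 1000  # regular data: a tiny program could print it
randomish = os.urandom(10_000)     # no visible structure to exploit

for name, data in [("structured", structured), ("random-ish", randomish)]:
    print(f"{name}: {len(data)} bytes -> {len(zlib.compress(data, 9))} compressed")
# The regular string shrinks to a few dozen bytes; the random bytes barely shrink.
# Kolmogorov complexity formalizes this: structured objects have short descriptions.
```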