Complexity Explorer Santa Fe Institute

Introduction to Computation Theory


8.4 Unit Quiz » Quiz Explanation

1) Given a function f which takes a finite string as input and produces a finite string as output, a randomized Turing machine M is defined to compute f if, for any input string x, $Pr(M(x)=f(x)) \geq 2/3$, where the probability here is taken with respect to the coins flipped by M on input x. For deterministic functions from finite strings to finite strings, randomized Turing machines can compute some function that ordinary Turing machines cannot.

False - Given a randomized Turing machine M, an ordinary Turing machine can simulate M on all possible outcomes of its coin flips, and find the unique value that is output by M on input x on at least 2/3 of the possible outcomes of the coin flips. Thus, for computing deterministic functions, randomized Turing machines still "obey" the Church--Turing thesis.
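This enumeration argument can be sketched in code. The sketch below is illustrative, not from the course: `toy_machine` is a hypothetical randomized machine, and `derandomize` plays the role of the ordinary (deterministic) Turing machine, running the randomized machine on every possible coin sequence and returning the output produced on at least 2/3 of them.

```python
from collections import Counter
from itertools import product

def derandomize(randomized_machine, x, num_coins):
    """Deterministically simulate a randomized machine by enumerating
    every possible coin-flip sequence and returning the output that
    appears on at least 2/3 of them."""
    counts = Counter()
    for coins in product([0, 1], repeat=num_coins):
        counts[randomized_machine(x, coins)] += 1
    output, count = counts.most_common(1)[0]
    # The definition guarantees such an output exists with probability >= 2/3.
    assert 3 * count >= 2 * (2 ** num_coins), "machine does not compute any f"
    return output

# Hypothetical toy machine: outputs len(x), except it errs (returns -1)
# when its first two coins both come up heads (probability 1/4 < 1/3).
def toy_machine(x, coins):
    return -1 if coins[0] == coins[1] == 1 else len(x)

print(derandomize(toy_machine, "hello", 3))  # prints 5
```

Note that the simulation takes time exponential in the number of coins, which is why this argument preserves computability but says nothing about efficiency.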


2)  Randomized algorithms are always faster than deterministic algorithms.

False - While some randomized algorithms are faster, others need not be. When to use randomness in theory is a question of what resource you are trying to minimize---for example, perhaps there is a randomized algorithm for some problem that is slower, but also uses less memory. In practice, when to use randomness is a more complicated question of balancing several different concerns, including programming errors, reliability, reproducibility, robustness, and resource usage.


3) Randomized algorithms are always simpler, and therefore easier to code (without making as many programming errors).

False - Again, while some randomized algorithms are simpler than their deterministic counterparts, this need not always be the case. Sometimes, getting a randomized algorithm that runs faster than its deterministic counterpart requires a lot more effort from the programmer or mathematician!


4) In most computers, the rand() function (or the equivalent in your favorite programming language) produces: (a) truly, physically random numbers, (b) deterministic numbers that try to look random but don't do a very good job, (c) deterministic but "pseudo-random" numbers, that are "random-looking" enough for most (but not all) purposes.

(c) - While some modern chips are starting to take advantage of thermal or quantum "noise" to produce random bits that are nearly truly random in a physical sense, this is not very well-developed yet, nor widespread. Most modern computers use deterministic pseudo-random number generators, together with a "seed" which is taken from something "more random," such as a physical source of randomness, or the last several bits of the computer's internal clock.
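As an illustration of what "deterministic but pseudo-random" means, here is a toy linear congruential generator. This is only a sketch: the constants are the classic Numerical Recipes parameters, and real rand() implementations vary by platform (CPython's random module, for instance, uses the Mersenne Twister). The key point is that the entire output sequence is determined by the seed.

```python
import time

class LCG:
    """Toy linear congruential generator: x -> (a*x + c) mod 2^32.
    Deterministic; the whole sequence is fixed by the seed."""
    def __init__(self, seed):
        self.state = seed & 0xFFFFFFFF

    def next(self):
        self.state = (1664525 * self.state + 1013904223) & 0xFFFFFFFF
        return self.state

# Same seed => identical "random" sequence, demonstrating determinism.
a, b = LCG(42), LCG(42)
assert [a.next() for _ in range(5)] == [b.next() for _ in range(5)]

# In practice the seed is drawn from something "more random,"
# e.g. the low bits of the system clock.
rng = LCG(time.time_ns() & 0xFFFFFFFF)
print(rng.next())
```

An LCG like this one is "random-looking" enough for simple simulations, but not for cryptography, matching the "most (but not all) purposes" caveat in the question.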


5) If $\mathsf{P} \neq \mathsf{NP}$, then the hardness versus randomness principle implies that good pseudo-random generators exist, and therefore that $\mathsf{BPP} = \mathsf{P}$.

False - Such an implication is currently unknown. The current hardness versus randomness results rely on a slightly stronger assumption than just $\mathsf{P} \neq \mathsf{NP}$ in order to imply that $\mathsf{BPP} = \mathsf{P}$ (but one that is also fairly widely believed).