Russell Impagliazzo (UC San Diego)
A randomized algorithm is a procedure that, at certain steps, flips a random coin (heads or tails) rather than performing a fixed operation. Some of the most famous randomized algorithms, such as those for testing whether a large integer is prime or for verifying an immensely complex equation, were invented in the 1970s. Since then, deterministic analogs, or "de-randomizations", have been found for some of these algorithms (such as primality testing), but not for others (such as verifying equations). Complexity theorists have wondered: Why would the ability to make random choices help in solving rigorously defined mathematical problems? When is randomness helpful in solving problems, and by how much? Do randomized algorithms represent a new, more powerful model of computation, or are they merely a different way of looking at deterministic algorithms?
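To make the flavor of such algorithms concrete, here is a minimal Python sketch of randomized equation verification in the spirit of the Schwartz-Zippel lemma: instead of expanding both sides of a polynomial identity symbolically, it compares them at a few random points. The function names and parameters are illustrative, not taken from the talk.

```python
import random

def probably_equal(p, q, trials=20, field_size=2**61 - 1):
    """Randomized identity test: evaluate both expressions at random
    points modulo a large prime.  Two distinct low-degree polynomials
    agree at a random point only with tiny probability (Schwartz-Zippel),
    so agreement on every trial makes equality overwhelmingly likely."""
    for _ in range(trials):
        x = random.randrange(field_size)
        if p(x) % field_size != q(x) % field_size:
            return False   # a single mismatch is conclusive: p != q
    return True            # identical with overwhelming probability

# Example: check (x + 1)^2 against its expansion, and against a wrong one,
# without doing any symbolic algebra.
print(probably_equal(lambda x: (x + 1) ** 2,
                     lambda x: x * x + 2 * x + 1))   # True
print(probably_equal(lambda x: (x + 1) ** 2,
                     lambda x: x * x + 1))           # False (w.h.p.)
```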
While these questions remain open, many non-trivial relationships between randomized and deterministic computation are known. These connections point to an answer, albeit one that is unsatisfactory in many senses: it is almost certain that every randomized algorithm can be derandomized with only polynomial overhead. But proving this seems to require resolving another major open problem in complexity theory: proving lower bounds on circuit size.
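One reason the overhead is the crux: derandomization is always possible in principle by enumerating every coin sequence and taking a majority vote, as in the hypothetical sketch below. The names (`randomized_alg`, `num_coins`) are illustrative, and the sketch assumes a bounded-error algorithm that errs on fewer than half of its coin sequences; its exponential cost is exactly what pseudorandom generators built from hard functions are conjectured to reduce to polynomial.

```python
from itertools import product

def derandomize_by_enumeration(randomized_alg, x, num_coins):
    """Brute-force derandomization of a bounded-error algorithm:
    run it on every one of the 2**num_coins coin sequences and return
    the majority answer.  Correctness is guaranteed, but the running
    time blows up exponentially in the number of coins -- the overhead
    that hardness-based pseudorandom generators aim to make polynomial."""
    yes_votes = sum(
        bool(randomized_alg(x, coins))      # one deterministic run per coin sequence
        for coins in product((0, 1), repeat=num_coins)
    )
    return yes_votes * 2 > 2 ** num_coins   # majority vote over all runs
```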
This talk will survey the research connecting "hardness" (circuit lower bounds) to "randomness", and show how the known results refute many "common sense" intuitions about the power of randomness. While we won't delve into many technical details, we will show how progress in this area draws on ideas from cryptography, error correction, game theory, and learning.
Light refreshments will be served before the lecture at 3:30 p.m.
Attachment: Does Randomness Solve Problems? (886.01 KB)