A good mystery can be the impetus for a lifetime of research, and computer science is full of them. Algorithms in particular leave much to be examined, opening new opportunities for discovery. If the goal is to make algorithms more efficient (even optimal) for wide-ranging applications, we need to solve some of these mysteries and uncover the extent of algorithms' capabilities, as well as their limitations.

Working across algorithm design and complexity theory, Professor Ryan Williams of MIT CSAIL seeks to improve existing algorithms while exploring their limitations. Specifically, he is interested in complexity lower bounds, which mathematically prove limitations on the power of algorithms. Indeed, Prof. Williams says that “the problem of proving lower bounds is among the greatest scientific mysteries of our time.”

Computer scientists currently know significantly more about efficient algorithms than they do about lower bounds. Prof. Williams explains that “it is likely that mathematics will need to develop new forms of reasoning to begin truly understanding computation.” In his work on lower bounds, he has come closer than most to finding these forms of reasoning. His results show that even slightly improved algorithms for analyzing neural networks, circuits, and other computational models directly imply limitations (lower bounds) on the expressive power of those networks or circuits.

“We seem to know a lot about what computers can do, but have little idea of what they can’t do,” he says. “There are many problems which are hard for computers to solve today, such as cracking the encryption used for online commerce. But will these problems be hard tomorrow? The embarrassing reality is that we believe it’s unlikely, but we don’t know for sure.”

A famous example of a lower bound problem is the P versus NP problem, “one of the seven Clay Millennium ‘million-dollar’ problems in mathematics,” says Prof. Williams. Here, P refers to a class of problems that are relatively easy to solve, while NP refers to a class of problems that are, by all appearances, extremely hard to solve. If P = NP, then a wide range of problems that currently appear hard would actually be easy. Most computer scientists believe that P does not equal NP, but so far, no one has been able to prove it. Prof. Williams, however, has made strides toward proving P ≠ NP in his computational complexity research.
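The asymmetry at the heart of P versus NP can be seen in a toy Boolean satisfiability (SAT) instance: checking a proposed answer is fast, but the only obvious way to find one is exhaustive search over all variable settings. The sketch below is purely illustrative (the formula and function names are invented for this example, not drawn from Prof. Williams's work):

```python
from itertools import product

# A toy 3-SAT formula over variables x0..x3, in conjunctive normal form.
# A literal (i, True) means x_i; (i, False) means NOT x_i.
CLAUSES = [
    [(0, True), (1, False), (2, True)],
    [(1, True), (2, False), (3, True)],
    [(0, False), (2, True), (3, False)],
]

def verify(assignment, clauses):
    """Checking a candidate solution is fast (linear in the formula size):
    every clause must contain at least one satisfied literal.
    This efficient check is what places SAT in the class NP."""
    return all(
        any(assignment[var] == want for var, want in clause)
        for clause in clauses
    )

def brute_force_solve(num_vars, clauses):
    """The obvious solver tries all 2^n assignments: exponential time.
    Whether a fundamentally faster algorithm exists is, in essence,
    the P vs. NP question."""
    for bits in product([False, True], repeat=num_vars):
        assignment = list(bits)
        if verify(assignment, clauses):
            return assignment
    return None  # formula is unsatisfiable

solution = brute_force_solve(4, CLAUSES)
print(solution is not None and verify(solution, CLAUSES))  # True
```

With 4 variables the search space is only 16 assignments, but it doubles with every added variable, which is why exhaustive search becomes hopeless at realistic scales while verification stays cheap.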

“We don’t know if NP problems (such as those underlying cryptography) are fundamentally difficult, or if humanity hasn’t found the right clever program yet which breaks all past (and future) encryption schemes. By proving that certain problems have no clever program, we obtain a deeper understanding of computing and a sturdier foundation for an information society,” states Prof. Williams.

Bearing this in mind, the Complexity Theory Group in CSAIL works on many different problems related to the difficulty of computing and the nature of algorithms.

“MIT is the best place on earth for studying theoretical computer science,” Prof. Williams says. He and his research group build on the foundational work many MIT CSAIL members have done in complexity theory. Current projects include fine-grained complexity studies, circuit complexity, the minimum circuit-size problem, and the “hardness magnification” phenomenon.

While proving computational complexity lower bounds is a difficult and slow process, this important work could have a long-term impact on all of computing, including the strongest possible cryptography and cybersecurity. Success, Prof. Williams says, ultimately “would mean that we fully understand computing, in that we can know precisely how much time, space, and resources are required to solve a given problem in the real world.”