Written by: Audrey Woods
For the past fifty years or so, computer scientists have been able to rely on steady improvements in computer performance as the natural state of the industry. Thanks to a trend first observed by engineer and businessman Gordon Moore in 1965, the number of components on a semiconductor chip—the backbone of modern computers—doubled roughly every two years, a rule of thumb that came to be known as Moore’s Law. Engineers didn’t have to worry too much about the efficiency of their code, since, in just a few years, the hardware would catch up. Businesses could count on any technology they used and depended on getting faster, cheaper, and better if given a little time. In many ways, we have Moore’s Law to thank for the gold rush of digital technology that has defined recent history.
Unfortunately, Moore’s Law is over. While miniaturization continues and transistors are still getting smaller, the rate of change is no longer a reliable trend. For example, Intel took five years to go from 14-nanometer technology (2014) to 10-nanometer technology (2019). While this is concerning news, it’s hardly surprising: there are physical limits to how many transistors can possibly fit on a chip, since wires can’t shrink down smaller than the size of a single atom.
Understanding what this change means for the computer science industry, MIT Professor and lifelong educator Charles E. Leiserson is working on solutions that will support continued improvements in computing performance and help today’s engineers and programmers adjust to and thrive in a post-Moore era.
Finding His Interest
When asked what first drew him to the field of computer science, Professor Leiserson says that he actually began with an interest in math. As a high schooler, he was a fan of Martin Gardner’s “Mathematical Games” in Scientific American, reading every month’s column and even going back to read older installments in his father’s magazine collection dating back to the early 1950s. While the column focused mostly on discrete mathematics, it covered many subjects that would become relevant to Professor Leiserson’s later work, such as magic tricks based on binary numbers, cards for performing operations in propositional logic, and even early coverage of the RSA public-key cryptosystem invented by his now-colleague, MIT Professor Ronald Rivest.
One day, as Professor Leiserson tells it, he saw an ad in Scientific American for a kit to build a DIGI-COMP I, a toy “computer” created to introduce children to the machines that had made space flight possible. While it was a far cry from the digital toys of today, the DIGI-COMP I could complete rudimentary computations such as addition, subtraction, and even simple logic games. Thrilled by the idea, a young Professor Leiserson saved up his allowance to buy it for $3.99 and proceeded to “spend hours with that little device.” He later took his first programming class using a computer at a neighboring school that he could access for an hour each week. Even so, Professor Leiserson says, “I didn’t switch [from mathematics] to computer science until my senior year of college,” when he became discouraged about doing math for math’s sake and forsook graduate school because of his mediocre college record. He figured that by switching to computer science he could get a decent job programming, and he had had good experiences playing with the computer science department’s time-shared computer.
Now, as an MIT Professor of Computer Science and Engineering, Professor Leiserson has enjoyed a long history of educating others in this growing field. He has graduated over two dozen PhD students and supervised more than 60 master’s and bachelor’s theses. Recently, Introduction to Algorithms, the blockbuster textbook he coauthored with his former graduate student Thomas Cormen of Dartmouth, Professor Rivest of MIT, and Cliff Stein of Columbia University, passed its one millionth sale, making it the bestselling book of the MIT Press. Professor Leiserson says that the textbook was born out of a simple need: when he first started lecturing MIT’s undergraduate algorithms class, no single book covered all the algorithms he wanted to teach. Combining notes from his own lectures with old notes from previous instructors, he began to assemble the framework that would become the basis of the popular textbook. But the project was truly set in motion when one of his graduate students—his now-coauthor Professor Cormen—did a “magnificent job” writing up notes from that year’s course, prompting them to begin the collaborative process.
Overall, Professor Leiserson jokes that computer science hasn’t changed all that much since he first joined it. While the tools and technology might be different, the challenges are similar to what they were when he first started. One major change Professor Leiserson has noticed is how the field is perceived. He describes arguing with Turing Award and Nobel Prize winner Herbert Simon, who, in the mid-1990s, claimed there wasn’t much to computer science and that he’d rather work with students trained in other scientific disciplines. Professor Leiserson argues—and said to Simon at the time—that such thinking was outdated. “Computer science went from a small intellectual niche to providing both economic advantage and a deep understanding of the world.” These days, Professor Leiserson says, “computational thinking is now a part of virtually every academic field.”
New Tools for Old Problems: Software Performance Engineering
A large portion of Professor Leiserson’s current research focuses on ways to improve computer performance without relying on the dwindling returns of Moore’s Law. Historically, programmers didn’t need to worry much about efficiency because the ever-improving hardware would catch up. This led to choices that prioritized programmer productivity over performance, such as using suboptimal coding practices or applying code designed for one problem to a different problem where it performs poorly. But now, especially with compute-heavy workloads like generative AI and large language models on the rise, the cost of such inefficiency can be enormous and is far less likely to be solved by simply waiting for better technology. Programmers need to be more strategic about the methods they use at the “top” of the computing stack, meaning software, algorithms, and hardware architecture, as Professor Leiserson and his colleagues laid out in a 2020 paper published in Science.
Toward this end, Professor Leiserson leads the Supertech Research Group, along with CSAIL Research Scientists Neil Thompson and Tao Schardl, which is focused on investigating “the technologies that support scalable, high-performance computing.” These technologies include parallel computing, adaptive computing, race detection, and cache-oblivious algorithms, which exploit the memory hierarchies of computers without needing to know any details of cache sizes. The group’s “signature research contribution,” as Professor Leiserson describes it, is the Cilk multithreaded programming platform—currently available as OpenCilk—which includes a language extension to C/C++, a compiler, a runtime system, libraries, and software productivity tools all designed to make writing efficient software easier.
Ultimately, Professor Leiserson’s goal is to “make a world where it’s easy and fun to write fast code.”
Professor Leiserson recognizes the importance of such research, saying, “the only way to get more computing capacity today is to build bigger, more energy-consuming machines. If we’re in an AI arms race with our adversaries, it could have a dramatically bad impact on climate.” Even seemingly minor improvements in code performance could have massive implications for gigantic programs like generative AI models or the enormous data centers of Google, Amazon, and Apple.
Because of this, Professor Leiserson is passionate about establishing software performance engineering as a standard part of computer science education. He points out that “MIT is the only place I know that has been teaching software performance engineering over the last decade. There are no conferences or journals specifically devoted to it. We want to create an environment where community members can access and contribute to educational materials, research, infrastructure, and practical experience.”
Most importantly, Professor Leiserson wants fast computation to be accessible and understandable. “I don't want it to be the domain of high priests privy to arcane knowledge about how to make code run fast,” he says. “I want it to be scientific, open, and democratic, easily practiced by all programmers.” With so much experience as an educator and innovator, Professor Leiserson is well-suited to both create and raise awareness about better techniques in performance engineering.