The European Association for Theoretical Computer Science (EATCS) recently awarded Ryan Williams, MIT EECS professor and CSAIL member, the 2024 Gödel Prize for his 2011 paper, “Non-Uniform ACC Circuit Lower Bounds.” Williams received the honor for presenting a novel paradigm for a “rich two-way connection” between algorithmic techniques and lower-bound methods.
Imagine a slime-like robot that can seamlessly change its shape to squeeze through narrow spaces. Such a machine could be deployed inside the human body to remove an unwanted item.
When MIT professor and now Computer Science and Artificial Intelligence Laboratory (CSAIL) member Peter Shor first demonstrated the potential of quantum computers to solve problems faster than classical ones, he inspired scientists to imagine countless possibilities for the emerging technology. Thirty years later, though, that quantum edge remains out of reach.
A user could ask ChatGPT to write a computer program or summarize an article, and the AI chatbot would likely be able to generate useful code or write a cogent synopsis. However, someone could also ask for instructions to build a bomb, and the chatbot might be able to provide those, too.
Large language models, such as those that power popular artificial intelligence chatbots like ChatGPT, are incredibly complex. Even though these models are being used as tools in many areas, such as customer support, code generation, and language translation, scientists still don’t fully grasp how they work.
From wiping up spills to serving up food, robots are being taught to carry out increasingly complicated household tasks. Many such home-bot trainees are learning through imitation; they are programmed to copy the motions that a human physically guides them through.
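As a rough sketch of what that imitation setup can look like in code, here is a minimal behavioral-cloning example; the state dimensions, demonstration data, and regressor are all invented for illustration, not any specific lab's system:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

# Hypothetical demonstration log: while a human physically guides the
# robot, we record its sensed state (e.g., joint angles) and the
# motion the guidance produced.
rng = np.random.default_rng(0)
states = rng.uniform(-1.0, 1.0, size=(500, 4))       # 4-D robot state
actions = states @ rng.normal(size=(4, 2)) * 0.1     # 2-D guided motions

# Behavioral cloning: fit a policy mapping states to demonstrated
# actions, so the robot can reproduce the motions on its own.
policy = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000)
policy.fit(states, actions)

# At run time, the trained policy proposes a motion for a new state.
new_state = rng.uniform(-1.0, 1.0, size=(1, 4))
print(policy.predict(new_state))
```

The point of the sketch: once demonstrations are logged as state-action pairs, "copying the human" reduces to ordinary supervised regression.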
If a robot traveling to a destination has just two possible paths, it needs only to compare the routes’ travel time and probability of success. But if the robot is traversing a complex environment with many possible paths, choosing the best route amid so much uncertainty can quickly become an intractable problem.
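To make the two-path comparison concrete, here is a toy scoring rule, assumed purely for this sketch: routes are ranked by expected travel time, with a fixed time penalty when a route fails. The planners in the actual research are more sophisticated.

```python
# Toy route comparison under uncertainty.
RECOVERY_COST = 60.0  # minutes lost if a route fails (hypothetical)

def expected_cost(travel_time, success_prob):
    # Expected minutes: succeed and pay travel_time, or fail and also
    # pay a fixed recovery penalty.
    return (success_prob * travel_time
            + (1 - success_prob) * (travel_time + RECOVERY_COST))

routes = {
    "short but risky": (10.0, 0.6),    # (minutes, success probability)
    "long but reliable": (25.0, 0.95),
}
best = min(routes, key=lambda r: expected_cost(*routes[r]))
print(best)  # with these numbers, the reliable route wins (28 vs. 34)
```

With two candidates this is a single comparison; the difficulty arises when the number of possible paths, and hence of such comparisons, explodes.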
In 2012, the best language models were small recurrent networks that struggled to form coherent sentences. Fast forward to today, and large language models like GPT-4 outperform most students on the SAT. How has this rapid progress been possible?
To teach an AI agent a new task, like how to open a kitchen cabinet, researchers often use reinforcement learning — a trial-and-error process where the agent is rewarded for taking actions that get it closer to the goal.
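Schematically, that trial-and-error loop is the textbook reinforcement-learning recipe. Below is a minimal tabular Q-learning sketch on an invented one-dimensional "walk to the goal" task; it is a generic illustration, not the researchers' method:

```python
import random

# Toy environment: the agent stands on positions 0..5 and is rewarded
# for reaching position 5 (standing in for "cabinet opened").
GOAL, ACTIONS = 5, [-1, +1]
ALPHA, GAMMA, EPSILON = 0.5, 0.9, 0.1  # learning rate, discount, exploration

Q = {(s, a): 0.0 for s in range(GOAL + 1) for a in ACTIONS}

for episode in range(200):
    state = 0
    while state != GOAL:
        # Explore occasionally; otherwise take the best-known action.
        if random.random() < EPSILON:
            action = random.choice(ACTIONS)
        else:
            action = max(ACTIONS, key=lambda a: Q[(state, a)])
        next_state = min(max(state + action, 0), GOAL)
        reward = 1.0 if next_state == GOAL else 0.0
        # Q-learning update: nudge the estimate toward the reward plus
        # the discounted value of the best next action.
        best_next = max(Q[(next_state, a)] for a in ACTIONS)
        Q[(state, action)] += ALPHA * (reward + GAMMA * best_next
                                       - Q[(state, action)])
        state = next_state

# After training, the greedy policy steps straight toward the goal.
print([max(ACTIONS, key=lambda a: Q[(s, a)]) for s in range(GOAL)])
```

The agent starts out acting almost randomly; the reward signal gradually reshapes its estimates until the greedy policy heads directly for the goal.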