MIT researchers have created a periodic table that shows how more than 20 classical machine-learning algorithms are connected. The new framework sheds light on how scientists could fuse strategies from different methods to improve existing AI models or come up with new ones.
Data privacy comes with a cost. There are security techniques that protect sensitive user data, like customer addresses, from attackers who may attempt to extract them from AI models — but they often make those models less accurate.
Agentic AI systems are “designed to pursue complex goals with autonomy and predictability” (MIT Technology Review). Agentic AI models boost productivity by taking goal-directed actions, making contextual decisions, and adjusting plans as conditions change, all with minimal human oversight.
More than seven years ago, cybersecurity researchers were thoroughly rattled by the discovery of Meltdown and Spectre, two major security vulnerabilities uncovered in the microprocessors found in virtually every computer on the planet.
Twenty years ago, in a pre-ChatGPT world, a fake-paper generator created by three MIT students fooled a major conference so badly that its organizers had to completely overhaul their reviewing practices.
A computation has two main constraints: the amount of memory it requires and how long it takes to run. If a task takes a certain number of steps, the computer can touch at most one memory slot per step, so in the worst case it needs no more memory slots than it has steps.
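To make that relationship concrete, here is a small illustrative sketch (not from the article) of a toy computation that runs for a fixed number of steps and touches at most one memory slot per step, so the memory it uses can never exceed the number of steps it takes:

```python
def run_toy_computation(steps: int) -> int:
    """Run `steps` steps; each step may touch at most one memory slot."""
    memory = set()  # slots touched so far
    for step in range(steps):
        # Some steps revisit old slots, so memory can be much smaller than time.
        memory.add(step % max(1, steps // 2))
    return len(memory)  # slots actually used

if __name__ == "__main__":
    for t in (4, 16, 64):
        used = run_toy_computation(t)
        assert used <= t  # space never exceeds time
        print(f"{t} steps -> {used} memory slots used")
```

The assertion holds for any such computation: since each step can introduce at most one new slot, the space used is always bounded by the running time.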
While early language models could only process text, contemporary large language models now perform highly diverse tasks on different types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, or answer questions about images and audio.
Proteins are the workhorses that keep our cells running, and many thousands of different types exist, each performing a specialized function. Researchers have long known that the structure of a protein determines what it can do.
Not sure what to think about DeepSeek R1, the most recent large language model (LLM) making waves in the global tech community? Faculty from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are here to help!
In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce genAI’s carbon footprint and other impacts.