Three new frameworks from MIT CSAIL reveal how natural language can provide important context for language models that perform coding, AI planning, and robotics tasks (Credit: Alex Shipps/MIT CSAIL, with components from the researchers and Pixabay).

Large language models (LLMs) are becoming increasingly useful for programming and robotics tasks, but for more complicated reasoning problems, the gap between these systems and humans looms large. Without the ability to learn new concepts like humans do, these systems fail to form good abstractions — essentially, high-level representations of complex concepts that skip less-important details — and thus sputter when asked to do more sophisticated tasks.

A team of MIT researchers found that highly memorable images evoke stronger and more sustained responses in ventro-occipital brain cortices, peaking at around 300 ms, while conceptually similar but easily forgettable images quickly fade away (Credit: Alex Shipps/MIT CSAIL).

For nearly a decade, a team of MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) researchers has been seeking to uncover why certain images persist in people's minds while many others fade. To do this, they set out to map the spatio-temporal brain dynamics involved in recognizing a visual image. Now, for the first time, scientists have harnessed the combined strengths of magnetoencephalography (MEG), which captures the timing of brain activity, and functional magnetic resonance imaging (fMRI), which identifies active brain regions, to precisely determine when and where the brain processes a memorable image.

A butterfly lands on a robotic hand.

Daniela Rus is a pioneering roboticist, a professor of electrical engineering and computer science at MIT, and the director of the Computer Science and Artificial Intelligence Laboratory (CSAIL). She is also a member of the National Academy of Engineering and the American Academy of Arts and Sciences, and a MacArthur Fellow.