David Clark, Senior Research Scientist at MIT CSAIL, helped design the system that connects nearly every computer on Earth. As Chief Protocol Architect of the Internet from 1981 to 1989, he was there for the beginnings of the Internet. Calling the wave of AI technology an “echo” of what happened in the ’80s, Dr. Clark cautions, “maybe we need to slow things down and think a bit.”
What happens when the team behind PlayStation meets the researchers pushing the boundaries of AI? You get The Nexus of Games and AI, a 12-part MIT Independent Activities Period (IAP) course, now available to stream.
AI models are proliferating fast. There’s Claude, ChatGPT, Gemini, Copilot, DeepSeek, Grok, Mistral, Llama, and more emerging every day. But which ones should you work with? And why? We asked MIT CSAIL faculty and students which AI tools they’re reaching for right now. The responses showed a variety of preferences, a clear winner in one area, and a word of caution about what goes into any public model’s memory.
Designers, makers, and others often use 3D printing to rapidly prototype a range of functional objects, from movie props to medical devices. Accurate print previews are essential so users know a fabricated object will perform as expected.
Each spring, river herring populations migrate from Massachusetts coastal waters to begin their annual journey up rivers and streams to freshwater spawning habitat. River herring have faced severe population declines over the past several decades, and their migration is extensively monitored across the region, primarily through traditional visual counting and volunteer-based programs.
The CSAIL Forum is a monthly series hosted by Professor Daniela Rus, Director of CSAIL. This month features Professor Vincent Sitzmann.
CSAIL Forum with Vincent Sitzmann: April 7, 2026
LLMs have ushered in a new era of how humans interact with computers: they can code, write, and automate many of our everyday tasks. Yet in the physical world, AI has so far *not* delivered autonomy. No robot today can do your cleaning, load your dishwasher, or help the elderly get out of bed; in fact, it remains impossible to automate any but the most restricted and controlled tasks. In this talk, Sitzmann will discuss how the billions of hours of video that humankind has collected are a candidate for ushering in an “LLM moment” of AI that can interact with the physical world, and will show recent results from his research group working toward this goal.
Imagine a world where you could change the designs you see on bags, shirts, and walls whenever you want. Typical clothes would become customizable fashion pieces, while your humble abode could turn into a smart home. That’s the vision of scientists like MIT PhD student Yunyi Zhu ’20, MEng ’21: technology that can “reprogram” the appearance of personal accessories, home decor, and office items.
In high-stakes settings like medical diagnostics, users often want to know what led a computer vision model to make a certain prediction, so they can determine whether to trust its output.