[Image: CSAIL framework reduces bias, treats comparable individual users similarly.]

Two of the trickiest qualities to balance in the world of machine learning are fairness and accuracy. Algorithms optimized for accuracy may unintentionally perpetuate bias against specific groups, while those prioritizing fairness may compromise accuracy by misclassifying some data points.
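
The tension is easy to see in miniature: a classifier can be perfectly accurate and still treat groups very differently. The sketch below is purely illustrative (toy data and hypothetical helper names, not the CSAIL framework itself); it computes plain accuracy alongside a demographic-parity gap.

```python
# Illustrative only: toy accuracy vs. group-fairness check with made-up data.

def accuracy(y_true, y_pred):
    """Fraction of predictions that match the labels."""
    return sum(t == p for t, p in zip(y_true, y_pred)) / len(y_true)

def demographic_parity_gap(y_pred, groups):
    """Difference in positive-prediction rates between group 0 and group 1."""
    def rate(g):
        preds = [p for p, grp in zip(y_pred, groups) if grp == g]
        return sum(preds) / max(1, len(preds))
    return abs(rate(0) - rate(1))

# Toy labels, predictions, and group membership for eight individuals.
y_true = [1, 0, 1, 1, 0, 0, 1, 0]
y_pred = [1, 0, 1, 1, 0, 0, 1, 0]  # predictions match labels exactly
groups = [0, 0, 0, 0, 1, 1, 1, 1]

print(f"accuracy: {accuracy(y_true, y_pred):.2f}")                      # 1.00
print(f"demographic parity gap: {demographic_parity_gap(y_pred, groups):.2f}")  # 0.50
```

Here the classifier is 100 percent accurate yet assigns positive outcomes to the two groups at very different rates, which is exactly the kind of disparity a fairness-aware framework has to reconcile with accuracy.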

[Image: The “Diffusion Forcing” method can sort through noisy data and reliably predict the next steps in a task, helping a robot complete manipulation tasks, for example. In one experiment, it helped a robotic arm rearrange toy fruits into target spots on circular mats despite starting from random positions and visual distractions. Credits: Mike Grimmett/MIT CSAIL.]

In the current AI zeitgeist, sequence models have skyrocketed in popularity for their ability to analyze data and predict what to do next. For instance, you’ve likely used next-token prediction models like ChatGPT, which anticipate each word (token) in a sequence to form answers to users’ queries. There are also full-sequence diffusion models like Sora, which convert words into dazzling, realistic visuals by successively “denoising” an entire video sequence.
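
To make “anticipating each token” concrete, here is a minimal, purely illustrative sketch of greedy autoregressive decoding. A toy bigram score table stands in for a real language model; all names and scores are hypothetical, not how ChatGPT actually works.

```python
# Toy conditional scores: given the last token, score each candidate next token.
BIGRAM_SCORES = {
    "the": {"cat": 0.6, "dog": 0.4},
    "cat": {"sat": 0.7, "ran": 0.3},
    "sat": {"down": 0.9, "up": 0.1},
    "dog": {"ran": 0.8, "sat": 0.2},
}

def predict_next(token):
    """Greedily pick the highest-scoring next token, or None if unknown."""
    candidates = BIGRAM_SCORES.get(token)
    if not candidates:
        return None
    return max(candidates, key=candidates.get)

def generate(prompt, max_tokens=5):
    """Autoregressive loop: append one predicted token at a time."""
    tokens = prompt.split()
    for _ in range(max_tokens):
        nxt = predict_next(tokens[-1])
        if nxt is None:
            break
        tokens.append(nxt)
    return " ".join(tokens)

print(generate("the"))  # "the cat sat down"
```

A real next-token model replaces the lookup table with a neural network conditioned on the whole preceding sequence, but the outer loop is the same; diffusion models like Sora instead refine an entire sequence at once by repeatedly denoising it.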


Frontier AI Safety & Policy Panel: Where We're at & Where We're Headed – Perspectives from the UK

It's been around a year since chatbots became widespread and governments worldwide turned their attention to advanced AI safety and governance. In this event, co-hosted by MIT CSAIL Alliances, the MIT-UK program, and the UK government’s AI Safety Institute, we will discuss the current state of research and where we're headed. Questions to be answered include: How will we control and govern AI agents?


[Image: Figure 1: Schematic overview of the framework for on-road evaluation of explanations in automated vehicles (Credit: MIT CSAIL and GIST).]

The Proceedings of the ACM on Interactive, Mobile, Wearable, and Ubiquitous Technologies (IMWUT) Editorial Board has awarded MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) and Gwangju Institute of Science and Technology (GIST) researchers a Distinguished Paper Award for their evaluation of visual explanations in autonomous vehicles’ decision-making.

Andrew Lo
Charles E. and Susan T. Harris Professor, CSAIL Principal Investigator
AI & Machine Learning