Image: Using graph neural networks (GNNs) allows points to “communicate” and self-optimize for better uniformity. Their approach helps optimize point placement to handle complex, multi-dimensional problems necessary for accurate simulations. (Image: Alex Shipps/MIT CSAIL)
CSAIL article

Imagine you’re tasked with sending a team of football players onto a field to assess the condition of the grass (a likely task for them, of course). If you pick their positions randomly, they might cluster together in some areas while completely neglecting others. But if you give them a strategy, like spreading out uniformly across the field, you might get a far more accurate picture of the grass condition.
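The football analogy describes a real sampling effect: evenly spread sample points usually estimate a field-wide quantity more accurately than randomly scattered ones, which can cluster and leave gaps. A minimal sketch of that idea (the "grass quality" function, point counts, and seed below are all hypothetical, chosen only for illustration):

```python
# Illustrative sketch: estimating the field-wide average of a quantity
# from random sample points vs. points spread uniformly on a grid.
import random

def grass_quality(x, y):
    # Hypothetical "grass condition" varying smoothly across a unit field.
    return 0.5 + 0.4 * x - 0.3 * y * y

def random_estimate(n, rng):
    # n players placed at random positions on the field.
    pts = [(rng.random(), rng.random()) for _ in range(n)]
    return sum(grass_quality(x, y) for x, y in pts) / n

def grid_estimate(k):
    # k*k players spread uniformly (at cell centers) across the field.
    pts = [((i + 0.5) / k, (j + 0.5) / k) for i in range(k) for j in range(k)]
    return sum(grass_quality(x, y) for x, y in pts) / len(pts)

rng = random.Random(0)
true_avg = grid_estimate(200)  # very fine grid, used as ground truth here
rand_err = abs(random_estimate(100, rng) - true_avg)
grid_err = abs(grid_estimate(10) - true_avg)
print(f"random-placement error: {rand_err:.5f}, uniform-grid error: {grid_err:.5f}")
```

With the same budget of 100 points, the uniform grid typically lands much closer to the true average for smooth fields, which is the intuition behind seeking well-spread ("low-discrepancy") point sets for simulation.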

Image: A new technique could help people determine whether to trust an AI model’s predictions. (Image: MIT News; iStock)
CSAIL article

Because machine-learning models can make incorrect predictions, researchers often equip them with the ability to tell a user how confident they are in a given decision. This is especially important in high-stakes settings, such as when models help identify disease in medical images or filter job applications.
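The usual starting point for such confidence scores is reading a probability off a classifier's softmax output; a minimal sketch (the class names and raw scores below are hypothetical):

```python
# Illustrative sketch: turning a classifier's raw scores into a
# "confidence" for its top prediction via a numerically stable softmax.
import math

def softmax(logits):
    # Shift by the max score so the exponentials cannot overflow.
    m = max(logits)
    exps = [math.exp(z - m) for z in logits]
    total = sum(exps)
    return [e / total for e in exps]

# Hypothetical raw scores from a medical-image classifier for one image.
labels = ["disease A", "disease B", "healthy"]
logits = [2.0, 0.5, -1.0]

probs = softmax(logits)
top = max(range(len(probs)), key=probs.__getitem__)
confidence = probs[top]
print(f"prediction: {labels[top]}, confidence: {confidence:.2f}")
```

A caveat the research community emphasizes: raw softmax probabilities are often miscalibrated (e.g. a model may report 90% confidence while being right far less often), which is exactly why techniques for trustworthy confidence estimates are an active topic.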

Image: MosaicML (L-R): Naveen Rao, Michael Carbin, Julie Shin Choi, Jonathan Frankle, and Hanlin Tang. (Credit: Courtesy of MosaicML)
CSAIL article

The impact of artificial intelligence will never be equitable if there’s only one company that builds and controls the models (not to mention the data that go into them). Unfortunately, today’s AI models are made up of billions of parameters that must be trained and tuned to maximize performance for each use case, putting the most powerful AI models out of reach for most people and companies.