Image: The automated, multimodal approach developed by MIT researchers interprets artificial vision models that evaluate the properties of images (Credits: iStock).

As artificial intelligence models become increasingly prevalent and are integrated into diverse sectors like health care, finance, education, transportation, and entertainment, understanding how they work under the hood is critical. Interpreting the mechanisms underlying AI models enables us to audit them for safety and biases, with the potential to deepen our understanding of the science behind intelligence itself.
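To make the idea of interpreting a vision model concrete, the sketch below applies one common interpretability technique, input-gradient saliency, to a stand-in classifier. It is an illustrative example only, not the researchers' automated multimodal method; the tiny network, the 64x64 input, and the 10-class output are all placeholders.

```python
# Minimal sketch of input-gradient saliency: attribute a vision model's
# prediction back to individual pixels. Illustrative only -- the tiny
# random-weight CNN stands in for a real image classifier.
import torch
import torch.nn as nn

# Stand-in vision model (random weights, purely for illustration).
model = nn.Sequential(
    nn.Conv2d(3, 8, kernel_size=3, padding=1),
    nn.ReLU(),
    nn.AdaptiveAvgPool2d(1),
    nn.Flatten(),
    nn.Linear(8, 10),  # 10 hypothetical image classes
)
model.eval()

# A single RGB image (batch of 1); requires_grad lets us trace the
# prediction back to the input pixels.
image = torch.rand(1, 3, 64, 64, requires_grad=True)

logits = model(image)
predicted_class = logits.argmax(dim=1).item()

# Backpropagate the score of the predicted class to the input.
logits[0, predicted_class].backward()

# Saliency map: gradient magnitude, maximized over color channels.
saliency = image.grad.abs().max(dim=1).values.squeeze(0)
print(saliency.shape)  # torch.Size([64, 64]) -- one importance value per pixel
```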

Image: MIT CSAIL researchers helped design a new technique that can guarantee the stability of robots controlled by neural networks. This development could eventually lead to safer autonomous vehicles and industrial robots (Credits: Alex Shipps/MIT CSAIL).

Neural networks have made a seismic impact on how engineers design controllers for robots, catalyzing more adaptive and efficient machines. Still, these brain-like machine-learning systems are a double-edged sword: Their complexity makes them powerful, but it also makes it difficult to guarantee that a robot powered by a neural network will safely accomplish its task.
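As an illustration of the underlying idea, and not the CSAIL technique itself, the sketch below pairs a tiny hand-built neural-network controller for a double-integrator robot with a sampled check that a quadratic Lyapunov candidate V(x) = x^T P x decreases along the closed-loop dynamics. The weights, dynamics, and matrix P are assumptions chosen for illustration, and sampling alone cannot deliver the guarantees the researchers are after.

```python
# Illustrative sketch: a small neural-network controller plus a sampled
# Lyapunov-decrease check on a double integrator. Passing the check is
# evidence of stability, not a proof -- closing that gap is the point of
# formally verifying neural controllers.
import numpy as np

rng = np.random.default_rng(0)

# Double-integrator "robot": state x = [position, velocity], control u = force.
dt = 0.05
A = np.array([[1.0, dt],
              [0.0, 1.0]])
B = np.array([[0.0],
              [dt]])

# Hypothetical two-layer neural controller with hand-set weights that
# roughly mimic a stabilizing PD law u = -2*position - 3*velocity.
W1 = np.eye(2)                 # hidden-layer weights
b1 = np.zeros(2)
W2 = np.array([[-2.0, -3.0]])  # output-layer weights
b2 = np.zeros(1)

def controller(x):
    h = np.tanh(W1 @ x + b1)   # hidden activations
    return W2 @ h + b2         # control input u, shape (1,)

# Quadratic Lyapunov candidate V(x) = x^T P x with a hand-picked P.
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def V(x):
    return x @ P @ x

# Sample many states near the origin and count Lyapunov-decrease violations.
violations = 0
for _ in range(10_000):
    x = rng.uniform(-0.5, 0.5, size=2)
    x_next = A @ x + B @ controller(x)
    if V(x_next) >= V(x):
        violations += 1

print(f"Lyapunov decrease violated at {violations} of 10000 sampled states")
```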

Image: A new technique could help people determine whether to trust an AI model’s predictions (Image: MIT News; iStock).

Because machine-learning models can make false predictions, researchers often equip them with the ability to tell a user how confident they are about a certain decision. This is especially important in high-stakes settings, such as when models are used to help identify disease in medical images or filter job applications.
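A minimal sketch of this idea, assuming a classifier that exposes raw logits: convert the logits to softmax probabilities, report the top probability as a confidence score, and abstain when it falls below a threshold. The threshold, class count, and logit values here are hypothetical, and real deployments also calibrate these scores, since raw softmax confidences are often overoptimistic.

```python
# Minimal sketch of reporting a confidence score with each prediction and
# abstaining (deferring to a human) when confidence is low.
import numpy as np

def softmax(logits):
    z = logits - logits.max()      # subtract max for numerical stability
    exp = np.exp(z)
    return exp / exp.sum()

def predict_with_confidence(logits, threshold=0.9):
    probs = softmax(logits)
    label = int(np.argmax(probs))
    confidence = float(probs[label])
    if confidence < threshold:
        return None, confidence    # abstain: route the case to a human reviewer
    return label, confidence

# Hypothetical logits from a classifier over three conditions.
print(predict_with_confidence(np.array([4.0, 0.5, 0.2])))  # confident prediction
print(predict_with_confidence(np.array([1.1, 1.0, 0.9])))  # abstains
```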