[Image: Using graph neural networks (GNNs) allows points to “communicate” and self-optimize for better uniformity. This approach helps optimize point placement to handle the complex, multi-dimensional problems necessary for accurate simulations (Image: Alex Shipps/MIT CSAIL).]

Imagine you’re tasked with sending a team of football players onto a field to assess the condition of the grass (an unlikely task for them, of course). If you pick their positions randomly, they might cluster together in some areas while completely neglecting others. But if you give them a strategy, like spreading out uniformly across the field, you might get a far more accurate picture of the grass condition.

[Image: The “Faces in Things” dataset is a comprehensive, human-labeled collection of over 5,000 pareidolic images. The research team trained face-detection algorithms to see faces in these pictures, giving insight into how humans learned to recognize faces within their surroundings (Credits: Alex Shipps/MIT CSAIL).]

In 1994, Florida jewelry designer Diana Duyser discovered what she believed to be the Virgin Mary’s image in a grilled cheese sandwich, which she preserved and later auctioned for $28,000. But how much do we really understand about pareidolia, the phenomenon of seeing faces and patterns in objects when they aren’t really there? 

[Image: “Personhood credentials allow you to prove you are human without revealing anything else about your identity,” says Tobin South (Credits: MIT News; iStock).]

As artificial intelligence agents become more advanced, it could become increasingly difficult to distinguish between AI-powered users and real humans on the internet. In a new white paper, researchers from MIT, OpenAI, Microsoft, and other tech companies and academic institutions propose the use of personhood credentials, a verification technique that enables someone to prove they are a real human online, while preserving their privacy.

[Image: The automated, multimodal approach developed by MIT researchers interprets artificial vision models that evaluate the properties of images (Credits: iStock).]

As artificial intelligence models become increasingly prevalent and are integrated into diverse sectors like health care, finance, education, transportation, and entertainment, understanding how they work under the hood is critical. Interpreting the mechanisms underlying AI models enables us to audit them for safety and biases, with the potential to deepen our understanding of the science behind intelligence itself.