Image: “Personhood credentials allow you to prove you are human without revealing anything else about your identity,” says Tobin South (Credits: MIT News; iStock).

As artificial intelligence agents become more advanced, it could become increasingly difficult to distinguish between AI-powered users and real humans on the internet. In a new white paper, researchers from MIT, OpenAI, Microsoft, and other tech companies and academic institutions propose the use of personhood credentials, a verification technique that enables someone to prove they are a real human online, while preserving their privacy.

Image: The automated, multimodal approach developed by MIT researchers interprets artificial vision models that evaluate the properties of images (Credits: iStock).

As artificial intelligence models become increasingly prevalent and are integrated into diverse sectors like health care, finance, education, transportation, and entertainment, understanding how they work under the hood is critical. Interpreting the mechanisms underlying AI models enables us to audit them for safety and biases, with the potential to deepen our understanding of the science behind intelligence itself.

Image: MosaicML (L-R): Naveen Rao, Michael Carbin, Julie Shin Choi, Jonathan Frankle, and Hanlin Tang (Credit: Courtesy of MosaicML).

The impact of artificial intelligence will never be equitable if only one company builds and controls the models (not to mention the data that go into them). Unfortunately, today’s AI models consist of billions of parameters that must be trained and tuned to maximize performance for each use case, putting the most powerful AI models out of reach for most people and companies.