“Personhood credentials allow you to prove you are human without revealing anything else about your identity,” says Tobin South (Credits: MIT News; iStock).
CSAIL article

As artificial intelligence agents become more advanced, it could become increasingly difficult to distinguish between AI-powered users and real humans on the internet. In a new white paper, researchers from MIT, OpenAI, Microsoft, and other tech companies and academic institutions propose the use of personhood credentials, a verification technique that enables someone to prove they are a real human online, while preserving their privacy.

A new technique could help people determine whether to trust an AI model’s predictions (Image: MIT News; iStock).

Because machine-learning models can make incorrect predictions, researchers often equip them with the ability to tell a user how confident they are in a given decision. This is especially important in high-stakes settings, such as when models are used to help identify disease in medical images or filter job applications.
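As a minimal illustration (not the researchers' method), a classifier can report a confidence score by converting its raw output scores into softmax probabilities and surfacing the probability of its top prediction; the labels and scores below are hypothetical:

```python
# Sketch: reporting a prediction together with the model's confidence in it.
import math

def softmax(logits):
    """Convert raw model scores into a probability distribution."""
    m = max(logits)  # subtract the max for numerical stability
    exps = [math.exp(x - m) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(logits, labels):
    """Return the top label and the model's softmax confidence in it."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical raw scores for a medical-imaging classifier's three classes.
label, conf = predict_with_confidence(
    [2.0, 0.5, 0.1], ["disease", "healthy", "inconclusive"]
)
```

In practice such raw softmax scores are often poorly calibrated, which is why confidence estimation is an active research area.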

The dataset contains movements and physiological responses of badminton players and can be used to build AI-driven coaching assistants. This development could improve the quality of forehand clear and backhand drive strokes across all skill levels, from beginners to experts (Credit: SeungJun Kim at GIST).

In sports training, practice is key, but being able to emulate the techniques of professional athletes can take a player’s performance to the next level. AI-based personalized sports coaching assistants support this by drawing on published datasets. With cameras and sensors strategically placed on the athlete’s body, these systems can track everything from joint movement patterns and muscle activation levels to gaze movements.

MIT researchers’ “consensus game” is a game-theoretic approach to language model decoding. The equilibrium-ranking algorithm harmonizes generative and discriminative querying to enhance prediction accuracy across various tasks, outperforming larger models and demonstrating the potential of game theory in improving language model consistency and truthfulness (Credits: Alex Shipps/MIT CSAIL).

Imagine you and a friend are playing a game where your goal is to communicate secret messages to each other using only cryptic sentences. Your friend's job is to guess the secret message behind your sentences. Sometimes, you give clues directly, and other times, your friend has to guess the message by asking yes-or-no questions about the clues you've given. The challenge is that both of you want to make sure you're understanding each other correctly and agreeing on the secret message.
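A loose sketch of the intuition, not the paper's algorithm: equilibrium ranking favors answers that a generative model (which proposes answers) and a discriminative model (which judges correctness) agree on. The real method runs no-regret learning to reach an equilibrium between the two players; here we simply combine the two models' log-scores over hypothetical candidates:

```python
# Sketch: ranking candidate answers by generator-discriminator agreement.
import math

def consensus_rank(candidates, gen_scores, disc_scores):
    """Rank candidates by combined generative + discriminative log-probability."""
    combined = {
        c: math.log(gen_scores[c]) + math.log(disc_scores[c]) for c in candidates
    }
    return sorted(candidates, key=lambda c: combined[c], reverse=True)

# Hypothetical scores for "What is the capital of France?"
candidates = ["Paris", "Lyon"]
gen = {"Paris": 0.6, "Lyon": 0.4}    # generator: P(answer | question)
disc = {"Paris": 0.9, "Lyon": 0.2}   # discriminator: P(correct | question, answer)
ranking = consensus_rank(candidates, gen, disc)
```

An answer only ranks highly when both players assign it substantial probability, which is the "agreeing on the secret message" idea in miniature.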

Three new frameworks from MIT CSAIL reveal how natural language can provide important context for language models that perform coding, AI planning, and robotics tasks (Credit: Alex Shipps/MIT CSAIL, with components from the researchers and Pixabay).

Large language models (LLMs) are becoming increasingly useful for programming and robotics tasks, but for more complicated reasoning problems, the gap between these systems and humans looms large. Without the ability to learn new concepts like humans do, these systems fail to form good abstractions — essentially, high-level representations of complex concepts that skip less-important details — and thus sputter when asked to do more sophisticated tasks.