As artificial intelligence agents become more advanced, it could become increasingly difficult to distinguish between AI-powered users and real humans on the internet. In a new white paper, researchers from MIT, OpenAI, Microsoft, and other tech companies and academic institutions propose the use of personhood credentials, a verification technique that enables someone to prove they are a real human online, while preserving their privacy.
Because machine-learning models can make incorrect predictions, researchers often equip them with the ability to tell a user how confident they are in a given decision. This is especially important in high-stakes settings, such as when models are used to help identify disease in medical images or filter job applications.
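One common way a classifier reports confidence is by converting its raw output scores into probabilities and attaching the top probability to the prediction. The sketch below is a minimal, hypothetical illustration of that idea (the logits and labels are invented for the example), not the specific method any of the systems above use:

```python
import math

def softmax(logits):
    """Convert raw model scores (logits) into probabilities that sum to 1."""
    # Subtract the max for numerical stability before exponentiating.
    exps = [math.exp(x - max(logits)) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def predict_with_confidence(logits, labels):
    """Return the predicted label and the probability the model assigns to it."""
    probs = softmax(logits)
    best = max(range(len(probs)), key=probs.__getitem__)
    return labels[best], probs[best]

# Hypothetical logits from a medical-image classifier
label, conf = predict_with_confidence(
    [2.0, 0.5, -1.0], ["disease", "benign", "unclear"]
)
```

Here the model would report "disease" with roughly 79 percent confidence; a downstream user (or a human reviewer in a high-stakes setting) can then decide whether that confidence is high enough to act on. Note that raw softmax probabilities are often overconfident, which is why calibration is an active research area.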
In sports training, practice is key, but being able to emulate the techniques of professional athletes can take a player's performance to the next level. AI-based personalized sports coaching assistants support this by drawing on published datasets. With cameras and sensors strategically placed on the athlete's body, these systems can track everything from joint movement patterns and muscle activation levels to gaze movements.
Imagine you and a friend are playing a game where your goal is to communicate secret messages to each other using only cryptic sentences. Your friend's job is to guess the secret message behind your sentences. Sometimes, you give clues directly, and other times, your friend has to guess the message by asking yes-or-no questions about the clues you've given. The challenge is that both of you want to make sure you understand each other correctly and agree on the secret message.
The recent ransomware attack on Change Healthcare, which severed the network connecting health care providers, pharmacies, and hospitals with health insurance companies, demonstrates just how disruptive supply chain attacks can be. In this case, it hindered the ability of those providing medical services to submit insurance claims and receive payments.
Large language models (LLMs) are becoming increasingly useful for programming and robotics tasks, but for more complicated reasoning problems, the gap between these systems and humans looms large. Without the ability to learn new concepts like humans do, these systems fail to form good abstractions — essentially, high-level representations of complex concepts that skip less-important details — and thus sputter when asked to do more sophisticated tasks.