On a research cruise around Hawaii in 2018, Yuening Zhang SM ’19, PhD ’24 saw how difficult it was to run a tight ship. The careful coordination required to map underwater terrain could sometimes lead to a stressful environment for team members, who might have different understandings of which tasks must be completed amid rapidly changing conditions. During these trips, Zhang considered how a robotic companion could have helped her and her crewmates achieve their goals more efficiently.
As artificial intelligence agents become more advanced, it could become increasingly difficult to distinguish between AI-powered users and real humans on the internet. In a new white paper, researchers from MIT, OpenAI, Microsoft, and other tech companies and academic institutions propose the use of personhood credentials, a verification technique that enables someone to prove they are a real human online, while preserving their privacy.
Ask a large language model (LLM) like GPT-4 to smell a rain-soaked campsite, and it’ll politely decline. Ask the same system to describe that scent to you, and it’ll wax poetic about “an air thick with anticipation” and “a scent that is both fresh and earthy,” despite having neither prior experience with rain nor a nose to help it make such observations.
As organizations rush to implement artificial intelligence (AI), a new analysis of AI-related risks finds significant gaps in our understanding of those risks, highlighting an urgent need for a more comprehensive approach to identifying and managing them.