Data privacy comes with a cost. There are security techniques that protect sensitive user data, like customer addresses, from attackers who may attempt to extract them from AI models — but they often make those models less accurate.
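The teaser doesn't name a specific technique, but differential privacy is the textbook instance of this privacy-accuracy trade-off: calibrated random noise masks any single person's record, and stronger privacy means more noise and less accuracy. A minimal sketch (synthetic data; the function and parameter names are illustrative, not from the article):

```python
import numpy as np

rng = np.random.default_rng(0)
incomes = rng.normal(50_000, 15_000, size=1_000)   # synthetic "sensitive" records

def private_mean(data, epsilon, lo=0.0, hi=150_000.0):
    """Differentially private mean via the Laplace mechanism."""
    clipped = np.clip(data, lo, hi)
    sensitivity = (hi - lo) / len(clipped)   # max influence of any one record
    return clipped.mean() + rng.laplace(scale=sensitivity / epsilon)

true_mean = incomes.mean()
for eps in (0.01, 0.1, 1.0):   # smaller epsilon: stronger privacy, more noise
    print(f"eps={eps}: error = {abs(private_mean(incomes, eps) - true_mean):.1f}")
```

Running this shows the trade-off directly: the strictest privacy budget produces estimates that can be off by thousands, while the loosest is nearly exact.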
A hospital that wants to use a cloud computing service to perform artificial intelligence data analysis on sensitive patient records needs a guarantee that those data will remain private during computation. Homomorphic encryption is a special type of security scheme that can provide this assurance.
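To make the idea concrete, here is a toy sketch of the Paillier cryptosystem, a classic scheme that is homomorphic for addition only; the fully homomorphic schemes a cloud service would need extend the same principle to arbitrary computation. This is an illustration, not the scheme the article describes, and the tiny primes are wildly insecure:

```python
import math
import random

def keygen(p, q):
    """Generate toy Paillier keys from two primes (illustration only)."""
    n = p * q
    n2 = n * n
    g = n + 1
    lam = (p - 1) * (q - 1) // math.gcd(p - 1, q - 1)  # lcm(p-1, q-1)
    # mu = (L(g^lam mod n^2))^{-1} mod n, with L(x) = (x - 1) // n
    mu = pow((pow(g, lam, n2) - 1) // n, -1, n)
    return (n, g), (lam, mu)

def encrypt(pub, m):
    n, g = pub
    n2 = n * n
    r = random.randrange(1, n)        # randomness makes encryption probabilistic
    while math.gcd(r, n) != 1:
        r = random.randrange(1, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(pub, priv, c):
    n, _ = pub
    lam, mu = priv
    return ((pow(c, lam, n * n) - 1) // n) * mu % n

pub, priv = keygen(1789, 2003)        # toy primes; real keys are far larger
c1, c2 = encrypt(pub, 12), encrypt(pub, 30)
c_sum = (c1 * c2) % (pub[0] ** 2)     # multiplying ciphertexts adds the plaintexts
print(decrypt(pub, priv, c_sum))      # -> 42, with the cloud never seeing 12 or 30
```

The key property is in the last three lines: a server holding only ciphertexts can still perform the arithmetic, and only the key holder can read the result.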
Imagine you’re a chef with a highly sought-after recipe. You write your top-secret instructions in a journal to ensure you remember them, but the recipe’s location in the book is evident from the folds and tears on the edges of that often-referenced page.
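The analogy maps onto access-pattern leakage: encryption can hide a record's contents while the act of fetching it still reveals which record you care about. A deliberately naive sketch of the problem and the brute-force fix (hypothetical names; not the protocol the article covers):

```python
# Even if every record is encrypted, *which* record you fetch can leak what
# you're after, like the worn page in the chef's journal.
database = {"pancakes": "recipe A", "secret sauce": "recipe B", "toast": "recipe C"}

def leaky_lookup(key):
    # The server sees exactly which entry was requested.
    return database[key]

def oblivious_lookup(key):
    # Naive fix: fetch the entire database and select locally, so the server's
    # view is identical no matter which recipe you wanted (at a bandwidth cost).
    full_copy = dict(database)
    return full_copy[key]

print(oblivious_lookup("secret sauce"))
```

Practical schemes aim for the same guarantee as the download-everything baseline without its cost.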
Not sure what to think about DeepSeek R1, the most recent large language model (LLM) making waves in the global tech community? Faculty from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are here to help!
If you’ve watched cartoons like Tom and Jerry, you’ll recognize a common theme: an elusive target evades its formidable adversary. This game of “cat-and-mouse,” whether literal or otherwise, involves pursuing something that ever-so-narrowly escapes you at every attempt.
Frontier AI Safety & Policy Panel: Where We're at & Where We're Headed – Perspectives from the UK
It's been around a year since chatbots became widespread and governments worldwide turned their attention to advanced AI safety and governance. In this event, co-hosted by MIT CSAIL Alliances, the MIT-UK program, and the UK government’s AI Safety Institute, we will discuss the current state of research and where we're headed. Questions to be answered include: How will we control and govern AI agents?
The most recent email you sent was likely encrypted using a tried-and-true method that relies on the idea that even the fastest computer would be unable to efficiently break a gigantic number into factors.
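The tried-and-true method being alluded to is presumably RSA-style public-key encryption, whose security rests on the hardness of factoring. A textbook sketch with toy primes and no padding (insecure in practice, purely illustrative) shows why factoring the modulus breaks everything:

```python
# Textbook RSA with toy numbers (no padding, not secure; illustration only).
p, q = 61, 53                       # real keys use primes hundreds of digits long
n = p * q                           # public modulus: 3233
phi = (p - 1) * (q - 1)             # 3120
e = 17                              # public exponent, coprime with phi
d = pow(e, -1, phi)                 # private exponent: 2753

message = 42
ciphertext = pow(message, e, n)     # anyone can encrypt with (n, e)
recovered = pow(ciphertext, d, n)   # only the holder of d can decrypt
assert recovered == message

# An attacker who factors n = 3233 back into 61 * 53 can recompute phi and d,
# and read everything. Easy at this size, infeasible at real key sizes.
```

The entire secret lies in the factorization of n: publish n freely, and security holds only as long as no computer can split it back into p and q efficiently.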