Data privacy comes with a cost. There are security techniques that protect sensitive user data, like customer addresses, from attackers who may attempt to extract them from AI models — but they often make those models less accurate.
More than seven years ago, cybersecurity researchers were thoroughly rattled by the discovery of Meltdown and Spectre, two major security vulnerabilities uncovered in the microprocessors found in virtually every computer on the planet.
A hospital that wants to use a cloud computing service to perform artificial intelligence data analysis on sensitive patient records needs a guarantee those data will remain private during computation. Homomorphic encryption is a special type of security scheme that can provide this assurance.
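That guarantee rests on being able to compute directly on encrypted values. As a rough, illustrative sketch only (not the scheme described in the article), the toy Python below implements an insecure, additively homomorphic Paillier-style cipher with tiny hard-coded primes; the parameter choices, helper names, and example numbers are all assumptions made for demonstration:

```python
# Toy sketch of additively homomorphic (Paillier-style) encryption.
# It shows how a server could add encrypted values without ever seeing
# the plaintexts. Parameters are tiny and insecure; illustration only.
import math
import random

# Key generation with small, hard-coded primes (never do this in practice).
p, q = 17, 19
n = p * q            # public modulus
n2 = n * n
g = n + 1            # standard Paillier generator choice
lam = math.lcm(p - 1, q - 1)   # part of the secret key

def L(x):
    return (x - 1) // n

mu = pow(L(pow(g, lam, n2)), -1, n)  # modular inverse used in decryption

def encrypt(m):
    """Encrypt an integer m in [0, n) under the public key (n, g)."""
    r = random.randrange(2, n)
    while math.gcd(r, n) != 1:
        r = random.randrange(2, n)
    return (pow(g, m, n2) * pow(r, n, n2)) % n2

def decrypt(c):
    """Recover the plaintext using the secret key (lam, mu)."""
    return (L(pow(c, lam, n2)) * mu) % n

# The homomorphic property: multiplying ciphertexts adds plaintexts.
c1, c2 = encrypt(20), encrypt(35)
c_sum = (c1 * c2) % n2           # computed without any decryption
assert decrypt(c_sum) == 55      # 20 + 35, visible only to the key holder
```

In this sketch, a cloud service holding only `c1` and `c2` can produce `c_sum`, while only the hospital, which holds the secret key, can read the result.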
Imagine you’re a chef with a highly sought-after recipe. You write your top-secret instructions in a journal to ensure you remember them, but the recipe’s location within the journal is evident from the folds and tears on the edges of that often-referenced page.
This past month, Martin Rinard, a professor in the MIT Department of Electrical Engineering and Computer Science (EECS) and a CSAIL principal investigator, received the 2025 Outstanding Research Award from the ACM Special Interest Group on Software Engineering (SIGSOFT). The organization recognized him for his “fundamental contributions in pioneering the new fields of program repair and approximate computing.”
Not sure what to think about DeepSeek R1, the most recent large language model (LLM) making waves in the global tech community? Faculty from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are here to help!
If you’ve watched cartoons like Tom and Jerry, you’ll recognize a common theme: an elusive target evades its formidable adversary. This game of “cat-and-mouse” — whether literal or otherwise — involves pursuing something that ever so narrowly escapes you at every attempt.
"The net effect [of DeepSeek] should be to significantly increase the pace of AI development, since the secrets are being let out and the models are now cheaper and easier to train by more people." ~ Associate Professor Phillip Isola