Programmers can now use large language models (LLMs) to generate computer code more quickly. However, LLM-generated code only makes programmers’ lives easier if it follows the rules of the programming language and doesn’t cause the computer to crash.
As a college student in Serbia with a passion for math and physics, Ana Trišović found herself drawn to computer science and its practical, problem-solving approach. It was then that she discovered MIT OpenCourseWare, part of MIT Open Learning, and in 2012 decided to take a course on Data Analytics with Python, something her school didn’t offer.
More than seven years ago, cybersecurity researchers were thoroughly rattled by the discovery of Meltdown and Spectre, two major security vulnerabilities uncovered in the microprocessors found in virtually every computer on the planet.
Many companies invest heavily in hiring talent to create the high-performance library code that underpins modern artificial intelligence systems. NVIDIA, for instance, developed some of the most advanced high-performance computing (HPC) libraries, creating a competitive moat that has proven difficult for others to cross.
This past month, Martin Rinard, a professor in MIT’s Department of Electrical Engineering and Computer Science (EECS) and a CSAIL principal investigator, received the 2025 Outstanding Research Award from the ACM Special Interest Group on Software Engineering (SIGSOFT). The organization recognized him for his “fundamental contributions in pioneering the new fields of program repair and approximate computing.”
A particular set of probabilistic inference algorithms common in robotics involves Sequential Monte Carlo methods, also known as “particle filtering,” which approximate complex probability distributions through repeated random sampling. (“Particle,” in this context, refers to an individual sample.) Traditional particle filtering struggles to produce accurate results on complex distributions, giving rise to advanced algorithms such as hybrid particle filtering.
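To make the idea concrete, here is a minimal sketch of a classic bootstrap particle filter for a toy model (a one-dimensional Gaussian random walk observed through additive Gaussian noise). The model, parameter names, and noise levels are illustrative assumptions, not the hybrid algorithms discussed above.

```python
import numpy as np

def particle_filter(observations, n_particles=1000,
                    process_std=1.0, obs_std=1.0, seed=0):
    """Bootstrap particle filter for a 1-D Gaussian random walk
    observed through additive Gaussian noise (a toy model chosen
    for illustration)."""
    rng = np.random.default_rng(seed)
    # Each "particle" is one random sample of the hidden state.
    particles = rng.normal(0.0, 1.0, n_particles)
    estimates = []
    for y in observations:
        # Propagate: sample each particle from the transition model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Weight: likelihood of the observation y under each particle.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Estimate: weighted posterior mean of the hidden state.
        estimates.append(float(weights @ particles))
        # Resample: replace low-weight particles with high-weight ones.
        particles = rng.choice(particles, size=n_particles, p=weights)
    return estimates

# Track a hidden state that drifts upward by 0.5 per step.
true_states = np.cumsum(np.full(50, 0.5))
obs_rng = np.random.default_rng(1)
obs = true_states + obs_rng.normal(0.0, 1.0, 50)
est = particle_filter(obs)
print(est[-1])  # should land near the final true state (25.0)
```

The repeated resampling step is what distinguishes particle filtering from plain importance sampling: it concentrates computation on high-likelihood regions of the state space, which is also where the method's weaknesses on complex, multimodal distributions come from.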
The neural network artificial intelligence models used in applications like medical image processing and speech recognition perform operations on hugely complex data structures that require an enormous amount of computation to process. This is one reason deep-learning models consume so much energy.
"The net effect [of DeepSeek] should be to significantly increase the pace of AI development, since the secrets are being let out and the models are now cheaper and easier to train by more people." ~ Associate Professor Phillip Isola
CSAIL principal investigator Charles Leiserson, a professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), recently co-authored a paper published by the Computing Research Association as part of the first edition of its “Quadrennial Papers,” a series released every four years.
Are you a CSAIL entrepreneur? Are you curious about the resources that CSAIL Alliances, as well as the rest of the MIT ecosystem, can offer you? Sign up for Office Hours using the form to ask Christiana Kalfas, Sr.