“I have such a soft spot for OpenCourseWare — it shaped my career,” says Ana Trišović, a research scientist at MIT CSAIL’s FutureTech lab (Credits: Courtesy of Ana Trišović).
CSAIL article

As a college student in Serbia with a passion for math and physics, Ana Trišović found herself drawn to computer science and its practical, problem-solving approaches. It was then that she discovered MIT OpenCourseWare, part of MIT Open Learning, and in 2012 decided to take a course on data analytics with Python — something her school didn’t offer.

Martin Rinard, MIT professor and CSAIL principal investigator.
CSAIL article

This past month, Martin Rinard, MIT professor in the Department of Electrical Engineering and Computer Science (EECS) and CSAIL principal investigator, received the 2025 Outstanding Research Award from the ACM Special Interest Group on Software Engineering (SIGSOFT). The organization recognized him for his “fundamental contributions in pioneering the new fields of program repair and approximate computing.”

A software program runs on a monitor at an empty desk (Credit: Pixabay).
CSAIL article

A family of probabilistic inference algorithms common in robotics relies on Sequential Monte Carlo methods, also known as “particle filtering,” which approximate a target distribution through repeated random sampling. (“Particle,” in this context, refers to an individual sample.) Traditional particle filtering struggles to provide accurate results on complex distributions, giving rise to advanced algorithms such as hybrid particle filtering.
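To make the idea concrete, here is a minimal, hand-written sketch of a bootstrap particle filter for a one-dimensional random-walk model — an illustrative toy, not the hybrid algorithm the article describes. The model, noise levels, and function name are all assumptions chosen for clarity:

```python
import random
import math

def particle_filter(observations, num_particles=1000,
                    process_noise=1.0, obs_noise=1.0):
    """Minimal bootstrap particle filter for a 1-D random-walk model.

    Assumed state model:       x_t = x_{t-1} + N(0, process_noise^2)
    Assumed observation model: y_t = x_t + N(0, obs_noise^2)
    Returns the posterior-mean estimate of x_t at each time step.
    """
    # Each "particle" is one random sample of the hidden state.
    particles = [random.gauss(0.0, 1.0) for _ in range(num_particles)]
    estimates = []
    for y in observations:
        # 1. Propagate: push every particle through the process model.
        particles = [p + random.gauss(0.0, process_noise) for p in particles]
        # 2. Weight: likelihood of the observation under each particle.
        weights = [math.exp(-((y - p) ** 2) / (2 * obs_noise ** 2))
                   for p in particles]
        total = sum(weights) or 1.0
        weights = [w / total for w in weights]
        # 3. Estimate: weighted posterior mean of the particle cloud.
        estimates.append(sum(w * p for w, p in zip(weights, particles)))
        # 4. Resample: redraw particles in proportion to their weights,
        #    concentrating the cloud where the data say the state is.
        particles = random.choices(particles, weights=weights, k=num_particles)
    return estimates
```

Feeding the filter a stream of noisy measurements makes the particle cloud — and hence the posterior-mean estimate — drift toward the true state; the struggle the article mentions appears when the target distribution is multimodal and a single resampled cloud collapses onto the wrong mode.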

The new compiler, called SySTeC, can optimize computations by automatically taking advantage of both sparsity and symmetry in tensors (Credits: iStock).
CSAIL article

The neural network artificial intelligence models used in applications like medical image processing and speech recognition perform operations on hugely complex data structures that require an enormous amount of computation to process. This is one reason deep-learning models consume so much energy.
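To illustrate the kind of structure a compiler like SySTeC can exploit, here is a small hand-written sketch — not SySTeC’s output — of a matrix–vector product that uses both symmetry (each stored off-diagonal entry serves two output positions) and sparsity (zero entries are skipped). The function name and storage layout are assumptions for this example:

```python
import numpy as np

def symmetric_matvec_upper(A_upper, x):
    """Multiply a symmetric matrix by a vector using only its stored
    upper triangle (diagonal included).

    Symmetry: each off-diagonal entry A[i][j] (i < j) contributes to
    both y[i] and y[j], so roughly half the multiplications of a
    dense matvec suffice. Sparsity: zero entries are skipped entirely.
    """
    n = len(x)
    y = np.zeros(n)
    for i in range(n):
        y[i] += A_upper[i, i] * x[i]   # diagonal term, used once
        for j in range(i + 1, n):
            a = A_upper[i, j]
            if a == 0.0:               # sparsity: skip stored zeros
                continue
            y[i] += a * x[j]           # contribution of A[i][j]
            y[j] += a * x[i]           # contribution of A[j][i], by symmetry
    return y
```

A compiler that automates this kind of reasoning can apply it to far higher-dimensional tensors, where the savings from never materializing or touching the redundant half compound across every axis of symmetry.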

Ray and Maria Stata Center exterior
External articles

"The net effect [of DeepSeek] should be to significantly increase the pace of AI development, since the secrets are being let out and the models are now cheaper and easier to train by more people." ~ Associate Professor Phillip Isola

Language models may develop their own understanding of reality as a way to improve their generative abilities, indicating that the models may someday understand language at a deeper level than they do today (Credits: Alex Shipps/MIT CSAIL).
CSAIL article

Ask a large language model (LLM) like GPT-4 to smell a rain-soaked campsite, and it’ll politely decline. Ask the same system to describe that scent to you, and it’ll wax poetic about “an air thick with anticipation” and “a scent that is both fresh and earthy,” despite having neither prior experience with rain nor a nose to help it make such observations.