Image
alt="A software program runs on a monitor at an empty desk (Credit: Pixabay)."
CSAIL article

A particular set of probabilistic inference algorithms common in robotics involves Sequential Monte Carlo methods, also known as “particle filtering,” which approximate a probability distribution using repeated random sampling. (“Particle,” in this context, refers to an individual sample.) Traditional particle filtering struggles to produce accurate results for complex distributions, which has given rise to advanced algorithms such as hybrid particle filtering.
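
To make the predict–weight–resample cycle concrete, here is a minimal sketch of one bootstrap particle filter update, assuming a toy 1D random-walk state with Gaussian observation noise; the model and all parameters are illustrative assumptions, not anything from the article.

```python
# Minimal bootstrap particle filter (Sequential Monte Carlo) sketch.
# Toy model: hidden 1D state drifts as a random walk; we observe it
# through Gaussian noise. Parameters are illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter_step(particles, weights, observation,
                         process_noise=0.1, obs_noise=0.5):
    """One predict-weight-resample cycle over the particle set."""
    # Predict: propagate each particle (sample) through the motion model.
    particles = particles + rng.normal(0.0, process_noise, size=particles.shape)

    # Weight: score each particle by the likelihood of the observation.
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_noise) ** 2)
    weights /= weights.sum()

    # Resample: redraw particles in proportion to their weights,
    # concentrating the sample set in high-probability regions.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Track a hidden state from a few simulated noisy measurements.
n = 1000
particles = rng.normal(0.0, 1.0, size=n)
weights = np.full(n, 1.0 / n)
for z in [0.2, 0.5, 0.9, 1.4]:
    particles, weights = particle_filter_step(particles, weights, z)
print("state estimate:", particles.mean())
```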

Image
The new compiler, called SySTeC, can optimize computations by automatically taking advantage of both sparsity and symmetry in tensors (Credits: iStock).
CSAIL article

The neural network artificial intelligence models used in applications like medical image processing and speech recognition perform operations on hugely complex data structures that require an enormous amount of computation to process. This is one reason deep-learning models consume so much energy.
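
For a sense of the savings SySTeC automates, here is a hand-written sketch of a matrix-vector product with a sparse, symmetric matrix: only the nonzero entries of the upper triangle are stored and visited, and each is reused for its mirrored counterpart. The data layout is an illustrative assumption, not SySTeC's actual representation.

```python
# Sparse symmetric matrix-vector product: store and touch only the
# nonzero upper-triangle entries, exploiting both sparsity and symmetry.

def symmetric_spmv(upper_entries, x, n):
    """y = A @ x, where A is symmetric and given by its upper triangle
    as a list of (i, j, value) tuples with i <= j."""
    y = [0.0] * n
    for i, j, v in upper_entries:
        y[i] += v * x[j]
        if i != j:            # symmetry: A[j][i] == A[i][j], no extra storage
            y[j] += v * x[i]
    return y

# A 3x3 symmetric matrix with only three distinct nonzeros.
entries = [(0, 0, 2.0), (0, 2, 1.0), (1, 1, 3.0)]
print(symmetric_spmv(entries, [1.0, 2.0, 3.0], 3))  # [5.0, 6.0, 1.0]
```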

Image
Ray and Maria Stata Center exterior
External articles

"The net effect [of DeepSeek] should be to significantly increase the pace of AI development, since the secrets are being let out and the models are now cheaper and easier to train by more people." ~ Associate Professor Phillip Isola

Image
alt="Language models may develop their own understanding of reality as a way to improve their generative abilities, indicating that the models may someday understand language at a deeper level than they do today (Credits: Alex Shipps/MIT CSAIL)."
CSAIL article

Ask a large language model (LLM) like GPT-4 to smell a rain-soaked campsite, and it’ll politely decline. Ask the same system to describe that scent to you, and it’ll wax poetic about “an air thick with anticipation” and “a scent that is both fresh and earthy,” despite having neither prior experience with rain nor a nose to help it make such observations.

Image
Three new frameworks from MIT CSAIL reveal how natural language can provide important context for language models that perform coding, AI planning, and robotics tasks (Credit: Alex Shipps/MIT CSAIL, with components from the researchers and Pixabay).
CSAIL article

Large language models (LLMs) are becoming increasingly useful for programming and robotics tasks, but for more complicated reasoning problems, the gap between these systems and humans looms large. Without the ability to learn new concepts like humans do, these systems fail to form good abstractions — essentially, high-level representations of complex concepts that skip less-important details — and thus sputter when asked to do more sophisticated tasks.
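
As a toy illustration of what such an abstraction buys (a hypothetical robot API, not anything from the research): one high-level name stands in for several low-level steps whose details a planner can then skip over.

```python
# Hypothetical example: a high-level action abstracts away motor-level
# details, so a planner reasons over one concept instead of four commands.

class Robot:
    def move_arm_to(self, pos): print(f"move arm to {pos}")
    def open_gripper(self):     print("open gripper")
    def close_gripper(self):    print("close gripper")
    def lift_arm(self):         print("lift arm")

def pick_up(robot, position):
    """High-level abstraction: the planner sees 'pick_up', not the
    less-important details it expands to."""
    robot.move_arm_to(position)
    robot.open_gripper()
    robot.close_gripper()
    robot.lift_arm()

pick_up(Robot(), (0.3, 0.1))
```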

Image
To close the gap with classical computers, researchers created the quantum control machine — an instruction set for a quantum computer that works like the classical idea of a virtual machine (Credits: Alex Shipps/MIT CSAIL).
CSAIL article

When Peter Shor, MIT professor and now a member of the Computer Science and Artificial Intelligence Laboratory (CSAIL), first demonstrated the potential of quantum computers to solve problems faster than classical ones, he inspired scientists to imagine countless possibilities for the emerging technology. Thirty years later, though, the quantum edge remains a peak not yet reached.