A particular set of probabilistic inference algorithms common in robotics involves Sequential Monte Carlo methods, also known as “particle filtering,” which approximate probability distributions using repeated random sampling. (“Particle,” in this context, refers to an individual sample.) Traditional particle filtering struggles to provide accurate results on complex distributions, giving rise to advanced algorithms such as hybrid particle filtering.
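To make the sampling idea concrete, here is a minimal bootstrap particle filter for a toy one-dimensional tracking problem. The random-walk model, noise levels, and parameter names are illustrative assumptions for this sketch, not details of the hybrid methods described in the research.

```python
# Minimal bootstrap particle filter sketch (illustrative toy model, not the
# article's hybrid algorithm). State: x_t = x_{t-1} + process noise.
# Observation: y_t = x_t + observation noise.
import numpy as np

rng = np.random.default_rng(0)

def particle_filter(observations, n_particles=1000, process_std=1.0, obs_std=0.5):
    """Estimate the hidden state of a noisy random walk from observations."""
    particles = rng.normal(0.0, 1.0, n_particles)  # initial samples ("particles")
    estimates = []
    for y in observations:
        # Propagate each particle through the process (motion) model.
        particles = particles + rng.normal(0.0, process_std, n_particles)
        # Weight particles by how well they explain the observation.
        weights = np.exp(-0.5 * ((y - particles) / obs_std) ** 2)
        weights /= weights.sum()
        # Posterior mean estimate from the weighted samples.
        estimates.append(np.sum(weights * particles))
        # Resample so particles concentrate in high-probability regions.
        particles = rng.choice(particles, size=n_particles, p=weights)
    return np.array(estimates)

# Usage: track a simulated noisy random walk.
true_states = np.cumsum(rng.normal(0.0, 1.0, 50))
observations = true_states + rng.normal(0.0, 0.5, 50)
print(particle_filter(observations)[:5])
```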
The neural network models behind artificial intelligence applications like medical image processing and speech recognition operate on hugely complex data structures that require an enormous amount of computation to process. This is one reason deep-learning models consume so much energy.
"The net effect [of DeepSeek] should be to significantly increase the pace of AI development, since the secrets are being let out and the models are now cheaper and easier to train by more people." ~ Associate Professor Phillip Isola
CSAIL principal investigator Charles Leiserson, a professor in MIT’s Department of Electrical Engineering and Computer Science (EECS), recently co-authored a paper released by the Computing Research Association in the first edition of its “Quadrennial Papers,” a series published every four years.
Are you a CSAIL entrepreneur? Are you curious about the resources that CSAIL Alliances, as well as the rest of the MIT Ecosystem, can offer you? Sign up for Office Hours using the form to ask Christiana Kalfas, Sr.
Ask a large language model (LLM) like GPT-4 to smell a rain-soaked campsite, and it’ll politely decline. Ask the same system to describe that scent to you, and it’ll wax poetic about “an air thick with anticipation” and “a scent that is both fresh and earthy,” despite having neither prior experience with rain nor a nose to help it make such observations.
A new tool makes it easier for database users to perform complicated statistical analyses of tabular data without needing to know what is going on behind the scenes.
Large language models like those that power ChatGPT have shown impressive performance on tasks like drafting legal briefs, analyzing the sentiment of customer reviews, or translating documents into different languages.
Large language models (LLMs) are becoming increasingly useful for programming and robotics tasks, but for more complicated reasoning problems, the gap between these systems and humans looms large. Without the ability to learn new concepts like humans do, these systems fail to form good abstractions — essentially, high-level representations of complex concepts that skip less-important details — and thus sputter when asked to do more sophisticated tasks.
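As a hypothetical illustration of what such an abstraction looks like in code (this example is not drawn from the systems described here), repeated low-level steps can be folded into a single higher-level function that hides the details:

```python
# Hypothetical illustration of "abstraction": low-level steps (parsing,
# filtering, aggregation) are folded into one reusable high-level function,
# so later reasoning can work at the level of the concept, not the details.
def average_positive(raw_values: list[str]) -> float:
    """High-level abstraction: mean of the positive numbers in raw text data."""
    numbers = [float(v) for v in raw_values]       # low-level detail: parsing
    positives = [n for n in numbers if n > 0]      # low-level detail: filtering
    return sum(positives) / len(positives)         # low-level detail: aggregation

# With the abstraction in place, downstream code reads at the concept level.
print(average_positive(["3.5", "-1.0", "2.5"]))    # 3.0
```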
When MIT professor and now Computer Science and Artificial Intelligence Laboratory (CSAIL) member Peter Shor first demonstrated the potential of quantum computers to solve problems faster than classical ones, he inspired scientists to imagine countless possibilities for the emerging technology. Thirty years later, though, the quantum edge remains a peak not yet reached.