Image: Ray and Maria Stata Center exterior
External articles

"The net effect [of DeepSeek] should be to significantly increase the pace of AI development, since the secrets are being let out and the models are now cheaper and easier to train by more people." ~ Associate Professor Phillip Isola

Image: MIT professor and CSAIL Director Daniela Rus.
CSAIL article

Daniela Rus, a distinguished computer scientist and professor at the Massachusetts Institute of Technology (MIT), was inducted into the prestigious Académie Nationale de Médecine (ANM) as a foreign member on January 7, 2025. As director of MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL), Rus leads more than 1,700 researchers pioneering innovations that advance computing and improve global well-being.

Image: In a recent commentary, a team from MIT, Equality AI, and Boston University highlights the gaps in regulation for AI models and non-AI algorithms in health care (Credit: Adobe Stock).
CSAIL article

One of a physician's primary duties is arguably to constantly evaluate and re-evaluate the odds: What are the chances of a medical procedure's success? Is the patient at risk of developing severe symptoms? When should the patient return for more testing? Amid these critical deliberations, the rise of artificial intelligence promises to reduce risk in clinical settings and help physicians prioritize the care of high-risk patients.

Image: EECS faculty and CSAIL principal investigators Sara Beery, Marzyeh Ghassemi, and Yoon Kim (Credit: MIT EECS).
CSAIL article

Sara Beery, Marzyeh Ghassemi, and Yoon Kim, EECS faculty and CSAIL principal investigators, were awarded AI2050 Early Career Fellowships earlier this week for their pursuit of “bold and ambitious work on hard problems in AI.” They received this honor from Schmidt Futures, Eric and Wendy Schmidt’s philanthropic initiative that aims to accelerate scientific innovation.

Image: When users query a model, ContextCite highlights the specific sources from the external context that the AI relied upon for that answer. If the AI generates an inaccurate fact, for example, users can trace the error back to its source and understand the model's reasoning (Credit: Alex Shipps/MIT CSAIL).
CSAIL article

Chatbots can wear a lot of proverbial hats: dictionary, therapist, poet, all-knowing friend. The artificial intelligence models that power these systems appear exceptionally skilled and efficient at providing answers, clarifying concepts, and distilling information. But if we are to trust the content these models generate, how can we really know whether a particular statement is factual, a hallucination, or just a plain misunderstanding?