Regina Barzilay, School of Engineering Distinguished Professor for AI and Health at MIT, CSAIL Principal Investigator, and Jameel Clinic AI Faculty Lead, has been awarded the 2025 Frances E. Allen Medal from the Institute of Electrical and Electronics Engineers (IEEE). Barzilay’s award recognizes the impact of her machine-learning algorithms on medicine and natural language processing.
Daniela Rus, Director of CSAIL and MIT EECS Professor, was recently named a co-recipient of the 2024 John Scott Award by the Board of Directors of City Trusts. This prestigious honor, steeped in historical significance, celebrates scientific innovation in Philadelphia, where the Declaration of Independence was signed — a testament to the enduring connection between scientific progress and human potential.
When Nikola Tesla predicted we’d have handheld phones that could display videos, photographs, and more, his musings seemed like a distant dream. Nearly 100 years later, smartphones are like an extra appendage for many of us.
Research scientist Yosuke Tanigawa and Professor Manolis Kellis at MIT Computer Science and Artificial Intelligence Laboratory (CSAIL) have developed a novel methodology in human genetics to address an often-overlooked problem: how to handle clinical measurements that fall "below the limit of quantification" (BLQ). Recently published in the American Journal of Human Genetics, their new approach, "hypometric genetics," utilizes these typically discarded measurements to enhance genetic discovery, with significant implications for personalized genomic medicine and drug development.
When you think about hands-free devices, you might picture Alexa and other voice-activated in-home assistants, Bluetooth earpieces, or asking Siri to make a phone call in your car. You might not imagine using your mouth to remotely communicate with devices like a computer or a phone.
Despite their impressive capabilities, large language models are far from perfect. These artificial intelligence models sometimes “hallucinate” by generating incorrect or unsupported information in response to a query.
AI systems are increasingly being deployed in safety-critical health care situations. Yet these models sometimes hallucinate incorrect information, make biased predictions, or fail for unexpected reasons, which could have serious consequences for patients and clinicians.
Ever been asked a question you only knew part of the answer to? To give a more informed response, your best move would be to phone a friend with more knowledge on the subject.
To the untrained eye, a medical image like an MRI or X-ray appears to be a murky collection of black-and-white blobs. It can be a struggle to decipher where one structure (like a tumor) ends and another begins.