Essential for many industries ranging from Hollywood computer-generated imagery to product design, 3D modeling tools often use text or image prompts to dictate different aspects of visual appearance, like color and form. As much as this makes sense as a first point of contact, these systems are still limited in their realism due to their neglect of something central to the human experience: touch.
When visual information enters the brain, it travels through two pathways that process different aspects of the input. For decades, scientists have hypothesized that one of these pathways, the ventral visual stream, is responsible for recognizing objects, and that it might have been optimized by evolution to do just that.
The process of discovering molecules that have the properties needed to create new medicines and materials is cumbersome and expensive, consuming vast computational resources and months of human labor to narrow down the enormous space of potential candidates.
Think of your most prized belongings. In an increasingly virtual world, wouldn’t it be great to save a copy of that precious item and all the memories it holds?
Due to the inherent ambiguity in medical images like X-rays, radiologists often use words like “may” or “likely” when describing the presence of a certain pathology, such as pneumonia.
How do neural networks work? It’s a question that can confuse novices and experts alike. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) says that understanding the internal representations these networks build, as well as how those representations inform the ways that neural networks learn from data, is crucial for improving the interpretability, efficiency, and generalizability of deep learning models.
Six current MIT affiliates and 27 additional MIT alumni have been elected as fellows of the American Association for the Advancement of Science (AAAS).
CSAIL Alliances Affiliate Member Sony Interactive Entertainment (SIE) hosted a January 2025 IAP course, “The Nexus of Games and AI.” Designed to “introduce students to game creation, current game-related research, and an exploration of the technology, the art, and the fun of video games,” the course allowed SIE to engage with a broad range of students, meet CSAIL faculty, and deepen its connection to MIT CSAIL.
This week the National Academy of Engineering (NAE) elected Tomás Lozano-Pérez, MIT School of Engineering Professor in Teaching Excellence and CSAIL principal investigator, as a member for his work in robot motion planning and molecular design.
Try taking a picture of each of North America's roughly 11,000 tree species, and you’ll have a mere fraction of the millions of photos within nature image datasets. These massive collections of snapshots — ranging from butterflies to humpback whales — are a great research tool for ecologists because they provide evidence of organisms’ unique behaviors, rare conditions, migration patterns, and responses to pollution and climate change.