[Image: Ray and Maria Stata Center]

An estimated 20 cents of every dollar spent on manufacturing is wasted, totaling up to $8 trillion a year, more than the entire annual budget of the U.S. federal government. While industries like healthcare and finance have been rapidly transformed by digital technologies, manufacturing has continued to rely on traditional processes that lead to costly errors, product delays, and inefficient use of engineers’ time.

[Image: PhD student Faraz Faruqi, lead author of a new paper on the project, says that TactStyle could have far-reaching applications extending from home decor and personal accessories to tactile learning tools (Credits: Mike Grimmett/MIT CSAIL)]

Essential for many industries ranging from Hollywood computer-generated imagery to product design, 3D modeling tools often use text or image prompts to dictate different aspects of visual appearance, like color and form. As much as this makes sense as a first point of contact, these systems are still limited in their realism due to their neglect of something central to the human experience: touch.

[Image: The models were trained on a dataset of synthetic images like the ones pictured, with objects such as tea kettles or calculators superimposed on different backgrounds. Researchers trained the model to identify one or more spatial features of an object, including rotation, location, and distance (Credits: Courtesy of the researchers)]

When visual information enters the brain, it travels through two pathways that process different aspects of the input. For decades, scientists have hypothesized that one of these pathways, the ventral visual stream, is responsible for recognizing objects, and that it might have been optimized by evolution to do just that.
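
The image caption above describes models trained on synthetic scenes to estimate spatial properties of objects. As a rough sketch of that kind of setup, assuming a small convolutional network that regresses rotation, location, and distance from rendered images (the architecture, loss, and stand-in data below are illustrative placeholders, not the researchers' implementation):

```python
# Illustrative sketch only: a small CNN trained to regress spatial properties
# (rotation, x/y location, distance) of an object in a synthetic image.
import torch
import torch.nn as nn

class SpatialFeatureNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        # Four regression targets: rotation angle, x, y, and distance.
        self.head = nn.Linear(64, 4)

    def forward(self, images):
        feats = self.features(images).flatten(1)
        return self.head(feats)

model = SpatialFeatureNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-ins for a batch of synthetic images and their ground-truth labels.
images = torch.randn(16, 3, 64, 64)
targets = torch.randn(16, 4)

for step in range(3):            # a few toy training steps
    preds = model(images)
    loss = loss_fn(preds, targets)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```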

[Image: Neural network (Credit: Wikimedia Commons)]

How do neural networks work? It’s a question that can confuse novices and experts alike. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) says that understanding the internal representations these networks build, as well as how those representations inform the ways that neural networks learn from data, is crucial for improving the interpretability, efficiency, and generalizability of deep learning models.
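
To make “representations” concrete, one common way to inspect what a network has learned is to read out the activations of a hidden layer for a batch of inputs. A minimal sketch, assuming a small PyTorch multilayer perceptron (the architecture and data here are arbitrary examples, not tied to the CSAIL work):

```python
# Illustrative sketch: capture the activations of a hidden layer with a
# forward hook, then examine them as the network's learned representation.
import torch
import torch.nn as nn

# A small, arbitrary multilayer perceptron used only for demonstration.
model = nn.Sequential(
    nn.Linear(784, 128),
    nn.ReLU(),
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

captured = {}

def save_activation(name):
    # Forward hooks receive (module, inputs, output); we store the output.
    def hook(module, inputs, output):
        captured[name] = output.detach()
    return hook

# Capture the representation after the second hidden layer's ReLU.
model[3].register_forward_hook(save_activation("hidden2"))

x = torch.randn(32, 784)      # a batch of 32 fake inputs
logits = model(x)             # forward pass populates `captured`

hidden = captured["hidden2"]  # shape: (32, 64)
print(hidden.shape)
# From here one could, for example, compare how similar the representations
# of different inputs are, or track how they change over training.
```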