Image: The models were trained on a dataset of synthetic images like the ones pictured, with objects such as tea kettles or calculators superimposed on different backgrounds. Researchers trained the models to identify one or more spatial features of an object, including rotation, location, and distance (Credits: Courtesy of the researchers).

When visual information enters the brain, it travels through two pathways that process different aspects of the input. For decades, scientists have hypothesized that one of these pathways, the ventral visual stream, is responsible for recognizing objects, and that it might have been optimized by evolution to do just that.

Image: Neural network (Credit: Wikimedia Commons).

How do neural networks work? It’s a question that can confuse novices and experts alike. A team from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) says that understanding the internal representations these networks form, as well as how those representations inform the ways that neural networks learn from data, is crucial for improving the interpretability, efficiency, and generalizability of deep learning models.

Image: “I have such a soft spot for OpenCourseWare — it shaped my career,” says Ana Trišović, a research scientist at MIT CSAIL’s FutureTech lab (Credits: Courtesy of Ana Trišović).

As a college student in Serbia with a passion for math and physics, Ana Trišović found herself drawn to computer science and its practical, problem-solving approaches. It was then that she discovered MIT OpenCourseWare, part of MIT Open Learning, and decided to take a course on Data Analytics with Python in 2012, something her school didn’t offer.