Autonomous robots (CSAIL article)

The winning team is a dream team of researchers from the Massachusetts Institute of Technology and Boston University. It includes experts in neuroscience, robotics, computer science, computer vision, artificial intelligence, mathematical systems theory, and a host of related advanced-technology domains.

Category: Graphic & Vision
Language: Python

This is a PyTorch implementation of semantic segmentation models on the MIT ADE20K scene parsing dataset.

ADE20K is the largest open-source dataset for semantic segmentation and scene parsing, released by the MIT Computer Vision team. Follow the link below to find the repository for our dataset and the Caffe and Torch7 implementations: https://github.com/CSAILVision/sceneparsing
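To make the task concrete, here is a minimal sketch of what a scene-parsing model's output looks like. This is not the repository's actual API: the network, `NUM_CLASSES`, and `logits_to_label_map` are illustrative assumptions. A segmentation network emits a per-pixel score for each of ADE20K's 150 classes, and the final label map is the argmax over the class dimension.

```python
import numpy as np

NUM_CLASSES = 150  # ADE20K scene-parsing label set

def logits_to_label_map(logits):
    """Convert a (num_classes, H, W) score array to an (H, W) label map.

    Hypothetical helper: real models (e.g. in PyTorch) do the same
    argmax over the class dimension of the network output.
    """
    assert logits.shape[0] == NUM_CLASSES
    return np.argmax(logits, axis=0)

# Dummy network output standing in for a real forward pass.
rng = np.random.default_rng(0)
scores = rng.standard_normal((NUM_CLASSES, 4, 4))
labels = logits_to_label_map(scores)
print(labels.shape)  # (4, 4): one class index per pixel
```

Each entry of `labels` is an integer in `[0, 150)`, i.e. one ADE20K category per pixel.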

Teaching machines to see 3D (MIT News article)

From a single image, humans can perceive the full 3D shape of an object by exploiting shape priors learned from everyday life. Contemporary single-image 3D reconstruction algorithms aim to solve the task in a similar fashion, but often end up with priors heavily biased toward their training classes.