MIT researchers developed a machine learning technique that learns to represent data in a way that captures concepts shared between visual and audio modalities. Their model can identify where a certain action is taking place in a video and label it.
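The summary doesn't describe the model itself, but the general idea of a shared audio-visual representation can be sketched as two encoders trained with a contrastive objective so that paired video and audio clips land near each other in a single embedding space. The PyTorch sketch below is purely illustrative, with made-up encoder sizes and random features standing in for real clips; it is not the MIT researchers' actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Illustrative only: two small encoders project video and audio features
# into one shared embedding space. Dimensions are placeholder assumptions.
class SharedSpaceModel(nn.Module):
    def __init__(self, video_dim=512, audio_dim=128, embed_dim=256):
        super().__init__()
        self.video_proj = nn.Sequential(
            nn.Linear(video_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )
        self.audio_proj = nn.Sequential(
            nn.Linear(audio_dim, embed_dim), nn.ReLU(), nn.Linear(embed_dim, embed_dim)
        )

    def forward(self, video_feats, audio_feats):
        # L2-normalize so similarity is a cosine score in the shared space.
        v = F.normalize(self.video_proj(video_feats), dim=-1)
        a = F.normalize(self.audio_proj(audio_feats), dim=-1)
        return v, a

def contrastive_loss(v, a, temperature=0.07):
    # Matched video/audio pairs (same batch index) should score high;
    # mismatched pairs should score low, in both directions.
    logits = v @ a.t() / temperature
    targets = torch.arange(v.size(0), device=v.device)
    return (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets)) / 2

# Toy usage with random tensors standing in for pre-extracted clip features.
model = SharedSpaceModel()
video_feats = torch.randn(8, 512)   # 8 video clips
audio_feats = torch.randn(8, 128)   # their matching audio tracks
v, a = model(video_feats, audio_feats)
loss = contrastive_loss(v, a)
loss.backward()
```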
Scientists have created a design and fabrication tool for soft pneumatic actuators with integrated sensing, which could power applications in personalized health care, smart homes, and gaming.
A new neural network approach captures the dynamics of a physical system from video, regardless of rendering style or visual differences between recordings.
MIT researchers have developed a system that enables a robot to learn a new pick-and-place task from only a handful of human demonstrations. This could allow a human to reprogram a robot to grasp never-before-seen objects, presented in random poses, in about 15 minutes.