Image: The “PhysicsGen” system can multiply a few dozen VR demonstrations into nearly 3,000 simulations per machine for mechanical companions like robotic arms and hands. (Credit: Alex Shipps/MIT CSAIL, using photos from the researchers)

When ChatGPT or Gemini gives what seems to be an expert response to your burning questions, you may not realize how much information goes into that reply. Like other popular artificial intelligence (AI) models, these chatbots rely on backbone systems called foundation models, which train on billions or even trillions of data points.

Image: A robotic arm learns to understand its own body. (Credit: Courtesy of the researchers)

In an office at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), a soft robotic hand carefully curls its fingers to grasp a small object. The intriguing part isn’t the mechanical design or embedded sensors; in fact, the hand contains no sensors at all. Instead, the entire system relies on a single camera that watches the robot’s movements and uses that visual data to control it.

Image: Top row, left to right: Matthew Caren, April Qiu Cheng, Arav Karighattam, and Benjamin Lou. Bottom row, left to right: Isabelle Quaye, Albert Qin, Ananthan Sadagopan, and Gianfranco (Franco) Yee. (Credits: Photos courtesy of the Hertz Foundation)

The Hertz Foundation announced that it has awarded fellowships to eight MIT affiliates. The prestigious award provides each recipient with up to $250,000 in doctoral research funding over five years, giving them an unusual measure of independence to pursue groundbreaking research in their graduate work.

Image: PhD student Faraz Faruqi, lead author of a new paper on the project, says that TactStyle could have far-reaching applications, extending from home decor and personal accessories to tactile learning tools. (Credits: Mike Grimmett/MIT CSAIL)

Essential to industries ranging from Hollywood computer-generated imagery to product design, 3D modeling tools often use text or image prompts to dictate aspects of visual appearance, like color and form. While this makes sense as a first point of contact, these systems are still limited in realism because they neglect something central to the human experience: touch.