
Please join the Annual AI & Quantum Summit, hosted by CSAIL Alliances and the MIT Center for Quantum Engineering (MIT CQE). The event is in person at MIT, with a virtual option.

 

On October 23, 2025, CSAIL and MIT experts will gather to explore how the field of quantum computing is evolving, how AI innovation is shaping quantum’s trajectory, and what business leaders should keep in mind as theory becomes reality.

 

"Meschers" can create multi-dimensional versions of objects that break the laws of physics with convoluted geometries, such as buildings you might see in an M.C. Escher illustration (left) and objects that are shaded in impossible ways (center and right) (Credits: Alex Shipps/MIT CSAIL, using assets from Pixabay and the researchers).
CSAIL article

M.C. Escher’s artwork is a gateway into a world of depth-defying optical illusions, featuring “impossible objects” that break the laws of physics with convoluted geometries. What you perceive in his illustrations depends on your point of view — for example, a person seemingly walking upstairs may be heading down the steps if you tilt your head sideways.

A new study by MIT researchers shows the first method for machine learning with symmetry that is provably efficient in terms of both the amount of computation and data needed (Credits: iStock, MIT News).
CSAIL article

If you rotate an image of a molecular structure, a human can tell the rotated image is still the same molecule, but a machine-learning model might think it is a new data point. In computer science parlance, the molecule is “symmetric,” meaning the fundamental structure of that molecule remains the same if it undergoes certain transformations, like rotation.
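
To make the idea concrete, here is a minimal sketch (an illustrative assumption, not the method from the MIT study) of what rotational symmetry means for data: the raw coordinates of a toy molecule change when it is rotated, but a rotation-invariant descriptor such as the sorted list of inter-atomic distances does not, so a model built on that descriptor cannot mistake the rotated molecule for a new data point. The names `pairwise_distances` and the toy coordinates are hypothetical.

    # Minimal sketch (hypothetical, not the paper's method): rotating a toy
    # "molecule" changes its raw coordinates but not a rotation-invariant
    # descriptor such as the sorted list of inter-atomic distances.
    import numpy as np

    def pairwise_distances(coords):
        """Sorted inter-atomic distances; unchanged by any rotation."""
        diffs = coords[:, None, :] - coords[None, :, :]
        dists = np.linalg.norm(diffs, axis=-1)
        return np.sort(dists[np.triu_indices(len(coords), k=1)])

    molecule = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])  # three atoms in the plane

    theta = np.pi / 3  # rotate the whole molecule by 60 degrees
    rotation = np.array([[np.cos(theta), -np.sin(theta)],
                         [np.sin(theta),  np.cos(theta)]])
    rotated = molecule @ rotation.T

    print(np.allclose(molecule, rotated))             # False: the raw inputs look "new"
    print(np.allclose(pairwise_distances(molecule),
                      pairwise_distances(rotated)))   # True: the descriptor is identical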

A system capable of generating images normally requires a tokenizer, which compresses and encodes visual data, along with a generator that can combine and arrange these compact representations in order to create novel images. MIT researchers discovered a new method to create, convert, and “inpaint” images without using a generator at all. This image shows how an input image can be gradually modified by optimizing tokens (Credits: Image courtesy of the authors).
CSAIL article

AI image generation — which relies on neural networks to create new images from a variety of inputs, including text prompts — is projected to become a billion-dollar industry by the end of this decade. Even with today’s technology, if you wanted to make a fanciful picture of, say, a friend planting a flag on Mars or heedlessly flying into a black hole, it could take less than a second. However, before they can perform tasks like that, image generators are commonly trained on massive datasets containing millions of images that are often paired with associated text. Training these generative models can be an arduous chore that takes weeks or months, consuming vast computational resources in the process.
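
As a purely hypothetical sketch of the general idea of editing an image by optimizing tokens rather than sampling from a generator, consider the toy example below. A small linear layer stands in for a pretrained tokenizer’s decoder, and the variable names (decoder, tokens, mask) are illustrative; this is not the CSAIL system itself.

    # Hypothetical sketch: "inpaint" by optimizing a token vector so the decoded
    # output matches a target in a masked region. A linear layer stands in for a
    # pretrained tokenizer's decoder; no generator network is involved.
    import torch

    torch.manual_seed(0)
    decoder = torch.nn.Linear(16, 64)        # stand-in for a tokenizer decoder
    target = torch.rand(64)                  # flattened toy "image" to match
    mask = torch.zeros(64)
    mask[:8] = 1.0                           # only the first 8 pixels are being inpainted

    tokens = torch.zeros(16, requires_grad=True)
    optimizer = torch.optim.Adam([tokens], lr=0.1)

    for _ in range(200):
        optimizer.zero_grad()
        image = decoder(tokens)
        loss = ((image - target) ** 2 * mask).mean()  # penalize only the masked pixels
        loss.backward()
        optimizer.step()

    print(loss.item())  # shrinks toward zero as the masked pixels match the target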

Researchers from MIT CSAIL and EECS evaluated how closely language models could keep track of objects that change position rapidly. They found that the models could be steered toward or away from particular approaches, improving the system’s predictive capabilities (Credits: Image designed by Alex Shipps, using assets from Shutterstock and Pixabay).
CSAIL article

Let’s say you’re reading a story, or playing a game of chess. You may not have noticed, but each step of the way, your mind kept track of how the situation (or “state of the world”) was changing. You can imagine this as a running list of events that we use to update our prediction of what will happen next.
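
A toy sketch (an illustrative assumption, not the researchers’ experimental setup) of what such state tracking looks like: each event overwrites an entry in a “state of the world,” and any prediction should be read off the latest state rather than an earlier one. The events and object names below are hypothetical.

    # Toy sketch (not the study's setup): explicit state tracking over a
    # sequence of events. Each event updates the world state; predictions
    # should always consult the most recent state.
    events = [
        ("ball", "table"),
        ("book", "shelf"),
        ("ball", "drawer"),   # the ball moves again, so its entry must be updated
    ]

    state = {}                # object -> last known location
    for obj, location in events:
        state[obj] = location

    print(state["ball"])      # "drawer", not "table"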

A new paper by MIT CSAIL researchers maps the many software-engineering tasks beyond code generation, identifies bottlenecks, and highlights research directions to overcome them. The goal: to let humans focus on high-level design, while routine work is automated (Credits: Alex Shipps/MIT CSAIL, using assets from Shutterstock and Pixabay).
CSAIL article

Imagine a future where artificial intelligence quietly shoulders the drudgery of software development: refactoring tangled code, migrating legacy systems, and hunting down race conditions, so that human engineers can devote themselves to architecture, design, and the genuinely novel problems still beyond a machine’s reach. Recent advances appear to have nudged that future tantalizingly close, but a new paper by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and several collaborating institutions argues that this potential future reality demands a hard look at present-day challenges. 

The “PhysicsGen” system can multiply a few dozen VR demonstrations into nearly 3,000 simulations per machine for mechanical companions like robotic arms and hands (Credit: Alex Shipps/MIT CSAIL using photos from the researchers).
CSAIL article

When ChatGPT or Gemini gives what seems to be an expert response to your burning questions, you may not realize how much information it relies on to give that reply. Like other popular artificial intelligence (AI) models, these chatbots rely on backbone systems called foundation models that train on billions or even trillions of data points.