Image
alt="A system capable of generating images normally requires a tokenizer, which compresses and encodes visual data, along with a generator that can combine and arrange these compact representations in order to create novel images. MIT researchers discovered a new method to create, convert, and “inpaint” images without using a generator at all. This image shows how an input image can be gradually modified by optimizing tokens (Credits: Image courtesy of the authors)."
CSAIL article

AI image generation — which relies on neural networks to create new images from a variety of inputs, including text prompts — is projected to become a billion-dollar industry by the end of this decade. Even with today’s technology, if you wanted to make a fanciful picture of, say, a friend planting a flag on Mars or heedlessly flying into a black hole, it could take less than a second. However, before they can perform tasks like that, image generators are commonly trained on massive datasets containing millions of images that are often paired with associated text. Training these generative models can be an arduous chore that takes weeks or months, consuming vast computational resources in the process.
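The caption above hints at the mechanism: rather than sampling from a trained generator, the image’s compact token representation is itself nudged by gradient descent until the decoded output satisfies some objective. The sketch below is only a toy illustration of that general idea; the TinyDecoder, the token shapes, and the pixel-matching loss are invented stand-ins, not the CSAIL system.

```python
# Toy illustration: edit an image by optimizing its token embeddings through a
# frozen decoder, with no separate generator network. All modules and shapes
# here are hypothetical placeholders, not the authors' models.
import torch
import torch.nn as nn

class TinyDecoder(nn.Module):
    """Stand-in for a tokenizer's decoder: maps token embeddings to an image."""
    def __init__(self, num_tokens=64, dim=32, image_pixels=3 * 16 * 16):
        super().__init__()
        self.net = nn.Sequential(
            nn.Flatten(),                                  # (B, tokens, dim) -> (B, tokens*dim)
            nn.Linear(num_tokens * dim, image_pixels),
            nn.Sigmoid(),                                  # keep pixel values in [0, 1]
        )

    def forward(self, tokens):
        return self.net(tokens).view(-1, 3, 16, 16)

decoder = TinyDecoder().eval()                             # the decoder stays frozen
for p in decoder.parameters():
    p.requires_grad_(False)

tokens = torch.randn(1, 64, 32, requires_grad=True)       # start from some encoding of an image
target = torch.rand(1, 3, 16, 16)                          # toy editing target
optimizer = torch.optim.Adam([tokens], lr=0.05)

for step in range(200):
    optimizer.zero_grad()
    image = decoder(tokens)
    loss = ((image - target) ** 2).mean()                  # any differentiable editing objective
    loss.backward()                                        # gradients flow only into the tokens
    optimizer.step()
```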

Image
Researchers from MIT CSAIL and EECS evaluated how closely language models could keep track of objects that change position rapidly. They found that they could steer the models toward or away from particular approaches, improving the system’s predictive capabilities (Credits: Image designed by Alex Shipps, using assets from Shutterstock and Pixabay).
CSAIL article

Let’s say you’re reading a story or playing a game of chess. You may not have noticed, but at each step your mind kept track of how the situation (or “state of the world”) was changing. You can think of this as a running list of events, one that you use to update your prediction of what will happen next.
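A toy way to make that concrete: keep an explicit record that each new event updates, and consult it when guessing what comes next. The minimal Python sketch below is purely illustrative; the chess-like events and helper functions are invented for this example, not the researchers’ evaluation setup.

```python
# Minimal sketch of tracking a "state of the world" from a sequence of events.
from typing import Dict, List, Tuple

Event = Tuple[str, str]            # (piece, new_square), e.g. ("knight_g", "f3")

def track_state(events: List[Event]) -> Dict[str, str]:
    """Replay the events in order, keeping only each piece's latest position."""
    state: Dict[str, str] = {}
    for piece, square in events:
        state[piece] = square      # a later event overwrites the earlier position
    return state

def occupied_squares(state: Dict[str, str]) -> set:
    """The kind of query a next-move prediction would need to answer."""
    return set(state.values())

moves = [("pawn_e", "e4"), ("knight_g", "f3"), ("pawn_e", "e5")]
state = track_state(moves)
print(state)                       # {'pawn_e': 'e5', 'knight_g': 'f3'}
print(occupied_squares(state))     # {'e5', 'f3'}
```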

Image
A new paper by MIT CSAIL researchers maps the many software-engineering tasks beyond code generation, identifies bottlenecks, and highlights research directions to overcome them. The goal: to let humans focus on high-level design, while routine work is automated (Credits: Alex Shipps/MIT CSAIL, using assets from Shutterstock and Pixabay).
CSAIL article

Imagine a future where artificial intelligence quietly shoulders the drudgery of software development: refactoring tangled code, migrating legacy systems, and hunting down race conditions, so that human engineers can devote themselves to architecture, design, and the genuinely novel problems still beyond a machine’s reach. Recent advances appear to have nudged that future tantalizingly close, but a new paper by researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and several collaborating institutions argues that reaching it demands a hard look at present-day challenges.