Latest News & Research

When the FORTRAN programming language debuted in 1957, it transformed how scientists and engineers programmed computers. Complex calculations could suddenly be expressed in concise, math-like notation using arrays: collections of values that let a single operation apply across many data points at once. That simple idea evolved into today’s “tensors,” which power many of the world’s most advanced AI and scientific computing systems through modern libraries and frameworks like NumPy and PyTorch.
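
To make the idea concrete, here is a minimal NumPy sketch of that array style of programming; the data and variable names are invented for illustration, not taken from the article. One math-like expression applies a formula to every element at once, with no explicit loop, and PyTorch tensors support the same style of expression.

    # A minimal sketch of array-oriented notation in NumPy.
    # The temperature data here is purely illustrative.
    import numpy as np

    temps_f = np.array([32.0, 68.0, 98.6, 212.0])  # Fahrenheit readings
    temps_c = (temps_f - 32.0) * 5.0 / 9.0         # one expression, whole array

    print(temps_c)  # -> [0. 20. 37. 100.]
    # A PyTorch tensor would accept the same loop-free expression.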

What can we learn about human intelligence by studying how machines “think”? Can we better understand ourselves if we better understand the artificial intelligence systems that are becoming a more significant part of our everyday lives?

Coding with large language models (LLMs) holds huge promise, but it also exposes some long-standing flaws in software: code that’s messy, hard to change safely, and often opaque about what’s really happening under the hood. Researchers at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) are charting a more “modular” path ahead.
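
The article does not spell out CSAIL’s design, but as a generic illustration of what “modular” means here, consider hiding one concern behind a small, explicit interface so the code behind it can change safely. Everything below, from the Storage protocol to the sample function, is a hypothetical sketch, not the researchers’ actual system.

    # A generic illustration of modularity (not CSAIL's actual design):
    # the storage concern sits behind a small, explicit interface, so
    # callers never depend on what happens "under the hood".
    from typing import Protocol

    class Storage(Protocol):
        def save(self, key: str, value: str) -> None: ...
        def load(self, key: str) -> str: ...

    class InMemoryStorage:
        def __init__(self) -> None:
            self._data: dict[str, str] = {}

        def save(self, key: str, value: str) -> None:
            self._data[key] = value

        def load(self, key: str) -> str:
            return self._data[key]

    def remember_greeting(store: Storage, name: str) -> str:
        # Depends only on the interface; a file- or database-backed
        # Storage could be swapped in without touching this function.
        store.save("greeting", f"Hello, {name}!")
        return store.load("greeting")

    print(remember_greeting(InMemoryStorage(), "Ada"))

The narrow interface is the point: the hidden part becomes swappable and testable in isolation, which is the property the “modular” framing points at.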