A computation faces two main constraints: the amount of memory it requires and how long it takes. The two are linked: if a task takes a certain number of steps, then in the worst case each step touches a fresh memory slot, so a computation can never use more memory slots than it takes steps.
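That worst-case link between steps and memory can be sketched with a toy example (the names `run`, `steps`, and the task itself are illustrative, not from the source): a step-by-step computation that records every memory cell it touches, showing that the count of distinct cells stays within the step count.

```python
def run(n):
    """Sum 0..n-1 one step at a time, recording every cell accessed."""
    memory = {}       # address -> value, a stand-in for the machine's memory
    touched = set()   # distinct memory slots accessed so far
    steps = 0
    memory["acc"] = 0
    touched.add("acc")
    for i in range(n):
        memory[i] = i              # at worst, each step writes one new slot
        touched.add(i)
        memory["acc"] += memory[i]
        touched.add("acc")         # reusing a slot adds nothing new
        steps += 1
    return steps, len(touched)

steps, cells = run(100)
# 100 steps touch 101 distinct slots (one fresh slot per step, plus
# the accumulator), so memory use grows no faster than the step count.
assert cells <= steps + 1
```

The point of the sketch is the inequality at the end: however the steps reuse memory, the number of distinct slots can only grow by a bounded amount per step, so space is at most proportional to time.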
While early language models could only process text, contemporary large language models perform a wide range of tasks across many types of data. For instance, LLMs can understand many languages, generate computer code, solve math problems, and answer questions about images and audio.
This week the National Academy of Engineering (NAE) elected Tomás Lozano-Pérez, MIT School of Engineering Professor in Teaching Excellence and CSAIL principal investigator, as a member for his work in robot motion planning and molecular design.