When OpenAI introduced ChatGPT to the world in 2022, it brought generative artificial intelligence into the mainstream and set off a snowball effect that led to the technology's rapid integration into industry, scientific research, health care, and everyday life.
In 1968, MIT Professor Stephen Benton transformed holography by making three-dimensional images viewable under white light. Over fifty years later, holography’s legacy is inspiring new directions at MIT CSAIL, where the Human-Computer Interaction Engineering (HCIE) group, led by Professor Stefanie Mueller, is pioneering programmable color — a future in which light and material appearance can be dynamically controlled.
Customer data is a valuable asset for businesses, but using it presents a complex privacy challenge. Companies aim to predict customer churn, yet doing so is increasingly constrained by privacy regulations such as the GDPR and by growing consumer concerns about data protection.
When researchers are building large language models (LLMs), they aim to maximize performance under a particular computational and financial budget. Since training a model can cost millions of dollars, developers need to be judicious with decisions that affect cost, such as the model architecture, optimizer, and training datasets, before committing to one. To anticipate the quality and accuracy of a large model's predictions, practitioners often turn to scaling laws: using smaller, cheaper models to try to approximate the performance of a much larger target model. The challenge, however, is that there are thousands of ways to create a scaling law.
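To make the idea concrete, here is a minimal sketch of one common scaling-law recipe. The functional form, the model sizes, and the losses are all illustrative assumptions, not any particular team's method: fit a saturating power law, loss = E + A / N^alpha, to the final losses of several small, cheap training runs, then extrapolate the fitted curve to a much larger target model.

```python
# Illustrative scaling-law fit: the functional form and all numbers are
# assumptions for this sketch, not results from real training runs.
import numpy as np
from scipy.optimize import curve_fit

def scaling_law(n_params, E, A, alpha):
    """Assumed form: loss approaches E as parameter count n_params grows."""
    return E + A / n_params**alpha

# Synthetic "final losses" from five small runs, generated from the law
# itself so the fit is well behaved in this demo.
true_E, true_A, true_alpha = 1.8, 400.0, 0.34
n_params = np.array([1e7, 3e7, 1e8, 3e8, 1e9])  # parameter counts
losses = scaling_law(n_params, true_E, true_A, true_alpha)

# Recover the three free parameters from the small-run data.
(E, A, alpha), _ = curve_fit(scaling_law, n_params, losses,
                             p0=[2.0, 100.0, 0.3], maxfev=10_000)

# Extrapolate to a hypothetical 70-billion-parameter target model.
target = 7e10
print(f"fit: E={E:.2f}, A={A:.2f}, alpha={alpha:.3f}")
print(f"predicted loss at {target:.0e} params: "
      f"{scaling_law(target, E, A, alpha):.3f}")
```

Every choice in this sketch (which quantity to scale, whether parameters, data, or compute; which functional form to fit; which small runs to include) could be made differently, which is one way the "thousands of ways" to build a scaling law arise.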