In the months leading up to the 2024 U.S. presidential election, a team of researchers at MIT CSAIL, MIT Sloan, and MIT LIDS set out to answer a question no one had fully explored: how do large language models (LLMs) respond to questions about the election? Over four months, from July through November, the team ran nearly daily queries across 12 state-of-the-art models on more than 12,000 carefully constructed prompts, generating a dataset of more than 16 million LLM responses.
Annotating regions of interest in medical images, a process known as segmentation, is often one of the first steps clinical researchers take when running a new study involving biomedical images.
A global cohort of eight scientists and engineers working in a variety of disciplines was named Schmidt Polymaths, and each will receive up to $2.5 million over five years to pursue research in new disciplines or using new methodologies, Schmidt Sciences announced today.
The artificial intelligence models that turn text into images are also useful for generating new materials. Over the last few years, generative materials models from companies like Google, Microsoft, and Meta have drawn on their training data to help researchers design tens of millions of new materials.
In 1968, MIT Professor Stephen Benton transformed holography by making three-dimensional images viewable under white light. Over fifty years later, holography’s legacy is inspiring new directions at MIT CSAIL, where the Human-Computer Interaction Engineering (HCIE) group, led by Professor Stefanie Mueller, is pioneering programmable color — a future in which light and material appearance can be dynamically controlled.
When OpenAI introduced ChatGPT to the world in 2022, it brought generative artificial intelligence into the mainstream and started a snowball effect that led to its rapid integration into industry, scientific research, health care, and the everyday lives of people who use the technology.
Customer data is a valuable asset for businesses, but its use presents a complex privacy challenge. Companies aim to predict customer churn, yet this process is increasingly restricted by privacy regulations such as GDPR and growing consumer concerns about data protection.
When researchers are building large language models (LLMs), they aim to maximize performance under a particular computational and financial budget. Since training a single model can cost millions of dollars, developers need to be judicious with cost-impacting decisions about, for instance, the model architecture, optimizers, and training datasets before committing to a model. To anticipate the quality and accuracy of a large model's predictions, practitioners often turn to scaling laws: using smaller, cheaper models to try to approximate the performance of a much larger target model. The challenge, however, is that there are thousands of ways to create a scaling law.
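To make the idea concrete, here is a minimal sketch of one common form of scaling law: fitting a power law, loss(N) ≈ a · N^(−b), to the losses of small training runs and extrapolating to a larger parameter count. The run data, the power-law form, and the function names are illustrative assumptions, not the specific laws studied by the researchers.

```python
import math

# Hypothetical (parameter count, validation loss) pairs from small,
# cheap training runs. These numbers are invented for illustration.
small_runs = [(1e7, 3.2), (3e7, 2.9), (1e8, 2.6), (3e8, 2.35)]

def fit_power_law(runs):
    """Least-squares fit of log(loss) = log(a) - b*log(N).

    Taking logs turns the power law loss(N) = a * N**(-b)
    into a straight line, so an ordinary linear fit suffices.
    """
    xs = [math.log(n) for n, _ in runs]
    ys = [math.log(loss) for _, loss in runs]
    k = len(runs)
    mx, my = sum(xs) / k, sum(ys) / k
    slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum(
        (x - mx) ** 2 for x in xs
    )
    b = -slope                 # power laws for loss have negative slope
    a = math.exp(my + b * mx)  # intercept back on the linear scale
    return a, b

def predict_loss(a, b, n_params):
    """Extrapolate the fitted law to a (much) larger model."""
    return a * n_params ** (-b)

a, b = fit_power_law(small_runs)
# Extrapolate to a hypothetical 10-billion-parameter target model.
print(f"predicted loss at 10B params: {predict_loss(a, b, 1e10):.3f}")
```

The "thousands of ways" the paragraph mentions arise from choices hidden in even this tiny sketch: which functional form to fit, which runs to include, and what quantity (loss, accuracy, compute) to extrapolate over.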
For pregnant women, ultrasounds are an informative (and sometimes necessary) procedure. They typically produce two-dimensional black-and-white scans of fetuses that can reveal key insights, including biological sex, approximate size, and abnormalities like heart issues or cleft lip. If your doctor wants a closer look, they may use magnetic resonance imaging (MRI), which uses magnetic fields to capture images that can be combined to create a 3D view of the fetus.
Whether you’re an artist, advertising specialist, or just looking to spruce up your home, turning everyday objects into dynamic displays is a great way to make them more visually engaging. For example, you could turn a kids’ book into a handheld cartoon of sorts, making the reading experience more immersive and memorable for a child.