Say a local concert venue wants to engage its community by giving social media followers an easy way to share and comment on new music from emerging artists. Rather than working within the constraints of existing social platforms, the venue might want to create its own social app with the functionality that would be best for its community. But building a new social app from scratch involves many complicated programming steps, and even if the venue can create a customized app, the organization’s followers may be unwilling to join the new platform because it could mean leaving their connections and data behind.
In part 2 of our two-part series on generative artificial intelligence’s environmental impacts, MIT News explores some of the ways experts are working to reduce the technology’s carbon footprint.
Chatbots like ChatGPT and Claude have experienced a meteoric rise in usage over the past three years because they can help with a wide range of tasks. Whether you're writing Shakespearean sonnets, debugging code, or hunting down an answer to an obscure trivia question, artificial intelligence (AI) systems seem to have you covered. The source of this versatility? Billions or even trillions of textual data points drawn from across the Internet.
In the months leading up to the 2024 U.S. presidential election, a team of researchers at MIT CSAIL, MIT Sloan, and MIT LIDS set out to answer a question no one had fully explored: how do large language models (LLMs) respond to questions about the election? Over four months, from July through November, the team ran nearly daily queries across 12 state-of-the-art models, using more than 12,000 carefully constructed prompts and generating a dataset of over 16 million LLM responses.
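To give a rough sense of the scale of such a study, the sketch below shows what a daily polling loop over a set of models and prompts might look like. This is not the team's actual pipeline; the `query_model` function, the model list, and the prompt list are hypothetical placeholders.

```python
import csv
import datetime

# Hypothetical placeholders -- not the researchers' actual code, models, or prompts.
MODELS = [f"model_{i}" for i in range(12)]                      # stand-ins for 12 LLMs
PROMPTS = [f"Election question #{i}" for i in range(12_000)]    # stand-ins for 12,000 prompts

def query_model(model_name: str, prompt: str) -> str:
    """Placeholder for an API call to a given LLM; returns its text response."""
    return f"[{model_name}] response to: {prompt[:40]}"

def run_daily_snapshot(out_path: str) -> None:
    """Query every model with every prompt once and append the responses to a CSV."""
    today = datetime.date.today().isoformat()
    with open(out_path, "a", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        for model in MODELS:
            for prompt in PROMPTS:
                writer.writerow([today, model, prompt, query_model(model, prompt)])

if __name__ == "__main__":
    run_daily_snapshot("llm_election_responses.csv")
```

Run daily from July through November (roughly 120 days), a loop like this would produce on the order of 12 × 12,000 × 120 ≈ 17 million rows, which is consistent with the dataset's reported size of over 16 million responses.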