From crafting complex code to revolutionizing the hiring process, generative artificial intelligence is reshaping industries faster than ever before — pushing the boundaries of creativity, productivity, and collaboration across countless domains.
"The net effect [of DeepSeek] should be to significantly increase the pace of AI development, since the secrets are being let out and the models are now cheaper and easier to train by more people." ~ Associate Professor Phillip Isola
In a two-part series, MIT News explores the environmental implications of generative AI. In this article, we look at why this technology is so resource-intensive. A second piece will investigate what experts are doing to reduce genAI’s carbon footprint and other impacts.
MIT professor Stefanie Mueller’s group has spent much of the last decade developing a variety of computing techniques aimed at reimagining how products and systems are designed. Much in the way that platforms like Instagram allow users to modify 2-D photographs with filters, Mueller imagines a world where we can do the same thing for a wide array of physical objects.
Chatbots can wear a lot of proverbial hats: dictionary, therapist, poet, all-knowing friend. The artificial intelligence models that power these systems appear exceptionally skilled and efficient at providing answers, clarifying concepts, and distilling information. But to establish the trustworthiness of content generated by such models, how can we really know whether a particular statement is a fact, a hallucination, or simply a misunderstanding?