On May 22-23, 2024, the MIT Technology Review hosted its annual conference on AI breakthroughs and technological progress, EmTech Digital. This year, the event focused on the power of generative AI, how burgeoning AI technology will impact the workforce, what else is on the horizon, and what business leaders, policymakers, and everyday users should keep in mind.
Here are some key takeaways from this year’s event.
- In her talk, Deputy Dean of Academics in the Schwarzman College of Computing Professor Asu Ozdaglar laid out how the future of AI can either enhance human health, comfort, productivity, purpose, and meaning, or sideline humans through excessive automation and manipulation and by discouraging experimentation and innovation.
- As AI is rolled out into the market, it is important to consider safety and regulation from the beginning rather than be forced to respond as these technologies are implemented. AI should be held to the same standards as humans at a minimum, but AI might need further regulation as its capacities in some domains are greater than those of humans.
- Guardrails, explainability, and auditability will be increasingly important going forward, and future lawmakers are likely to create binding legal regulations that ensure this safety. The speakers in this area said that because AI technology is moving so fast, regulations and laws will likely have to adapt and grow in real time, but that should not stop policymakers from acting now, even imperfectly.
- It remains an open question how responsibility and liability will be spread across the AI stack, especially when it comes to issues like copyright infringement. Some companies, such as Adobe, are now offering user guarantees and absorbing the legal risk themselves. Speakers also argued that because AI learns in a way similar to human beings (ingesting data to produce accurate and useful output), and because AI progress matters for national security and competitiveness, the law is likely to fall on the side of AI when deciding whether tactics like web scraping constitute copyright infringement.
- Regulatory capture, siloing, and centralization are a major concern, with most of the speakers arguing that decentralized and democratized access would bring about the best possible future for humanity.
- AI can lead to huge productivity gains and worker efficiency, but the process of upskilling and bringing existing workers into new practices is one that should be approached with caution to ensure maximum buy-in, adoption, and empowerment.
- There was a general consensus that AI will not significantly replace workers. Rather, it will augment workers to improve productivity, efficiency, and—hopefully—satisfaction. The world of AI technology will also likely bring with it a wave of new jobs and opportunities, much like other historical innovations.
A theme of the event was that we’re entering an age where things will change so quickly that continuing education, upskilling, and technological disruption are likely to become the norm even more than they already are. Those who succeed in the emerging economy will be flexible, quick to adapt to new tools, and willing to prioritize innovation and growth over efficiency and short-term profit.
- Generative AI is already proving transformative in areas such as healthcare, customer service, coding, drug discovery, weather forecasting, recruitment, and more.
- Many speakers agreed that companies not currently incorporating some element of generative AI into their practices are likely to fall behind. Principal Research Scientist at the MIT Sloan School of Management Andrew McAfee likened these opportunities to buried pots of treasure: places where AI, specifically generative AI, can be rapidly applied for immediate and significant gains.
- Currently, generative AI has major issues around trust, bias, and reliability, especially when it comes to safety-critical environments such as healthcare and driving.
- The ability of generative AI to produce deep fakes and other forms of misinformation is a growing problem that will be difficult to address. Some solutions currently being explored are watermarks, authenticating technology, and digital fingerprints.
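The digital fingerprints mentioned above can be illustrated with a minimal sketch. This toy example uses a cryptographic hash as a content fingerprint; real provenance systems (such as the C2PA standard that Adobe and others back) layer cryptographic signatures and metadata on top of this basic idea, and the function and sample data here are purely hypothetical.

```python
import hashlib

def fingerprint(content: bytes) -> str:
    """Return a SHA-256 digest serving as a simple digital fingerprint.

    Identical content always yields the same digest; any change to the
    content, however small, produces a completely different digest.
    """
    return hashlib.sha256(content).hexdigest()

# Hypothetical media payloads for illustration only.
original = b"authentic video frame data"
tampered = b"authentic video frame data (edited)"

# Re-hashing unmodified content reproduces the published fingerprint,
# while tampering is immediately detectable as a mismatch.
print(fingerprint(original) == fingerprint(original))  # True
print(fingerprint(original) == fingerprint(tampered))  # False
```

A publisher could release the fingerprint alongside a piece of media; anyone can then re-hash the file they received and compare, detecting tampering without trusting the channel it arrived through.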
- Several speakers urged caution against anthropomorphizing generative AI, which, despite speaking, learning, and sometimes behaving like a human, is neither conscious nor moral and cannot make value judgments.
- Major companies such as Amazon, eBay, Adobe, Meta, Google, and OpenAI showcased new generative AI applications and services, with an emphasis on safety, trust, and human empowerment.
The consensus of the event was that, while generative AI is an extremely exciting technology, it is not a panacea and will not be an all-or-nothing cure for technological needs. Like other tools, generative AI has specific strengths and areas where it can offer business utility and personal improvement. Therefore, it’s important to carefully consider implementation roadmaps that take into account best use cases, trust, safety, potential future regulations, and management of workforce adoption.
- In the future, AI’s ability for personalization will become more granular and useful for both companies and consumers. Some examples include:
- Generative interfaces offering apps that match the user’s train of thought in real time.
- Ads and campaigns using micro-influencers to reach specific populations.
- Programs that are tuned into a user’s preferences with such fidelity that they can predict, for example, what movie a given individual would enjoy next.
- Interfaces that can match workers with the perfect job for their skillset, background, and temperament.
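As a concrete sketch of the movie-prediction example above, here is a toy user-based collaborative filter. This is a minimal illustration under stated assumptions, not any company's actual system; the users, titles, and ratings are invented, and production recommenders use far richer signals and models.

```python
import math

# Hypothetical user-movie ratings for illustration; absent = unrated.
ratings = {
    "alice": {"Heat": 5, "Alien": 4, "Amelie": 1},
    "bob":   {"Heat": 4, "Alien": 5, "Brazil": 4},
    "carol": {"Amelie": 5, "Brazil": 2},
}

def cosine(u: dict, v: dict) -> float:
    """Cosine similarity between two sparse rating vectors."""
    shared = set(u) & set(v)
    if not shared:
        return 0.0
    dot = sum(u[m] * v[m] for m in shared)
    norm_u = math.sqrt(sum(x * x for x in u.values()))
    norm_v = math.sqrt(sum(x * x for x in v.values()))
    return dot / (norm_u * norm_v)

def recommend(user: str, k: int = 1) -> list:
    """Rank movies the user hasn't seen by similarity-weighted ratings."""
    scores = {}
    for other, theirs in ratings.items():
        if other == user:
            continue
        sim = cosine(ratings[user], theirs)
        for movie, r in theirs.items():
            if movie not in ratings[user]:
                scores[movie] = scores.get(movie, 0.0) + sim * r
    return sorted(scores, key=scores.get, reverse=True)[:k]

print(recommend("alice"))  # ['Brazil']
```

Scaled up to millions of users and enriched with behavioral and contextual data, this same similarity-weighting idea underlies the kind of fine-grained preference prediction the speakers described.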
- Associate Professor in the MIT Media Lab Ramesh Raskar painted a picture of decentralized AI, where the AI has access to cross-enterprise data—including “dimensional data” (or information about the real world) and a user’s digital twin—to provide what he called a “God’s Eye View,” where generative agents can offer real-time and hyper-personalized assistance.
- CSAIL research affiliate and CEO of Liquid AI Ramin Hasani presented Liquid Networks, which evolved out of the lab of CSAIL Director Professor Daniela Rus. Based on the fully mapped brains of tiny worms, liquid networks offer a promising new way to design AI models that are explainable, scalable, and modeled on biological intelligence.
CSAIL Professor Polina Golland showcased how neural networks can be trained with medical images—such as chest X-rays—along with physician labels in multi-modal learning to produce models that can help identify and classify challenging medical images. She says, “This work paves the way toward translating some of our ideas toward improved clinical support.” However, explainability in medicine will be of utmost importance, especially when the system makes mistakes, and we’re a long way from AI making decisions in healthcare environments.
Despite the broad range of AI topics discussed at EmTech 2024, one constant was the excitement and optimism for a future of widespread AI technology. While there are risks and drawbacks to keep in mind, AI offers the potential to improve not just productivity but also health outcomes, leisure options, employment satisfaction, work opportunities, and more.
To get involved in AI technology or learn more about the research happening at MIT CSAIL, reach out to CSAIL Alliances at alliances@list.csail.mit.edu.