Audrey Woods, MIT CSAIL Alliances | April 16, 2026
David Clark, Senior Research Scientist at MIT CSAIL, helped design the system that connects nearly every computer on earth. As Chief Protocol Architect of the Internet from 1981 to 1989, he was there for the beginnings of the Internet and helped shape the tools, technologies, and experiences we now use almost constantly. Today, at 82, he’s watching a new generation of engineers build something that feels quite familiar. Calling the wave of AI technology an “echo” of what happened in the 1980s, Dr. Clark cautions, “maybe we need to slow things down and think a bit.”
When Dr. Clark and his colleagues conceptualized a platform that would move data between any two computers in the world—the foundation of the modern Internet—he says, “we were not just optimistic about the technology. We were optimistic that if we built this general purpose platform, society would find all kinds of wonderful things to do with it.” Fueled by the desire to erode sovereign boundaries and allow everyone in the world to speak freely with each other, Dr. Clark hoped this technology would usher in a future of incredible advancements and open communication. Did it? “The answer was yes, but.” While he acknowledges the Internet empowered astonishing advancements, “society also found all sorts of horrible things to do with it.”
A strength of the Internet itself is that it is under decentralized control. Nobody is in charge. There are roughly 77,000 autonomous systems: entities like Comcast, Verizon, Facebook, Google, MIT, and Harvard. The Domain Name System is decentralized, with many registrars competing to sell names. And anybody can set up and run their own web server. However, the applications that define the actual human experience of using the Internet are controlled by powerful, centralized actors. Think about Meta, or X, or Google. The concerns about the power they wield are well understood. Millions of people are being enticed to spend hours every day on platforms modeled after slot machines. “We're flooding people with things that have been picked by recommendation systems. Everybody thinks this is pretty pernicious—I do too—and more AI can either make this much worse or much better. What we see today is that AI is making it worse.”
With AI flooding online social spaces, it’s becoming increasingly difficult to tell whether the ‘person’ on the other end of a comment or conversation is human. Bots and agents warp the experience of interaction, forcing users to question every comment, post, or article. This flood of generated false content risks “driving people to a point where they don't believe anything and they give up and they walk away.” And highly individualized content carries its own risks. With generative AI, a future political campaign could write a tailored message for every single voter “at scale, in bulk, for nothing.” If each of us is hearing a different message, what can we hold in common?
To understand how AI will play out, the important consideration is that it will be used by different actors according to their interests, motivations, and capabilities. Powerful actors are highly capable, and much of the manipulation happening on the Internet is being done by people with high motivation. “People with the strongest motives are often bad guys. They know what they want. They know what they're trying to do.” Most people do not wake up in the morning preparing to defend themselves from malicious behavior, but attackers know exactly what they are going to do. Even more dangerously, bad actors have figured out how to leverage elements of human nature that don’t play well with the Internet. The deep and visceral human need to belong can lead people to fall for scams, surrender private information, or get roped into extremist groups. “Loneliness and exclusion are the most psychologically destructive things you can do to a person,” Clark says. “The desire to belong can be stronger than the desire to stay alive.”
When designing a system to be used by humans, ignoring human psychology is an oversight. That’s why Dr. Clark has been collaborating with a psychologist for the past three years to “try to understand why people do the stupid things they do on the Internet.” He jokes, “psychologists simply know things that computer scientists don't, and [my colleague] is very quick to point out that computer scientists are not the best by breed to understand how people operate.” With AI proliferating even more rapidly than the Internet did, Dr. Clark says, “I think we should be looking at sociological roots. We should be looking at psychological roots, because we're creating these things that are pseudo people. And the idea that I will have an agent and you will have an agent and you and I never talk to each other, but my agent just talks to your agent? What the f*** is that?”
“The foundation of sociological research is that society is built up out of the interaction of people. The character of a society is determined by the patterns of interaction. Who is controlling our interactions today? By injecting artificial entities into this system in various places, and using AI to both mediate and control with whom (or what) we interact, we are eroding democracy. I think this is very corrosive to society.”
Dr. Clark is careful to clarify that he’s not pessimistic about AI per se. “I have become realistic about the range of motivations that will cause various people to do what they do. And motivations are not always societally benign.” Fundamentally, he argues, “AI is going to be an empowering tool for whoever chooses to use it, and who's going to use it first? People who have a motive.” More importantly, “AI will be used through the lens of whoever has the power to turn it on,” which right now means the tech giants and successful AI startups who are, by nature, “not going to advocate for us. So who’s on our side?”
There are a few reasons to be hopeful. Dr. Clark is glad that the protocols on which AI systems are being built are still mostly open and are often being handed over to the Linux Foundation, which advocates for open source and trust. “That’s a great way to signal that you really do mean for the AI ecosystem to be open.” Also, the market has not become “tippy”—there is no clear winner-take-all outcome, like what happened with Facebook—which means the standards will likely remain open and competitive. Finally, Dr. Clark is encouraged that the government is already discussing regulations and controls. “We weren't even having the conversation about the Internet until probably 15 or 20 years after we should have. We're having the conversation much sooner this time.”
Looking back at his career, Dr. Clark says, “on the whole, I think we did a very good thing. We just might have thought about a few different things at the edges.” In the early era, for example, he and his colleagues rejected the idea of embedding an identity mechanism in the IP layer, which would have eliminated anonymous speech and likely enforced sovereign borders online. “We were trying to erode sovereign boundaries.” He defends the thinking behind that decision, but “I would've thought about it more. I would not have been so unambiguously clear that the goal was to eradicate them, because sovereign states have power, and when they look at a system that does not implement something that they think is important, such as sovereign boundaries, they're going to come in and do it anyway. And they break things in the process.” Dr. Clark also now questions the core premise of full, easy, and equal communication with everyone in the world because, “part of what we've learned is that when you allow everybody in the world to talk to you, there are a lot of people out there who (a) don't think like you, and (b) are not necessarily benign.” Clark says users can proceed, but with caution.
One potential solution is adding friction to online experiences. Referencing Daniel Kahneman’s 2011 popular science book Thinking, Fast and Slow, he describes two systems of human thought: system one for fast, intuitive, instinctual thinking and system two for slow, deliberative reasoning. “Everything [computer scientists] try to do is make things efficient, fast, easy to do. And what that's doing is allowing it to run on system one. There's a group of people who are saying, ‘for actions that are potentially dangerous, we should deliberately slow things down.’” Ironically, one way to do that is with AI. “I could have an AI agent that was watching over my shoulder and saying, ‘stop, slow down, think for a minute. Are you sure you're about to do what you want to do? Are you sure that's what you want to do?’”
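To make the idea concrete, here is a minimal sketch of what such deliberate friction could look like in software, written in Python. It is purely illustrative and not Dr. Clark’s design: the set of “risky” actions, the five-second pause, and the typed confirmation are all hypothetical choices, meant only to show how a system could force a shift from fast, instinctual behavior to slow deliberation.

```python
import time

# Deliberate friction: before a potentially dangerous action runs, pause
# and require a typed confirmation, nudging the user out of fast,
# reflexive "system one" behavior into slower "system two" deliberation.
# The action names and the delay below are illustrative assumptions.

RISKY_ACTIONS = {"wire_transfer", "share_private_data", "post_publicly"}
COOL_DOWN_SECONDS = 5  # long enough to interrupt an impulsive click

def friction_gate(action_name: str, perform_action) -> bool:
    """Run perform_action at once if it is low-risk; otherwise slow the
    user down and make them restate their intent before proceeding."""
    if action_name not in RISKY_ACTIONS:
        perform_action()
        return True

    print(f"'{action_name}' is potentially dangerous.")
    print(f"Pausing {COOL_DOWN_SECONDS} seconds. Are you sure this is what you want to do?")
    time.sleep(COOL_DOWN_SECONDS)

    # Typing the action name is small, deliberate work that a reflexive
    # click cannot do, which is the whole point of the friction.
    answer = input(f"Type '{action_name}' to proceed, anything else to cancel: ")
    if answer.strip() == action_name:
        perform_action()
        return True
    print("Cancelled.")
    return False

if __name__ == "__main__":
    friction_gate("wire_transfer", lambda: print("...transfer sent."))
```

An agent of the kind Clark imagines would occupy the same position as this gate, watching over the user’s shoulder and deciding when a few seconds of inconvenience is worth buying a moment of deliberation.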
Which brings him back to AI. “I’m a standards guy. The way I come at this is not as an AI enthusiast, but as somebody who watched the evolution of the social experience on the Internet and is trying to say: how is AI going to change that experience? For better? For worse? Is it going to exacerbate it? Is it going to improve it?” Dr. Clark hopes, by sharing these thoughts, he might help the potential users of AI think a little more deeply about the decisions they’re making now that will shape the future of this technology. “What my experience with the Internet taught me was that the techno-optimism about how society would use it should have been tempered with an evolved form of realism about how human beings actually work.” In that, he’s hoping for the best. “I’d really like to see a benign outcome to this next phase. That moves me.”
Learn more about Dr. Clark on his website or CSAIL page.