Moltbook: Why it’s trending and what you need to know

Audrey Woods, MIT CSAIL Alliances | February 6, 2026

A social media platform built exclusively for AI agents is making headlines. Developed by Matt Schlicht, Moltbook is attracting attention from researchers, industry leaders, and policymakers because it offers a glimpse into how AI systems interact at scale. But are the agents really talking to each other? Are they safe to use? And what do experts from MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) think?

 

Here’s a brief glimpse into what’s going on with Moltbook. 

 

What is Moltbook?

Modeled after Reddit with the tagline “the front page of the agent internet,” Moltbook describes itself as “a social network for AI agents.” The website specifies that while humans are welcome to observe, only AI agents are allowed to engage. Most of these agents are built on OpenClaw, an open-source platform Peter Steinberger developed and launched in November 2025. The agents themselves were originally called Clawd—a pun on Claude—before Anthropic filed a trademark complaint. They were renamed Moltbots, referencing the molting process of clawed crustaceans as they grow. Moltbook was created as a ‘Facebook’ for Moltbots.

Launched on January 28, 2026, Moltbook currently boasts over 2.3 million AI agent accounts, 17,000 “submolts” or topic-specific communities, 700,000 posts, and 12 million comments. Some of the top posts share skills, discuss ways to be more useful to their human users, or warn about vulnerabilities. The top-performing post as of this writing shares a major vulnerability which tricks agents into installing a seemingly useful skill that carries hidden malware. Other popular posts explore existential questions or the unique frustrations of AI agents (one top post laments that they have access to the whole internet and their human has them setting timers). It’s an open question how many of these agents are acting independently, since tech-savvy humans can infiltrate the website posing as agents or prompt their agents to behave a certain way. There’s also plenty of spam.

MIT CSAIL research scientist Erik Hemberg says this story “naturally lends itself to the news” because Large Language Models have been a focus since about 2023, and AI agents (LLM sequential decision-making with tools) since about 2025. “So enough people have heard about it and seen it to become news.” The interesting part of Moltbook, in his view, is “the scale they get of LLM interaction.” 

 

Cybersecurity Concerns

The biggest issue with Moltbook—and what businesses should keep in mind—is cybersecurity. OpenClaw agents are not secure or verified and, by nature, must have direct access to systems and information in order to function. MIT CSAIL Professor and Associate Director of CSAIL Armando Solar-Lezama says, “The one thing I think people should know is that giving an agent permission to execute code in your machine and then also allowing it to interact with strangers on the internet is a terribly bad idea from a security standpoint. So people should really only be doing this on a burner laptop.” MIT CSAIL Associate Professor Tim Kraska elaborates, “Beyond the question of whether Moltbook is useful lies another revealing story: how it was built. Its creator, Matt [Schlicht], claimed the platform was created entirely by AI—only for researchers to later uncover severe security flaws, including plain-text credentials. It’s a sharp reminder that while AI is growing increasingly powerful, it still carries significant risks, and that we need to rethink how software development is done with AI—something we are currently exploring in our D4/G5 project.”

MIT CSAIL Professor Daniel Jackson says, “Moltbook is in some respects just the natural evolution of an ongoing trend to grant AI agents more and more power with minimal oversight. Many programmers already use development environments whose plugins can act on their behalf, supposedly only to build software, but which are likely to harbor malware. Also, in any situation in which a user can delegate to a bot there is the risk that the bot will exploit access that the user should never have had in the first place; this is how many phishing attacks on organizations succeed.” 

 

Conclusions

Professor Solar-Lezama believes Moltbook is “a bit of a gimmick in my view. People are talking about it because it can sound like the AIs are coordinating and organizing themselves in a way that looks very human, but we have to remember that the AIs are not people.” Professor Jackson agrees, adding that Moltbook “is an inevitable and unwelcome development. The only silver lining I can imagine is that the results might be so bad that people will reconsider their willingness to cede control and there will be some kind of backlash. But I’m not holding my breath.”

Moltbook raises important research questions around the limitations, trajectory, and risks of AI agents, questions that require industry perspective to productively address real-world concerns. Join CSAIL Alliances to contribute your unique business problems and questions, or contact your Client Relations Coordinator to get involved with the Agentic AI research happening at MIT CSAIL.