Teaching AI to Think Like a Cyber Adversary with MIT CSAIL Research Scientist Erik Hemberg

Audrey Woods, MIT CSAIL Alliances | May 11, 2026

In a game of chess, predicting your opponent's next move is just as important as knowing your own. That same logic drives Dr. Erik Hemberg, a Research Scientist in the ALFA (AnyScale Learning For All) group at MIT CSAIL. Since 2013, Dr. Hemberg has studied what he calls "adversarial dynamics," or the perpetual, evolving contest between attackers and defenders in complex systems. 

From modeling tax evasion strategies to simulating some of the world's most sophisticated cyber threats, Dr. Hemberg seeks to understand how intelligent adversaries compete and adapt to each other, and how we can use that knowledge to build more resilient defenses. 

 

FROM TAX AVOIDANCE TO BRON 

Early on at CSAIL, Dr. Hemberg and his colleagues built a simulation in which a tax avoider adapted its strategies to evade an auditor while the auditor simultaneously adapted to catch it. The idea was that adversaries would never reach an equilibrium but would constantly co-evolve to outmaneuver each other. "Just because you changed the tax law does not mean that people will start paying tax. They will find new ways of avoiding paying tax." 

That oscillating, adaptive dynamic proved to be a robust framework for understanding real-world strategic conflicts like cybersecurity. Networks, like tax systems, have gaps, and sophisticated attackers are experts at finding and exploiting them. To study how threat actors operate and how defenders can counter them, Dr. Hemberg's team first needed a knowledge base of known threat behaviors. This demand for consolidated knowledge led them to create BRON. 

Derived from the Swedish word for bridge, BRON is a knowledge graph that connects multiple layers of public cybersecurity data that had previously existed in separate silos. Before BRON, a defender who discovered a specific vulnerability in their system would have to manually cross-reference multiple databases to understand which attack techniques could exploit it, which weaknesses enabled it, and which software products were affected. BRON automates those connections by linking datasets such as MITRE ATT&CK (which catalogs adversary tactics and techniques), Common Vulnerabilities and Exposures (CVE), Common Weakness Enumeration (CWE), Common Attack Pattern Enumeration and Classification (CAPEC), and Common Platform Enumeration (CPE). This allows researchers and defenders to efficiently trace the full anatomy of potential attacks. 
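
To make the layered structure concrete, here is a minimal sketch of how such a knowledge graph could be assembled and traversed in Python with networkx. The node identifiers and edges are illustrative assumptions for this example, not BRON's actual schema or data.

```python
# Minimal illustrative sketch of a BRON-style knowledge graph.
# The node IDs and edges below are hypothetical examples, not real BRON data.
import networkx as nx

g = nx.DiGraph()

# Layers: ATT&CK technique -> CAPEC attack pattern -> CWE weakness
#         -> CVE vulnerability -> CPE product/configuration
edges = [
    ("technique/T1190 Exploit Public-Facing Application", "capec/CAPEC-66 SQL Injection"),
    ("capec/CAPEC-66 SQL Injection", "cwe/CWE-89 SQL Injection"),
    ("cwe/CWE-89 SQL Injection", "cve/CVE-XXXX-YYYY example vulnerability"),
    ("cve/CVE-XXXX-YYYY example vulnerability", "cpe/example-vendor:example-webapp:1.0"),
]
g.add_edges_from(edges)

# A defender who finds the vulnerability can walk "up" the graph to see
# which weaknesses, attack patterns, and adversary techniques relate to it...
vuln = "cve/CVE-XXXX-YYYY example vulnerability"
print("Upstream context:", sorted(nx.ancestors(g, vuln)))

# ...and "down" to see which platforms or configurations are affected.
print("Affected platforms:", sorted(nx.descendants(g, vuln)))
```

The point of the sketch is only that, once the links from the public datasets above are held in a single graph, one traversal replaces the manual cross-referencing a defender would otherwise do by hand.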

When large language models (LLMs) arrived on the scene, Dr. Hemberg's team realized the BRON database "could be useful to train language models to be better at cybersecurity. You can provide these links that have been observed." LLMs have ingested enormous volumes of publicly available security information, but they often miss critical connections between datasets, connections BRON provides. For example, when an AI agent is asked to assess the risk of a particular system, an LLM augmented with BRON can ground its response in verified, structured relationships, identifying not just that a vulnerability exists, but tracing the specific attack techniques that exploit it, the defensive mitigations that address it, and the software configurations where it is most dangerous. Early experiments by the ALFA group suggest that this grounding significantly improves the accuracy and actionability of AI-generated security assessments. 
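
That grounding step can be pictured as a simple retrieve-then-prompt pattern: look up the verified relationships for the entity in question and place them in the model's context before asking for an assessment. The sketch below is a hypothetical illustration of that idea, not the ALFA group's actual pipeline; the stored facts and the build_grounded_prompt helper are assumptions made for this example.

```python
# Hypothetical sketch of grounding an LLM security assessment with
# BRON-style relationships; the facts and function names are illustrative.

FACTS = {
    "cve/CVE-XXXX-YYYY": [
        "is enabled by weakness cwe/CWE-89 (SQL Injection)",
        "is exploited via technique attack/T1190 (Exploit Public-Facing Application)",
        "affects platform cpe/example-vendor:example-webapp:1.0",
    ],
}

def build_grounded_prompt(entity: str, question: str) -> str:
    """Pair the user's question with verified graph relationships for the entity."""
    facts = FACTS.get(entity, [])
    context = "\n".join(f"- {entity} {fact}" for fact in facts)
    return (
        "Use only the verified relationships below when assessing risk.\n"
        f"Verified relationships:\n{context}\n\n"
        f"Question: {question}"
    )

print(build_grounded_prompt(
    "cve/CVE-XXXX-YYYY",
    "How serious is this vulnerability for our web application, and what should we address first?",
))
```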

This work has brought Dr. Hemberg full circle. "I didn't start saying, 'I'm going to make BRON and then I'm going to make agents.' I started saying, 'I'm going to make agents,' and then realized I needed BRON in order to make good agents." 

 

AGENTS, TRUST, AND THE EVOLVING THREAT LANDSCAPE 

The fundamental hurdle with AI agents, in Dr. Hemberg’s view, is explainability. "I think of agents as consultants. If I have a consultant, I'd like to know what they're doing, not just have them come back with a report." Right now, AI agents work well when their outputs can be independently verified, like coding agents whose code can be run and checked against clear standards. While Dr. Hemberg expects agents to play an increasingly important role in cybersecurity, he argues organizations will need to invest in knowledge infrastructure like BRON to allow agents to operate with appropriate confidence and accountability. 

His team is now working on what he jokingly calls "BRON++," extending and improving the knowledge base to keep pace with a rapidly changing threat landscape. As AI develops, "what you're seeing is basically the speed, volume, and the velocity being increased. You need to change your risk calculation, because things that would be very difficult for someone to do, now you can set off an agent that can test that." Even the most esoteric software bugs could now be at risk, and defenders who once had a comfortable window to patch a vulnerability before it was actively exploited may face a much tighter race. Furthermore, as organizations automate more of their operations, they inadvertently expand their own attack surfaces, creating new vulnerabilities even as they gain new defensive capabilities. 

 

FUTURE WORK: DATA STRUCTURE & AI INTEGRATION 

One question on Dr. Hemberg's mind is whether the underlying data of BRON needs to be rethought. All existing cybersecurity data was written by and for humans, so should it be restructured now that LLMs and AI agents are the primary consumers? Can knowledge graphs be constructed faster using language models, and if so, how can researchers verify that the resulting structures are correct? And, as AI agents are deployed in the market, how can users be sure the agents themselves are secure and trustworthy? 

Dr. Hemberg thinks the next frontier in computer science will be integrating AI agents into real-world workflows, a shift he’s already noticed in his daily work. "I spend a lot more time reviewing things now than doing. For industry, I'd expect people will have more tasks reviewing than doing." The challenge ahead, he argues, is fitting AI into existing systems with the right levels of trust, oversight, and efficiency, especially as the complexity of interconnected systems introduces scaling challenges. 

As for co-evolution, Dr. Hemberg's career has followed a pattern much like his research. He set out to study adversarial dynamics, realized he needed structured knowledge to make that work meaningful, and ended up building the foundational infrastructure that is now shaping how AI and humans face adversaries together. The circle, as he sees it, keeps turning, and each revolution makes both the knowledge and the agents more complete, robust, and useful in the real world. 

Learn more about Dr. Hemberg on his website or CSAIL page.