Written by Matthew Busekroos | Produced by Nate Caldwell
CSAIL PhD candidate Nouran Soliman studies under the supervision of Professor David Karger in the lab’s Haystack Group. Soliman’s research lies at the intersection of human-computer interaction, social computing, and AI, where she focuses on safety, trust, and credibility in online communities, particularly on social media platforms.
Originally from Egypt, Soliman studied at the Arab Academy for Science, Technology and Maritime Transport before coming to MIT and CSAIL. She said she was interested in furthering her research in HCI upon completing her undergraduate program.
“HCI is very intriguing to me as it marries the two domains I am passionate about: computer science and human behavior,” Soliman said. “Re-imagining tools and spaces to positively serve people is my strongest passion.”
Soliman said CSAIL’s Haystack Group blends approaches from human-computer interaction, social computing, AI, web technologies, and more, focusing on enhancing how people manage and share their information. She added that the group works on a wide range of application areas, including misinformation, content moderation and harassment, decentralizing social media, safety and trust, social barriers in academic communities, collective action, online discourse, democratizing programming tools, health care, and education.
“Working with David Karger has been very rewarding,” she said. “David is a very good advisor who gives his students great flexibility to explore. My ability to reason about ideas, brainstorm solutions, and defend my own ideas has significantly grown, as I practice this every week in lots of interesting discussions about my work with David and with the lab.”
One of the questions Soliman thinks about most is how to facilitate inclusive public discourse while mitigating risks such as harassment and misinformation, particularly for early-career individuals and members of marginalized groups.
“I design, construct and test novel computing systems incorporating my design ideas, then I conduct experiments and field studies on these systems to evaluate my ideas,” she said. “I have been investigating key issues around online interactions such as identity disclosure and content moderation within social spaces.”
In her recent work, Soliman introduces a novel design paradigm for identity disclosure in online spaces called meronymity. The approach lets users selectively reveal verified aspects of their own identity, their connections’ identities, or both, depending on the context of the conversation.
She said one of the major challenges in online spaces, especially those hosting substantive discussions such as the HCI community on X and Mastodon, is finding a middle ground between complete anonymity and full disclosure. Complete anonymity can lead to increased harassment and little motivation to engage with anonymous content, while full disclosure can heighten social anxiety and deter participation, particularly from more junior members. She added that this imbalance often leads to a predominance of senior voices, overshadowing the valuable contributions of junior members and diminishing the potential for rich, diverse dialogue.
“The meronymity model addresses these concerns by providing a framework that supports credibility and encourages engagement without compromising safety,” she said. “For example, one feature of this model is the concept of an ‘Endorser,’ a reputable figure within the community whose association can serve as an identity signal, encouraging more meaningful exchanges. This mechanism is analogous to an advisor facilitating introductions within their network, as in ‘this is my student, could you help them with this?’, thereby fostering a supportive environment for knowledge exchange. To achieve this balance, the model defines various stakeholders and different levels of information disclosure among them.”
Soliman is currently thinking about applying meronymity to more general conversational settings, which raises more complex issues around identity verification and content moderation.
“If we empower users with the autonomy to conceal their identities while sharing potentially controversial opinions, then there is an urgent need for effective moderation to shield those who might be adversely affected,” she said. “Moderation practices vary widely today. On one hand, we have paternalistic models where control is centralized within the platform itself, as seen on sites like Facebook and X. Alternatively, some platforms adopt a federated model, like Mastodon, where the community takes on the moderation role, guided by established norms and rules. Then there's a hybrid approach, like Reddit, which combines centralized oversight with community-driven moderation. Although these models are operational, they're not without their challenges.”
Soliman’s work on meronymity recently received an award for Best Paper at CHI 2024.
She said the debate between freedom of expression and safety is ongoing. Centralized models tend to prioritize engagement-driven algorithms, which don't always align with user well-being, while community-led moderation demands significant effort and coordination. A common hurdle across all these approaches is the subjective nature of safety and offense.
Soliman argues that creating safer yet open online environments, while granting users greater agency over their content and consumption, can positively impact online communities, content generation and consumption, mental well-being, and overall user satisfaction.
“I am currently exploring a trust-based human moderation solution, where content propagates based on certain aspects of your trust network,” she said. “I am also interested in leveraging AI to improve personalized moderation. There are lots of intriguing questions to explore: What mechanisms are necessary to ensure responsible AI implementation and mitigate unintended harms? How can users define ‘safe’ AI alignment and limitations? What governance structure is required to address concerns surrounding human-AI alignment, transparency, bias mitigation, and societal impacts?”
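To make the trust-based propagation idea concrete, here is a minimal, hypothetical sketch; it is not Soliman’s actual system, and the decay rule, thresholds, and names are all illustrative assumptions. The idea: a post spreads from its author along trust edges, reaching only users whose (possibly transitive) trust in the author stays above a cutoff.

```python
from collections import deque

def reachable_audience(trust, author, threshold=0.5):
    """Return users who would see the author's post, assuming trust
    decays multiplicatively along chains (a hypothetical rule).

    trust: dict mapping user -> {trusted neighbor: weight in [0, 1]}
    """
    best = {author: 1.0}  # strongest trust score found so far per user
    queue = deque([author])
    while queue:
        user = queue.popleft()
        for neighbor, weight in trust.get(user, {}).items():
            score = best[user] * weight
            # Propagate only while trust stays above the cutoff,
            # and only if this path improves the neighbor's score.
            if score >= threshold and score > best.get(neighbor, 0.0):
                best[neighbor] = score
                queue.append(neighbor)
    best.pop(author)  # the author trivially sees their own post
    return best

# Example: A trusts B strongly, B trusts C moderately, C trusts D weakly.
graph = {"A": {"B": 0.9}, "B": {"C": 0.7}, "C": {"D": 0.4}}
audience = reachable_audience(graph, "A", threshold=0.5)
```

In this toy run, the post reaches B and C, but trust has decayed too far for D, illustrating how propagation could be bounded by the network rather than by a central moderator.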
Soliman said she is fascinated by the applications of her research because they touch people every day, shaping their experiences and the future of online interactions.
“And as we embrace the democratization of generative AI and advances in virtual reality, rethinking social media spaces to ensure safety and well-being becomes imperative,” Soliman said.
Following her studies at CSAIL, Soliman hopes to continue doing impactful research that improves people’s lives, and she is considering a faculty career in academia.
For more on Nouran Soliman, check out her personal website: www.nouransoliman.com