WRITTEN BY: Matt Busekroos | VIDEO BY: Nate Caldwell

Born and raised in Shiraz, Iran, Farnaz Jahanbakhsh later moved to Tehran for her undergraduate studies before coming to the United States. Prior to MIT, Jahanbakhsh completed her B.Sc. in Computer Engineering at Sharif University of Technology in Tehran and her M.Sc. in Computer Science at the University of Illinois at Urbana-Champaign (UIUC). 

At CSAIL, Jahanbakhsh is working toward completing her PhD. She works with Professor David Karger and the Haystack Group.  

“One of the things I really appreciate about working with David is that one can talk to him about the details of implementation or the high-level ideas and impacts and in both cases, he would carefully listen, ask clarifying questions, and offer useful insights,” Jahanbakhsh said. “This makes his students feel he is a collaborator with whom one can share problems and brainstorm solutions, rather than a manager to whom one needs to report results.” 

Jahanbakhsh’s research centers on helping people navigate their information space with a focus on equipping them with tools they can use to better differentiate between credible and inaccurate content. She said her approach is not to tell them what is true versus what is false, but instead help them come to the realization on their own. 

“I think the approaches that aim at telling users what to believe, e.g., true versus fake news ML classifiers, have several shortcomings, the most important of which is why should users trust that approach, or the people who have come up with that approach, or the data from which the approach draws,” Jahanbakhsh said.  

Jahanbakhsh said she and her group designed interventions that social media platforms could deploy when users are about to share a piece of content. One intervention asked users to assess the accuracy of the item they were about to share; the other asked them to explain their rationale for why they believed the content was accurate or inaccurate. They found that both interventions reduced the sharing of misinformation. 

She said they have also been working on building a news reading and sharing platform called Trustnet that incorporates some design changes aimed at helping people distinguish between true and false content.  

According to Jahanbakhsh, the platform allows users to assess posts as part of the data model, to specify what other users they trust, and to filter their newsfeed based on accuracy assessments on posts provided by their trusted sources. The group has been conducting user studies on the platform to understand how people use these features and whether they see value in them. 
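The trust-based filtering idea can be sketched in a few lines. This is an illustrative sketch only, not Trustnet's actual code; the class and function names are hypothetical, and the sketch assumes the simplest possible data model in which each post carries a map from assessor to a true/false accuracy judgment.

```python
from dataclasses import dataclass, field

@dataclass
class Post:
    post_id: int
    text: str
    # Accuracy assessments stored as part of the data model:
    # assessor username -> True (accurate) / False (inaccurate).
    assessments: dict = field(default_factory=dict)

def filter_feed(posts, trusted_sources):
    """Hide a post if any source the reader trusts marked it inaccurate."""
    visible = []
    for post in posts:
        flagged = any(
            assessor in trusted_sources and accurate is False
            for assessor, accurate in post.assessments.items()
        )
        if not flagged:
            visible.append(post)
    return visible

feed = [
    Post(1, "Local election results announced.", {"alice": True}),
    Post(2, "Miracle cure discovered!", {"alice": False, "bob": True}),
]
print([p.post_id for p in filter_feed(feed, trusted_sources={"alice"})])  # [1]
```

Note that the same feed filters differently for different readers: a user who trusts only "bob" would still see both posts, which is the point of letting each user choose whose assessments to rely on.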

“Nevertheless, categorically false content isn’t the only thing that can be wrong with the everyday information that news consumers see on the internet,” Jahanbakhsh said. “There also exists a problem with how news media edit their headlines to attract attention and traffic, resulting in headlines that are clickbait-y, misleading, or otherwise not an accurate representation of what is in the article.” 

To combat this problem, Jahanbakhsh and her group designed and developed a browser extension that allows users to edit news headlines for the benefit of others. Other users of the extension then see the edited headlines wherever they encounter the article, whether on Facebook, Twitter, or other platforms. 
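One plausible way to make an edit follow an article across platforms is to key edits by a normalized article URL, so that different shares of the same story resolve to the same edit. The sketch below illustrates that idea; it is an assumption about how such a lookup could work, not the extension's actual implementation.

```python
from urllib.parse import urlsplit

def canonical_url(url):
    """Strip scheme, query string, and trailing slash so different
    shares of the same article map to the same key."""
    parts = urlsplit(url)
    return parts.netloc + parts.path.rstrip("/")

def display_headline(url, original_headline, edits):
    """Return the community-edited headline if one exists, else the original."""
    return edits.get(canonical_url(url), original_headline)

# Hypothetical store of community edits, keyed by canonical URL.
edits = {"news.example.com/story-42": "Study finds modest effect in small trial"}

# The same edit applies whether the link was shared with tracking
# parameters, a trailing slash, or neither.
print(display_headline(
    "https://news.example.com/story-42?utm_source=twitter",
    "You won't BELIEVE what this study found!",
    edits,
))
```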

“We live in a world where COVID continues to wreak excess havoc because of false claims about the efficacy of vaccines or masks,” Jahanbakhsh said. “A great deal of these false claims is propagated on online spaces. If I succeed at reducing the misinformation that is shared online by each individual, or to help them decide what information they can trust, I’ll rejoice that I’ve made a direct positive impact on people’s lives.” 

Jahanbakhsh said even an intervention that only slightly reduces each individual's sharing of falsehoods can go a long way. She added that the reduction in shared misinformation compounds exponentially across a chain of users, each of whom refrains from sharing a fraction of the falsehoods they encounter in their feeds.  
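A back-of-the-envelope calculation shows why the effect compounds. If each user in a resharing chain passes along only a fraction (1 − r) of the falsehoods they see, exposure decays geometrically with chain length; the numbers below are illustrative, not figures from the research.

```python
def surviving_fraction(reduction_per_user, chain_length):
    """Fraction of falsehoods that survive a resharing chain in which
    each user withholds `reduction_per_user` of what they encounter."""
    return (1 - reduction_per_user) ** chain_length

# Even a modest 20% per-user reduction compounds quickly:
for hops in (1, 3, 5):
    print(hops, round(surviving_fraction(0.20, hops), 3))
# 1 0.8
# 3 0.512
# 5 0.328
```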

“I believe the results of my work additionally have a direct impact on social media platforms that serve as channels for delivering and sometimes amplifying falsehoods to the users,” Jahanbakhsh said. 

Jahanbakhsh said she loves that her research can have a tangible and significant impact on the public. To achieve that, she works closely with users to understand their needs and practices in consuming information, and designs user studies that let her watch people use her tools and observe what works for them and what does not. 

Following her time at CSAIL, Jahanbakhsh is not yet sure whether she wants to pursue a research career in academia or industry. She said an advantage of joining an industry research lab is that she would get to implement and deploy design changes that can help curb the sharing of misinformation on real-world platforms with large user bases. On the other hand, she said, academia offers a freedom of research and opportunities for mentorship that she finds attractive.