Our new reality is increasingly one of virtuality — that is, technologies like virtual reality, immersive videogames, social media, and other emerging forms of digital media that use computing to layer imaginative experiences atop our physical world. The pandemic has made virtual engagement more critical, and the rise of convincing synthetic media such as “deepfake” technology has made understanding these AI-driven technologies of virtuality crucial to distinguishing truth from misinformation. As such, MIT researchers are studying how we can design and use technologies of virtuality to better serve us, and working to understand their social and ethical impact.

“When we design our technologies, oftentimes they’re not designed with a broad group of stakeholders in mind,” says Professor D. Fox Harrell of MIT CSAIL, who leads the Imagination, Computation, and Expression Laboratory (ICE Lab) research group in CSAIL and is the Director of the MIT Center for Advanced Virtuality. “For instance, one of the projects we had within CSAIL was looking at how social media and related technologies in gaming may not always be designed with the cultural values and needs of different international settings in mind.”

He and his team used machine learning and other statistical methods to find patterns of use in different global settings and, based on those patterns, worked with a social scientist to better understand their social and cultural meanings and how people negotiate the use of these technologies in their regions. They then produced design principles that support different regions internationally, as well as new implementations of strategies to better support those regions.
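The pattern-finding step described above can be illustrated with a minimal sketch. Everything here is hypothetical: the features (session length, daily sharing frequency), the synthetic data, and the simple k-means routine are stand-ins for whatever methods and telemetry the actual study used.

```python
# Hypothetical sketch: cluster usage-pattern features, then compare how users
# from different regions distribute across the discovered clusters.
# Feature names, data, and the two-cluster setup are invented for illustration.
import numpy as np

rng = np.random.default_rng(0)

# Toy per-user features: [avg session minutes, items shared per day]
region_a = rng.normal(loc=[30, 5], scale=[5, 1], size=(50, 2))
region_b = rng.normal(loc=[60, 15], scale=[5, 2], size=(50, 2))
X = np.vstack([region_a, region_b])
regions = np.array(["A"] * 50 + ["B"] * 50)

def kmeans2(X, iters=50):
    """Minimal two-cluster k-means; returns a cluster label per row of X."""
    # Crude deterministic init: the two points at the extremes of feature 0.
    centers = X[[X[:, 0].argmin(), X[:, 0].argmax()]].astype(float)
    for _ in range(iters):
        # Assign each point to its nearest center, then recompute centers.
        labels = ((X[:, None, :] - centers[None, :, :]) ** 2).sum(-1).argmin(1)
        for j in range(2):
            if (labels == j).any():
                centers[j] = X[labels == j].mean(axis=0)
    return labels

labels = kmeans2(X)

# Per-region distribution over usage clusters: divergent distributions suggest
# region-specific patterns worth a social scientist's closer reading.
for r in ("A", "B"):
    counts = np.bincount(labels[regions == r], minlength=2)
    print(r, counts / counts.sum())
```

In practice the interesting output is not the clusters themselves but the per-region distributions printed at the end, which is where the qualitative interpretation described in the article would begin.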

Prof. Harrell’s work, which explores the relationship between computing and imagination, spans both the design and the analysis of virtuality technologies. He designs systems such as interactive narrative systems, gaming technologies, immersive media, virtual reality, augmented reality, and other mixed reality systems that have a social impact on critical issues such as learning, bias, workplace training, sociability skills, and more. The analysis aspect is then “where we reverse the lens,” he says. He builds AI-based analytical tools that can identify trends, biases, and diverse user needs, convey the social reasons behind them, and develop new principles for design that people in industry and academia can apply to their work.

For example, he and his team are using AI analysis to help videogame developers build better narratives and applications to support the needs of their user base. In one instance for a bestselling game, they found that “if you played in any of the three major play roles, your character is not going to be optimized if you play as a female character. So, these are the sorts of things that developers might want to look out for, but may not have automated tools to find these kinds of phenomena.”
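The kind of automated audit this paragraph describes can be sketched as a simple disparity check over gameplay telemetry. All of the following is invented for illustration: the field names, the toy records, the performance metric, and the 5 percent flagging threshold are assumptions, not details of the actual analysis or game.

```python
# Hypothetical sketch of an automated balance audit: check whether a character
# attribute (here, avatar gender) correlates with weaker performance in each
# play role. Data, metric, and threshold are all invented for illustration.
from collections import defaultdict
from statistics import mean

# Toy telemetry records: (role, avatar_gender, damage_per_minute)
records = [
    ("tank", "female", 410), ("tank", "male", 460),
    ("healer", "female", 300), ("healer", "male", 330),
    ("dps", "female", 520), ("dps", "male", 585),
    ("tank", "female", 400), ("tank", "male", 450),
    ("healer", "female", 310), ("healer", "male", 325),
    ("dps", "female", 515), ("dps", "male", 590),
]

# Group the performance metric by (role, gender).
by_group = defaultdict(list)
for role, gender, dpm in records:
    by_group[(role, gender)].append(dpm)

# Flag any role where the relative gap exceeds 5% (an arbitrary threshold).
flagged = []
for role in {r for r, _, _ in records}:
    f = mean(by_group[(role, "female")])
    m = mean(by_group[(role, "male")])
    gap = (m - f) / m
    if gap > 0.05:
        flagged.append((role, round(gap, 3)))

print(sorted(flagged))
```

With this toy data all three roles are flagged, mirroring the finding quoted above; a real pipeline would add significance testing and many more covariates before drawing conclusions.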

He adds that “the other thing that we can do with these kinds of approaches is design experiences that better support diverse user groups.” For instance, he works with museums on displays, kiosks, and other forms of immersive interaction, making them more conversational and accessible to diverse users. “We also want to make the experiences more personalized, and not just personalized in the generic sense, so we’ll find the latest social science model in that domain to do fine-grained personalization.”

He and his team worked with the Universal Hip Hop Museum in New York to try out this idea. “We computationally implemented a social science model of musical identity and created a system where, whether you listen to country and western, bebop jazz, or west coast hip hop, and whether you're interested in the representation of women in hip hop or the representation of issues like violence in hip hop, you will get a narrative that has a soundtrack with your musical preferences, as well as the topic that you're interested in.”
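At its simplest, the pairing the quote describes can be sketched as a mapping from a visitor's musical preference and topic of interest to a tailored experience. The genre names come from the quote, but the file names, scene lists, and lookup structure below are invented; the real system implements a published social science model of musical identity rather than a table lookup.

```python
# Hypothetical sketch of preference-driven narrative selection, loosely modeled
# on the museum kiosk described above. Soundtrack file names and scene
# identifiers are invented for illustration.
SOUNDTRACKS = {
    "country and western": "twang_mix.ogg",
    "bebop jazz": "bebop_mix.ogg",
    "west coast hip hop": "gfunk_mix.ogg",
}

NARRATIVES = {
    "women in hip hop": ["pioneering_mcs", "producers_and_executives"],
    "violence in hip hop": ["lyrics_in_context", "community_responses"],
}

def build_experience(genre: str, topic: str) -> dict:
    """Pair a topic-driven narrative arc with a visitor-matched soundtrack."""
    return {
        "soundtrack": SOUNDTRACKS[genre],
        "scenes": NARRATIVES[topic],
    }

print(build_experience("bebop jazz", "women in hip hop"))
```

The point of the design is the decoupling: the narrative arc follows the visitor's topic while the soundtrack follows their musical identity, so the two preferences can be combined freely.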

Prof. Harrell’s work has also been used in workplace training and learning, K-12 education, higher education, and more, not only to identify values and biases, but to make sure the system has a lasting impact. “We have the technical research that makes it more adaptive and engaging. We also have the scientific approach that makes sure that these systems are going to actually be effective for their aims of helping students from diverse backgrounds learn computer science while seeing themselves as powerful STEM learners and doers,” he explains.

In gaming, for example, educational games may flourish and proliferate to the point where we no longer strictly distinguish between commercial entertainment games and learning games. “That is, we can build these kinds of social issues and phenomena into the most immersive and engaging forms of media, as other forms such as literature and film already do.”

By using AI technologies to reveal embedded values and biases, he says, “Our work within CSAIL can be part of the solution. And that has direct practical benefits for our Alliances members, because these are crucial issues that impact user bases, customer bases, developers, investors, and more.”

He has worked with CSAIL Alliances member Warner Brothers in the space of immersive technologies, for example. “We’ve also engaged with members in areas such as TV and personalized experience through televisual media, and with members who are interested in virtual reality and other extended reality technologies such as mixed reality systems.”