Written by: Matthew Busekroos | Produced by: Nate Caldwell

Originally from Houston, Texas, Jonathan Zong completed his undergraduate degree at Princeton University, where he studied computer science and visual arts. Zong said he was excited to continue his education at MIT to be close to the action: a place where he could think about how technology is designed and put into practice in society, and where that is happening all the time.

Zong now works alongside Professor Arvind Satyanarayan in CSAIL’s Visualization Group.

“The Visualization Group has been a fantastic home for me at MIT,” Zong said. “We have a very special culture that values interdisciplinary thinking, approaches design with a healthy balance of theory and practice, and attracts people who are highly supportive and collaborative. Working with Arvind, I’ve been able to develop a way of working that’s not only about designing useful systems, but also using those systems to shed light on complex social questions.”

Zong said his current work is about creating interactive tools for multi-sensory data analysis, focusing on making data more accessible to blind and low-vision users. He added that he is interested in how visualization, textual descriptions, and sonification can work together to provide richer understanding of data.

“Deriving insights from data to inform decision-making is important both for professional data analysts and for everyday people,” Zong said. “From public health, to elections, to your favorite sports team, having access to data is important to being part of ongoing conversations.”

Zong said his research is about making sure that people with visual disabilities (seven million in the U.S. and more globally) have equitable access to information.

“It’s also about recognizing that different ways of conveying information are best for different contexts,” he said. “Having more textual descriptions, for example, can also help augment sighted readers' understanding of a visualization.”

One of Zong’s current projects, Olli, is an open source JavaScript library for converting visualizations on the web into accessible text representations. The project is based on research Zong and his collaborators did using keyboard-navigable, structured text to provide descriptions of data at varying levels of detail. Using the arrow keys, screen reader users can explore the structure and hear information about the data read aloud as text-to-speech.
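To make the idea concrete, here is a minimal sketch, in TypeScript, of keyboard-navigable structured descriptions. This is an illustration of the general technique, not Olli's actual API: the tree shape, class, and example chart data below are all hypothetical. Each node holds a text description at one level of detail, and arrow-key-style moves walk down into detail, up to a summary, or across siblings.

```typescript
// Hypothetical node type: text a screen reader would announce,
// plus more-detailed child descriptions.
interface DescNode {
  description: string;
  children: DescNode[];
}

// Navigator over the description tree. The current position is an
// index path from the root; an empty path means the top-level summary.
class TreeNavigator {
  private path: number[] = [];

  constructor(private root: DescNode) {}

  private nodeAt(path: number[]): DescNode {
    return path.reduce((node, i) => node.children[i], this.root);
  }

  // Text at the current position.
  current(): string {
    return this.nodeAt(this.path).description;
  }

  // ArrowDown: descend into the first child (more detail), if any.
  down(): string {
    if (this.nodeAt(this.path).children.length > 0) this.path.push(0);
    return this.current();
  }

  // ArrowUp: return to the parent (higher-level summary).
  up(): string {
    if (this.path.length > 0) this.path.pop();
    return this.current();
  }

  // ArrowRight: move to the next sibling at the same level, if any.
  right(): string {
    if (this.path.length > 0) {
      const parent = this.nodeAt(this.path.slice(0, -1));
      const i = this.path[this.path.length - 1];
      if (i + 1 < parent.children.length) {
        this.path[this.path.length - 1] = i + 1;
      }
    }
    return this.current();
  }
}

// Hypothetical example: a chart summary with axis-level detail beneath it.
const chartTree: DescNode = {
  description: "Bar chart of monthly sales, January through March.",
  children: [
    { description: "X axis: month (Jan, Feb, Mar).", children: [] },
    { description: "Y axis: sales in USD, from 0 to 500.", children: [] },
  ],
};

const nav = new TreeNavigator(chartTree);
console.log(nav.current()); // top-level summary
console.log(nav.down());    // first child: x-axis description
console.log(nav.right());   // next sibling: y-axis description
console.log(nav.up());      // back to the summary
```

In a real screen-reader setting, each returned string would be placed in a live region or focused element so assistive technology announces it; the sketch keeps only the navigation logic.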

“We are building from this work to explore new questions about how users can customize descriptions to find the information they’re looking for more efficiently,” he said. “We’re also thinking about how recent advances in large language models (LLMs) can be used to provide additional contextual information about data, and how to verify the correctness of LLM output.”

Zong said he gets excited about designing software that is both a way to make useful things that serve the public and a way to make progress on important social questions.

“Research is about striving for understanding, but that can go hand in hand with making people’s lives better today,” he said.

Following his time at MIT, Zong said he hopes to continue this kind of research, wherever that may be, whether in industry or academia.

For more on Jonathan Zong, you can check out his website.