Ryan Sander

WRITTEN BY: Matt Busekroos

Prior to joining CSAIL for his MEng, Ryan Sander earned his B.S. at MIT in Electrical Engineering and Computer Science and in Mathematical Economics. His senior capstone project, completed through CSAIL’s Distributed Robotics Laboratory and the SuperUROP program, centered on multi-agent deep reinforcement learning for autonomous vehicles. Sander said this research opportunity helped pave a crucial path toward his current work.

Sander recently completed work with MIT’s Distributed Robotics Laboratory (DRL). He worked alongside Professors Daniela Rus and Sertac Karaman, as well as Professor Igor Gilitschenski, Wilko Schwarting, Ph.D., Tim Seyde, Lucas Liebenwein, Ph.D., and Andrew Heier of MIT Lincoln Laboratory. He said working in the DRL has been a highly formative experience in his educational and professional career.

“In addition to helping me learn significantly more about theoretical and applied machine learning, working in DRL has helped me to better understand each step of the research and development process, and how to conduct meaningful research in the fields of machine learning and robotics,” Sander said. “Finally, Daniela and Sertac have shown me the importance of focusing not only on the minutiae of our research projects, but also paying special attention to the important, big-picture questions such as: ‘why does our research matter?’ and ‘how will our research help people?’”

Sander said the group’s current research focuses on improving the sample efficiency of continuous, off-policy deep reinforcement learning methods through the meaningful generation of synthetic samples.

“Specifically, we are developing interpolated experience replay methods that recombine, both through linear and Bayesian mechanisms, an agent’s previous experiences in meaningful and informative ways,” he said. “Leveraging these meaningful combinations of previous experiences in turn allows for more efficiently training off-policy deep reinforcement learning agents, as these agents now require fewer samples to learn optimal behaviors in different environments. To test and validate these interpolated experience replay approaches, we are running continuous control experiments using OpenAI Gym, DeepMind Control Suite, and MuJoCo.”
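For readers curious what the linear variant of interpolated experience replay might look like in practice, the sketch below mixes pairs of previously stored transitions with convex (mixup-style) combinations before handing them to an off-policy learner. This is an illustrative assumption based on the description above, not the group’s actual implementation; the class name, buffer layout, and the Beta-distributed mixing weight are all hypothetical.

```python
import numpy as np

# Hypothetical sketch of linearly interpolated experience replay.
# Transitions are stored as (state, action, reward, next_state, done);
# the names and the Beta(alpha, alpha) mixing weight are illustrative assumptions.

class InterpolatedReplayBuffer:
    def __init__(self, capacity, state_dim, action_dim, alpha=0.4, seed=0):
        self.capacity = capacity
        self.alpha = alpha                      # controls how far mixtures stray from real samples
        self.rng = np.random.default_rng(seed)
        self.states = np.zeros((capacity, state_dim), dtype=np.float32)
        self.actions = np.zeros((capacity, action_dim), dtype=np.float32)
        self.rewards = np.zeros(capacity, dtype=np.float32)
        self.next_states = np.zeros((capacity, state_dim), dtype=np.float32)
        self.dones = np.zeros(capacity, dtype=np.float32)
        self.size = 0
        self.ptr = 0

    def add(self, s, a, r, s2, done):
        """Store one real transition, overwriting the oldest when the buffer is full."""
        self.states[self.ptr] = s
        self.actions[self.ptr] = a
        self.rewards[self.ptr] = r
        self.next_states[self.ptr] = s2
        self.dones[self.ptr] = float(done)
        self.ptr = (self.ptr + 1) % self.capacity
        self.size = min(self.size + 1, self.capacity)

    def sample_interpolated(self, batch_size):
        """Draw two batches of real transitions and return their convex mixture."""
        i = self.rng.integers(0, self.size, batch_size)
        j = self.rng.integers(0, self.size, batch_size)
        lam = self.rng.beta(self.alpha, self.alpha, size=(batch_size, 1)).astype(np.float32)
        mix = lambda x, y: lam * x + (1.0 - lam) * y
        return (
            mix(self.states[i], self.states[j]),
            mix(self.actions[i], self.actions[j]),
            mix(self.rewards[i, None], self.rewards[j, None]).squeeze(-1),
            mix(self.next_states[i], self.next_states[j]),
            mix(self.dones[i, None], self.dones[j, None]).squeeze(-1),
        )
```

In such a setup, an off-policy agent (for example, one trained on MuJoCo or DeepMind Control Suite tasks) would call `sample_interpolated` in place of an ordinary minibatch sample, so each gradient update also sees synthetic transitions lying between real experiences.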

Sander said sample inefficiency is one property of deep reinforcement learning that still poses a substantial roadblock to fully integrating deep reinforcement learning algorithms into real-world industry technologies. He added that by improving the sample efficiency of these algorithms through model-centric and data-centric research, the group is taking steps to bring real-world deep reinforcement learning to life across a variety of industry domains. Autonomous vehicles, in particular, is one domain Sander hopes their research will benefit, as the technology stands to save countless lives, as well as time and energy, in the coming years.

“I am fascinated by learning representations, and the idea of optimizing not only machine learning models, but the datasets used to train and tune these models as well,” Sander said. “I believe that by directing our focus to both data-centric and model-centric machine learning methods, we will be able to better replicate and understand the artificial intelligence applications we build. This data-centric interest is largely coupled to my interest in autonomous vehicles, a goal use case of our research.”

Sander graduated with his MEng in June and is now working for the United States Department of Defense as a Lidar Imagery Scientist through the Science, Mathematics, and Research for Transformation (SMART) program. He hopes to return to graduate school to earn a Ph.D. in machine learning or robotics, with the eventual professional goal of working as a research scientist at the intersection of machine learning, robotics, computer vision, and remote sensing.

You can find more information about Ryan Sander’s work below.