If you were working as a law clerk in the federal courthouse in Manhattan in the early 1980s, you might have seen current Microsoft President and Vice Chair Brad Smith edging his way through the doors with a clunky, medium-sized machine known as a personal computer.
To get ahead of the uncertainty inherent in crashes, scientists from MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL) and the Qatar Center for Artificial Intelligence (QCAI) developed a deep learning model that produces very high-resolution crash-risk maps.
Although the idea of using computers to interpret images is not new, the MIT-led group is drawing on an underused resource to improve the interpretive abilities of machine learning algorithms: the vast body of radiology reports that radiologists write to accompany medical images in routine clinical practice.
Because of the fragmented interfaces and tedious data-entry procedures of electronic health records, physicians often spend more time navigating these systems than they do interacting with patients. Researchers at MIT and Beth Israel Deaconess Medical Center are combining machine learning and human-computer interaction to create a better system.