Although the idea of using computers to interpret images is not new, the MIT-led group is drawing on an underused resource—the vast body of radiology reports that accompany medical images, written by radiologists in routine clinical practice—to improve the interpretive abilities of machine learning algorithms.
Because electronic health record systems have fragmented interfaces and tedious data-entry procedures, physicians often spend more time navigating them than interacting with patients. Researchers at MIT and Beth Israel Deaconess Medical Center are combining machine learning and human-computer interaction to build a better system.
Behind the scenes, a second trend is unfolding: algorithms themselves are being improved, and the more efficient the algorithm, the less work the computer has to do, so less computing power is needed.
The existential threat of COVID-19 has highlighted an acute need to develop working therapeutics against emerging health threats. One of the advantages deep learning affords us is the ability to adapt as the landscape unfolds, so long as we can keep pace with the viral threat and access the right data.