“There’s a growing concern about machine-generated fake text, and for a good reason,” says CSAIL PhD student Tal Schuster, lead author on a new paper on their findings. “I had an inkling that something was lacking in the current approaches to identifying fake information by detecting auto-generated text — is auto-generated text always fake? Is human-generated text always real?”
MIT researchers have developed a model that recovers valuable data lost from images and video that have been “collapsed” into lower dimensions.
The model could be used to recreate video from motion-blurred images, or from new types of cameras that capture a person's movement around corners, but only as vague one-dimensional lines. While more testing is needed, the researchers think this approach could someday be used to convert 2D medical images into more informative, but more expensive, 3D body scans, which could benefit medical imaging in poorer nations.
Existing efforts to detect IP hijacks tend to look at specific cases only once they're already in progress. But what if we could predict these incidents in advance by tracing things back to the hijackers themselves?
Assessing placental health is difficult because of the limited information that can be gleaned from imaging. Traditional ultrasounds are cheap, portable, and easy to perform, but they can't always capture enough detail. This has spurred researchers to explore the potential of magnetic resonance imaging (MRI). Even with MRIs, though, the curved surface of the uterus makes images difficult to interpret.