MIT researchers combine deep learning and physics to fix motion-corrupted MRI scans

Compared with other imaging modalities like X-rays or CT scans, MRI provides high-quality soft tissue contrast. Unfortunately, MRI is highly sensitive to motion, with even the smallest of movements resulting in image artifacts. These artifacts put patients at risk of misdiagnosis or inappropriate treatment when critical details are obscured from the physician. But researchers at MIT may have developed a deep learning model capable of motion correction in brain MRI.

“Motion is a common problem in MRI,” explains Nalini Singh, an Abdul Latif Jameel Clinic for Machine Learning in Health (Jameel Clinic)-affiliated PhD student in the Harvard-MIT Program in Health Sciences and Technology (HST) and lead author of the paper. “It’s a relatively slow imaging modality.”

MRI sessions can take anywhere from a few minutes to an hour, depending on the type of images required. Even during the shortest scans, small movements can have dramatic effects on the resulting image. Unlike camera imaging, where motion typically manifests as a localized blur, motion in MRI often leads to artifacts that can corrupt the entire image. Patients may be anesthetized or asked to limit deep breathing in order to minimize motion. However, these measures often cannot be taken in populations particularly susceptible to motion, including children and patients with psychiatric disorders.

The paper, titled “Data Consistent Deep Rigid MRI Motion Correction,” was recently awarded best oral presentation at the Medical Imaging with Deep Learning conference (MIDL) in Nashville, Tennessee. The method computationally constructs a motion-free image from motion-corrupted data without changing anything about the scanning procedure. “Our aim was to combine physics-based modeling and deep learning to get the best of both worlds,” Singh says.

The importance of this combined approach lies in ensuring consistency between the image output and the actual measurements of what is being depicted. Otherwise, the model creates “hallucinations” — images that appear realistic but are physically and spatially inaccurate, potentially worsening outcomes when it comes to diagnosis.
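The paper itself gives the precise formulation; as a rough illustrative sketch only (not the authors' implementation), the idea of a data-consistency term can be shown with a toy NumPy forward model. The assumptions here are simplified and hypothetical: motion is a pure horizontal translation, each k-space row is acquired at one motion state, and all function names are invented for illustration.

```python
import numpy as np

def forward_model(image, shifts_per_row):
    """Toy MRI forward model (hypothetical, for illustration): each k-space
    row is acquired while the object sits at a different horizontal shift.
    A translation in image space is a linear phase ramp in k-space."""
    n = image.shape[0]
    freqs = np.fft.fftfreq(n)
    kspace = np.zeros(image.shape, dtype=complex)
    for row, shift in enumerate(shifts_per_row):
        # Fourier shift theorem: shift the object, then keep only this row
        shifted_k = np.fft.fft2(image) * np.exp(-2j * np.pi * freqs[None, :] * shift)
        kspace[row] = shifted_k[row]
    return kspace

def data_consistency_loss(reconstruction, measured_kspace, shifts_per_row):
    """L2 mismatch between simulated and measured k-space. A reconstruction
    that 'hallucinates' structure inconsistent with the raw data scores high,
    while one that explains the measurements under the motion model scores low."""
    simulated = forward_model(reconstruction, shifts_per_row)
    return np.sum(np.abs(simulated - measured_kspace) ** 2)
```

A reconstruction and motion estimate that jointly reproduce the acquired k-space drive this loss to zero, which is the sense in which the output is constrained to agree with the physical measurements rather than being free to look merely plausible.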

Obtaining an MRI free of motion artifacts, particularly from patients with neurological disorders that cause involuntary movement, such as Alzheimer’s or Parkinson’s disease, would benefit more than just patient outcomes. A study from the University of Washington Department of Radiology estimated that motion affects 15 percent of brain MRIs. Across all types of MRI, motion that leads to repeated scans or imaging sessions to acquire images of sufficient quality for diagnosis results in roughly $115,000 in hospital expenditures per scanner annually.

According to Singh, future work could explore more sophisticated types of head motion as well as motion in other body parts. For instance, fetal MRI suffers from rapid, unpredictable motion that cannot be modeled by simple translations and rotations alone.

“This line of work from Singh and company is the next step in MRI motion correction. Not only is it excellent research work, but I believe these methods will be used in all kinds of clinical cases: children and older folks who can’t sit still in the scanner, pathologies which induce motion, studies of moving tissue, even healthy patients will move in the magnet,” says Daniel Moyer, an assistant professor at Vanderbilt University. “In the future, I think that it likely will be standard practice to process images with something directly descended from this research.”

Co-authors of the paper include Nalini Singh, Neel Dey, Malte Hoffmann, Bruce Fischl, Elfar Adalsteinsson, Robert Frost, Adrian Dalca, and Polina Golland. This research was supported in part by GE Healthcare and by computational hardware provided by the Massachusetts Life Sciences Center. The research team thanks Steve Cauley for helpful discussions. Additional support was provided by NIH NIBIB, NIA, NIMH, NINDS, the Blueprint for Neuroscience Research, a part of the multi-institutional Human Connectome Project, the BRAIN Initiative Cell Census Network, and a Google PhD Fellowship.

