Various studies have shown that the heavy metal gadolinium, often used as a contrast agent in MRI, remains in the body long after exposure. In fact, a 2015 study showed that it can stay in the body for up to 14 years. And a 2019 paper published in The Journal of Medical Toxicology loosely linked gadolinium deposits in the body with a variety of adverse symptoms. These revelations have caused an uproar, leaving many individuals concerned and reluctant to receive contrast-enhanced MRIs, which can be essential for diagnosing certain illnesses and conditions.
There is hope yet: as of last November, a Stanford University-led team has been using artificial intelligence (AI) to lower the amount of gadolinium needed in MRIs while preserving diagnostic integrity. The team, led by electrical engineering PhD student Enhao Gong, used a cutting-edge AI technique known as deep learning to reduce the need for gadolinium in MRI. They trained the algorithm on MR images from 200 patients who had received contrast-enhanced MRI exams, compiling three image sets per patient: pre-contrast (no dose), low-dose (10% of the usual gadolinium dosage), and full-dose (100% of the usual gadolinium dosage).
The algorithm then learned to synthesize full-dose scans using only the zero- and low-dose ones. Specialists who examined the AI-synthesized images noted not only that there was no significant difference between the algorithm-enhanced low-dose images and the MRI scans actually performed with full doses of gadolinium, but also that the algorithm improved overall image quality. The study's findings suggest that, in the future, AI could drastically lower the gadolinium dosage used in MRI scans without compromising the procedure's diagnostic value.
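The idea of learning to map a zero-dose and a low-dose image to a full-dose one can be illustrated with a toy sketch. The study used a deep network on real MR images; the snippet below is only a hypothetical stand-in that fits a per-pixel linear model on synthetic data, where the low-dose signal carries 10% of the true contrast enhancement. All variable names and the data-generation scheme are illustrative assumptions, not details from the study.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the study's data: per pixel, a pre-contrast value,
# a low-dose value (pre-contrast plus 10% of the full enhancement),
# and a full-dose value (pre-contrast plus 100% of the enhancement).
n_pixels = 10_000
pre = rng.uniform(0.0, 1.0, n_pixels)           # zero-dose signal
enhancement = rng.uniform(0.0, 0.5, n_pixels)   # true contrast uptake
low = pre + 0.1 * enhancement + rng.normal(0.0, 0.001, n_pixels)
full = pre + enhancement

# The study trains a deep network; here a per-pixel linear model
# (full ≈ w0*pre + w1*low + b) is a hypothetical simplification
# of "synthesize the full-dose image from the other two".
X = np.column_stack([pre, low, np.ones(n_pixels)])
w, *_ = np.linalg.lstsq(X, full, rcond=None)

pred = X @ w
rmse = np.sqrt(np.mean((pred - full) ** 2))
print(f"RMSE of synthesized full-dose signal: {rmse:.4f}")
```

In this toy setup the model essentially learns to amplify the faint low-dose enhancement (roughly full ≈ 10·low − 9·pre), which hints at why a small residual signal can suffice; the real work uses a deep network to do this robustly on noisy, spatially structured images.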
The team next planned to evaluate their method further in a clinical setting, as well as to see how the algorithm performs with other MRI scanners and with different contrast agents.
*Image courtesy of the Radiological Society of North America.