AI Diagnostic Tool Performs On Par with Radiologists in Detecting Diseases on Chest X-Rays

By MedImaging International staff writers
Posted on 19 Sep 2022
Image: New tool overcomes major hurdle in clinical AI design (Photo courtesy of Unsplash)

Most artificial intelligence (AI) models require labeled datasets during their “training” so they can learn to correctly identify pathologies. This process is especially burdensome for medical image-interpretation tasks since it involves large-scale annotation by human clinicians, which is often expensive and time-consuming. For instance, to label a chest X-ray dataset, expert radiologists would have to look at hundreds of thousands of X-ray images one by one and explicitly annotate each one with the conditions detected. While more recent AI models have tried to address this labeling bottleneck by learning from unlabeled data in a “pre-training” stage, they eventually require fine-tuning on labeled data to achieve high performance. Now, scientists have developed an AI diagnostic tool that can detect diseases on chest X-rays directly from natural-language descriptions contained in accompanying clinical reports.

The new model, named CheXzero, was developed by scientists at Harvard Medical School (Boston, MA, USA) and colleagues at Stanford University (Stanford, CA, USA). It is self-supervised, meaning it learns more independently, without the need for hand-labeled data before or after training. This is considered a major advance in clinical AI design because most current AI models require laborious human annotation of vast amounts of data before the labeled data can be fed into the model for training. CheXzero relies solely on chest X-rays and the English-language notes found in accompanying radiology reports. The model was “trained” on a publicly available dataset containing more than 377,000 chest X-rays and more than 227,000 corresponding clinical notes.

Its performance was then tested on two separate datasets of chest X-rays and corresponding notes collected from two different institutions, one of which was in a different country. This diversity of datasets was meant to ensure that the model performed equally well when exposed to clinical notes that may use different terminology to describe the same finding. In testing, the model successfully identified pathologies that were not explicitly annotated by human clinicians. It outperformed other self-supervised AI tools and performed with accuracy similar to that of human radiologists. The approach, the researchers said, could eventually be applied to imaging modalities well beyond X-rays, including CT scans, MRIs, and echocardiograms.
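The ability to flag findings that were never explicitly labeled is typically achieved by scoring each X-ray against short text prompts for candidate conditions, rather than against a fixed label set. Below is a minimal Python sketch of that idea; the encoder objects and function names are hypothetical stand-ins for illustration, not code from the study.

```python
# Minimal sketch of zero-shot pathology detection with an image-text model.
# The text_encoder callable and the precomputed image_embedding are assumed
# to come from hypothetical, already-trained encoders; this illustrates the
# general technique only, not CheXzero's actual API.
import numpy as np

def zero_shot_probability(image_embedding, text_encoder, condition):
    """Score one condition by comparing the X-ray embedding against a positive
    and a negative text prompt, then taking a softmax over the two similarities."""
    prompts = [condition, f"no {condition}"]
    text_embeddings = text_encoder(prompts)       # shape (2, d), assumed L2-normalized
    sims = text_embeddings @ image_embedding      # cosine similarities, shape (2,)
    exp = np.exp(sims - sims.max())
    probs = exp / exp.sum()
    return probs[0]                               # probability the finding is present
```

Because the candidate condition is expressed as free text, the same scoring step can, in principle, be reused for findings that never appeared as explicit labels during training.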

“We’re living in the early days of next-generation medical AI models that are able to perform flexible tasks by directly learning from text,” said study lead investigator Pranav Rajpurkar, assistant professor of biomedical informatics in the Blavatnik Institute at HMS. “Up until now, most AI models have relied on manual annotation of huge amounts of data - to the tune of 100,000 images - to achieve high performance. Our method needs no such disease-specific annotations.”

“With CheXzero, one can simply feed the model a chest X-ray and corresponding radiology report, and it will learn that the image and the text in the report should be considered as similar—in other words, it learns to match chest X-rays with their accompanying report,” Rajpurkar added. “The model is able to eventually learn how concepts in the unstructured text correspond to visual patterns in the image.”
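In practice, this image-report matching behavior is commonly implemented as a contrastive objective: embeddings of paired X-rays and reports are pulled together while mismatched pairs are pushed apart. The NumPy sketch below shows one generic, CLIP-style form of such a loss, under the assumption that image and text encoders already produce normalized embeddings; it is illustrative only, not the authors' implementation.

```python
# Generic sketch of an image-report contrastive (matching) loss in plain NumPy.
# Only the matching step is shown; the encoders that produce the embeddings
# are assumed to exist elsewhere.
import numpy as np

def contrastive_loss(image_embs, text_embs, temperature=0.07):
    """image_embs, text_embs: (N, d) L2-normalized embeddings for N paired
    X-rays and reports. Each image should score highest against its own report."""
    logits = (image_embs @ text_embs.T) / temperature   # (N, N) similarity matrix
    targets = np.arange(len(logits))                    # matching pairs lie on the diagonal

    def cross_entropy(l, t):
        l = l - l.max(axis=1, keepdims=True)
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(t)), t].mean()

    # Symmetric loss: image-to-text and text-to-image directions.
    return 0.5 * (cross_entropy(logits, targets) + cross_entropy(logits.T, targets))
```

Minimizing a loss of this kind is what lets concepts described in the unstructured report text become associated with the corresponding visual patterns in the images.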

Related Links:
Harvard Medical School 
Stanford University 
