Artificial Intelligence Helps Radiologists Improve Chest X-Ray Interpretation, Finds New Study

By MedImaging International staff writers
Posted on 05 Jul 2021
A new diagnostic accuracy study has shown that radiologists interpret chest X-rays more accurately when assisted by a comprehensive deep-learning model; for most findings, the model alone matched or exceeded radiologist accuracy when judged against high-quality, gold-standard assessment techniques.

Chest X-rays are widely used in clinical practice; however, interpretation can be hindered by human error and a lack of experienced thoracic radiologists. Deep learning has the potential to improve the accuracy of chest X-ray interpretation. Therefore, the researchers aimed to assess the accuracy of radiologists with and without the assistance of a deep-learning model.

In the retrospective study, a deep-learning model was trained on 821,681 images (284,649 patients) from five datasets from Australia, Europe, and the US. The test dataset comprised 2,568 enriched chest X-ray cases from adult patients who had at least one frontal chest X-ray; cases were representative of inpatient, outpatient, and emergency settings. Twenty radiologists reviewed the cases with and without the assistance of the deep-learning model, separated by a three-month washout period. The researchers assessed the change in accuracy of chest X-ray interpretation across 127 clinical findings when the deep-learning model was used as a decision-support tool, calculating the area under the receiver operating characteristic curve (AUC) for each radiologist with and without the model. The team also compared AUCs for the model alone with those of unassisted radiologists. If the lower bound of the adjusted 95% CI of the difference in AUC between the model and the unassisted radiologists was greater than −0·05, the model was considered non-inferior for that finding; if the lower bound exceeded 0, the model was considered superior.
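The per-finding decision rule described above can be sketched in a few lines. This is an illustrative reconstruction of the stated thresholds only, not the authors' code; the function name and the example CI lower bounds are hypothetical.

```python
# Illustrative sketch (not the study's code) of the per-finding decision
# rule: the model's status is determined by the lower bound of the
# adjusted 95% CI of (model AUC - unassisted radiologist AUC).

def classify_finding(ci_lower: float) -> str:
    """Classify a finding from the CI lower bound of the AUC difference."""
    if ci_lower > 0:
        return "superior"       # lower bound exceeds 0
    if ci_lower > -0.05:
        return "non-inferior"   # lower bound exceeds the -0.05 margin
    return "inconclusive"       # criterion not met

# Hypothetical CI lower bounds for three findings:
print(classify_finding(0.02))   # superior
print(classify_finding(-0.03))  # non-inferior
print(classify_finding(-0.10))  # inconclusive
```

The −0·05 margin and the zero threshold are taken directly from the study's stated criteria; everything else here is invented for illustration.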

The researchers found that unassisted radiologists had a macroaveraged AUC of 0·713 (95% CI 0·645–0·785) across the 127 clinical findings, compared with 0·808 (0·763–0·839) when assisted by the model. The deep-learning model statistically significantly improved radiologists' classification accuracy for 102 (80%) of the 127 clinical findings and was statistically non-inferior for 19 (15%); no finding showed a decrease in accuracy when radiologists used the model. The model alone achieved a macroaveraged AUC of 0·957 (0·954–0·959), and its classification was significantly more accurate than that of unassisted radiologists for 117 (94%) of the 124 clinical findings it predicted, and non-inferior for all the rest. The study thus demonstrated the potential of a comprehensive deep-learning model to improve chest X-ray interpretation across a large breadth of clinical practice.
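The "macroaveraged AUC" reported above is simply the unweighted mean of per-finding AUCs. A minimal sketch, using invented toy data and a plain Mann-Whitney formulation of AUC (the probability that a positive case scores higher than a negative one):

```python
# Minimal sketch of macro-averaged AUC: compute AUC per clinical finding,
# then take the unweighted mean across findings. Labels and scores below
# are invented toy data, not from the study.

def auc(labels, scores):
    """AUC via the Mann-Whitney statistic: fraction of (positive, negative)
    pairs where the positive case receives the higher score (ties count 0.5)."""
    pos = [s for l, s in zip(labels, scores) if l == 1]
    neg = [s for l, s in zip(labels, scores) if l == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Two hypothetical findings with toy model scores:
findings = {
    "pneumothorax": ([1, 0, 1, 0], [0.9, 0.2, 0.8, 0.4]),
    "cardiomegaly": ([1, 1, 0, 0], [0.7, 0.4, 0.5, 0.1]),
}
macro_auc = sum(auc(l, s) for l, s in findings.values()) / len(findings)
print(round(macro_auc, 3))  # 0.875
```

In the study this mean was taken over 127 findings (124 for the model alone); the toy example uses two findings purely to show the computation.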

Copyright © 2000-2024 Globetech Media. All rights reserved.