Deep Learning Improves Lung Ultrasound Interpretation

By MedImaging International staff writers
Posted on 31 Jan 2024
Image: Workflow diagram showing real-time lung ultrasound segmentation with U-Net (Photo courtesy of Ultrasonics)

Lung ultrasound (LUS) has become a valuable tool for lung health assessment due to its safety and cost-effectiveness. However, interpreting LUS images is challenging because the technique relies largely on artefacts, which leads to variability among operators and limits its wider adoption. A new study has now shown that deep learning can enhance the real-time interpretation of lung ultrasound: a deep learning model trained on lung ultrasound images was able to segment and identify artefacts in these images, as demonstrated in tests on a phantom model.

In the study, researchers at the University of Leeds (West Yorkshire, UK) employed a deep learning technique for multi-class segmentation in ultrasound images of a lung training phantom. This technique was used to distinguish various objects and artefacts, such as ribs, pleural lines, A-lines, B-lines, and B-line confluences. The team developed a modified version of the U-Net architecture for image segmentation, aiming to strike a balance between the model's speed and accuracy. During training, they applied an augmentation pipeline that combined geometric transformations with ultrasound-specific augmentations to improve the model's ability to generalize to new, unseen data. The trained network was then applied to segment live image feeds from a cart-based point-of-care ultrasound (POCUS) system, using a convex curved-array transducer to image the training phantom and stream frames. Training on 450 ultrasound images took about 12 minutes on a single graphics processing unit.
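The paper's exact network is not reproduced here, but the description above (a compact, modified U-Net that trades some capacity for speed, takes single-channel ultrasound frames, and emits per-pixel class labels) maps onto a sketch like the following. The layer widths, class count, and input size are illustrative assumptions, not the authors' configuration.

```python
# Minimal sketch of a small U-Net-style multi-class segmentation network in
# PyTorch. All sizes and the class list are assumptions for illustration.
import torch
import torch.nn as nn

N_CLASSES = 6  # assumed: background, rib, pleural line, A-line, B-line, B-line confluence

def conv_block(c_in, c_out):
    # Two 3x3 convolutions with batch norm and ReLU, as in the original U-Net.
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
        nn.Conv2d(c_out, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True),
    )

class SmallUNet(nn.Module):
    def __init__(self, n_classes=N_CLASSES):
        super().__init__()
        self.enc1 = conv_block(1, 16)   # grayscale ultrasound input
        self.enc2 = conv_block(16, 32)
        self.enc3 = conv_block(32, 64)
        self.pool = nn.MaxPool2d(2)
        self.up2 = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec2 = conv_block(64, 32)
        self.up1 = nn.ConvTranspose2d(32, 16, 2, stride=2)
        self.dec1 = conv_block(32, 16)
        self.head = nn.Conv2d(16, n_classes, 1)  # per-pixel class logits

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        e3 = self.enc3(self.pool(e2))
        d2 = self.dec2(torch.cat([self.up2(e3), e2], dim=1))
        d1 = self.dec1(torch.cat([self.up1(d2), e1], dim=1))
        return self.head(d1)

# Example: segment one 256x256 frame and take the per-pixel class index.
model = SmallUNet().eval()
with torch.no_grad():
    logits = model(torch.rand(1, 1, 256, 256))
    mask = logits.argmax(dim=1)  # (1, 256, 256) class-index map
```

In a live setting, the same forward pass would simply be run on each frame streamed from the ultrasound system, which is what makes a small encoder-decoder a sensible trade-off between speed and accuracy.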

The model demonstrated a high accuracy rate of 95.7%, with moderate-to-high Dice similarity coefficient scores. Real-time application of the model at up to 33.4 frames per second significantly enhanced the visualization of lung ultrasound images. Furthermore, the team evaluated the pixel-wise correlation between manually labeled and model-predicted segmentation masks. Through a normalized confusion matrix, they noted that the model accurately predicted 86.8% of pixels labeled as ribs, 85.4% for the pleural line, and 72.2% for B-line confluence. However, it correctly predicted only 57.7% of A-line and 57.9% of B-line pixels.
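The quantities quoted above, per-class Dice similarity coefficients and a row-normalized confusion matrix giving the fraction of correctly predicted pixels for each labeled class, can be computed from a predicted mask and a manually labeled mask roughly as follows. The class list and ordering are assumptions for illustration.

```python
# Sketch of per-class Dice scores and a row-normalized confusion matrix
# computed from two integer class-index masks of identical shape.
import numpy as np

CLASSES = ["background", "rib", "pleural_line", "A_line", "B_line", "B_line_confluence"]

def dice_per_class(pred, truth, n_classes=len(CLASSES)):
    """Dice = 2*|intersection| / (|pred_c| + |truth_c|) for each class c."""
    scores = {}
    for c in range(n_classes):
        p, t = (pred == c), (truth == c)
        denom = p.sum() + t.sum()
        scores[CLASSES[c]] = 2.0 * np.logical_and(p, t).sum() / denom if denom else np.nan
    return scores

def normalized_confusion(pred, truth, n_classes=len(CLASSES)):
    """Rows = true class, columns = predicted class; each row sums to 1."""
    cm = np.zeros((n_classes, n_classes))
    for t in range(n_classes):
        sel = pred[truth == t]
        for p in range(n_classes):
            cm[t, p] = (sel == p).sum()
    row_sums = cm.sum(axis=1, keepdims=True)
    return np.divide(cm, row_sums, out=np.zeros_like(cm), where=row_sums > 0)

# Example with random masks (real masks come from the model and manual labels).
rng = np.random.default_rng(0)
truth = rng.integers(0, len(CLASSES), size=(256, 256))
pred = rng.integers(0, len(CLASSES), size=(256, 256))
print(dice_per_class(pred, truth))
print(np.diag(normalized_confusion(pred, truth)))  # per-class pixel recall
```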

Additionally, the researchers employed transfer learning with their model, using knowledge from one dataset to improve training on a related dataset. This approach yielded Dice similarity coefficients of 0.48 for simple pleural effusion, 0.32 for lung consolidation, and 0.25 for the pleural line. The findings suggest that this model could aid in lung ultrasound training and help bridge skill gaps. The researchers have also proposed a semi-quantitative measure, the B-line Artifact Score, which estimates the percentage of an intercostal space occupied by B-lines. This measure could potentially be linked to the severity of lung conditions.
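The B-line Artifact Score is described only as an estimate of the percentage of an intercostal space occupied by B-lines. One plausible reading, sketched below under that assumption, measures the fraction of image columns within a given intercostal region that contain B-line or B-line-confluence pixels; the authors' precise definition may differ.

```python
# Hedged sketch of a B-line Artifact Score: the percentage of an intercostal
# space occupied by B-line artefact, read here as the fraction of columns in
# the intercostal region containing B-line or B-line-confluence pixels.
import numpy as np

B_LINE, B_CONFLUENCE = 4, 5  # assumed class indices, matching the earlier sketch

def b_line_artifact_score(mask, intercostal_cols):
    """mask: 2-D class-index segmentation mask.
    intercostal_cols: (start, stop) column range of one intercostal space."""
    lo, hi = intercostal_cols
    region = mask[:, lo:hi]
    is_b = np.isin(region, (B_LINE, B_CONFLUENCE))
    cols_with_b = is_b.any(axis=0)      # does each column contain B-line artefact?
    return 100.0 * cols_with_b.mean()   # percentage of the space occupied

# Example on a toy mask with a synthetic B-line band.
mask = np.zeros((256, 256), dtype=int)
mask[:, 120:140] = B_LINE
print(b_line_artifact_score(mask, (100, 180)))  # -> 25.0
```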

“Future work should consider the translation of these methods to clinical data, considering transfer learning as a viable method to build models which can assist in the interpretation of lung ultrasound and reduce inter-operator variability associated with this subjective imaging technique,” the researchers stated.

Related Links:
University of Leeds
