
Self-Learning Algorithms Could Improve AI-Based Evaluation of Medical Imaging Data

By MedImaging International staff writers
Posted on 22 Dec 2020
Image: nnU-Net handles a broad variety of datasets and target image properties (Photo courtesy of Isensee et al. / Nature Methods)
Scientists have presented a new method for configuring self-learning algorithms for a large number of different imaging datasets – without the need for specialist knowledge or substantial computing power.

In the evaluation of medical imaging data, artificial intelligence (AI) promises to provide support to physicians and help relieve their workload, particularly in the field of oncology. Yet whether the size of a brain tumor needs to be measured to plan treatment, or the regression of lung metastases needs to be documented during the course of radiotherapy, computers first have to learn how to interpret the three-dimensional imaging datasets from computed tomography (CT) or magnetic resonance imaging (MRI). They must be able to decide which pixels – or voxels, in three-dimensional data – belong to the tumor and which do not.

AI experts refer to the process of distinguishing between the two as 'semantic segmentation'. For each individual task – for example, recognizing a renal carcinoma on CT images or breast cancer on MRI images – scientists need to develop special algorithms that can distinguish between tumor and non-tumor tissue and can make predictions. Imaging datasets for which physicians have already labeled tumors, healthy tissue, and other important anatomical structures by hand are used as training material for machine learning. It takes experience and specialized knowledge to develop such segmentation algorithms.
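The pixel-wise decision described above can be sketched in miniature. The helper names (`segment`, `dice`) and the 4×4 probability patch below are invented for illustration and are not DKFZ code; a real pipeline operates on full 3D volumes with a learned model, but the final step – thresholding per-pixel predictions into a tumor mask and scoring its overlap against a physician's hand labels with the Dice coefficient, the standard metric in segmentation challenges – looks the same at toy scale:

```python
def segment(probabilities, threshold=0.5):
    """Turn per-pixel tumor probabilities into a binary tumor mask."""
    return [[1 if p >= threshold else 0 for p in row] for row in probabilities]

def dice(pred, ref):
    """Dice coefficient: 2*|A intersect B| / (|A| + |B|); 1.0 = perfect overlap."""
    inter = sum(p and r for pr, rr in zip(pred, ref) for p, r in zip(pr, rr))
    total = sum(v for mask in (pred, ref) for row in mask for v in row)
    return 2.0 * inter / total if total else 1.0

# Hypothetical 4x4 image patch: model probabilities vs. a hand-labeled reference.
probs = [[0.1, 0.2, 0.1, 0.0],
         [0.2, 0.9, 0.8, 0.1],
         [0.1, 0.7, 0.9, 0.2],
         [0.0, 0.1, 0.2, 0.1]]
label = [[0, 0, 0, 0],
         [0, 1, 1, 0],
         [0, 1, 1, 1],
         [0, 0, 0, 0]]

mask = segment(probs)
print(dice(mask, label))  # overlap score between prediction and hand labels
```

In this toy case the model misses one labeled pixel, so the Dice score falls just short of 1.0; configuring a model so that this score is high across unseen patients is precisely the task nnU-Net automates.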

Scientists from the German Cancer Research Center (DKFZ; Heidelberg, Germany) have now developed a method that adapts dynamically and completely automatically to any kind of imaging dataset, thus allowing even researchers with limited prior expertise to configure self-learning algorithms for specific tasks. The method, known as nnU-Net, can deal with a broad range of imaging data: in addition to conventional imaging methods such as CT and MRI, it can also process images from electron and fluorescence microscopy. Using nnU-Net, the DKFZ researchers obtained the best results in 33 out of 53 different segmentation tasks in international competitions, despite competing against highly specialized algorithms developed by experts for individual questions. The team is making nnU-Net available as an open-source tool that can be downloaded free of charge.

So far, AI-based evaluation of medical imaging data has mainly been applied in research contexts and has not yet been broadly used in the routine clinical care of cancer patients. However, medical informatics specialists and physicians see considerable potential for its use, for example for highly repetitive tasks, such as those that often need to be performed as part of large-scale clinical studies. nnU-Net can help harness this potential, according to the scientists.

"nnU-Net can be used immediately, can be trained using imaging datasets, and can perform special tasks – without requiring any special expertise in computer science or any particularly significant computing power," explained Klaus Maier-Hein.


Related Links:
German Cancer Research Center (DKFZ)

Copyright © 2000-2024 Globetech Media. All rights reserved.