
Self-Learning Algorithms Could Improve AI-Based Evaluation of Medical Imaging Data

By MedImaging International staff writers
Posted on 22 Dec 2020
Image: nnU-Net handles a broad variety of datasets and target image properties (Photo courtesy of Isensee et al. / Nature Methods)
Scientists have presented a new method for configuring self-learning algorithms for a wide range of different imaging datasets, without requiring specialist knowledge or substantial computing power.

In the evaluation of medical imaging data, artificial intelligence (AI) promises to provide support to physicians and help relieve their workload, particularly in the field of oncology. Yet regardless of whether the size of a brain tumor needs to be measured in order to plan treatment or the regression of lung metastases needs to be documented during the course of radiotherapy, computers first have to learn how to interpret the three-dimensional imaging datasets from computed tomography (CT) or magnetic resonance imaging (MRI). They must be able to decide which pixels belong to the tumor and which do not.

AI experts refer to this process of distinguishing between the two as 'semantic segmentation'. For each individual task – for example, recognizing a renal carcinoma on CT images or breast cancer on MRI images – scientists need to develop special algorithms that can distinguish tumor from non-tumor tissue and make predictions on unseen images. Imaging datasets in which physicians have already labeled tumors, healthy tissue, and other important anatomical structures by hand serve as the training material for machine learning. Developing such segmentation algorithms takes experience and specialized knowledge.
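In segmentation benchmarks, agreement between a predicted mask and the hand-labeled reference is commonly scored with the Dice coefficient, which measures the overlap of the two pixel sets. The function and toy arrays below are a minimal illustration of this idea, not code from nnU-Net:

```python
import numpy as np

def dice_score(pred: np.ndarray, ref: np.ndarray) -> float:
    """Dice overlap between a predicted and a reference binary mask."""
    pred = pred.astype(bool)
    ref = ref.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    denom = pred.sum() + ref.sum()
    # By convention, two empty masks agree perfectly.
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Toy 2D "slices": 1 marks tumor pixels, 0 marks background.
reference = np.array([[0, 1, 1],
                      [0, 1, 0],
                      [0, 0, 0]])
prediction = np.array([[0, 1, 0],
                       [0, 1, 0],
                       [0, 0, 0]])
print(dice_score(prediction, reference))  # 2*2 / (2+3) = 0.8
```

A score of 1.0 means the prediction matches the expert labels exactly; the international competitions mentioned below rank submissions using metrics of this kind.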

Scientists from the German Cancer Research Center (DKFZ; Heidelberg, Germany) have now developed a method that adapts dynamically and completely automatically to any kind of imaging dataset, allowing even researchers with limited prior expertise to configure self-learning algorithms for specific tasks. The method, known as nnU-Net, can deal with a broad range of imaging data: in addition to conventional imaging methods such as CT and MRI, it can also process images from electron and fluorescence microscopy. Using nnU-Net, the DKFZ researchers obtained the best results in 33 out of 53 different segmentation tasks in international competitions, despite competing against highly specialized algorithms developed by experts for individual questions. The team is making nnU-Net available as an open-source tool that can be downloaded free of charge.
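The core idea behind this automatic adaptation is that nnU-Net derives its training configuration from measured properties of the dataset itself (a "dataset fingerprint") rather than from hand-tuning. The sketch below illustrates that principle only; the class, function, and heuristics are invented for this example and are deliberately simpler than the rules published by the DKFZ team:

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class DatasetFingerprint:
    """Simplified summary of a segmentation dataset (illustrative only)."""
    median_shape: Tuple[int, int, int]          # median image size in voxels (z, y, x)
    median_spacing: Tuple[float, float, float]  # voxel spacing in mm
    modality: str                               # e.g. "CT" or "MRI"

def configure(fp: DatasetFingerprint, gpu_voxel_budget: int = 128**3) -> dict:
    """Derive a training configuration from the fingerprint.
    These heuristics are a toy stand-in for nnU-Net's actual rules."""
    # Start from the median image shape, then shrink the patch until it
    # fits the GPU memory budget, keeping sizes divisible by 8 for pooling.
    patch = list(fp.median_shape)
    while patch[0] * patch[1] * patch[2] > gpu_voxel_budget:
        i = patch.index(max(patch))  # shrink the largest axis first
        patch[i] = max(8, patch[i] - 8)
    # CT intensities are comparable across scanners; MRI typically needs
    # per-image normalization.
    normalization = "global_zscore" if fp.modality == "CT" else "per_image_zscore"
    return {"patch_size": tuple(patch),
            "target_spacing": fp.median_spacing,
            "normalization": normalization}

# A hypothetical abdominal-CT dataset: large slices, anisotropic spacing.
cfg = configure(DatasetFingerprint((160, 512, 512), (3.0, 0.8, 0.8), "CT"))
print(cfg["patch_size"], cfg["normalization"])
```

Because every decision is a function of the fingerprint, the same pipeline can reconfigure itself for a new dataset with no manual intervention, which is what lets non-experts apply it to their own tasks.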

So far, AI-based evaluation of medical imaging data has mainly been applied in research contexts and has not yet been broadly used in the routine clinical care of cancer patients. However, medical informatics specialists and physicians see considerable potential for its use, for example for highly repetitive tasks, such as those that often need to be performed as part of large-scale clinical studies. nnU-Net can help harness this potential, according to the scientists.

"nnU-Net can be used immediately, can be trained using imaging datasets, and can perform special tasks – without requiring any special expertise in computer science or any particularly significant computing power," explained Klaus Maier-Hein.

Related Links:
German Cancer Research Center (DKFZ)



Copyright © 2000-2021 Globetech Media. All rights reserved.