Image: The Wision AI algorithm highlights polyps on the monitor, enhancing detection (bottom) (Photo courtesy of Shanghai Wision AI).
Researchers at Shanghai Wision AI Co., Ltd. (Shanghai, China), a developer of computer-aided diagnostic algorithms and systems to improve the accuracy and effectiveness of diagnostic imaging, have announced results of a study validating a novel machine-learning algorithm that improves detection of adenomatous polyps during colonoscopy. The AI algorithm is built on the same network architecture that is used to develop self-driving cars and is designed to enable “self-driving” in colonoscopy procedures.
The Wision AI algorithm was validated on large, prospectively collected datasets that were gathered independently of the training data and were several-fold larger than the training dataset. This more rigorous validation approach is intended to give a more reliable estimate of how the algorithm will perform in real-world clinical settings.
The algorithm was developed using 5,545 images (65.5% containing polyps and 34.5% without polyps) from the colonoscopy reports of 1,290 patients. Experienced endoscopists annotated the presence of polyps in all images used in the development dataset, and the algorithm was then validated on four independent datasets: two for image analysis (A and B) and two for video analysis (C and D). According to the study’s key findings, validation on dataset A, which included 27,113 images from patients undergoing colonoscopy at the Endoscopy Center of Sichuan Provincial People’s Hospital, found a per-image sensitivity of 94.4% and a per-image specificity of 95.9%. In a subset of 1,280 images containing polyps that are typically hard to detect, the per-image sensitivity was 91.7%.
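The per-image sensitivity and specificity figures above follow the standard definitions from a confusion matrix. As a minimal sketch (the counts below are illustrative round numbers, not the study's actual tallies):

```python
def sensitivity(true_positives: int, false_negatives: int) -> float:
    # Per-image sensitivity: fraction of polyp-containing images
    # that the algorithm correctly flags.
    return true_positives / (true_positives + false_negatives)

def specificity(true_negatives: int, false_positives: int) -> float:
    # Per-image specificity: fraction of polyp-free images
    # that the algorithm correctly leaves unflagged.
    return true_negatives / (true_negatives + false_positives)

# Illustrative counts only (assumed for the example, not from the paper):
# 1,000 polyp images of which 944 are detected, and
# 1,000 clean images of which 959 are correctly passed.
print(f"{sensitivity(944, 56):.1%}")   # 94.4%
print(f"{specificity(959, 41):.1%}")   # 95.9%
```

A per-polyp sensitivity, by contrast, counts a polyp as detected if it is flagged in at least one frame, which is why the video results below can reach 100% per polyp while the per-frame figure is lower.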
Validation on dataset B, based on a public database of 612 colonoscopy images acquired at the Hospital Clinic of Barcelona, found a per-image sensitivity of 88.2%; this dataset extended the validation to a broader patient population. Validation on dataset C, a series of colonoscopy videos containing 138 polyps, found a per-image sensitivity of 91.6% across 60,914 video frames and a per-polyp sensitivity of 100%. Validation on dataset D, which contained 54 colonoscopy videos without any polyps, found a per-image specificity of 95.4% across 1,072,483 frames. The total processing time for each image frame was 76.8 milliseconds, including preprocessing and display times before and after execution of the deep-learning algorithm. Implemented in a real-time system, the algorithm sustained a processing rate of 30 frames per second on Nvidia Titan X GPUs.
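A quick back-of-the-envelope check puts the two timing figures in context: a 76.8 ms end-to-end latency per frame exceeds the ~33 ms budget of a 30 fps stream, so the real-time system presumably overlaps stages (e.g. preprocessing the next frame while the GPU runs inference on the current one) rather than processing frames strictly one after another; the article does not spell out the mechanism, so that is an assumption.

```python
# Sanity-check the reported timings (arithmetic only, no claims beyond the article).
frame_time_ms = 76.8                  # reported total per-frame processing time
serial_fps = 1000 / frame_time_ms     # throughput if frames were handled serially
print(f"{serial_fps:.1f} fps")        # ~13.0 fps

target_fps = 30                       # reported real-time rate on Titan X GPUs
budget_ms = 1000 / target_fps         # per-frame time budget at that rate
print(f"{budget_ms:.1f} ms")          # ~33.3 ms
```

The gap between ~13 fps serial throughput and the reported 30 fps is what motivates the pipelining assumption above.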
Based on these findings, the researchers concluded that the automatic polyp-detection system based on deep learning achieves high overall performance in both still colonoscopy images and real-time video.
“The results of this study demonstrate the power of our rigorous approach to developing deep-learning algorithms, which utilizes distinct datasets for training and validation and results in high levels of specificity and sensitivity that have the potential to improve diagnostic screening methods that are known to reduce disease risk, improve health outcomes and save lives,” said JingJia Liu, CEO at Wision AI.
Shanghai Wision AI