
Radiologists outperform current AI systems for screening-mammography image analysis

Journal: The BMJ
Reuters Health - 08/09/2021 - Radiologists perform better than current artificial intelligence (AI) systems for detecting breast cancer on screening mammography, according to results of a systematic review of the literature commissioned by the U.K. National Screening Committee.

"Current evidence on the use of AI systems in breast cancer screening is a long way from having the quality and quantity required for its implementation into clinical practice," the reviewers, from the University of Warwick in Coventry, report in The BMJ.

Some prior research has suggested that AI systems outperform radiologists in reading mammograms and might soon replace radiologists. Yet, a recent "scoping" review of 23 studies on AI for the early detection of breast cancer highlighted "evidence gaps and methodological concerns about published studies," Dr. Sian Taylor-Phillips and colleagues note in their paper.

They reviewed 12 recent studies reporting test accuracy of AI algorithms, alone or alongside radiologists, to detect cancer in digital screening mammograms. Collectively, the studies involved more than 131,000 women in Sweden, the United States, Germany, the Netherlands and Spain.

Overall, the quality of the methods used in the studies was "poor," the researchers report.

Three large studies compared decisions made by AI systems versus radiologists in 79,910 women, including 1,878 with screen-detected cancer or interval cancer within 12 months of screening.

Thirty-four (94%) of the 36 AI systems tested in these three studies were "less accurate than a single radiologist, and all were less accurate than consensus of two or more radiologists," the study team reports.

Five smaller studies evaluated five AI systems in 1,086 women with 520 cancers. All of the AI systems were more accurate than a single radiologist, but these studies were at "high risk of bias" and their "promising results" have not been replicated in larger studies, the team cautions.

The researchers acknowledge that AI algorithms are constantly improving and "reported assessments of AI systems might be out of date by the time of study publication."

They say "well designed comparative test accuracy studies, randomized controlled trials, and cohort studies in large screening populations are needed which evaluate commercially available AI systems in combination with radiologists."

SOURCE: https://bit.ly/3jQ67qG The BMJ, online September 1, 2021.

By Reuters Staff
