Peter A. Humphrey, MD, PhD, discusses the current standing of artificial intelligence in prostate cancer pathology.
Interest in artificial intelligence (AI) in prostate cancer pathology has picked up speed since initial attempts at computer-aided diagnosis of the disease more than a decade ago, according to Peter A. Humphrey, MD, PhD. However, before the technology can be introduced into routine clinical practice, investigators must first demonstrate that the benefits AI may afford outweigh its current challenges.1-3
In a presentation delivered at the 17th Annual Interdisciplinary Prostate Cancer Congress® and Other Genitourinary Malignancies, an event hosted by Physicians’ Education Resource® (PER®) in New York, New York, Humphrey discussed the potential advantages and limitations of AI in prostate cancer. Humphrey is a professor of pathology at Yale School of Medicine and director of genitourinary pathology at Yale Medicine in New Haven, Connecticut.
One of several investigations that sought to define the capability of AI in prostate cancer diagnosis and grading was the Prostate cANcer graDe Assessment (PANDA) challenge. Historically, Gleason grading, which is performed by light microscopic interpretation of patterns of prostate cancer growth, has been the most powerful indicator of prognosis. However, it is inherently subjective, leaving room for error, Humphrey explained.4
To determine whether AI could outperform standard identification methods, investigators compiled 12,625 whole-slide images of prostate biopsies from 6 sites: 10,616 were used for model development, 393 for performance evaluation during the competition phase, and 545 and 1,071 for internal and external validation, respectively.
During the competition phase, 1,290 developers from 65 countries submitted algorithms, 15 of which were selected based on performance. The results showed that the selected algorithms achieved high average agreement with uropathologists, with a quadratically weighted kappa of 0.862 (95% CI, 0.840-0.884) in the United States cohort and 0.868 (95% CI, 0.835-0.900) in the European Union cohort. Additionally, the sensitivity for cancer detection ranged from 97.7% to 98.6%, with specificities falling between 75.2% and 84.3%. However, a high rate of false positives did occur, Humphrey explained, noting that the main algorithmic error was misclassifying benign cases as prostatic adenocarcinoma, leading to overdiagnosis.4
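For readers unfamiliar with these metrics, the short Python sketch below shows how quadratically weighted kappa, sensitivity, and specificity can be computed with scikit-learn. The arrays are illustrative placeholders, not PANDA data, and the integer encoding of grade groups is an assumption made for the example.

```python
# Illustrative computation of the metrics cited above, using scikit-learn.
# The example arrays are hypothetical; grade groups are encoded as integers
# (0 = benign, 1-5 = grade group) purely for demonstration.
from sklearn.metrics import cohen_kappa_score, confusion_matrix

pathologist = [0, 1, 2, 3, 5, 0, 4, 2]  # hypothetical reference grades
algorithm   = [0, 1, 3, 3, 5, 1, 4, 2]  # hypothetical AI-assigned grades

# Quadratically weighted kappa penalizes disagreements by the squared
# distance between grades, so a 1-vs-3 error costs more than a 1-vs-2 error.
qwk = cohen_kappa_score(pathologist, algorithm, weights="quadratic")

# Cancer detection treated as binary: any grade group above 0 vs benign.
y_true = [int(g > 0) for g in pathologist]
y_pred = [int(g > 0) for g in algorithm]
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
sensitivity = tp / (tp + fn)
specificity = tn / (tn + fp)  # false positives (benign called cancer) lower this

print(f"quadratically weighted kappa = {qwk:.3f}")
print(f"sensitivity = {sensitivity:.1%}, specificity = {specificity:.1%}")
```

The quadratic weighting is why the metric suits ordinal scales such as Gleason grade groups: near-miss grades are penalized far less than gross misclassifications.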
Study authors concluded that the AI prostate cancer grading algorithms achieved pathologist-level performance across intercontinental and multinational cohorts, warranting further study in prospective clinical trials.
However, in an article published in Current Opinion in Urology, authors identified 3 categories of overarching challenges that will continue to plague the development of AI if not properly addressed: conceptual, technical, and ethical. Conceptual concerns centered on defining the functions AI can perform in routine clinical practice and the level of diagnostic autonomy AI tools should have. Technical concerns had to do with whether laboratories can adopt sufficient infrastructure to support AI’s use and whether pathologists can learn to use the technology responsibly. The last and perhaps most significant category was ethical, in which study authors questioned when AI-based pathology will prove cost-effective and whether it can reduce diagnostic inequality.5
Beyond diagnosis and grading, AI has started to prove capable of informing personalized therapy through multimodal deep learning, with models trained and validated using data from 5 phase 3 randomized NRG/RTOG trials. As part of the analysis, a total of 16,204 digitized needle biopsy slides were used, along with clinical data from 5,654 patients. At a median follow-up of 11.4 years, investigators showed an improvement in prognostic performance ranging from 9.2% to 14.6% compared with National Comprehensive Cancer Network risk groups, as well as a predictive benefit with androgen deprivation therapy in model-positive patients, reducing the risk of distant metastasis vs radiotherapy alone in the NRG/RTOG 9408 validation cohort.6,7
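To illustrate the general multimodal idea, the minimal PyTorch sketch below fuses an image embedding from a slide encoder with clinical covariates to produce a single risk score. This is a generic illustration, not the published model; the class name, layer sizes, and feature choices are all assumptions made for the example.

```python
# Generic sketch of multimodal fusion: image features from digitized biopsy
# slides are concatenated with clinical covariates and passed through a small
# network that outputs one risk score. Not the published NRG/RTOG model;
# every dimension and name here is an assumption for illustration.
import torch
import torch.nn as nn

class MultimodalRiskModel(nn.Module):
    def __init__(self, image_dim=512, clinical_dim=8, hidden_dim=128):
        super().__init__()
        # image_dim: embedding size from an assumed pretrained slide encoder
        # clinical_dim: e.g., PSA, Gleason score, T stage, age (assumed features)
        self.fusion = nn.Sequential(
            nn.Linear(image_dim + clinical_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 1),  # single score, e.g., distant-metastasis risk
        )

    def forward(self, image_embedding, clinical_features):
        # Concatenate the two modalities along the feature dimension
        fused = torch.cat([image_embedding, clinical_features], dim=-1)
        return self.fusion(fused)

model = MultimodalRiskModel()
risk = model(torch.randn(4, 512), torch.randn(4, 8))  # batch of 4 patients
print(risk.shape)  # torch.Size([4, 1])
```

The appeal of this design is that neither modality alone has to carry the full prognostic signal: histology and clinical variables each contribute, and the fusion layer learns how to weigh them.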
Although the field has a long way to go in establishing proper parameters around AI’s use, with guidelines not yet available in pathology, Humphrey highlighted a recent statement from radiology societies in the US, Canada, Europe, Australia, and New Zealand, published in the Journal of Medical Imaging and Radiation Oncology, on developing, purchasing, implementing, and monitoring AI tools in radiology. The statement, in addition to addressing the issues surrounding the use of AI in radiology, is meant to provide practical considerations for the technology’s use in recognition that it will affect health care one way or another.8
In conclusion, Humphrey summarized his hope for the field, citing authors of the PANDA challenge: “We foresee a future where pathologists can be assisted by algorithms such as these in the form of a digital colleague.”4