Oncology Live®
It is difficult to browse a major medical journal these days and not find an article, commentary, or editorial that discusses the objectively rather profound implications for clinical science and health care delivery resulting from simply stunning advances in computer technology in the arena of artificial intelligence (AI).
These discussions range from the somewhat frivolous, including how an AI program scores on a medical licensing examination, to algorithms that might be employed to improve interpretation of radiographic images or pathology material, to the potential for impressively written but somewhat inaccurate or completely erroneous (and possibly harmful) information widely disseminated on various social media platforms. It is important to note that such misinformation may be inadvertent, perhaps resulting from use of a less well-developed AI program, or quite deliberate with the overt intent to create societal dysfunction.1
While it is likely that both patients and physicians will be attracted to the potential ability of AI to quickly provide answers to specific and often complex disease- and management-related questions, appropriate concerns have been raised in the medical literature regarding privacy considerations, the current lack of regulatory or quality oversight, and the potential for liability risks.2-5
Science-related organizations, from academic/peer-reviewed journals to funding agencies, have had to rather quickly develop policies on whether AI-assisted publications,6-8 grant writing, and peer review will be permitted, and it is likely that there will be further developments in these arenas as the quality of AI products improves and it becomes more difficult to distinguish human from nonhuman productions.
Turning to the domain of clinical medicine, concern has been raised (supported by reports in the peer-reviewed literature) that when clinicians employ, or rely upon, AI decision support, their non–AI-assisted diagnostic skills may suffer, especially if great care is not taken to appreciate the limitations of the data available to establish the decision-support algorithms.9
Finally, it should be noted that other, even more worrisome, concerns have been raised about AI strategies within the scientific and broader societal domains, including the chilling suggestion that such highly sophisticated tools could be employed to create bioweapons.10
One of the more concerning aspects of the accelerating complexity of AI products is that even their developers admit they do not fully understand the inner workings and self-evolving potential of what they have created.
In a recently published, highly provocative book, The Age of AI: And Our Human Future, the authors note the following:11
"At the same time, a network platform’s AI follows a logic that is nonhuman and, in many ways, inscrutable to humans. For example, in practice, when an AI-enabled network platform is assessing an image, social media post, or search query, humans may not understand precisely
how the AI operates in that particular situation. While Google’s engineers know that their AI-enabled search func- tion produced clearer results than it would have without AI, they could not always explain why one particular result was ranked higher than another. To a large extent, AI is judged by the utility of its results, not the process used to reach those results. This signals a shift in priorities from earlier eras, when each stem in a mental or mechanical process was either experienced by a human being (a thought, a conver- sation, an administrative process) or could be paused, inspected, and repeated by human beings.”
Yet even with open acknowledgment of the appropriate concerns and perhaps frightening unknowns highlighted above, there is clearly legitimate and highly meaningful potential for AI to assist clinicians by providing critical support in multiple decision-making processes within clinical medicine in general and oncology in particular.
The words in the preceding sentence have been carefully selected. There is no intent here to suggest that a commercial AI product will ever be responsible for making a pathological diagnosis of cancer or for independently determining the final report of a radiographic study. Rather, a legitimate goal will be to effectively assist responsible clinicians (eg, pathologists, radiologists, infectious disease specialists, intensivists) as they strive to deliver efficient, medically optimal, and error-free care.
Clinical medicine is an art as much as it is a science, and the subjective nature of symptoms and individual psychological responses to illness can be as important in diagnostic and treatment decisions as objective findings; thus, AI may complement the role of the clinician but, in the opinion of this commentator, will almost certainly never replace the well-trained and experienced clinician.12
However, despite the efforts of even the most conscientious clinicians, errors in diagnosis are made, with the potential for serious harm, as noted in a recent commentary regarding misdiagnoses in patients being evaluated during emergency department visits.13 One can envision the use of AI to assist clinicians in triaging, diagnosing, and managing the multiple potential scenarios, from the most serious to the most mundane, encountered in this busy health care environment.
While many examples of the practical use of AI within oncology can be highlighted, particularly within the realms of anatomic pathology14 and radiographic screening,15 a highly relevant future challenge for AI will be to help reduce the recognized risk for the misdiagnosis of cancer.16