Press Release

Oncology Researchers Raise Ethics Concerns Posed By Patient-Facing Artificial Intelligence

In a new paper in JCO Oncology Practice, bioethics researchers at Dana-Farber Cancer Institute call on medical societies, government leaders, clinicians, and researchers to work together to ensure AI-driven healthcare preserves patient autonomy and respects human dignity.

Amar Kelkar, MD

BOSTON – Ready or not, patients with cancer are increasingly likely to find themselves interacting with artificial intelligence technologies to schedule appointments, monitor their health, learn about their disease and its treatment, find support, and more. In a new paper in JCO Oncology Practice, bioethics researchers at Dana-Farber Cancer Institute call on medical societies, government leaders, clinicians, and researchers to work together to ensure AI-driven healthcare preserves patient autonomy and respects human dignity.

The authors note that while AI has immense potential for expanding access to cancer care and improving the ability to detect, diagnose, and treat cancer, medical professionals and technology developers need to act now to prevent the technology from depersonalizing patient care and eroding relationships between patients and caregivers. While previous papers on AI in medicine have focused on its implications for oncology clinicians and AI researchers, the new paper is one of the first to address concerns about AI embedded in technology used by patients with cancer.

"To date, there has been little formal consideration of the impact of patient interactions with AI programs that haven't been vetted by clinicians or regulatory organizations," says the paper's lead author, Amar Kelkar, MD, a stem cell transplantation physician at Dana-Farber Cancer Institute. "We wanted to explore the ethical challenges of patient-facing AI in cancer, with a particular concern for its potential implications for human dignity."

As oncology clinicians and researchers have begun to harness AI – to help diagnose cancer and track tumor growth, predict treatment outcomes, or find patterns of occurrence – direct interface between patients and the technology has so far been relatively limited. That is expected to change.

The authors focus on three areas in which patients are likely to engage with AI now or in the future. Telehealth, currently a platform for patient-to-clinician conversations, may use AI to shorten wait times and collect patient data before and after appointments. Remote monitoring of patients' health may be enhanced by AI systems that analyze information reported by patients themselves or collected by wearable devices. Health coaching can employ AI – including natural language models that mimic human interactions – to provide personalized health advice, education, and psychosocial support.

For all its potential in these areas, AI also poses a variety of ethical challenges, many of which have yet to be adequately addressed, the authors write. Telehealth and remote health monitoring, for example, pose inherent risks to confidentiality when patient data are collected by AI. And as autonomous health coaching programs become more human-like, there is a danger that human clinicians will have less oversight of them, diminishing the person-to-person contact that has traditionally defined cancer medicine.

The authors cite several principles to guide the development and adoption of AI in patient-facing situations – including human dignity, patient autonomy, equity and justice, regulatory oversight, and collaboration to ensure that AI-driven health care is ethically sound and equitable.

"No matter how sophisticated, AI cannot achieve the empathy, compassion, and cultural comprehension possible with human caregivers," the authors assert. "Overdependence of AI could lead to impersonal care and diminished human touch, potentially eroding patient dignity and therapeutic relationships."

To ensure patient autonomy, patients need to understand the limits of AI-generated recommendations, Kelkar says. "The opacity of some patient-facing AI algorithms can make it impossible to trace the 'thought process' that led to a treatment recommendation. It needs to be clear whether a recommendation came from the patient's physician or from an algorithmic model raking through a vast amount of data."

Justice and equity require that AI models be trained on data reflecting the racial, ethnic, and socioeconomic mix of the population as a whole, as opposed to many current models, which have been trained on historical data that overrepresent majority groups, Kelkar remarks.

"It is important for oncology stakeholders to work together to ensure AI technology promotes patient autonomy and dignity rather than undermining it,” says senior author Gregory Abel, MD, MPH, Director of the Older Adult Hematologic Malignancy Program at Dana-Farber and a member of Dana-Farber’s Division of Population Sciences.

The co-authors of the paper are Andrew Hantel, MD; Corey Cutler, MD; Marilyn Hammer, PhD, DC, RN, FAAN; and Erica Koranteng, MBChB, MBE, all of Dana-Farber.
