Publication
Article
Oncology Live®
Author(s): Maurie Markman, MD
Maurie Markman, MD, discusses the increasing criticism of the phase III randomized trial in oncology.
The relevance of conducting and reporting well-designed, randomized trials to define optimal treatment for patients diagnosed with cancer is not open for debate. For example, modern oncologists could not fathom treating women with advanced epithelial ovarian cancer with cytotoxic chemotherapy without knowledge of preceding, decades-old controlled trial research that defined the central role of platinum-based combination cytotoxic chemotherapy in disease management, or, more recently, the potential of several maintenance strategies. The oncology community eagerly awaits the results of ongoing randomized trials that may further modify our therapeutic paradigms for the benefit of current and future patients.
Although this paradigm has demonstrated success, the traditional phase III randomized oncology trial model has come under increasing criticism, including over the validity of this approach in defining the clinical utility of new antineoplastic pharmaceutical agents and in establishing what should be considered the standard of care in routine cancer management. For instance, the value of such studies is widely questioned when it comes to relating their findings to the real world of patients with cancer. The drawbacks include the underrepresentation of elderly populations, who account for the majority of individuals with cancer; the general exclusion of patients with common comorbidities; and the difficulty of conducting randomized trials in molecularly defined subsets that may comprise less than 5% of patients with all but the most common cancer types.

One of the more difficult issues with randomized trials is defining endpoints that are clinically meaningful and objectively measurable while simultaneously permitting a statistically valid, definitive analysis. As a result, the projected required sample size and the time necessary to confirm or refute a predefined study hypothesis may be either unrealistic or simply unacceptable to the patients, clinicians, and members of society who are waiting for the answer.
Consider, for example, the highly relevant report from the Women’s Health Initiative on the impact of menopausal hormone therapy on cause-specific and all-cause mortality.1 The 2 studies, initiated in 1993 and 1998, randomized more than 27,000 women to conjugated equine estrogen (CEE) with medroxyprogesterone acetate (MPA), CEE alone, or placebo. This analysis concluded that “among postmenopausal women, hormone therapy with CEE plus MPA for a median of 5.6 years or with CEE alone for a median of 7.2 years was not associated with risk of all-cause, cardiovascular, or cancer mortality during a cumulative follow-up of 18 years.”1
This is a critically important and reassuring report highlighting the long-term effects of a widely employed, health-related intervention. But one must ask how relevant the results of a specific intervention (eg, drugs employed, dosages delivered, treatment schedules, duration of therapy, and age at initiation of treatment) are to what is considered routine or standard practice today. Further, will these results become the standard response to questions about the safety of menopausal hormone therapy, despite current and future changes in routine management, until the results of the next phase III randomized trial are reported in the peer-reviewed literature?

Another recent question challenging the relevance of phase III randomized trials of antineoplastic agents is the selection of the control arms against which a novel experimental regimen is compared. In a concerning analysis of phase III breast cancer clinical trials registered on the ClinicalTrials.gov website, encompassing 229,000 women treated in 210 studies, the therapy tested in the control arms of 29% of the trials was not consistent with National Comprehensive Cancer Network guidelines.2 Further, the investigators examined clinical trials recruiting outside of the United States, compared them with German Gynecological Oncology Group guidelines, and reached similar conclusions.
Ultimately, the issue here relates to the time required to design, implement, complete, and report the results of phase III randomized studies in the peer-reviewed literature. By the time a given study has overcome its prospectively defined hurdles (regulatory approvals, initiation, recruitment, data collection and analysis, and the multiple steps associated with the publication process), the relevance of the outcome in a particular clinical setting may be highly questionable.

Finally, we come to the issue of rigid requirements for patient entry into many industry- or academic/cooperative group-designed studies. With only rare exceptions, these trials carry multiple, often complex and highly detailed eligibility criteria. Failure to meet even a single one of these elements results in a patient’s exclusion from the trial. The goal of such efforts is to ensure as much homogeneity within the randomized patient populations as possible, so that a measured and statistically significant difference in outcome can be ascribed to the intervention being examined rather than to perceived differences between the patient groups.
However, a recent highly provocative report from SWOG examined the outcomes of patients who were entered into leukemia studies but were later declared ineligible upon the subsequent discovery that they had failed to satisfy all of the given trial’s eligibility/ineligibility criteria.3 Of 2361 patients entered in 13 studies, 247 individuals (10%) were declared “ineligible” for a variety of reasons. Of the 169 patients included in the analysis, 101 (60%) were considered ineligible due to “missing baseline documentation.” The most important conclusion of this analysis was the failure to find a difference in survival between the eligible and ineligible patient populations (P = .25). This finding strongly suggests that the current rigidity in such requirements is seriously misguided.
The question to be asked, then, is whether such rigid eligibility criteria can be modified to permit a larger percentage of patients to participate in clinical trials, including meaningful randomized studies, so that questions relevant to our patients can be addressed more efficiently.