Clinical research published in the peer-reviewed general medical and oncology literature is a core component of the process by which advances in care are developed and ultimately instituted as standard of care. Although it is doubtful that many would challenge this statement, there has been limited discussion in the cancer literature of how peer review is conducted among the multiple oncology journals and international cancer conferences.
In the opinion of this commentator, it is not inappropriate to ask just how effective this process is in ensuring the objectivity and freedom from bias of published manuscripts and of the top abstracts chosen for presentation at national or international oncology meetings. There is no intent in this short commentary to present a formal critique of the current state of oncology peer review; however, concerns with the relatively recent publication of 2 articles in high-impact medical journals and 1 widely discussed abstract presented at a major oncology meeting highlight the need for greater attention by the academic oncology community to this essential component of the scientific review process.
To be clear, the intent in focusing on these manuscripts is not to negate the hypotheses underlying the papers or the likely considerable efforts undertaken by the authors in conducting their research. Rather, it is the peer review and editorial decision-making processes that should have challenged the methodologies and conclusions and either requested additional explanation and discussion within the manuscripts, suggested alternative approaches to the questions posed, or perhaps rejected the submissions outright.
The first paper attempted to examine the “impact of facility surgical volume on survival of patients with cancer.”1 Although this is a relevant question for multiple audiences, including patients, families, and payers, the methodology employed in this paper is quite concerning. Investigators conducted a retrospective analysis of patients with multiple tumor types in the National Cancer Database over a 10-year period. Patients were included “if they received surgery as treatment of their cancer and had valid survival information available from 2004 [to] 2013.” Multiple prognostic factors were included in the analysis.
The investigators found that “patients who received surgery from low-volume facilities vs very high-volume had the worst survival probability.” The major problem is that this analysis does not take into consideration what happened to patients over the subsequent months or years, during which they may have received other treatments, including at centers with treatment volumes vastly different from the center where the original surgery was performed. This invites an obvious question: Is it appropriate to attribute the ultimate survival outcome to the initial surgery in a patient who may have lived many years (> 4 years), experienced 1 or more recurrences, and received multiple treatment regimens during her/his cancer journey?
Authors of the second manuscript attempted to explore whether patients with cancer who receive complementary medicine are interested in continuing with, or adhering to, conventional cancer therapies and to compare their survival outcomes with those of patients who did not receive complementary medicine.2 The authors also used data from the National Cancer Database in their analysis. Rather remarkably, the investigators defined complementary medicine as “other unproven cancer treatments administered by nonmedical personnel,” delivered in addition to any conventional cancer therapy noted in the patient record.2
A total of 258 individuals (approximately 0.01% of the total population; N = 1,901,815) were grouped into the complementary medicine cohort. The authors compared outcomes of this extremely small, and almost certainly unrepresentative, group of individuals who participated in some form of complementary medicine with outcomes of those who did not receive complementary treatments, concluding that such patients “were more likely to refuse additional conventional cancer treatment.”2 Finally, they wrote, “The greater risk of death associated with [complementary medicine] is therefore linked to its association with treatment refusal.” This conclusion rests on an analysis of roughly 0.01% of the entire patient population. Somehow this paper was accepted in peer review, a decision supported by the journal’s editors.
Finally, we come to an abstract recently reported at a major oncology meeting addressing a difficult but important subject: sexual harassment.3 For anyone reading this commentary, I need to make very clear that my sole concern with this abstract relates to the methodology employed, not to the subject matter.
This abstract, which intended to “investigate the incidence and impact of workplace sexual harassment experienced by physicians” through a “targeted social media outreach to examine the prevalence and types of sexual harassment,” included a total of “271 respondents, 250 physicians in practice, and 21 residents/fellows.”3 The authors reported that, in the past year, sexual harassment by institutional insiders (peers/superiors) was reported by 70% of oncologists; among the individuals surveyed, 80% of women and 56% of men reported at least 1 incident.3
If accurate, this is a remarkable and most distressing finding; however, it must be noted that this sample of 271 oncologists represents approximately 2% of the oncologists practicing in the United States.4
The criticism of this abstract is directed at the reported methodology, not at the fundamental message being delivered. The authors provide no documentation to demonstrate that this is an objectively representative sample of the oncology physician population, and it is reasonable to suggest that those responding to the survey may have been more likely to have experienced sexual harassment or, alternatively, may for their own reasons have failed to be truthful in their responses.
In scientific communication, the underlying message must be linked to the soundness of the methodology employed. In the opinion of this commentator, the message of the 3 highlighted reports is muddled by the failure of the authors, peer reviewers, and editorial leadership to pay the necessary attention to the procedures employed in the clinical research being presented.