NCCN's Evidence Blocks Make the Best of Imprecise Drug Data

The NCCN's Evidence Blocks need further data and refinement to capture subtle distinctions in drug performance, but they’re the best value system developed so far, NCCN officials reported at their 2016 Annual Conference.

The National Comprehensive Cancer Network’s Evidence Blocks need further data and refinement to capture subtle distinctions in drug performance, but they’re the best value system developed so far, NCCN officials said at the network’s 21st Annual Conference in Hollywood, Florida.

Those blocks—which come in the form of graphical representations for safety, efficacy, quality of evidence, consistency of evidence, and affordability of drugs—do a better overall job of informing treatment decisions than other value tools that have emerged recently, said Robert W. Carlson, MD, CEO of the NCCN.
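
The structure is simple to picture. Below is a minimal sketch, in Python, of how a single Evidence Block might be represented; the 1-to-5 scoring scale, the regimen name, and the scores are assumptions for illustration only, not taken from the article or from any published NCCN rating.

```python
# A minimal sketch of how an NCCN Evidence Block might be represented in code.
# Assumptions (not from the article): each of the five measures is scored on a
# 1-to-5 scale, with higher numbers more favorable; the text "rendering" below
# is a stand-in for the shaded graphical blocks NCCN publishes.

from dataclasses import dataclass

MEASURES = ("Efficacy", "Safety", "Quality of Evidence",
            "Consistency of Evidence", "Affordability")

@dataclass
class EvidenceBlock:
    regimen: str
    scores: dict  # measure name -> score from 1 (least favorable) to 5 (most favorable)

    def render(self) -> str:
        """Return a simple text grid: one row per measure, filled squares = score."""
        rows = [self.regimen]
        for measure in MEASURES:
            score = self.scores[measure]
            rows.append(f"{measure:<24} {'■' * score}{'□' * (5 - score)}")
        return "\n".join(rows)

# Hypothetical scores for illustration only -- not actual NCCN ratings.
example = EvidenceBlock(
    regimen="Regimen A (illustrative)",
    scores={"Efficacy": 4, "Safety": 3, "Quality of Evidence": 4,
            "Consistency of Evidence": 3, "Affordability": 2},
)
print(example.render())
```

Rendering each measure as a row of filled and empty squares mirrors the at-a-glance comparison the graphical blocks are designed to support.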

“We believe Evidence Blocks are a basis for value decisions at the point of care,” Carlson said.

The NCCN began publishing Evidence Blocks last year as additions to its clinical practice guidelines for cancer care. “We have a commitment that by the end of 2017, for all systemic therapy guidelines we will have Evidence Blocks associated with them, and we will be moving to include Evidence Blocks for radiation oncology, diagnostics, and surgical oncology procedures in the future,” Carlson said.

Contrasting Evidence Blocks with other tools, Carlson said ASCO’s new Value Framework for drug comparisons lacks the flexibility to compare drugs evaluated in different trials and does not allow assessment of the full range of interventions across multiple disease scenarios. The DrugAbacus system created by physicians at Memorial Sloan Kettering Cancer Center focuses on cost rather than the many other variables that go into treatment decisions, Carlson said.

“It’s a system primarily of value to payers,” he said. “It may be of value to manufacturers, for them to price an agent, but I’m not sure how valuable it is to the patient just to know the dollar cost.”

ASCO’s system for drug valuation got a word of support from Douglas W. Blayney, MD, who served on ASCO’s task force assigned to develop the system and who was in the audience. “ASCO has tried to look at a very evidence-based approach—maybe too much so, but it has advanced the conversation,” he said.

A tool developed by the Institute for Clinical and Economic Review (ICER) with input from industry representatives, payers, ASCO, and patient groups provides a “fairly rigorous” evaluation of treatment options, including a value-based benchmark, and is an interesting model, Carlson said. One limitation, he added, is an apparent bias: the tool rates a drug as more affordable the smaller the population that might need it.

“I’m not sure that that makes sense if I were an individual,” Carlson said. With the per-patient cost the same, a drug needed by 10 million people would be rated less affordable than one needed by just 10 people, he explained.
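
To make the arithmetic behind that criticism concrete, here is a minimal sketch of how a rating tied to total budget impact behaves. The dollar figure and populations are hypothetical, and this is a simplification for illustration, not ICER's actual methodology.

```python
# A minimal sketch (assumptions, not ICER's actual method) of the behavior
# Carlson describes: if affordability is rated by total budget impact, a drug
# with the same per-patient cost looks far less "affordable" when many more
# people need it.

per_patient_cost = 120_000  # hypothetical annual cost per patient, in dollars

for population in (10, 10_000_000):
    total_budget_impact = per_patient_cost * population
    print(f"{population:>10,} patients -> ${total_budget_impact:,} total budget impact")

# The per-patient price never changes, but the 10-million-patient scenario has a
# million-fold larger total impact -- which is why a population-based rating can
# score a widely needed drug as less affordable.
```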

In a presentation on the value of Evidence Blocks in the renal cell carcinoma setting, Eric Jonasch, MD, of The University of Texas MD Anderson Cancer Center, said that more data are needed to fine-tune treatment decisions.

“The Evidence Blocks are beginning to create what I would call a succinct interpretation of the data which can begin to generate a dialog between the patients and the treatment teams,” he said. “The other thing that’s very clear is that ongoing refinement of these Evidence Blocks is necessary. This is something that we’re beginning to get our heads around: what defines quality of response? What defines relative toxicity? When it comes to affordability, what does affordability mean? And affordability for whom, and how do we measure that? How do we improve that for the various agents that are available for renal cell carcinoma, I think, is a critical thing that we need to talk about.”

It was a point that resonated with the audience. During the question and answer session, audience members asked how affordability is currently measured when Evidence Blocks are formulated.

Carlson agreed that costs are relative and responded with an anecdote from his practice about two patients for whom an aromatase inhibitor plus palbociclib was recommended for metastatic disease; their oncologists advised them to first check with their insurance companies about coverage. Of the roughly $10,000 monthly cost of palbociclib, the payers were willing to cover $9500, patient-assistance programs covered another $490, and the patients were left with out-of-pocket costs of $10 per month.

However, it was pointed out that the bulk of the money still comes out of somebody’s pocket, whether the patients’, the payers’, or the assistance programs’. “Somebody’s paying the premiums,” Blayney said.

The discussion also touched on how the NCCN deals with subtle distinctions in drug performance, and how it can reliably incorporate them into its drug valuation framework.

Carlson noted that NCCN Guidelines are updated as many times a year as are necessary to stay current with “practice-changing” developments as revealed by trials and other data. Such updates take 10 days on average, with a record of 24 hours in one instance.

That said, trial results are not always apples-to-apples comparisons, Carlson said.

“Our challenge was to consider maybe an 80% response rate versus a 70% response rate versus 60%. If you look at studies, they don’t always have the same patient populations, they often don’t use the same system for measuring response, and they don’t have the same duration of follow-up. They don’t have the same sample size. The statistical variation is going to be different. You end up in a situation where 70% may be the same as 60%, and maybe the same as 50%, because of the differences in study design and patient population. And so, we wanted a system that was—I wouldn’t say dirty—but I would say imprecise, or at least that would acknowledge the imprecision of the data that we’re looking at. We’re trying to get the metrics in the right ballpark, in the right sort of big bucket, rather than trying to split hairs,” Carlson said.
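
Carlson's point about statistical variation can be illustrated with a short sketch. The trial sizes and response counts below are hypothetical, and the normal-approximation confidence interval is used only for illustration; it is not a method described in the article.

```python
# A minimal sketch (not from the article) of why a 70% and a 60% response rate
# can be statistically indistinguishable: with modest sample sizes, the
# confidence intervals around the observed rates overlap substantially.
# Uses a simple normal-approximation (Wald) interval for illustration only.

from math import sqrt

def response_rate_ci(responders: int, n: int, z: float = 1.96):
    """95% normal-approximation confidence interval for a response rate."""
    p = responders / n
    margin = z * sqrt(p * (1 - p) / n)
    return p, max(0.0, p - margin), min(1.0, p + margin)

# Hypothetical trials with different sizes and observed response rates.
for label, responders, n in [("Trial A", 35, 50), ("Trial B", 24, 40)]:
    p, lo, hi = response_rate_ci(responders, n)
    print(f"{label}: {p:.0%} response rate, 95% CI {lo:.0%}-{hi:.0%}")

# Output shows 70% (CI ~57%-83%) versus 60% (CI ~45%-75%): the intervals
# overlap, so the apparent 10-point difference may reflect noise rather than
# a real gap in drug performance.
```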
