Oncology Live®
Vol. 19/No. 11

Reinventing Benchmarking for the Value Era

As emphasis shifts from volume of care to value of care, oncology practices are increasingly looking for ways to compare their performance with that of their competitors. Benchmarking, as this is called, has historically been difficult in oncology because of barriers to the flow of information and the complexities of care. However, such performance comparisons are now becoming part of payment models, and practices realize they need comparative data to understand how well they perform relative to their peers and whether there is room for improvement.

The newly established Merit-based Incentive Payment System (MIPS) from CMS uses comparative data to determine whether practices have improved their performance and should receive financial incentives or be penalized. CMS’ Oncology Care Model (OCM) also collects practice data that are based on a variety of performance measures; in turn, practices receive comparative information that enables them to understand how well they are controlling expenses and managing patient outcomes.

But a problem with this information from CMS, practices say, is that it arrives months after the performance periods being measured, and that much potentially useful information is not available at all. This recognition of the need for comparative data—in real time—has convinced a growing number of oncology practices to begin sharing anonymized data to see how they measure up to others.

“There’s currently no way to create many of the benchmarks that practices most covet, because they require data from some outside source like Medicare, so the information you need either isn’t available at all or it comes 9 months late, which is nearly as bad,” according to Robert “Bo” Gamble, director of Strategic Practice Initiatives at the Community Oncology Alliance (COA). “The good news is that there are plenty of valuable benchmarks that can be compiled with data that practices have, and we are just beginning to tap their potential for operational improvement.”

Figure. Anonymized Benchmarks for an Individual Practice

Strictly defined, benchmarks are standardized measures of performance that businesses have long used to compare themselves with competitors, as well as to compare current and past performance. They have found their place in medicine—“best hospital” lists compare institutional performance on standard metrics—but benchmarking remains rare among independent oncologists. Gamble estimates that no more than 10% of COA’s members have engaged in any benchmarking beyond in-house analyses of a current year’s performance in comparison with performances in years past.

COA has launched a program called COAnalyzer to help practices that already use benchmarks and entice others to give them a try. Any practice that submits anonymized information about its own performance on a variety of metrics will be able to see how its numbers fit into the range of figures supplied by other participating practices.
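To make that mechanic concrete, here is a minimal sketch of how an anonymized comparison can work: a practice’s value for a metric is placed within the distribution of values submitted by its peers. The function and all figures below are invented for illustration, not actual COAnalyzer output.

```python
from bisect import bisect_right

def percentile_rank(own_value: float, peer_values: list[float]) -> float:
    """Percentage of peer submissions at or below own_value."""
    ranked = sorted(peer_values)
    at_or_below = bisect_right(ranked, own_value)
    return 100.0 * at_or_below / len(ranked)

# Invented anonymized submissions for one metric, e.g. days of
# drug inventory on hand; not real benchmark data.
peers = [12.0, 15.5, 18.0, 21.0, 24.5, 30.0, 41.0]
print(f"Percentile rank: {percentile_rank(24.5, peers):.0f}")  # ~71
```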

The first benchmarks from COA focus on oral prescription metrics, such as how fast each practice’s pharmacy fills orders, how likely its patients are to refill prescriptions promptly, and how long inventory sits on the shelves. This information can provide valuable insight into needed improvements. For example, a practice that uses those benchmarks and finds slow inventory turnover may be tying up too much capital by prepurchasing too many expensive drugs. A practice with an unusually large percentage of patients who do not refill prescriptions on time will know it must do more to get its patients to take medication as directed.
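These metrics reduce to simple ratios, so a practice can compute its own numbers before comparing them with peers. The sketch below is illustrative only; the formulas are standard pharmacy-operations calculations, and the figures are invented.

```python
def inventory_days_on_hand(annual_drug_cost: float, avg_inventory_value: float) -> float:
    """Average number of days a dollar of drug inventory sits on the shelf."""
    turns_per_year = annual_drug_cost / avg_inventory_value
    return 365.0 / turns_per_year

def on_time_refill_rate(refills_on_time: int, refills_due: int) -> float:
    """Percentage of due refills picked up within the expected window."""
    return 100.0 * refills_on_time / refills_due

# Invented figures: $9.6M annual drug cost against $800K average inventory
print(f"{inventory_days_on_hand(9_600_000, 800_000):.1f} days on hand")  # ~30.4
print(f"{on_time_refill_rate(412, 480):.1f}% of refills on time")        # ~85.8
```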

“As time goes on, we plan to roll out other helpful benchmarks,” Gamble said. “‘Accounts receivable days outstanding’ is a big one. Good practice managers know that number off the top of their heads, but we’re about to give them an opportunity to see how their numbers compare to industry norms. We’re also going to put out numbers for management ratios and staffing ratios.”
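Accounts receivable days outstanding has a standard definition: outstanding receivables divided by average daily revenue. A minimal sketch, with invented figures:

```python
def ar_days_outstanding(accounts_receivable: float,
                        period_revenue: float,
                        days_in_period: int = 365) -> float:
    """Average days a billed dollar remains uncollected; lower is generally better."""
    average_daily_revenue = period_revenue / days_in_period
    return accounts_receivable / average_daily_revenue

# Invented figures: $2.4M in receivables against $20M annual revenue
print(f"{ar_days_outstanding(2_400_000, 20_000_000):.1f} days in AR")  # ~43.8
```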

Benchmarking may be relatively new to independent oncology practices, but it has already proven its value in many other sectors. The first major company to use benchmarks to compare itself with competitors was Xerox, which calculated in 1979 that its ratio of indirect to direct staff was twice that of its more successful competitors and that its copiers had 7 times as many defects as competitors’ machines. Comparative data showed where the struggling company should focus its improvement efforts and facilitated a dramatic turnaround: Xerox cut its manufacturing costs in half and its product defects by 67%.

Benchmarking success stories like that helped fuel widespread adoption, first in the industrial sector and then in many other parts of the economy. A Bain & Company survey of 6323 companies in 40 nations found that by the year 2001, managers considered benchmarking the second most important tool (after planning).

Although benchmarking remains rare in oncology, COA isn’t the only private-sector organization supporting it. Each year, the Oncology Management Consulting Group publishes the National Oncology Benchmark Study (NOBS), which provides comparative information for infusion centers and radiation oncology facilities.

Similarly, the American Society of Clinical Oncology (ASCO) offers 3 tools with benchmarking value. The society releases the findings of its annual practice census in State of Cancer Care in America, a report that gives individual practices much information about industry norms. ASCO also conducts an annual Survey of Oncology Practice Operations, which enables participating practices to see how they stack up from a business and operational standpoint. For practices that want more frequent information, ASCO runs a benchmarking collaborative called PracticeNET. Members submit information every month and get quarterly reports that provide their current performance rankings and show how those rankings have changed over time (Figure).

Effective benchmarking tends to use a large number of very specific but simple data points. For example, the data points on the NOBS include not only “drug spend for top 10 cost drugs overall” and “revenue for top 10 drug cost overall” but also expense and revenue figures for the highest-cost drugs for breast cancer, colorectal cancer, leukemia/lymphoma, and lung cancer, as well as for support drugs that don’t treat cancer directly. The data also include “patients per nutritionist,” “chairs per registered nurse,” and 100 other metrics.

The goal is to minimize the difficulty that participating providers have in identifying where they need to make improvements. The knowledge that a practice is spending too much on medications is valuable, but by itself, that benchmark doesn’t reveal whether the problem stems from a slight overpayment on each medication or a wild overpayment on a handful of products. Breaking down drug spend by tumor type—or parsing any other operational function—usually makes it easier to see where the actual problem lies and how to go about fixing it.
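As a concrete illustration of that kind of breakdown, the sketch below totals spend records by tumor type and flags categories that exceed a peer median. Every record, drug name, and peer figure here is invented; a real report would supply the peer numbers.

```python
from collections import defaultdict

# Invented spend records: (tumor_type, drug, dollars spent this year)
spend_records = [
    ("breast", "drug_a", 410_000), ("breast", "drug_b", 95_000),
    ("lung", "drug_c", 880_000), ("colorectal", "drug_d", 120_000),
]

# Invented peer medians per tumor type, as a benchmark report might supply
peer_median = {"breast": 450_000, "lung": 600_000, "colorectal": 140_000}

totals: dict[str, float] = defaultdict(float)
for tumor_type, _drug, dollars in spend_records:
    totals[tumor_type] += dollars

for tumor_type, total in sorted(totals.items()):
    flag = "ABOVE peer median" if total > peer_median[tumor_type] else "at or below"
    print(f"{tumor_type:>10}: ${total:>9,.0f} ({flag})")
```

Run against these invented numbers, lung stands out as the category driving the overspend, which is exactly the kind of localization the breakdown is meant to provide.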

However, sometimes the data are illuminating but not much help in the struggle for improvement, because fixing a deficiency identified by a benchmarking report takes practical knowledge that the report itself cannot supply.

“We do sometimes have practices that participate in the survey call and say they had no idea they were doing so poorly in some area and [have] no idea how other practices do so much better, and then they ask us if we can put them in touch with a practice that does very well,” said Teri U. Guidi, MBA, chief executive officer of the Oncology Management Consulting Group, which compiles and publishes the NOBS report each year. “They can’t reach out directly because all the data are anonymized, so we start calling high performers on the metric in question, and there’s always someone who agrees to pick up the phone and help.”

In some cases, those conversations reveal that the problem lies not in the “low performing” practice’s inefficiency but in differing interpretations of how to report figures for the benchmark or in fundamental differences between the practices. “It is very difficult at this point to produce oncology practice benchmarks in a way that allows true apples-to-apples comparisons,” said Christian Downs, MHA, JD, deputy executive director of the Association of Community Cancer Centers. “If you look at a benchmarking study and see that the ‘best’ practices only need 2 nurses to do work for which you require 4 nurses, it could be that the other practices are actually twice as efficient, but it’s more likely that they read the reporting standards differently than you or [that] they’re so much bigger, they use specialized support staff to do jobs that smaller practices have to assign to nurses.”

Others interviewed for this story agreed. The complexity of oncology, the data silos, the restrictions on data sharing, the diversity of oncology practices, and the relative novelty of oncology benchmarking all hinder efforts to compile accurate, standardized data that support meaningful comparisons and self-improvement.

“Just to illustrate how far oncology is from the ideal in benchmarking, consider a metric that both payers and patients would like practices to improve upon: the rate of emergency department [ED] visits and hospitalizations during chemotherapy. Most practices cannot find out how often their patients visit the ED, much less benchmark themselves against other practices on that measure,” said Harold D. Miller, president and chief executive officer of the Center for Healthcare Quality and Payment Reform in Pittsburgh, Pennsylvania.

The small minority of practices that participate in an alternative payment model, such as an accountable care organization or the OCM, are a partial exception to this rule. Such programs offer financial rewards to practices that reduce undesirable spending, such as excessive ED visits, so participating practices do receive some comparative performance data from CMS. Even so, the data often arrive many months after the fact and may not reflect what the practice is currently doing. Practices may find out more quickly for some patients—if they happen to get a phone call from an ED doctor when a patient seeks care—but neither Medicare nor any private payer pays ED physicians or hospitals for the time needed to track down a patient’s primary care physician or oncologist, so such calls are rare.

Obstacles like that significantly limit the number of ways that practices can use benchmarks to improve, but the same experts who agreed about the unique difficulties in benchmarking cancer care also agreed that benchmarking could significantly improve operations at most practices. “Any effort a practice makes to improve by looking to see where it is not performing up to the level of its peers is a desirable thing,” Miller said. “However, benchmarking alone isn’t going to produce a revolutionary change in oncology unless complementary changes are also made to care delivery and payment systems. Benchmarking is a necessary tool, but it’s not sufficient.”
