The Effect of a No-Denial Policy on Imaging Utilization
Jeffrey D. Robinson, MD; Daniel S. Hippe, MS; Mark D. Hiatt, MD, MBA, MS

According to the US Government Accountability Office, Medicare expenditures for advanced diagnostic imaging doubled between 2000 and 2006.1 Similar increases have been observed in the private sector as well.2 Payers have used utilization management for many years to limit these costs.3 Its appeal to payers is intuitive: If appropriate imaging is being done, costs decrease by the amount of unnecessary imaging eliminated. From providers’ perspectives, its effect is equally intuitive, although hardly appealing: If approval for imaging is difficult to obtain, then borderline, unusual, or outlier cases will not get imaged unless an inordinate amount of time is spent on the process.

Competing claims over the effectiveness of these measures are difficult to evaluate, in large part because the data are proprietary. Mitchell and Lagalia4 and Levin et al2 found data to support cost reductions after the implementation of radiology benefits management (RBM). However, Rosenberg et al5 showed no difference in the utilization of diagnostic or surgical procedures between a group of patients who underwent sham reviews in which automatic approval was granted for requested procedures and a group of patients who underwent actual review. Lee et al6 modeled a typical RBM process and concluded that the net effect was to shift costs to providers, without clearly demonstrating net savings to society as a whole.

The AMA, in a report on RBM in 2009,7 found that

the main concerns physicians report with the use of [radiology benefits managers] are denial or delays of payment for medically warranted imaging studies; lack of proper administrative cost assessments; inconsistent rules and practices; lack of clinical guideline transparency; interference in the patient-physician relationship; acceptance of tests or studies contingent upon referral to other physicians or practice groups; and forced test substitution. [Emphasis added]

A collaborative utilization management system that does not deny payment, interfere with the doctor-patient relationship, or force test substitution could reduce the friction associated with utilization management without necessarily increasing expenditures on imaging. To test the impact of such a system, in this study, we followed utilization and other operational metrics in four markets of a national health insurer that switched from the traditional to the more collaborative model (experimental markets) and four markets that were already using the collaborative approach (control markets).


RBM Process
HealthHelp provides diagnostic imaging utilization management services to health insurance plans. Its authorization process has been fully described previously2 but is similar to the processes of other RBM companies. An extensively developed system of rule sets screens examinations with appropriate indications for approval. A three-tiered review system (Fig. 1) promotes the efficient use of provider resources. If an examination request meets guidelines, a customer service representative routinely approves it. If not, a nurse reviewer calls the provider’s office to try to elicit enough information to approve the examination, prompt the provider to select a study that meets guidelines, or withdraw the request. Failing that, the request is escalated to a radiologist reviewer, who conducts a peer-to-peer consultation to resolve the request. Alternate outcomes to examination approval in this system include “examination modification,” in which the provider and reviewer agree to an alternate imaging strategy; “not done,” because the provider decides to withdraw the request, perform a lower intensity imaging study not requiring preauthorization (such as ultrasound or radiography), or cancel an examination ordered in error; “no consensus,” in which the provider and reviewer agree to disagree, an authorization is issued, and the examination is covered by the health plan; and “no callback,” when the provider does not return the call of the reviewer within two business days, which is considered equivalent to a withdrawal. There is no so-called hard stop, in which a request is denied, triggering a denial letter and possibly an appeal. A provider who calls back after a request has been closed can have it reopened without prejudice.
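The three-tiered flow described above can be sketched as a simple dispatch function. This is an illustrative sketch only; the tier labels and outcome names below are our own shorthand, not the vendor's actual implementation.

```python
def review_request(meets_guidelines, nurse_outcome=None,
                   radiologist_outcome="no_callback"):
    """Route an imaging request through the three review tiers.

    meets_guidelines: True if the request satisfies the rule sets.
    nurse_outcome: result of the nurse reviewer's call, one of
        "approved", "modified", "withdrawn", or None (unresolved).
    radiologist_outcome: result of the peer-to-peer consultation, one of
        "approved", "modified", "withdrawn", "no_consensus", or
        "no_callback" (treated as equivalent to a withdrawal).
    Returns (tier, outcome). Names are hypothetical, for illustration.
    """
    # Tier 1: a customer service representative routinely approves
    # requests that meet guidelines.
    if meets_guidelines:
        return ("csr", "approved")
    # Tier 2: a nurse reviewer tries to resolve the request by phone.
    if nurse_outcome in ("approved", "modified", "withdrawn"):
        return ("nurse", nurse_outcome)
    # Tier 3: radiologist peer-to-peer consultation. Note there is no
    # "denied" outcome: "no_consensus" still yields an authorization.
    return ("radiologist", radiologist_outcome)

print(review_request(True))                             # ('csr', 'approved')
print(review_request(False, nurse_outcome="modified"))  # ('nurse', 'modified')
print(review_request(False, radiologist_outcome="no_consensus"))
```

The key structural point the sketch captures is the absence of any branch that returns a denial: every path ends in approval, modification, withdrawal, or an authorized no-consensus.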

Fig 1. Collaborative review algorithm. CSR = customer service representative.


Patient Population
The patient population consisted of all adult and pediatric covered lives of a national health insurer in eight of its twenty-five regional US markets. The majority of the markets in which this insurer operates have used the fully collaborative consultation model since 2005. However, in four midwestern metropolitan markets, the traditional approach had been maintained in the case of no consensus between the provider and reviewer, which included an authorization denial and access to an appeals process for the determination. These four markets formed the experimental group. Four markets matched for geographic proximity, monthly membership level, and utilization formed the control group. On the basis of the results from its other geographic markets, the insurer agreed to allow the four experimental markets to stop requiring denials and provisionally implement the collaborative consultation approach on October 1, 2010. That is to say, peer-to-peer consultations that ended without consensus had previously gone through the denial process but would now be approved, noting “no consensus” in the database. Although there was no formal announcement of the change in procedure, peer-to-peer consultations took on a different tenor, with both nurse-level and physician-level reviewers explaining the new protocol at every opportunity directly to providers. The time between the beginning of the observation period (January 2009) and the initiation of the new protocol (October 1, 2010) was the baseline period (twenty-one months), and the subsequent period through January 2012 was the follow-up period (sixteen months). Although the baseline and follow-up periods were not of equal lengths, the longer baseline period provided additional trending data.

Data Analysis
We tabulated advanced imaging requests and review process outcomes from the experimental markets in the follow-up period and compared the utilization rates (measured in units per 1,000 covered lives per month) and the rates of request approval, examination modification, withdrawal, and no consensus after peer-to-peer consultation with these rates in the same markets in the baseline period as well as the control markets in both baseline and follow-up periods.
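As a concrete illustration, the normalized utilization rate defined above is straightforward to compute; the sketch below uses made-up round numbers, not study data.

```python
def utilization_rate(requests, covered_lives):
    """Monthly utilization in units (requests) per 1,000 covered lives,
    the normalization used throughout the analysis."""
    return 1000.0 * requests / covered_lives

# Illustrative example: 6,000 requests in one month across a market
# of 400,000 covered lives (hypothetical figures).
print(utilization_rate(6000, 400000))  # 15.0
```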

No individually identifiable personal health information was used in this study.

Statistical Analysis
The time series of utilization and peer-to-peer consultation metrics for both experimental and control markets were explored graphically. A two-stage process was used to estimate and statistically compare temporal trends within markets and between the experimental and control markets. In the first stage, the correlations between and within corresponding pairs of time series from the two market groups were estimated using first-order vector autoregressive models.8 In the second stage, linear regression analysis was used to estimate trends versus time, changes in these trends between the baseline and follow-up periods, and differences in trends between the experimental and control markets. Regression coefficients were estimated via generalized least squares using the correlation estimates from the first stage to account for any interrelationship between and within the series.9 All confidence intervals (CIs) and significance tests were performed within the generalized least squares framework.
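The two-stage logic can be illustrated in a deliberately simplified univariate form: estimate the serial correlation of the residuals first, then refit the trend by generalized least squares under that correlation structure (a Cochrane-Orcutt-style whitening step). This is a sketch of the general idea under a first-order autoregressive assumption, not the authors' full multivariate procedure.

```python
import numpy as np

def ar1_gls_trend(y):
    """Estimate a linear time trend in y by GLS, assuming AR(1) errors.

    Stage 1: fit the trend by OLS and estimate the AR(1) coefficient rho
    from the lag-1 autocorrelation of the residuals.
    Stage 2: whiten the series with rho and refit by OLS on the
    transformed data, which equals GLS under the AR(1) model.
    Simplified univariate sketch, not the study's actual code.
    """
    t = np.arange(len(y), dtype=float)
    X = np.column_stack([np.ones_like(t), t])  # intercept + time
    # Stage 1: OLS fit, then lag-1 autocorrelation of the residuals.
    beta_ols, *_ = np.linalg.lstsq(X, y, rcond=None)
    r = y - X @ beta_ols
    rho = np.dot(r[1:], r[:-1]) / np.dot(r[:-1], r[:-1])
    # Stage 2: AR(1) whitening transform, then OLS on transformed data.
    y_star = y[1:] - rho * y[:-1]
    X_star = X[1:] - rho * X[:-1]
    beta_gls, *_ = np.linalg.lstsq(X_star, y_star, rcond=None)
    return beta_gls[1]  # the slope (trend per time unit)

# Illustrative: a 48-month series with true slope 0.5 plus AR(1) noise.
rng = np.random.default_rng(0)
e = np.zeros(48)
for i in range(1, 48):
    e[i] = 0.6 * e[i - 1] + rng.normal(scale=0.2)
y = 10 + 0.5 * np.arange(48) + e
print(round(ar1_gls_trend(y), 2))  # close to the true slope of 0.5
```

In the study itself this idea is extended to pairs of series (experimental and control) via first-order vector autoregressive models, so that the GLS step accounts for correlation both within and between the series.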

Average values of each metric during the baseline period were compared between experimental and control markets. In addition, the averages before and after program initiation were compared within each market, with the changes denoted by ΔE and ΔC. Last, experimental market changes were also adjusted by the corresponding changes in the control markets. Specifically, the differences in changes between markets (ΔEC = ΔE − ΔC) were computed and tested. It was assumed that any parallel changes in the experimental and control markets were due to exogenous factors that broadly affect multiple markets, such as the national economy, so the excess mean changes in the experimental markets (ΔEC) were considered to be better indicators of the change due to the new program.
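The control-market adjustment described above is a difference-in-differences computation. The sketch below shows the arithmetic with hypothetical round numbers, not study data.

```python
def excess_change(exp_baseline, exp_followup, ctrl_baseline, ctrl_followup):
    """Difference-in-differences adjustment: delta_E and delta_C are the
    within-market changes from baseline to follow-up, and
    delta_EC = delta_E - delta_C is the excess change in the
    experimental markets, net of shared exogenous factors."""
    delta_e = exp_followup - exp_baseline
    delta_c = ctrl_followup - ctrl_baseline
    return delta_e - delta_c

# Illustrative: both groups decline, the experimental group by
# 0.1 units per 1,000 members more (hypothetical figures).
print(round(excess_change(15.0, 14.4, 15.2, 14.7), 2))  # -0.1
```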

All statistical calculations were performed using the statistical software R, version 2.14.1 (R Project for Statistical Computing, Vienna, Austria), with the mAr 1.1-2 and MASS 7.3-16 packages (references 10 and 11). Throughout, two-sided tests were used, with P values < .05 denoting statistical significance.


The experimental group averaged 437,388 covered lives over the entire observation period, including all ages and genders. The control group averaged 253,296 covered lives, also including all ages and genders. The numbers of requests evaluated totaled 247,117 and 145,278 and averaged 6,679 and 3,926 per month for the experimental and control groups, respectively. Table 1 summarizes utilization trends in terms of requests per 1,000 members and the overall approval rate, which counts only explicit approvals. There were no significant differences in the utilization rate at the beginning of the baseline period or growth rates in requests per 1,000 members between the experimental and control markets (P = .433 and P = .977, respectively). The initial approval rate was lower in the experimental markets compared with the control markets (94.6% vs 95.2%, P = .049), although the corresponding growth rates were not significantly different (P = .751).

Table 1

Because the peer-to-peer consultation was the step in the workflow that was modified for the new collaborative program, we focused on corresponding metrics. Table 2 summarizes these data during the baseline period. The consultation rates (100 × number of consultations / number of requests) were similar between the experimental and control markets (2.27% vs 2.36%, P = .306), while the percentage of examinations performed after consultation (68.3% vs 62.5%, P = .001) and the approval rate after consultation (62.6% vs 56.2%, P = .008) were significantly higher on average during the baseline period in the experimental markets. The no-consensus rate (1.29% vs 3.61%, P < .001) and withdrawal rate (30.4% vs 34.0%, P = .008) were significantly lower in the experimental markets during the same period.

Table 2

The average changes in each metric for both markets are summarized in Table 3. The growth rate in requests per 1,000 members decreased significantly in the experimental markets relative to the control markets (ΔEC = −0.10, P = .050; Fig. 2), while the growth in the approval rate changed minimally (ΔEC = −0.01, P = .870; Fig. 3). There were small, nonsignificant increases in the peer-to-peer consultation rate in both the experimental and control markets (ΔE = 0.46%, P = .202, and ΔC = 0.55%, P = .138), while the excess change was slightly negative (ΔEC = −0.09%, P = .453).

Table 3

Fig 2. Utilization in terms of requests per 1,000 members. Dotted lines indicate the fitted trends. The vertical dashed line indicates when the new program was initiated.


Fig 3. Utilization in terms of the approval rate, which includes only those requests explicitly approved without modification or disagreement. Dotted lines indicate the fitted trends. The vertical dashed line indicates when the new program was initiated.


The percentage performed after consultation increased significantly in the experimental markets (ΔE = 4.00%; 95% CI, 0.69% to 7.31%), but the control markets increased by 3.61%, and the excess change (ΔEC = 0.39%; 95% CI, −4.92% to 5.71%) was not significant. The increase in the percentage performed was due primarily to an increase in the no-consensus rate (ΔE = 2.70%; 95% CI, 2.29% to 4.63%) and a decrease in the withdrawal rate (ΔE = −2.70%; 95% CI, −5.69% to 0.29%). After adjusting for the changes in the control markets, neither the change in the no-consensus rate (ΔEC = 1.16; 95% CI, −0.74 to 3.06) nor the change in the withdrawal rate (ΔEC = −2.67; 95% CI, −6.69 to 1.36) was significant.


We found that compared with the control markets, the utilization growth rate (measured as requests per 1,000 covered lives) tended to decrease in the experimental markets after denials were eliminated (ΔEC = −0.10 per month, P = .050), which is inconsistent with the view that moving to the collaborative model from an adversarial one would necessarily lead to increased utilization. The approval rate was unchanged (ΔEC = −0.01% per month, P = .870).

Decision support is a key tool in utilization management. Lehnert and Bree12 found a 26% rate of inappropriate CT and MRI requests from primary care providers. Commercial decision support products and computerized physician order entry have been shown to improve ordering patterns13 but are typically not well integrated into physician workflows, which limits their effective use.14

The collaborative approach used by HealthHelp differs from that of the other national RBM companies (American Imaging Management, CareCore National, MedSolutions, and National Imaging Associates), which all ultimately include denial provisions in their algorithms. In that circumstance, the payer indicates prospectively that it will not pay for the service in question, so that if the provider performs the procedure, it is at the patient’s own financial risk. Federal law15 mandates that an appeal process be available, and subsequent court decisions have clarified the requirements of such processes. The adversarial potential of the process is highlighted by the practice of “deauthorization,” in which a payer denies payment because of differences between the authorized procedure and the one actually performed.16, 17

We hypothesized that a collaborative radiologist consultation without the threat of denial of payment would not result in higher utilization by ordering physicians who learn that the payer “won’t say no.” The educational process, in which a knowledgeable imaging consultant provides rationales for appropriate imaging, would counterbalance the natural tendency of people to get as much as they can until someone says “no.” In the experience of one of the authors (J.D.R.), the questions over time become less about whether to image or not and more about which test is most appropriate. The experimental markets studied were uniquely suited to test this hypothesis because the only change that occurred in these markets was the elimination of the denial provision. The control group served to account for the countless immeasurable external forces in the health care system that may have affected the behavior of the populations in general.

During the baseline period, providers operating under the denial model had a significantly higher rate of examinations performed (either as ordered or modified) and a significantly lower withdrawal rate than the control group (Table 2). Although this study did not attempt to address the reasons for this finding, we suspect that in the denial environment, both provider and reviewer feel some pressure to avoid the issuance of a denial.

During the follow-up period, utilization in both the experimental and control groups declined, as measured by requests per 1,000 covered lives, with utilization in the experimental group declining at a significantly greater rate (P = .050). These data in general correlate with other reports of a flattening of the growth curve of imaging,18, 19 although a cause for the difference in rates of decline between the two groups in our study is not apparent.


Because this was a natural experiment, not all factors could be controlled. Providers may have been dealing with different payers who had varying precertification requirements; however, in the markets chosen, this health plan was the dominant provider, and its policies would be encountered on a frequent, if not daily, basis by providers in the study. There were no other new utilization initiatives under way by HealthHelp or any other groups in the areas observed during the trial period, as far as we are aware, to confound changes in physician behavior. The control group served to control for the myriad outside influences affecting the American health care system in general, such as the concurrent economic slowdown, organized radiology’s Image Wisely campaign, and the passage of health care reform, any or all of which may have contributed to the general slowdown in imaging nationwide. Regional factors such as a particularly hard-hit local economy or natural disaster were not evaluated. Because this study monitored only insured patients, the loss of insurance would have minimal effect on our data when normalized to procedures per 1,000 covered lives per month.

Although we did not find significant increases in utilization or collaboration metrics relative to the control group, this does not prove that there was zero effect. However, given the length of the observation period, these results suggest that any increases were relatively small compared with the month-to-month variation. The CIs can be consulted for a more quantitative assessment of possible effect sizes.

Finally, we examined utilization rates from the perspective of examination requests rather than directly measuring examinations performed or the actual payments made for those examinations. However, because the study focused on the ordering patterns of providers, request data were more appropriate to analyze.

The collaborative approach to utilization management has several benefits over the adversarial approach. Primarily, it fits well with the underlying model of medical management that is collegial, consultative, and relatively nonhierarchical between physicians. It acknowledges the individuality of patients, as well as the importance of the doctor-patient relationship. By reducing the stress of the interaction, this approach can help defuse some of the suspicion and hostility that have grown between providers and payers of health care and create a space for a discussion of evidence-based medical practice.

On the other hand, the collaborative approach is initially more physician intensive. An unrequested consultation can be intrusive, especially if a clinician’s staff pulls him or her out of an examination room to take the call. Many providers, especially those in primary care, order examinations at the request of a specialist or on the recommendation of a radiologist following a prior imaging study. They are reluctant to override a documented recommendation from a physician they know in favor of a stranger on the phone. These recommendations may be made in good faith but not necessarily in accordance with evidence-based guidelines.20


We recognize that any external attempt to change behavior meets with resistance, and in that sense, prior authorization cannot help but be adversarial to some degree. These programs do have a positive effect on utilization management by increasing the use of evidence-based criteria. The physician-to-physician component of the preauthorization process does not have to be adversarial, however. Relying on the collaborative nature of the radiologic consultation rather than potential denial of imaging requests does not increase utilization. In addition to utilization, the actual impact on cost of the new program should be studied to give a more complete picture of its advantages and disadvantages.


  • Preauthorization for advanced imaging serves to limit utilization.
  • Collaborative utilization management differs from traditional utilization management in that there is no denial provision. At the end of the process, an imaging request is granted if the provider still wants it.
  • Collaborative utilization management does not result in increased provider utilization of advanced imaging over an extended observation interval.

1. US Government Accountability Office. Medicare Part B imaging services: rapid spending growth and shift to physician offices indicate need for CMS to consider additional management practices. Washington, District of Columbia: US Government Accountability Office; 2008.
2. Levin DC, Bree RL, Rao VM, Johnson J. A prior authorization program of a radiology benefits management company and how it has affected utilization of advanced diagnostic imaging. J Am Coll Radiol 2010;7:33-8.
3. Otero HJ, Ondategui-Parra S, Nathanson EM, Erturk SM, Ros PR. Utilization management in radiology: basic concepts and applications. J Am Coll Radiol 2006;3:351-7.
4. Mitchell JM, Lagalia RR. Controlling the escalating use of advanced imaging: the role of radiology benefit management programs. Med Care Res Rev 2009;66:339-51.
5. Rosenberg SN, Allen DR, Handte JS, et al. Effect of utilization review in a fee-for-service health insurance plan. N Engl J Med 1995;333:1326-30.
6. Lee DW, Rawson JV, Wade SW. Radiology benefit managers: cost saving or cost shifting? J Am Coll Radiol 2011;8:393-401.
7. McAneny BL. Report 5 of the Council on Medical Service (I-09): radiology benefits managers. Chicago, Illinois: American Medical Association; 2009.
8. Brockwell P, Davis R. Introduction to time series and forecasting. 2nd ed. New York, New York: Springer; 2002.
9. Pinheiro J, Bates D. Mixed-effects models in S and S-PLUS. New York, New York: Springer; 2000.
10. Barbosa SM. mAr: Multivariate AutoRegressive analysis. Available at: Accessed February 20, 2013.
11. Venables W, Ripley B. Modern applied statistics with S. 4th ed. New York, New York: Springer; 2002.
12. Lehnert BE, Bree RL. Analysis of appropriateness of outpatient CT and MRI referred from primary care clinics at an academic medical center: how critical is the need for improved decision support? J Am Coll Radiol 2010;7:192-7.
13. Solberg LI, Wei F, Butler JC, Palattao KJ, Vinz CA, Marshall MA. Effects of electronic decision support on high-tech diagnostic imaging orders and patients. Am J Manag Care 2010;16:102-6.
14. Bairstow PJ, Persaud J, Mendelson R, Nguyen L. Reducing inappropriate diagnostic practice through education and decision support. Int J Qual Health Care 2010;22:194-200.
15. US Department of Labor. How to file a claim for your benefits. Available at: Accessed February 20, 2013.
16. Duszak R. Deauthorization: the insidious new payer trick. J Am Coll Radiol 2006;3:713-5.
17. Garner DC. Re: Deauthorization: the insidious new payer trick. J Am Coll Radiol 2006;3:966-8.
18. Levin DC, Rao VM, Parker L. The recent downturn in utilization of CT: the start of a new trend? J Am Coll Radiol 2012;9:795-8.
19. Martin AB, Lassman D, Washington B, Catlin A. Growth in US health spending remained slow in 2010; health share of gross domestic product was unchanged from 2009. Health Aff (Millwood) 2012;31:208-19.
20. Esmaili A, Munden RF, Mohammed TL. Small pulmonary nodule management: a survey of the members of the Society of Thoracic Radiology with comparison to the Fleischner Society guidelines. J Thorac Imaging 2011;26:27-31.
