Turf issues in medicine affect policy, practice, and, most importantly, patients. This article explores turf issues from several perspectives. The issue and scope of the problem are discussed first, followed by a brief history of turf delineation. Contributing and connected factors are considered, taking into account some consequences of turf battles and their impact on related topics. Finally, this article focuses on proposed strategies to successfully confront the questions, if not overcome the problems encountered. To better inform the deliberation and strengthen the credibility of any conclusions, the evidence and controversies must be regarded from beyond merely the radiology perspective.
- Turf issues affect policy, practice, and patients.
- Contributing/connected factors include control of patients, self-referral, and overuse.
- Turf issues impact quality of care and patient safety.
- Proposed strategies for radiology confronting turf issues center on accreditation of facilities; standardization of training; formulation of a structured, clinically rich subspecialty curriculum; multispecialty collaboration in the development of practice guidelines; expansion of radiology research; and endorsement of legislation against self-referral and for certificate of need.
- Radiology’s continued relevance depends on a patient-centered focus and maintaining clinical excellence, including specialty and subspecialty expertise.
“Nothing in life is to be feared, it is only to be understood.” –Marie Curie
Introduction
Beyond mere politics, turf issues in medicine affect policy, practice, and, most importantly, patients. This article explores the issue of turf from several perspectives. The discourse begins by defining the issue and scope of the problem, followed by a brief history of turf delineation. Consideration of contributing and connected factors further frames this discussion, which then takes into account some of the consequences of turf battles and their impact on related topics. Finally, the article turns to proposed strategies for successfully confronting the questions, if not overcoming the problems, raised by these issues. To better inform the deliberation and strengthen the credibility of any conclusions, the evidence and controversies must be regarded from beyond merely the radiology perspective.
Turf delineation and scope of the problem
A 2002 Web site survey of radiologists identified turf battles as the second largest problem threatening the field, after manpower shortage ( http://www.auntminnie.com ). Radiologists’ claim to turf is predicated on the fact that they are the only physicians whose training includes 4 to 6 years of education dedicated to imaging science, technology, and safety; imaging protocols; image interpretation with clinical correlation; and performance of image-guided procedures. Conversant in the language of all imaging modalities, radiologists are well positioned to designate the appropriate examination for the clinical question, and tailor the protocol for the study.
Debates over turf issues may extend beyond the chair and faculty of a competing specialty to include hospital and medical school administrations, boards of trustees, insurance carriers, and legislators. By expansion, the political debate shifts from the professional arena to include stakeholders well beyond it.
Historical perspective
A brief review of the history of turf delineation is important from the standpoint of understanding the issue at hand, in addition to gaining insight into how to deal with it. Coronary angiography was developed by radiology physicians Judkins and Amplatz in the 1960s and was performed by radiologists through the 1970s, but almost none is performed by radiologists today. Although coronary angiography represents the first “lost art” suffered by radiology, it is noteworthy that the initial Resource-Based Relative Value Scale (RBRVS) instituted for Medicare in 1992 allowed payment for interpretations by cardiology and radiology when both were involved in providing the service. At that time, the Current Procedural Terminology (CPT) description of the service was vague enough that two distinct interpretations could be performed and billed. Medicare then developed component coding that separated the “doing” of the procedure from “supervising and interpreting” the procedure, allowing for only a single interpretation of the angiogram. Cardiologists could bill one or both parts, but if cardiology did both, this prevented the radiologist from submitting a billable interpretation. In essence, radiology’s interpretation, and therefore the radiologist, became unnecessary to the process. Furthermore, a current focus of the Centers for Medicare and Medicaid Services (CMS) is to reduce component coding (or in some cases abolish it altogether) and “bundle” the procedural and interpretative portions of certain examinations. Therefore, when two separate specialties perform the separate components, the history of lost coronary angiography could easily be repeated.
Today, turf battles exist throughout the scope of radiology’s practice, in ultrasound (including echocardiography, obstetrics, prostate, vascular, and emergency department studies); skeletal and chest radiography; bone densitometry and cardiac nuclear studies; urinary, musculoskeletal, and vascular interventions; and neurointerventions. In some areas of the country, diagnostic neuroradiology’s high relative value unit procedures are also being claimed by neurologists, neurosurgeons, and orthopedic surgeons. Neuroimaging fellowships developed by neurologists and accredited through non–Accreditation Council for Graduate Medical Education (ACGME) societies have been established in many locales.
These battles also exist outside the scope of radiology practice, such as between orthopedic spine surgeons and neurosurgeons over spine surgery; gastroenterologists and colorectal surgeons over colonoscopy; and dermatology and surgeons over minor plastic procedures.
The fact that physicians work within the economic confines of a “zero sum game” necessarily creates competition and friction among specialties. This constraint derives historically from the Balanced Budget Act of 1997 and, 15 years later, continues to fuel turf battles as one of the major contributing factors.
Contributing and connected factors
Control of patients is not the purview of radiologists. In addition, most radiologists lack clinical training and expertise in the medical subspecialties encroaching on their turf. A clear disconnect often exists between the radiologist and the patient, and the radiology report is at best a surrogate for interaction with clinical colleagues. These factors undermine the radiologist’s position in deliberations over turf management ( Box 1 ). They also help explain why the radiologist is often overlooked: an invisible link in the chain that begins with a symptom, leads to diagnosis of disease, and continues through treatment and follow-up care toward control, if not cure.
Box 1
- Clinical “control” of patients
- Self-referral by nonradiology physicians
- Auto-referral by radiologists
- Training requirements of nonradiology specialty boards
Self-referral promotes turf battles and is also a major contributor to overuse. In a 2003 report to Congress, the Medicare Payment Advisory Commission (MedPAC) reviewed growth in Medicare services from 1999 to 2002 in four categories of service: evaluation and management, medical tests, procedures, and imaging. Imaging was at the forefront of growth, advancing at a rate more than twice that for procedures. Research by Hillman and colleagues showed that nonradiologist physicians who self-referred imaging used 1.7 to 7.7 times as much imaging as nonradiologist physicians who referred imaging to radiologists for the same clinical conditions. Similar research was reported by the U.S. General Accounting Office (GAO), clearly unbiased, and certainly not reported from a radiologist’s perspective. The GAO compared imaging use rates according to modality for nonradiologist physicians with in-office imaging equipment versus rates for nonradiologist physicians who referred the imaging to radiologists. For 19.4 million office visits generating 3.5 million imaging studies, they found the rates to be 1.95 to 5.13 times higher in the self-referred group. In response to these data, nonradiologist physicians offered rationales for self-referred imaging. Temple argued that the nonradiologist imager can better integrate the clinical data, and further suggested that self-referred outpatient imaging may be less expensive than referral to a hospital radiology department. Burris and Mroczek noted that if requesting physicians had to refer studies rather than perform them, they might not take the time and effort to do so. Grajower stated that patients expect, or even insist, that their own physician perform the imaging. Inconvenience to patients having to take additional time off work and inconvenience to physicians having to wait for results to begin treatment have also been cited in arguments rationalizing self-referral.
Maintaining objectivity, it is only fair to acknowledge that the self-referral finger also has been pointed at radiologists. The term auto-referral has been used to describe this practice, which was studied in a systematic review of 545 consecutive CT scans of the abdomen that tracked recommendations made for additional imaging. Although these recommendations were made in 19% of cases, they were acted on in only 30% of those, or 6% of the entire group. This figure for auto-referral is considerably less than those reported for self-referral by nonradiologists performing imaging on their own equipment. Auto-referral also differs from self-referral in the context of viewing the radiologist as a consultant. In fact, initial assessments by clinical consultants often are not definitive, and further workup, including additional testing, is recommended. Similarly, when called on to consult through imaging, a radiologist is equally justified in recommending further studies to facilitate diagnosis.
Additional contributing factors to the battle for turf stem from the fact that certain nonradiology subspecialty board examinations require training in various diagnostic and interventional aspects of radiology. This exposure opens the door to encroachment at an early stage of practice, during training. Similarly, allied research, which is common in radiology, offers the opportunity for clinical colleagues to acquaint themselves with advanced imaging techniques on a level that surpasses mere clinical utility/interest. In some cases, primary research within a clinical subspecialty may be ahead of the trajectory that radiology is taking toward the same goal. Finally, as outcomes in health care become increasingly important, clinicians not only adopt imaging metrics but are also seeking to help define them.
Consequences and impact of turf battles
Deeply intertwined in the discussion of turf, the issues of self-referral and overuse raise concern over quality of care and, ultimately, patient safety ( Box 2 ). The accuracy of self-referred interpretations has been questioned, both within and outside the radiology literature. One study compared the readings of 60 chest radiographs with proven diagnoses by three separate panels: the first comprising board-certified radiologists, the second radiology residents, and the third nonradiologist physicians. Receiver operating characteristic curve analysis showed a statistically significant difference among the panels, with the nonradiologist physicians performing least well. Notably, the radiology residents on the second panel had a mean length of training of 2.4 years. A separate study from an academic medical center used four physician panels: the first included faculty radiologists, the second radiology residents, the third emergency medicine faculty, and the fourth emergency medicine residents. They interpreted a film set of 120 radiographs of the chest, abdomen, and skeletal system, approximately half of which had clinically significant findings. The study reported an overall accuracy of 80% for radiology faculty, 71% for radiology residents, 59% for emergency medicine faculty, and 57% for emergency medicine residents. Because plain films are considered less complex than CT, MRI, or sonography, the differences in each of these studies probably would have been greater had the more complex examinations been included. A review of 555 head CT studies compared interpretations by emergency department physicians with those by radiologists. Misinterpretation by emergency department physicians occurred in 24% of cases, and included missed abnormalities such as infarcts, masses, cerebral edema, parenchymal hemorrhages, contusions, and subarachnoid hemorrhages.
The publication of the Institute of Medicine’s report To Err is Human: Building a Safer Health System focused attention on the reduction of medical errors and the improvement of patient safety. The evidence that radiologists interpret imaging studies more accurately than nonradiologist physicians should be factored into the debate over who should perform the interpretation. This finding is increasingly important as health care reform efforts attempt to replace volume with value, with a focus on quality and safety measures.
Box 2
- Overuse of imaging (by nonradiologist imagers)
- Image quality and accuracy of image interpretation (by nonradiologist imagers)
- Health care cost increase (from repeat examinations)
Poor quality of services accounts for approximately 30% of health care costs in the United States, to the tune of approximately $390 billion per year. Inadequate image quality leads to repeat examinations, which inevitably result in increased imaging costs. Repeat examinations also inconvenience patients, and may expose them to additional radiation. A quality audit of in-office radiology services conducted by Pennsylvania Blue Shield across various specialties showed that radiology and orthopedic providers had relatively low rates of unacceptable image quality, at 12% to 13%, compared with 41% to 82% for internal medicine, pulmonary medicine, podiatry, and chiropractic providers. Image quality is just one aspect of quality care. Blue Cross Blue Shield of Massachusetts addressed additional factors through a site inspection study of more than a thousand outpatient imaging facilities. The additional components of quality considered by the study were staff training and qualifications, equipment specifications and performance, quality control procedures, records management and storage, and safety procedures. Among the sites, 20% failed the inspection with deficiencies that were deemed correctable. An additional 11% failed because of serious fundamental issues, and were suspended from reimbursement. Inspection pass rates were highest for radiologists, cardiologists, and mobile units, at 95%, and lowest for internists, podiatrists, and chiropractors, at less than 62%. Recent trends toward manufacturing less expensive equipment may result in lower quality. Convenience and financial incentive help sell equipment readily available to nonradiologist physicians, such as extremity-only MRI units and handheld ultrasound units. The portability of this equipment easily places it outside of radiology, and further blurs the margins of turf.
To ensure the quality and safety of equipment, the American College of Radiology (ACR) has established accreditation programs in 10 different imaging modalities. Since 1987, the ACR has accredited more than 20,000 facilities, including more than 10,000 practices. In 2008, under the Medicare Improvements for Patients and Providers Act (MIPPA), CMS mandated accreditation of certain imaging modalities to qualify for reimbursement. This CMS mandate opened the door for turf battles to extend into the accreditation arena. In 2002, the American College of Cardiology joined with other specialties, including neurology, neurosurgery, and orthopedic surgery, to form the Intersocietal Commission for the Accreditation of Magnetic Resonance Laboratories (ICAMRL) for the accreditation of facilities.
Turf battles raise another important question: how much training in diagnostic imaging is enough? Because imaging is performed outside of radiology, many nonradiology medical specialty societies have produced training standards for diagnostic imaging. From the perspective of radiology, tremendous variation exists among these standards, which are generally the product of a consensus panel and range from lenient to strict. The issue has relevance beyond turf, as MedPAC and commercial payers begin to look to these standards for reimbursement of physicians performing and reporting imaging studies. In addition, the ACR is building a program for physicians to be trained and designated as providers of medical imaging for reimbursement purposes.
Several studies have documented that board-certified radiologists and radiology residents in training outperform nonradiologist physicians in interpreting standardized image sets. Moreover, within radiology itself, the accuracy of interpretation has been shown to correlate with the level of training of the interpreting radiologist. To this end, Wechsler and colleagues compared overnight reads of emergency body CT scans by second- and third-year radiology residents and by cross-sectional imaging fellows against final attending interpretations. Discrepancies in readings were classified as major or minor. For the resident cases, the major discrepancy rate was 2.8% and the minor discrepancy rate was 10.6%; for the fellows, these rates were 0.7% and 5.3%, respectively. The conclusion was that radiologists in their fifth year of training far exceeded the interpretive performance of those with only 2 or 3 years of training. A study in ultrasound training tracked the sequential progress of 10 first-year residents at the benchmarks of their first 50, 100, 150, and 200 cases. The test consisted of performing and interpreting 10 cases, and a passing grade for each case required demonstrating 80% of required landmarks and making no clinically significant interpretation errors. After 200 cases, the mean number of clinically significant errors per case was 0.5 (range, 0.2–1.0) and the mean percentage of passed cases was 16% (range, 0%–50%). The clear conclusion was that evaluation of more than 200 cases is needed to achieve competence in the performance and interpretation of ultrasound. Based on these data, the ACR and the American Institute of Ultrasound in Medicine have both established minimum training requirements of at least 500 ultrasound examinations.
This example of cooperative effort in establishing rigorous standards should preserve the dimensions of quality and safety for patients by allowing turf access only to experienced operators. Unfortunately, in 2000, the American Medical Association House of Delegates passed policy H-230.960, placing ultrasound within the scope of practice of any “appropriately trained physician” and recommending that hospital staff credentialing be based on standards developed by the physician’s specialty. This description leaves the question of appropriate training purposefully vague, and protects neither turf nor the quality of care.
Lastly, in examining how turf battles impact training, consider studies evaluating the effect of training at higher levels of physician experience. Second readings by two subspecialty radiologists reviewing outside CT scans of 143 patients with known cancer were compared with the initial readings provided by general radiologists. In 17% of cases, the subspecialty read differed significantly from the initial read, with findings that included previously undiagnosed lymphadenopathy, pulmonary nodules, hepatic lesions, and extrahepatic masses. In a similar study conducted at four academic cancer centers, subspecialty second readings disagreed with initial reads in 41% of cases. Final diagnoses, established through surgery, biopsy, or 6-month clinical and imaging follow-up, confirmed the second subspecialty read to be correct in 92% of the discrepant cases, the initial read in 6%, and neither in 2%. Training and experience matter at all levels of training and practice, and clearly impact quality.