Peer Review


Jeffrey P. Kanne





In 2000, the Institute of Medicine (IOM) published its report To Err Is Human, drawing attention to widespread errors in the practice of medicine.1 The increased regulatory and public scrutiny of physician performance that followed led to the adoption of peer review as a method of monitoring that performance. Peer review became more integrated into the practice of medicine through requirements published by The Joint Commission (TJC), the primary accreditation body for hospitals. In 2004, TJC published new guidelines mandating the collection and use of provider-specific performance data in the credentialing and recredentialing processes, encompassing clinical judgment, technical skills, communication, professionalism, and continued education and improvement.2 In 2007, TJC revised these guidelines to emphasize the alignment of provider-specific data with the six core competencies developed jointly by the Accreditation Council for Graduate Medical Education (ACGME) and the American Board of Medical Specialties (ABMS).3 These core competencies are patient care, medical knowledge, practice-based learning and improvement, interpersonal and communication skills, professionalism, and systems-based practice.

As with other medical specialties, radiology needed to develop a peer-review model that satisfied the public and regulatory pressures driving the monitoring of physician performance. The American College of Radiology’s (ACR) RADPEER program, developed as a direct response to the IOM’s To Err Is Human report and made available in 2005, became the stimulus for radiologists to engage regularly in peer review.4 RADPEER enables participating radiologists and practices to compile individual and practice peer-review data stratified by modality and practice site and to compare those data with national performance data. The ACR currently reports that more than 17,000 radiologists actively participate in the program.5 However, RADPEER, while popular among practicing radiologists, is only one component of the broader peer-review process. Other methods of peer review include double reads, consensus-oriented group review, focused practice review, practice audit, and correlation with operative and pathologic findings (Table 11.1).

The central goal of peer review in radiology is to improve patient outcomes by reducing interpretive and procedural errors. Deciding how to institute a peer-review program can be challenging because each model of peer review has its own advantages and disadvantages. Radiologists may object to this new “intrusion” into their practices because of real or perceived bias among reviewers, unclear policies, a real or perceived lack of transparency, the absence of evidence-based reference standards for many cases, additional time pressure on an already busy practice, the belief that peer review will not lead to improved patient care, and even legal concerns.


MEASURING PERFORMANCE IN RADIOLOGY

To effectively measure radiologists’ performance, metrics must be relevant to individual radiologists, practice leadership, credentialing bodies, and society as a whole. Metrics should be readily reproducible, chosen on the basis of published evidence or established standards and guidelines, and applicable to each radiologist’s respective practice.3 Furthermore, a sufficient number of data points should be obtained to ensure that the data are meaningful (Figs. 11.1, 11.2, 11.3). For example, some authors recommend that 3% to 5% of interpreted examinations undergo peer review,6 arguing that reviewing only 0.1% of a radiologist’s reports may not fully reflect his or her true performance level. However, this recommendation is arbitrary and is not supported by published scientific evidence.7
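The practical difference between these sampling rates is easy to quantify. The short sketch below is illustrative only; the annual examination volume is a hypothetical figure, not a number taken from this chapter:

```python
# Rough illustration of peer-review sample sizes at different review rates.
annual_reads = 10_000  # hypothetical yearly examination volume for one radiologist

for rate in (0.001, 0.03, 0.05):  # 0.1%, 3%, and 5% review rates
    n = round(annual_reads * rate)
    print(f"{rate:.1%} review rate -> {n} examinations per year")

# 0.1% review rate -> 10 examinations per year
# 3.0% review rate -> 300 examinations per year
# 5.0% review rate -> 500 examinations per year
```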

Diagnostic accuracy is the most appropriate and ostensibly the most important performance metric in diagnostic radiology because of its direct relationship to patient outcome.8 However, medical imaging is typically only one part of the diagnostic workup, and patient outcome is usually the result of many factors, including the natural history of disease and the patient’s response to therapy. Additionally, errors made on diagnostic imaging studies can have variable impact on patient management and ultimate outcome. For example, failing to detect a 2-cm colon carcinoma on an abdominal computed tomography (CT) scan is likely to adversely affect patient outcome, whereas inaccurately characterizing the pattern of advanced fibrosing diffuse lung disease on a chest CT scan may have little impact.









Table 11.1 RESPECTIVE ADVANTAGES AND DISADVANTAGES OF PEER-REVIEW METHODS IN RADIOLOGY

Method of Peer Review | Advantages | Disadvantages
Retrospective | Data easily accessible; can be performed on the fly | Loss of anonymity; potential for bias; significant errors may not be identified in a timely manner
Practice audit | Uses reference standards; encourages use of electronic health record and outcomes | Labor intensive; limited scope of studies that can be reviewed; validity of reference standard may be in question
Double read | Real-time peer review; potential to promptly detect clinically significant errors | Labor and time intensive; requires large number of studies to be meaningful
Structured feedback | Assesses entire radiology report; encourages focus on communication; solicits input from referring physicians | Labor intensive; may miss clinically significant findings when focus is too much on report verbiage
Focused practice review | Highly structured process; robust appeals process; directly involves departmental leadership | Labor intensive; may incite animosity from potential “punitive” nature
Comprehensive professional review | Evaluates all facets of professional practice in radiology; aligned with regulatory requirements | Labor intensive; obtaining data may be difficult; can be more challenging for smaller practices

Adherence to agreed-upon practice guidelines is another performance indicator that can be measured for each radiologist. For example, a radiology practice can agree to use the Fleischner Society’s published guidelines for the management of incidentally detected small lung nodules,9 and adherence to and appropriate use of these guidelines can then be measured for each radiologist as a component of peer review. Variability in recommendations for managing imaging findings can be confusing to referring physicians and patients.
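Measured this way, guideline adherence reduces to a simple per-radiologist proportion. The following sketch is a minimal illustration under assumed inputs; the report records and the adherence flag are hypothetical stand-ins for whatever structured data and guideline logic a practice actually maintains:

```python
from collections import defaultdict

# Hypothetical report records: (radiologist, recommendation matched guideline?)
reports = [
    ("Radiologist A", True),
    ("Radiologist A", True),
    ("Radiologist A", False),
    ("Radiologist B", True),
]

totals, adherent = defaultdict(int), defaultdict(int)
for radiologist, follows_guideline in reports:
    totals[radiologist] += 1
    adherent[radiologist] += follows_guideline  # True counts as 1

for radiologist in totals:
    rate = adherent[radiologist] / totals[radiologist]
    print(f"{radiologist}: {rate:.0%} adherence "
          f"({adherent[radiologist]}/{totals[radiologist]})")
```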

Radiologist performance can also be evaluated by soliciting feedback from colleagues, trainees, staff, and patients. Specific attributes such as communication skills, professionalism, and “good citizenship” within a department can be assessed. A summary of personal evaluations, along with a review of patient or staff complaints or commendations, can be included as a component of a professional practice evaluation alongside clinical skills.

For a peer-review process to be effective, it should be fair, transparent, consistent, and objective. Conclusions should be defensible, and diverse opinions should be included. Peer-review activities should be timely, result in useful action, and provide feedback through auditing.6,10






FIG. 11.1 • A sample report from a peer-review reporting system showing the type of examination, the score given, and the reviewer’s comments.


METHODS OF PEER REVIEW IN RADIOLOGY


Retrospective Peer Review

Peer review in radiology is primarily conducted in a retrospective manner because digital archiving of diagnostic imaging tests and accompanying interpretative reports makes retrospective peer review easy to perform and available to nearly all practicing radiologists. In retrospective peer review, the radiologist interpreting the current study reviews a previously interpreted comparison examination of the same patient with its accompanying report and assigns a score that typically reflects agreement or various levels of disagreement. Some scoring systems also flag disagreements or subdivide scores into those that are clinically relevant and those that are not (Table 11.2).4 When a disagreement in interpretation occurs, the reviewer can provide feedback to the original interpreting radiologist through various processes. Examinations can be identified for peer review randomly or during review for multidisciplinary conferences or consultation.

Although relatively easy to implement and perform, the retrospective case-review model of peer review has received the most scrutiny for its inherent limitations.6,11,12 One significant weakness is the actual or potential lack of randomness. For example, a radiologist may opt to review less complex or apparently normal studies to minimize the time and energy spent on peer review. One approach to improving randomness is to assign for review the first case encountered with a relevant comparison on any given day. Additionally, specific numbers of peer reviews by modality or body part can be required. Optimally, integrating software applications into the radiologist’s workflow can reduce the time demands on radiologists as well as improve random sampling. Cases can be randomly selected for review from the picture archiving and communication system (PACS), radiology information system (RIS), or voice recognition (VR) software database and assigned to an appropriate radiologist for review. The progress of reviews can be tracked, and results can be recorded in a dedicated peer-review database (protected by local peer-review statutes, if applicable). Finally, radiologists whose cases were reviewed can be notified of the results of those reviews.
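A minimal sketch of this selection-and-assignment step is shown below, assuming a case list exported from the PACS or RIS. The record fields, the worklist itself, and the rule that a radiologist never reviews his or her own study are illustrative assumptions, not a description of any particular product:

```python
import random

# Hypothetical worklist entries exported from a PACS/RIS database.
cases = [
    {"accession": "A1001", "modality": "CT", "interpreted_by": "Dr. Smith"},
    {"accession": "A1002", "modality": "MR", "interpreted_by": "Dr. Jones"},
    {"accession": "A1003", "modality": "CT", "interpreted_by": "Dr. Patel"},
    {"accession": "A1004", "modality": "US", "interpreted_by": "Dr. Smith"},
]
radiologists = ["Dr. Smith", "Dr. Jones", "Dr. Patel"]

def assign_peer_reviews(cases, radiologists, n):
    """Randomly sample n cases and assign each to a reviewer other
    than the radiologist who originally interpreted the study."""
    assignments = []
    for case in random.sample(cases, k=min(n, len(cases))):
        eligible = [r for r in radiologists if r != case["interpreted_by"]]
        assignments.append({"accession": case["accession"],
                            "reviewer": random.choice(eligible),
                            "score": None})  # score recorded after the review
    return assignments

for a in assign_peer_reviews(cases, radiologists, n=2):
    print(f'{a["accession"]} -> review by {a["reviewer"]}')
```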
