Abstract
Errors are innate in every field of medicine despite best efforts. In diagnostic radiology, the causes of error are multifactorial and include errors in cognition and perception, flaws related to technology and image acquisition, and errors related to input from technologists and ordering clinicians. Approximately 70% of radiologic errors are perceptual, and about 30% are cognitive. Careful research dating back to the mid-20th century has documented a baseline error rate (mostly perceptual errors) that represents errors not amenable to remediation. System errors also contribute, most importantly in the area of communication. Classification systems exist to help categorize identified errors and, importantly, to guide the changes needed to prevent future error. Models such as the just culture model, which recognizes that human error should not be punished, can be used to implement system-based change.
Keywords
Cognitive errors, errors, error classification, just culture model, perceptual errors, radiology
Introduction
Errors are innate in every field of medicine and persist despite the best efforts of medical professionals to be (or become) flawless. In fact, the prevalence of errors by radiologists (i.e., the radiology error rate) has been remarkably constant in repeated studies dating to the 1940s. Multiple factors contribute to radiologists' errors, including knowledge gaps and perceptual errors; flaws inherent in emerging technologies for image acquisition; reliance on the input of other professionals, especially technologists, for image acquisition; and reliance on referring clinicians to request the most suitable study and to provide appropriate historical guidance. The underlying cause(s) of the most common radiologist error—simply failing to perceive a finding that later seems obvious in retrospect—remains unknown. In the face of this harsh reality, it is crucial to create systems not only to reduce and prevent error but also to detect errors rapidly when they inevitably occur, so that appropriate remediation and prevention of harm can be accomplished. Approaching the subject of radiologist error requires a classification of errors.
Epidemiology of Error
Leo Henry Garland was a radiologist in the mid-20th century who pioneered the study of radiologic error, publishing a landmark article on diagnostic error in the journal Radiology in 1949. Garland discovered a 30% miss rate when experienced radiologists interpreted positive chest radiographs and found that 2% of negative chest radiographs were overinterpreted. He also found that radiologists disagreed with their own prior interpretations in 20% of cases on a second reading. His work was not warmly embraced by his colleagues at the time, who were at best hesitant to acknowledge the significance of his results. Since Garland's time, research on radiologists' error rates has repeatedly yielded results consistent with his initial reporting of error frequency, without any appreciable improvement despite advances in radiologic technology and clinical knowledge. This suggests that the problem is a very basic one, not readily amenable to technological or educational intervention.
In multiple peer-reviewed publications (see Siegle et al.), the most realistic estimates of radiologist error rates are obtained when radiologists' performance is measured using case samples typical of actual practice, in which the mix of studies interpreted includes a blend of normal and abnormal cases with a representative disease prevalence. Studies of this type have generally revealed an error rate of 3.5% to 4%. If the case mix is enriched to essentially 100% positive studies—a very artificial situation—the error rate rises to approximately 30%. Because most radiologists interpret a mix of studies with a low overall prevalence of positive findings, and because most interpret well in excess of 100 studies in a typical day, this translates to approximately three or four errors per radiologist per day, on average. Fortunately, only a small fraction of these errors result in patient harm, but it is worrisome that most go undetected: radiologists receive useful, prompt feedback on only an extremely small fraction of their actual errors.
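The back-of-the-envelope arithmetic above can be sketched as a minimal calculation. This is purely illustrative: the 3.5% to 4% error rate and the 100-studies-per-day volume are the figures cited in the text, and actual per-radiologist volumes and rates vary.

```python
# Illustrative estimate of daily interpretive errors per radiologist.
# Inputs are the approximate figures from the text, not measured data.

def expected_daily_errors(error_rate: float, studies_per_day: int) -> float:
    """Expected number of interpretive errors per radiologist per day."""
    return error_rate * studies_per_day

# Error rate of 3.5% to 4% on a realistic case mix, ~100 studies/day:
low = expected_daily_errors(0.035, 100)   # about 3.5 errors/day
high = expected_daily_errors(0.040, 100)  # about 4.0 errors/day
print(f"Expected errors per day: {low:.1f} to {high:.1f}")
```

Even at the low end, this simple product shows why a busy practice can expect several errors per radiologist every working day, which is the premise for the error-detection systems discussed later.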
In September 2015, the Institute of Medicine (IOM) released a new report on the prevalence and range of diagnostic error, entitled Improving Diagnosis in Health Care. This monograph is part of a series that includes Crossing the Quality Chasm, published in 2001, and To Err Is Human: Building a Safer Health System, published in 2000. It once again called attention to the alarming magnitude of the problem, namely the very large number of diagnostic errors in medicine. The report defined diagnostic error as “the failure to (a) establish an accurate and timely explanation of the patient’s health problem(s) or (b) communicate that explanation to the patient.” Based on that definition, the committee concluded that diagnostic error is so common that most people will experience at least one diagnostic error in their lifetime, sometimes with devastating consequences, and called for urgent change to address this challenge. Postmortem studies spanning many decades have estimated that diagnostic errors contribute to approximately 10% of patient deaths, and reviews of malpractice claims data have shown that diagnostic errors are the most common cause of successful malpractice lawsuits.
Of course, radiologists’ errors account for only a fraction of overall diagnostic error because imaging is only one of many contributors to the diagnostic process. Other contributing factors include faulty information gathering by clinicians, insufficient consideration of differential diagnostic possibilities, inadequate performance of the physical examination or misinterpretation of physical examination findings, laboratory and pathology errors, failure to act appropriately on the results of monitoring or testing once they are reported, inadequate communication between caregivers, and so on. But because the specialty of diagnostic radiology exists primarily to reduce uncertainty in establishing a diagnosis, the report carries significant implications for radiologists.
Types of Errors
Various authors have proposed classification systems for radiologists’ diagnostic errors. The very comprehensive system proposed by Kim and Mansfield (discussed later in this chapter) includes 12 categories of error, although some error types were shown in their study to be much more prevalent than others. In this chapter we limit our discussion to diagnostic errors and do not consider the related issues of treatment, procedural, or medication errors, which are also common causes of patient harm. Often overlooked are errors of overdiagnosis, in which radiologists contribute to placing a pathologic disease diagnosis on a normal, healthy patient, leading physicians to provide inappropriate (i.e., not indicated) care, such as when a normal appendix is removed because a radiologist mistakes a normal structure for an inflamed appendix or when laparoscopy is performed for a presumed ectopic pregnancy because of a misinterpreted ultrasound artifact. Other types of errors germane to radiologic practice (but not limited to radiology) include communication failures, equipment malfunction, and other system failures.
It is important to realize that, contrary to the prevailing culture within the medical profession, the vast majority of medical errors are not simply due to individuals being reckless, ignorant, incompetent, sloppy, or negligent. To the contrary, research has repeatedly demonstrated that nearly all medical errors are instead due to preventable failures in the systems, processes, and conditions in which caring, competent professionals practice.
Definitions and Classifications of Error
In classifying errors, a few definitions are helpful.
An active error is one that can be attributed to specific human failures, equipment malfunction, or external failures. James Reason describes active errors as those that occur at the human interface with the larger healthcare system, often referred to as the “sharp end.” They include procedural mistakes, diagnostic errors, and misinterpretations of test results. Active errors are more easily identifiable because they are the direct consequence of the actions of an identifiable individual. In contrast, latent errors (or latent conditions) are usually not attributable to a single person but rather result from failures inherent in the system design, leading to inadvertent (but often preventable) harm to a patient. These errors are said to occur at the “blunt end” of the system, and their causes are not as easily identified. The two often occur together: an active error frequently occurs in the setting of a latent condition.
An adverse event is any injury caused by medical care and not by the patient’s medical condition. Examples include pneumothorax after central venous catheter placement, anaphylaxis from penicillin, or development of deep vein thrombosis during a hospital stay. Adverse events do not imply error or fault; the occurrence is defined as secondary to some aspect of diagnosis or therapy, not to an underlying disease process.
A close call or near miss refers to a mistake that did not lead to a negative outcome for a patient, but only by virtue of luck or chance.
The Joint Commission defines a sentinel event as “an unexpected occurrence involving death or serious physical or psychological injury, or the risk thereof.” Serious injury in this context particularly refers to loss of limb or function. These events are called sentinel because they indicate that a serious systems issue may be present requiring immediate investigation and response.
Although not a specific error type, the presence of an authority gradient is a known risk factor for error in medicine. It was originally described in aviation, where copilots had difficulty communicating errors to pilots in time to prevent harm (such as an airplane crash) because of the copilot’s feeling of hierarchical inferiority. Similarly, in medicine there are many hierarchical levels within a medical team (e.g., doctor and nurse, attending and resident, doctor and pharmacist) to which this authority gradient applies. It is a significant, well-documented cause of medical error, just as it has been a cause of aviation errors leading to substantial loss of life in airline crashes.
Radiologic Error
Interpretation of imaging studies is performed by humans and thus involves all aspects of human error, especially error related to psychological and cognitive processes that are poorly understood. Perceptual errors include both failing to see a finding and failing to become sufficiently aware of it to trigger an action or diagnosis on the part of the radiologist. Some of these are believed to actually represent errors in working memory, such as when a finding is annotated on an image but never mentioned in the final report and not taken into account when forming the differential diagnosis. Such findings are often readily apparent in retrospect to the person who made the perceptual error. Cognitive error, in contrast, arises when an abnormality is correctly identified visually, but the diagnostic implication or importance of the finding is not correctly understood and thus not relayed effectively in the report. Perceptual errors are much more prevalent than cognitive errors: approximately 70% of radiologic errors are believed to be perceptual, and about 30% are believed to be cognitive.
It is important to understand that a radiologist making such an error does not imply that the radiologist is being negligent and should be punished; careful research from the time of Garland onward has documented a baseline error rate (mostly perceptual errors), which represents errors that are not amenable to remediation by extra education or by having a solid work ethic. Knowing this basic fact, it is crucial to create a strong institutional infrastructure to identify radiologic errors as expeditiously as possible and to find ways to prevent errors and to detect them soon enough to avoid patient harm.
System errors are those to which individuals are predisposed by flaws in the systems or processes in which they work. Most communication errors fall into this category, including failure to communicate when communication is needed and simple miscommunication, in which the receiver does not register the sender’s intended message. In the teaching hospital setting, an example of a communication error is an attending or resident radiologist failing to effectively communicate a change made to a preliminary radiology report to the appropriate clinician(s) in time for the new information to meaningfully change patient management.