Error is an inevitable consequence of our human condition. Error also occurs as a result of the level of uncertainty that exists in radiologic practice. Although error cannot be entirely eliminated from the practice of radiology, the overall incidence of error can be reduced and the consequences of our errors can be managed. In this chapter we discuss a variety of strategies for understanding, detecting, reducing, and managing the consequences of error. The role of standardization, the use of checklists and forcing functions, and human factors engineering are discussed, as well as established methods for increasing systems resiliency, improving interprovider communication, and managing so-called sentinel events.
Keywords: Checklists, communication, ergonomics, errors, forcing functions, human factors, resiliency, sentinel events, standardization, trigger functions
It is indeed human to err, yet unrealistic expectations of perfection persist throughout the practice of medicine. For radiology, the challenge has grown substantially in recent years: rapidly advancing imaging technologies, increasing study volumes with massively more images per study, and poor overall communication of needed clinical information from referring physicians all contribute to an increased risk of radiologist error. If errors cannot be eliminated, then we must develop systems and procedures to reduce and manage them. If radiologists and their professional societies do not meet this challenge, governing bodies such as The Joint Commission (TJC) will attempt to externally enforce the development of comprehensive error detection systems and procedures for active error prevention and remediation strategies.
In this chapter we discuss the various human factors involved in error occurrence in radiology, along with recommended strategies to address these factors, with the aim of improving error detection and prevention and of finding the most effective ways to manage errors when they (inevitably) occur. We also focus on communication in radiology and on strategies by which improving the effectiveness of radiologic communication can reduce error. The concept of a sentinel event response aimed at preventing potential future harm is presented. We also examine the types of occurrences that constitute an error in radiology and the methods that can be applied continuously and rigorously to detect and prevent radiologic error, including the role of (now mandated) peer review.
Human Factors Engineering
In a healthcare delivery system there are numerous daily interactions among workers, equipment, and environment, which ultimately lead to positive or negative outcomes in patient care. Human factors engineering is a relatively new but growing discipline in multiple fields, including healthcare. This discipline focuses on complex systems analysis and understanding of how a complex system, including people and machines, works in actual practice. The aim is to then design or optimize equipment and human-machine interfaces, with the human users’ strengths and limitations in mind, to increase safety and minimize the risk of error in complex environments. This is done by dissecting complex activities or processes, breaking them down into smaller component tasks, and then assessing the individual demands of the operator at each stage, including things such as physical demands, skill demands, mental workload, team dynamics, and environmental adaptations (e.g., lighting, noise, distractors, ergonomics). This discipline attempts to pair human strengths and limitations in the performance of each task with the core design of the equipment and physical environment in which the task is performed.
Usability testing refers to use of equipment and systems by trained users under real-world conditions to identify unintended flaws in new technologies before they reach the user market. An example is a recent study that found an unexpected increase in patient mortality in a pediatric intensive care unit following introduction of a computerized provider order entry (CPOE) system. On close evaluation, it was found that the time demands of a cumbersome order entry process were drawing clinicians away from the bedside, leading them to overlook signs of distress in their patients. Usability testing was later applied with simulated clinical scenarios and showed severe limitations of the installed CPOE system.
Usability testing, as it applies to radiology, means testing the interaction between equipment and user to find the best-suited equipment, technology, or system to maximize ease and intuitiveness of use and thereby optimize radiologist performance. Usability in this sense is defined as the effectiveness, efficiency, and satisfaction with which a radiologist can interact with a system. In other words, functionality determines usability.
More simply put, usability testing answers the question: “How user-friendly is this system in achieving its desired purpose?”
Usability testing also impacts equipment design. Usability problems can arise not only from poor design of equipment but also from poor instructions for its use. One published example (see the Suggested Readings) is an implantable inferior vena cava (IVC) filter system that can be inserted by a femoral or brachial approach. Depending on the approach, the filter unit must be attached to the introducer sheath in a particular orientation for correct placement; incorrect attachment of the filter (e.g., using the brachial attachment for a femoral approach) can result in an incorrect orientation of the filter within the patient's IVC and thus potential harm from dislodgement or nonfunction. The authors point to the need for very clear documentation of how the system must be used and further suggest that future usability problems may be detected through a shared online database of users, provided the database is reviewed frequently by those users.
Workarounds are perhaps the most common class of methods used to accomplish an activity when existing methods are cumbersome or not working well. They range from single instances to situations in which practitioners consistently bypass established policies and safety procedures, increasing the risk of errors and patient harm. Although the goal is generally to get work done more efficiently, workarounds can be dangerous, and they illustrate the need for good system design. Flawed, poorly designed systems that force workers to spend excessive time or effort to complete a task when all safety steps are followed precisely lead workers to cut corners and find alternatives of varying effectiveness and safety. The identification of workarounds within a system can signal to leaders that a faulty process is in place.
Workarounds are often created spontaneously and used with good intentions by skilled staff to promote patient comfort or speed up medical interventions in an emergency situation. When cumbersome systems exist, healthcare workers may quickly determine that it is not always practical (or safe) in an emergency situation to follow all steps and comply with the prescribed process; these workers will generally find ways to circumvent the system to accomplish tasks more efficiently. There is substantial risk, however, of unintended downstream consequences: no matter how carefully done, and even with the best intentions, the use of workarounds promotes error and compromises patient safety, partly by overriding safety features that are built into systems. This is especially true in emergent situations with high tension when human attention to detail may be suboptimal.
In most instances, workarounds do not result in patient harm, and the increased efficiency they provide creates a type of reward for the workaround's creator; the positive feedback of enhanced efficiency or ease reinforces the workaround's use, leading to complacency, which only amplifies its latent risks. An effective workaround can also prevent the underlying system problems from being fully recognized and addressed.
Workarounds, however, can have positive effects. They can be viewed as a warning sign of an underlying system failure that requires attention and resolution. Analyzing a workaround process may lead to definitive solutions for more global issues within a poorly functioning or cumbersome system. It can identify problems with existing technologies or uncover unnecessarily complex processes. For example, it was discovered at a Veterans Affairs hospital that staff were forced to use informal patient identification processes because barcodes on patient armbands were not water resistant and easily washed off. This example demonstrates how a significant patient identification error could easily occur as the result of working around an ineffective, unusable system. Addressing the underlying system failure, particularly by applying usability testing to the armbands, would both resolve the failure and obviate the incentives for a potentially dangerous workaround.
In other words, a workaround process is most often an answer to the question, “How can I make this cumbersome, time-consuming, or complicated system more user-friendly so that I can do my job more efficiently?” Usability testing can be employed wherever the use of workarounds is identified. For example, when a new piece of equipment or a new system is introduced into the work environment, there perhaps ought to be a beta testing period in which the new system is used by a limited number of staff to identify potential usability nuisances and provide feedback to the institution before the equipment or system is rolled out to the larger group.
A forcing function is an aspect of system design that prevents an undesirable function from being performed or allows its performance only after another function is performed first, such as when a prefunction is needed to make the main intended function safe. A simple example of a forcing function in commonly used technology is how a microwave oven is designed not to function while the door is open. This forces the user to close the microwave door first. By applying principles of human factors engineering in error management and prevention, the design of forcing functions represents an attempt to anticipate the types of error that may occur and incorporate a function or failsafe directly into the design of products and processes that may prevent the occurrence of that error. (One must be careful not to create a function that only serves to force a workaround, however.)
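The microwave interlock described above can be sketched in software terms. The following is a minimal, hypothetical illustration (the class and method names are invented for this example, not taken from any real device firmware); the point is that the unsafe action is made structurally impossible rather than merely discouraged.

```python
class Microwave:
    """Toy model of a forcing function: the prefunction (closing the door)
    must be completed before the main function (running) can execute."""

    def __init__(self):
        self.door_closed = False
        self.running = False

    def close_door(self):
        self.door_closed = True

    def open_door(self):
        self.door_closed = False
        self.running = False  # opening the door also halts operation

    def start(self):
        # The interlock: starting with the door open is not merely flagged,
        # it simply cannot happen.
        if not self.door_closed:
            raise RuntimeError("Interlock: door must be closed before starting")
        self.running = True
```

Note the contrast with a mere constraint, which might only warn the user: here the unsafe state is unreachable as long as the interlock mechanism itself is intact.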
Forcing functions are often embedded in medical equipment, from small syringes to complex magnetic resonance imaging (MRI) scanning machines. Although some make this equipment more cumbersome to use and require specialized training, this type of human factors engineering works to ensure patient safety as long as the forcing function mechanism is intact. One such example is the use of a Luer-Lok system for syringes and indwelling lines, which must be matched to catheters before an infusion is possible. Another common example is how the connectors for anesthetic and oxygen gas lines are incompatible with each other, so it is not possible to inadvertently misconnect them.
An example of a forcing function in radiology equipment is automatic exposure control (AEC) in both film-screen and digital radiography. The purpose of AEC is to limit patient radiation dose while still generating an optimal image by controlling exposure time. An AEC system uses radiation detectors in the form of ionization chambers (usually three per system), which are calibrated against phantoms and positioned in a specific orientation. AEC reduces radiation exposure by controlling the total milliampere-second (mAs) output of the x-ray tube.
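The termination logic of AEC can be illustrated with a deliberately simplified sketch, in which detector signal accumulates in 1-ms steps and the exposure ends when a calibrated target is reached. All names and numbers here are hypothetical, not vendor specifications; real systems also depend on chamber selection, kVp, and calibration details.

```python
def aec_exposure_time(dose_rate_per_ms, target_signal, backup_time_ms=500):
    """Return exposure time in ms: the tube is switched off when the
    ionization-chamber signal reaches the calibrated target, or when the
    backup timer (itself a safety limit) expires."""
    signal = 0.0
    t = 0
    while t < backup_time_ms:
        signal += dose_rate_per_ms   # signal accumulated by the AEC chambers
        t += 1
        if signal >= target_signal:  # calibrated cutoff reached
            break
    return t
```

A thicker patient attenuates more of the beam, so the dose rate at the detector is lower and the exposure runs longer; this is how AEC holds receptor exposure roughly constant while the backup timer caps the total mAs.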
Another example of a forcing function, designed specifically for procedural environments, is the preprocedural time-out. Instituting this practice forces the team to stop what they are doing and focus on the pertinent details of the case about to start: confirming the correct patient, confirming the requested procedure, verifying that laboratory studies are acceptable, and noting any possible limitations such as drug allergies. It also focuses the attention of everyone present in the room and enables anyone to speak up before the case starts if the presented information is incorrect to his or her knowledge. This universal protocol is a requirement for all hospital procedures today.
Simply put, a forcing function is similar to a constraint: a constraint makes it more difficult to do the wrong thing, whereas a forcing function, at least theoretically, makes it impossible. Employing the concept of the forcing function answers the following questions: How do I make this system impossible to mess up? What steps can I anticipate and pre-perform or take out of the operator's hands? How can I make my system's safety more operator independent?
Human factors engineering was responsible for bringing about many of the equipment and process standards that we enjoy today in various medical domains. A key concept is that equipment and processes should be standardized whenever possible to increase reliability, improve information flow, minimize cross-training needs, and prevent operator confusion. Consistency of equipment function, for example, prevents errors due to alterations from the usual or expected workflow. It also allows staff to move between sites without having to learn how to use different equipment.
Although standardization of equipment is optimal, it may not be possible in a large institution, where equipment may differ by age and manufacturer owing to economic considerations such as contracting variability over time. It is, however, always possible to strive for standardization of processes. Consistently following the same steps over time minimizes variation and has been shown to improve both efficacy and safety. Standardizing processes ensures that the same steps are followed by different staff working in a rotation or in a changing environment, and it builds resiliency into a system. More importantly, it makes variation from the norm easier to detect, which can trigger closer analysis and error prevention. The use of checklists and templates can help promote standardization in radiology.
Checklists ensure consistency of procedural steps and communication. They provide a concrete listing of required information so that all parties involved in a procedure are fully aware of the circumstances and can speak up if something seems off. They also provide organized, readily available lists of steps for emergency or high-risk situations. Some groups even advocate printing resuscitation or contrast reaction steps on cards worn on lanyards by primary staff, so this information is readily available when needed.
In surgery, institution of checklists prior to operative procedures has been shown to reduce mortality from 1.5% to 0.8% and complications from 11% to 7% (see Suggested Readings). Radiology checklists are useful in both diagnostic and procedural settings. In diagnostic radiology, TJC mandates the use of magnetic resonance (MR) and computed tomography (CT) screening questionnaires. These questionnaires are designed to detect potential safety issues before the scan so that they can be corrected, or so that scanning can be prevented when potential harm may outweigh the benefit of the scan. Potential safety issues screened for include contrast material allergies, pregnancy and breast-feeding status, renal function, medication interactions and allergies, intravenous access, and the presence of a cardiac pacemaker or aneurysm clips. In procedure-based radiology practices, such as interventional, drainage, aspiration, and injection services, checklists take the form of time-outs, with an actual pause prior to the procedure and active participation in the surgical safety checklist by all team members, which is also mandated by TJC. This checklist includes a review of patient identification, allergies, correct procedure, site of procedure, laboratory values, collection of specimens, and medications that must be discontinued or administered.
The use of report templates is a form of standardization of the radiology report that has generated much controversy among radiologists in recent years but is gaining traction throughout the country. Although TJC recommends the use of templates for reporting, they are not yet mandated, except in the reporting of mammography results. Reports written using templates have been shown to have greater clarity and consistency of content, leading to improved clinician understanding. For example, in one study evaluating the introduction of templated radiology reporting for presurgical staging of pancreatic cancer, surgeons reported an increase in the presence of all information needed for surgical planning (from 69% to 98% with structured reporting, compared with an increase from 25% to 43% with unstructured reporting, which was used as a comparison standard). Currently, several radiologic societies, including the Radiological Society of North America (RSNA) and the American College of Radiology (ACR), offer sample report templates, such as those in the Breast Imaging Reporting and Data System and the Liver Imaging Reporting and Data System, in an effort to standardize the reporting process across the nation. However, each radiology practice remains free to implement its own versions of templates as appropriate for its specific practice.
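The value of a template lies partly in making omissions visible. The following minimal sketch of template-driven report assembly uses hypothetical section names, far simpler than real templates (such as those in the RSNA RadReport library); the point is that a required section left blank is flagged rather than silently lost.

```python
# Illustrative required sections; real templates vary by modality and practice.
REQUIRED_SECTIONS = ("indication", "technique", "comparison",
                     "findings", "impression")

def render_report(sections):
    """Render a structured report from a dict of section texts, flagging any
    required section left blank so the radiologist is prompted to complete it."""
    lines = []
    for name in REQUIRED_SECTIONS:
        body = sections.get(name, "").strip()
        lines.append(f"{name.upper()}: {body or '*** SECTION NOT COMPLETED ***'}")
    return "\n".join(lines)
```

A speech-recognition or reporting system built this way nudges the dictating radiologist toward completeness, which is the same mechanism by which checklist-embedded templates are thought to reduce omission errors.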
Some authors in radiology have suggested that a checklist approach to imaging interpretation could decrease error rates. A checklist function embedded in a template might well help prevent perceptual error, in which a finding is simply missed, by reminding the radiologist to look for it in order to fill in a blank in the template. A radiology checklist in this scenario might include the common diagnoses and misdiagnoses typically seen for a specific body part or condition, so that these are always checked before a radiology report is finalized.
Resilience engineering is an aspect of human factors engineering that focuses on risk management, anticipation of error, and recovery from error. Resilience, as the word suggests, is the adaptive capability of a system: its ability to modify itself and continue functioning by adjusting risk both before and after a safety event occurs.
In a standard model of risk management, an event occurs, is detected, and is analyzed, and corrective actions are then implemented to prevent further such events. In a resilient system, an impending safety error is detected before patient harm occurs; anticipation, early detection, and response to error are a dynamic process. Resilience, therefore, is an intrinsic ability of a system to adjust its functioning prior to, during, and following disturbances, so that it can sustain operations under both expected and unexpected conditions. According to Hollnagel et al., there are four essential capabilities of a resilient system:
Knowing what to do: being able to respond to regular and irregular variability and disturbances, either by adjusting the way things are done or by activating ready-made responses.
Knowing what to look for: being able to monitor what is changing, or may change in the near future, enough to require a response.
Knowing what to expect: being able to anticipate potential disruptions or changing conditions in the future.
Knowing what has happened: being able to learn from experience.
An example of a resilient system in radiology is the early detection of high radiation exposure from a CT scanner and either repairing the equipment or revising CT protocols to lower radiation exposure before any patient injury occurs.
To create resiliency in a system there must be consistent, open, and nonthreatening reporting of safety data. Barriers to reporting safety issues hinder an organization's ability to achieve resilience. Recognized common barriers include an individual's hesitance to admit failure, fear of punishment or retribution, failure to recognize the importance of reporting anomalies that do not result in actual patient harm, and the existence of authority gradients. It is therefore essential to create a safe reporting culture in which the person who identifies an error or a potential source of error feels comfortable and empowered to voice concerns without fear of retaliation. All high-reliability organizations feature such an open safety culture, and all display resiliency.
With the ever-increasing volume of radiologic studies performed and read, the increasing demands for production of relative value units (RVUs) by radiologists, and pressure for rapid report turnaround times, physical stresses on radiologists become a significant factor. The science of ergonomics has therefore recently come to play a major role in the workplace and workflow design for radiologists.
One of the most obvious ergonomic features of a radiology practice is the use of low levels of ambient light in the reading room to optimize visual comfort and performance. Both office-style lighting and a complete lack of light, except for that from the monitor, have been shown to reduce diagnostic efficacy. An inappropriate balance between ambient room light and monitor luminance contributes to radiologist fatigue and decreases both efficiency and accuracy. Kruskal et al. have recommended that reading room light be kept to low levels and that monitor luminance be kept at 25 foot-lamberts or more. In addition, to address fatigue and eye strain and for optimal visualization, a combination of indirect overhead lighting and local task lighting should ideally be used, moveable partitions should be installed between workstations, and walls should be dark to avoid reflection. Beyond screen resolution and lighting, eye strain is also related to screen flicker and glare, working distance and angles, and the decreased blink rate that comes with observing a monitor. Symptoms include irritation and eye pain, blurry or double vision, headache, fatigue, burnout, increased perceptual error, and decreased reaction time. To help reduce eye strain, one should optimize the lighting in the work environment as discussed, sit at an appropriate distance and angle from the monitor, read for less than 7 hours a day, and take at least one break per hour.
In addition to eye strain, musculoskeletal ergonomics must be considered for optimal reading performance. Musculoskeletal complaints include general fatigue, neck and back pain, carpal tunnel syndrome, and cubital tunnel syndrome. Carpal tunnel syndrome occurs more often among radiologists than among nonradiology physicians and workers in other computer-intensive fields, such as office staff. It is associated with dorsiflexion of the wrist and ulnar deviation of the hand. Even the constant spinning of the mouse wheel, common in radiology reading rooms, has been shown to produce increased rates of carpal tunnel syndrome.
Communication is an essential and integral part of the practice of radiology because interpretation of radiologic findings is incomplete without a structured way of presenting the findings to the requesting parties. According to the ACR, effective communication should support the ordering clinician in providing optimal patient care, be timely, and minimize the risk of error. Communication in radiology takes a variety of forms; standard communication involves the creation and delivery of the written radiology report.
The radiology report is the final product of the department of radiology and a primary method of communication with the requesting party and the patient. The analysis of many lawsuits filed against radiologists, some successful, led the ACR in 1991 to publish the first set of Standard Guidelines for Communication. Multiple revisions since (the most recent in 2014) attempt to establish minimum standards for the content and structure of the radiology report. The ACR suggests that radiologists' written reports include all of the following: demographics, including facility and location, patient identifiers, examination parameters (type of exam, date, time, and date and time of dictation), and the name of the requesting party; relevant clinical information; the procedures and materials used (contrast, other medications, radiation dose); findings; and potential limitations (such as the sensitivity and specificity of the examination). The ACR also mandates that the report answer the clinical question and provide comparison to prior studies when available. Finally, the radiology report should contain a final impression: a clinical interpretation of the findings and their potential implications, a diagnosis or differential diagnosis, and suggestions for further management. This section should also report any pertinent adverse reactions that occurred.
Many methods have been suggested to improve the clarity of reporting. The most prominent has been the suggestion for institutionalizing the use of reporting templates. This practice promotes standardization of reports with respect to content, allows residents in training to know what content is important and not to be overlooked, helps practicing radiologists not to mistakenly omit important sections of the report, and allows clinicians who become familiar with the report structure to quickly identify the pertinent sections of the report to find the information they need.
As noted earlier, structured reporting is not yet the standard way of reporting even complex studies in most radiology departments and practices and remains a controversial topic among radiologists. Nevertheless, most clinicians and radiologists report a preference for structured reporting. One study, by Bosmans et al., found that 84.5% (592 of 701) of clinicians and 65.7% (88 of 134) of radiologists would opt for itemized reporting. Moreover, two-thirds of the radiologists and referring clinicians would favor the use of a standardized lexicon for radiologic reporting. Since the report of the COVER and ROVER surveys in 2011, there has been more and more movement toward some form of standardized reporting, at least within institutions, and residents are being trained to standardize report organization and lexicon in each critical section.
The American Journal of Roentgenology (AJR) regularly addresses improvements in report clarity in the column on language at the end of each issue. For example, the March 2016 issue suggests clearly stating “no change” in the impression of the report if that is what the findings show. Other statements, such as “no significant change,” “no interval change,” or “stable,” can mean different things to clinicians of different specialties and can baffle patients, who increasingly read radiology reports via online patient portals. The lexicon suggested by AJR is to report, “There is no change given differences in imaging technique.” Other authors have suggested putting a high premium on proofreading and editing every report before publishing it, so that slips such as “The liver is normal with metastatic disease,” where the author may have meant “without metastatic disease,” do not occur. There have also been warnings regarding errors typical of voice-recognition transcription, especially with prefixes that such systems can misunderstand; for example, “aseptic” may be mistranscribed as “a septic.” Such a simple, small error, if it leads to a clinical misunderstanding, can have severe consequences for patients.
Critical Results Reporting
Routine imaging findings are communicated through the usual channels established by the institution, such as a written radiology report published in the patient's chart, delivered through the imaging interface, or printed and inserted into the medical record manually. In emergent or atypical situations, however, the radiologist should make an effort to deliver imaging findings in a more expedited manner. Several lawsuits have been won against radiologists when worrisome findings on an imaging report were not communicated to the clinician in such an atypical situation and adverse patient events ensued. In such cases, radiologists were held responsible for patient harm caused by the resulting delay in treatment. Courts have consistently opined that the burden of delivering critical results rests with the interpreting radiologist.
The ACR, therefore, has made recommendations for critical results reporting, noting that such results should be reported directly to the ordering party, or to the patient if the ordering clinician cannot be reached. Such communication should take place by phone or in person; these methods confirm receipt of the communication and allow the radiologist to verify that the message is understood. Such confirmation of receipt is an essential element of critical results reporting.
According to the ACR guidelines, there are several situations that require this sort of nonroutine communication of critical results directly to the clinician in addition to the standard written radiology report. These situations include:
findings that require immediate intervention;
findings that are discrepant with a preceding interpretation of the same examination, where failure to act may adversely affect patient health; and
findings that the interpreting physician reasonably believes may be seriously adverse to the patient's health and may not require immediate attention but, if not acted on, may worsen over time and possibly result in an adverse patient outcome.
Other authors further assign levels of importance based on the urgency of communicating findings that require intervention. In one system, a level 1 finding requires immediate, urgent intervention (e.g., tension pneumothorax, leaking aortic aneurysm, cerebral hemorrhage, pulmonary embolism). A level 2 finding requires urgent intervention (i.e., within 2–3 days) to avert patient morbidity or mortality (e.g., intraabdominal abscess or impending pathologic fracture). A level 3 finding is new or unexpected and can lead to patient harm if not acted upon but is not immediately life threatening (e.g., a new lung nodule or a solid renal mass).
TJC first published its National Patient Safety Goals (NPSGs) in 2002. These are renewed yearly, and compliance with them is now a requirement for hospitals to maintain TJC accreditation. One NPSG requires that communication of critical results occur in a timely manner, with acknowledgment of the communication by the receiving party. However, TJC does not mandate any specific method of communicating these results or any specific timeline for doing so; implementation is therefore determined by the particular institution or practice, although telephone contact is currently the most commonly used method. Once the communication has occurred, proper documentation of the communication event is necessary, usually inserted directly into the written radiology report. Four elements must generally be included: name, time, date, and write-down-read-back. For the name identifier, use of the first and last name is recommended, but a first name and job description are also acceptable (e.g., Nurse Betty of the South ICU night team); a first name and general descriptor alone (Nurse Betty) is not sufficient. It is best if the receiving party writes down the communicated result, although this may often be impractical. Read-back of the critical finding by the recipient is perhaps the best way to determine whether the result was appropriately understood. Some even suggest that the communicating party then ask the receiver for a follow-up plan, particularly when speaking with the treating clinician. For example, one might state, “Findings are highly suspicious for ovarian torsion. Is there a plan to take this patient to the operating room?” The response to this type of follow-up question may give the communicating party a better perspective on whether the implications of the communicated critical result were understood by the receiving party.
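The four documentation elements lend themselves to a simple automated check. The following hedged sketch validates that a critical-results communication note carries each element; the field names are illustrative, not taken from any actual RIS or reporting system.

```python
# The four documentation elements: name, date, time, and write-down-read-back.
REQUIRED_ELEMENTS = ("recipient_name", "date", "time", "read_back_confirmed")

def missing_documentation(note):
    """Return the documentation elements that are absent or falsy in a
    critical-results communication note (represented here as a plain dict)."""
    return [element for element in REQUIRED_ELEMENTS if not note.get(element)]
```

A departmental quality committee could run such a check over all critical-result notes to compute a compliance rate for periodic review.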
It is also recommended that compliance with critical results communication policies be checked periodically. One can initially settle for a compliance rate of about 90% but should strive to raise it steadily toward 100%. Data on critical results compliance should be collected, analyzed, and reported by departmental quality committees.
Some radiology report communications, most notably mammography reports, are heavily regulated. For example, under the Mammography Quality Standards Act (MQSA) of 1994, the US Food and Drug Administration requires communication of mammography findings both to the referring provider and directly to the patient. In addition to providing the report to the ordering clinician through the usual channels, a report in lay terms must be provided to the patient within 30 days of the examination, regardless of the findings. In the case of critical mammography findings (Breast Imaging Reporting and Data System [BI-RADS] category 4 and 5 lesions), immediate and direct communication to the ordering clinician is required; the patient still receives a report summary within 30 days of the examination. If the patient is self-referred, the breast imaging center must refer a patient with abnormal findings to a healthcare provider able and willing to provide further care.
Effective communication between patients and caregivers has been shown to decrease medical costs, increase patient satisfaction, and improve health management outcomes; conversely, at least 30% of patient dissatisfaction with care is related to incomplete, abrasive, or ineffective communication. Although most radiology communication takes place in the form of a written radiology report, it is sometimes necessary to communicate urgent findings verbally, by phone or in person, as discussed earlier, with proper documentation of that communication then appearing in the radiology report. There is also a place for informal verbal communication in the form of a curbside consult or wet read, which may occur in clinical interdisciplinary conferences, when radiologists provide a preliminary verbal interpretation before an official report is issued, informally interpret outside or prior studies, or provide on-the-spot interpretations at the time of imaging (such as in the trauma bay during level 1 trauma cases). These verbal communications often occur without the benefit of later written documentation and therefore carry an inherent risk that the information shared will be misunderstood or misremembered. Radiologists who provide such informal communications are encouraged to contemporaneously document, from their own perspective, what was discussed and recommended. A system for formal interpretation of outside studies, as opposed to curbside consults, is also encouraged to minimize the misunderstandings that can easily arise from informal, unscheduled interactions.
Obtaining a patient’s informed consent for a procedure is a specialized type of communication. The process is familiar to most physicians and is widespread in both clinical practice and research, where it documents patients’ or subjects’ understanding of and agreement to a proposed intervention or study. In radiology, consent is typically obtained before administration of intravenous contrast, before interventional procedures, and before administration of sedation. In recent years, however, there has been ongoing discussion among radiologists about whether consent should also be obtained for medical radiation exposure in noninterventional imaging.
The concept of informed consent refers to the actual discussion between healthcare provider and patient. This discussion should provide enough information for the patient to understand the implications of the decision and to make the best, most well-informed choice. In practice, however, this is not always feasible, given study volumes and the time a sufficiently comprehensive discussion of each examination would require. A joint proposal published by the RSNA together with members of the International Atomic Energy Agency and select European imaging specialists recommended that an informed-consent discussion be provided to patients for studies likely to expose them to 1 mSv of ionizing radiation or more. The discussion should include the risks of radiation exposure, a risk-benefit analysis for performing the examination, and alternative imaging methods that do not use ionizing radiation, such as ultrasound or MRI. The proposal limited itself to higher-dose studies because the sheer volume of imaging studies with far less radiation exposure and far less overall risk would overwhelm the system if such a discussion were mandated for every minor study. The authors suggested that radiologists would ideally take an active role in initiating these discussions with patients rather than await the imposition of governmental regulations.
Penn State Radiology’s “Failsafe” Program
In recent years, there has been a substantial increase in the use of imaging in the emergency department (ED) setting. As a result, radiologists detect a large number of incidental findings that are unrelated to the patient’s acute ED visit but that require follow-up and evaluation. Typically, such follow-up is not provided via the ED, and patients often fail to seek follow-up for these findings (if they become aware of them at all) after discharge from the ED. Our radiology practice typically provides a written report detailing such findings only to the requesting physician, in this case an emergency physician, and not to the patient’s primary care physician (PCP) of record, who often cannot be identified at the time of the ED visit.
This common scenario creates a unique communication challenge for radiologists and is a significant potential source of communication error. Patients may be lulled into complacency, falsely believing that no problem exists because they assume the ED would have addressed whatever their workup revealed. At the Penn State Milton S. Hershey Medical Center, we have developed our own stopgap communication method in recognition of this problem: we notify patients directly by mail (and, increasingly, by phone) that their ED studies yielded findings requiring nonurgent follow-up (urgent findings are communicated directly to the ED and handled during the patient’s visit).
Our Failsafe Letter (Fig. 15.1) was developed in concert with a team of stakeholders from the Departments of Radiology, Emergency Medicine, and Family and Community Medicine, the office of General Counsel for Penn State, and the Chief Quality and Safety Officer. Each letter is personally signed by a radiologist, is identified as a communication from the radiology department, and notifies the patient that a finding was made on a radiology study. It also states that the patient’s own physician of record, not the radiologist or the emergency physician, is the provider best positioned to decide with the patient what the appropriate follow-up for the finding ought to be. Each letter is addressed to a specific patient, is mailed to his or her address of record, and bears an original radiologist’s signature; the letter content is not otherwise customized, and the letter informs the patient neither what the finding of concern is nor what should be done about it. The Failsafe Program has been enthusiastically received by our clinical colleagues in Hershey because it increases safety without compromising their autonomy. The impact of this relatively new program is currently being evaluated.