Quality and Safety in Medical Imaging


Bryan P. Bednarz

John R. Vetter

Ionizing radiation has been used in medicine for various diagnostic and therapeutic purposes since its discovery over a century ago. It is universally accepted that noninvasive or minimally invasive diagnostic imaging is a crucial component of health care and that the benefits from such procedures greatly outweigh any associated risks. Modern imaging technology is reducing the need for more invasive procedures, which in turn is making the practice of medicine more precise, safe, and cost-effective. However, it is well known that ionizing radiation can be viewed as a “double-edged sword”; despite all of its beneficial uses, overexposure to ionizing radiation can be detrimental to human health. As a result, strict radiation safety practices for patients and medical personnel have been paramount to the success of modern diagnostic imaging. As the use of diagnostic imaging involving ionizing radiation continues to increase rapidly, there has been an even greater emphasis on radiation safety, particularly as patients and medical personnel are living longer than ever before, leaving more time for negative health effects from ionizing radiation to manifest.

This chapter is meant to provide an overview of radiation safety principles and practices in diagnostic imaging. First, a brief overview of the history of radiation safety in diagnostic imaging is covered. Next, several definitions of important terms used in radiation safety are presented, which will be followed by a section covering the biologic effects of radiation damage in humans and a section overviewing the current usage and amount of radiation dose received from several types of diagnostic radiology scans. Finally, the remaining sections of the chapter are devoted to issues related to patient and occupational radiation safety.

A Brief History of Radiation Safety in Diagnostic Imaging
It became clear soon after the introduction of diagnostic imaging that ionizing radiation could cause serious, if not fatal, health effects when not used safely. One of the first documented cases of radiation damage to humans occurred in early 1896, when two scientists named Daniel1 and Dudley at Vanderbilt University observed skin changes and epilation following an experimental radiograph of Dudley’s head. Not surprisingly, documented health effects from early diagnostic examinations, like those inflicted on Dudley, were almost entirely related to skin damage, considering the low energy and intensity of the sources being used for diagnostic imaging. For example, in 1897, Scott2 of Cleveland reported on 69 cases of skin damage, and in 1902, Codman3 of Boston reported on 170 cases of skin damage, both attributed to X-ray examinations. Despite these reports, the use of X-rays for diagnostic purposes continued to rise in the early part of the 20th century, and several physicians and scientists began to investigate the root causes of radiation damage. Some hypothesized that indirect effects caused the damage, as did famed inventor Nikola Tesla4 in 1897, who proposed that the radiation near the X-ray tube produced both ozone and nitrous oxide, which subsequently led to skin damage. However, in the same year, Thomson5 correctly attributed the effects directly to damage caused by the X-rays themselves on human tissue.

It was also around this time that radiation protection practices were being implemented into clinical practice. In 1896, Fuchs6 was the first to provide recommendations on radiation exposure, advising operators to make the exposure as short as possible, to stand at least a foot from the X-ray tube, and to coat the skin with petroleum jelly, leaving an extra layer on the most exposed area. This in fact covered the three basic tenets of radiation protection (i.e., time, distance, and shielding), all within a year of the discovery of the X-ray. In 1897, Walsh7 described the reduction of acute effects in radiation workers using lead aprons. In 1901, Pfahler8 introduced a shielding apparatus for radiographic examinations by placing a thin ring-shaped sheet of aluminum around the cathode tube, and as a result introduced the concept of collimation. Several years later, Pfahler9 was also the first to recommend the use of film for personnel monitoring.

Although the first decade of use of X-rays for diagnostic examinations led to a better understanding of early effects such as skin damage and to important safety improvements in these procedures, much less was known about late effects that could be caused by radiation exposure. In fact, late effects, most notably cancers, were not diagnosed until 10 to 30 years after exposure, particularly in radiologists. While multiple reports on radiation-induced cancer in animals were published in the early part of the 20th century, it was not until 1902 that Frieben10 reported on a cancer in a patient that was believed to have developed following chronic ulceration. In 1911, Hesse11 collected histories of 94 cases of tumors induced in individuals by radiation, 54 of which were among physicians or technicians. The same year, Krausse12 reported on the deaths of 54 radiologists from occupational exposure and further described 126 cases in 1930. Three years later, Feygin13 tabulated 104 cases of cancer caused by irradiation, and in 1922, Ledoux-Lebard14 estimated that 100 radiologists had died from chronic radiation exposure.

By the early 1920s, it was clear that guidelines and standards needed to be established that aimed to protect patients and hospital staff from unsafe levels of radiation exposure. This effort was spearheaded by a group of medical professionals in England who established the X-ray and Radium Protection Committee and released the first recommendations on radiation safety practices in 1921.15 A similar safety committee was established in 1920 by the American Roentgen Ray Society in the United States, whose recommendations published in 1923 were closely modeled after those produced in England.15

Because of the infancy of the field, early radiation safety recommendations were heuristic in nature, but these committees recognized that a standard unit of measurement for radiation was needed. In fact, the first International Congress of Radiology in London in 1925 was almost entirely devoted to discussions of international units and standards for X-ray work.15 At this meeting, the International X-ray Unit Committee was established with the primary objective to propose a unit for radiation measurements as applied to medicine. This committee eventually evolved into the International Commission on Radiation Units and Measurements (ICRU). Motivated by the excitement surrounding standards at the International Congress of Radiology, the Radiological Society of North America (RSNA) established the Standardization Committee to investigate and propose a standard unit of measurement for X-rays.16 The Committee published its first series of recommendations in March 1926. Concurrently, the Standardization Committee also sent multiple influential letters to scientists and politicians that highlighted the need for the establishment of an X-ray unit within the National Bureau of Standards (NBS) to develop a standard unit for X-ray measurements.16 This effort led to the formation of the X-ray Measurement and Protection Unit within the NBS in 1927.16 Note that the NBS is now known as the National Institute of Standards and Technology (NIST). By the second International Congress of Radiology in Stockholm in 1928, the X-ray Measurement and Protection Unit officially proposed the adoption of the roentgen unit as a measurement of electrostatic charge formed in air by X-rays.16

International efforts on the establishment of a radiation protection committee were also underway. Although interest in the formation of an international radiation protection committee was discussed at the first International Congress of Radiology, a formal committee was not formed until the second International Congress of Radiology and was called the International X-ray and Radium Protection Committee, which eventually became the International Commission on Radiological Protection (ICRP).15 The International X-ray and Radium Protection Committee also recommended that each represented country develop a coordinated program of radiation protection, which led to the formation of the U.S. Advisory Committee on X-ray and Radium Protection. This committee is now known as the National Council on Radiation Protection and Measurements (NCRP). Along with the ICRP, the NCRP develops guidelines to protect individuals and the public from excessive radiation exposure. These two bodies, alongside several national and international professional organizations, have maintained a commitment to ensure the safe use of radiation in medicine in the United States and abroad.

Radiation Quantities and Units
There are two important nonstochastic quantities that are used to describe the impact of radiation on a medium. These quantities are (1) the exposure (X) and (2) the absorbed dose (D). The exposure is the amount of ionization that is produced in air from photons whereas dose represents the energy imparted to a medium by all kinds of radiation, but ultimately delivered by charged particles. The exposure is defined as
X = dq/dm
where dq is the total charge of one sign produced in air when all electrons liberated by photons in air of mass dm deposit all of their energy in the air. The classical unit of exposure is the Roentgen (R), which is equivalent to the production of 2.58 × 10-4 C kg-1 in dry air.
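The unit conversion above can be sketched in a few lines of Python; the 2 R exposure used in the example is an illustrative value, not from the text.

```python
# Conversion implied by the definition above: an exposure of 1 R
# corresponds to 2.58e-4 C of charge (of one sign) liberated per
# kilogram of dry air.
R_TO_C_PER_KG = 2.58e-4

def exposure_to_si(x_roentgen):
    """Convert an exposure from roentgen (R) to SI units (C/kg)."""
    return x_roentgen * R_TO_C_PER_KG

# A hypothetical 2 R exposure:
print(exposure_to_si(2.0))  # 0.000516
```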

From a radiation safety standpoint, it is most convenient to focus on absorbed dose. By definition, the absorbed dose is equal to
D = dε/dm
where ε is the expectation value of the energy imparted in a finite volume V at a point P. The expectation value is appropriate, given that energy imparted by charged particles is fundamentally a stochastic process governed by laws of probability and subject to statistical uncertainty. A reasonable estimate of the mean of this process is only realized once enough events accumulate in V. The SI unit for dose is the Gray (Gy), where
1 Gy = 1 J kg-1
Another quantity, the equivalent dose, accounts for differences in radiation quality among types of radiation. Since only photons, which have a radiation weighting factor of unity, are used in diagnostic imaging, the absorbed dose can be assumed to equal the equivalent dose, which has units of sievert (Sv). Given that equivalent dose corresponds to the energy deposited in tissue, it is often the preferred quantity over exposure in diagnostic radiology.

Each tissue or organ in the human body responds differently to ionizing radiation. For the same absorbed dose, the probability of inducing a stochastic effect in one organ will be different from that in another. To account for these differences, tissue-weighting factors have been developed by the ICRP and NCRP. The product of the equivalent dose and the tissue-weighting factor gives a quantity that correlates to the overall detriment to the body from damage to the organ or tissue being irradiated. The detriment includes both mortality and morbidity risks associated with cancer and severe genetic effects. The sum of the tissue-weighting factors equals unity. The total risk for all stochastic effects in an irradiated individual is known as the effective dose, which is defined as
E = ΣT wT HT, where wT is the tissue-weighting factor for tissue or organ T and HT is the equivalent dose to that tissue
The unit of effective dose is the sievert (Sv). It is important to remember that the concept of effective dose was designed for
radiation protection purposes. It reflects the total radiation detriment from an exposure averaged over all ages and both sexes. Effective dose is calculated in a reference computational phantom that is not representative of a single patient given that the phantom is androgynous and of an age representing the average age of a working adult. It should be noted that recent recommendations have called for a modified effective dose calculation procedure that uses sex-specific phantoms. Effective dose should never be used to predict risk to an individual patient, but only to compare potential risk between different exposures to ionizing radiation, such as computed tomography (CT) examinations. For example, effective dose can be used to compare relative risks, averaged over the population from different proposed CT protocols or scanners.
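As an illustration of the weighted sum defining effective dose, the following Python sketch uses the ICRP Publication 103 tissue-weighting factors; the organ equivalent doses in the example are hypothetical values chosen for illustration.

```python
# ICRP Publication 103 tissue-weighting factors (dimensionless).
ICRP103_WT = {
    "gonads": 0.08,
    "red_bone_marrow": 0.12, "colon": 0.12, "lung": 0.12,
    "stomach": 0.12, "breast": 0.12, "remainder": 0.12,
    "bladder": 0.04, "esophagus": 0.04, "liver": 0.04, "thyroid": 0.04,
    "bone_surface": 0.01, "brain": 0.01, "salivary_glands": 0.01, "skin": 0.01,
}

# The tissue-weighting factors must sum to unity.
assert abs(sum(ICRP103_WT.values()) - 1.0) < 1e-9

def effective_dose(equivalent_doses_mSv):
    """Effective dose (mSv): sum of wT * HT over the irradiated tissues."""
    return sum(ICRP103_WT[t] * h for t, h in equivalent_doses_mSv.items())

# Hypothetical organ equivalent doses (mSv) from a chest examination:
print(round(effective_dose({"lung": 20.0, "breast": 18.0, "thyroid": 5.0}), 2))  # 4.76
```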

Given the large volume of CT scans performed each year and associated concerns about absorbed dose to the patient, various metrics have been developed to characterize dose from CT scans. The most fundamental radiation dose metric used for CT imaging is the Computed Tomography Dose Index (CTDI).17,18 The CTDIvol parameter is the one most commonly displayed on CT scanners. The CTDIvol is derived from measurements of the CTDI100, which is defined as
CTDI100 = (1/nT) × ∫ D(z) dz, with the integral taken over z from −50 mm to +50 mm
where n is the number of slices acquired during the scan, T is the width of each slice, and D(z) is the dose profile resulting from a single axial rotation, measured with a 10-cm-long ionization chamber. The chamber is placed either at the center or at the periphery of a polymethyl methacrylate dose phantom. Typically, the measurements are made in units of air kerma (K), where K (mGy) = 8.73 × X (R). There are two standard dose phantoms used to acquire the CTDI100: one has a 16-cm diameter and the other has a 32-cm diameter. Both of these phantoms have a length of 15 cm. Such measurements may be used to provide an indication of the average dose delivered over a single slice. The CTDIvol value is given as
CTDIvol = CTDIw/pitch
where the pitch is defined as the ratio of the table feed (in mm) per 360° gantry rotation to the nominal beam width (nT), and CTDIw is given by
CTDIw = (1/3) CTDI100,center + (2/3) CTDI100,periphery
Several variables impact CTDIvol, including tube voltage, tube current, gantry rotation time, and pitch.
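The CTDIw weighting and the pitch correction described above can be sketched as follows; the phantom measurements, table feed, and beam width in the example are hypothetical.

```python
def ctdi_w(center_mGy, periphery_mGy):
    """Weighted CTDI (mGy): one-third of the center CTDI100 measurement
    plus two-thirds of the peripheral CTDI100 measurement."""
    return center_mGy / 3.0 + 2.0 * periphery_mGy / 3.0

def ctdi_vol(center_mGy, periphery_mGy, table_feed_mm, nominal_beam_width_mm):
    """Volume CTDI (mGy) = CTDIw / pitch, where pitch is the table feed
    per 360-degree gantry rotation divided by the nominal beam width nT."""
    pitch = table_feed_mm / nominal_beam_width_mm
    return ctdi_w(center_mGy, periphery_mGy) / pitch

# Hypothetical measurements in the 32-cm body phantom: 10 mGy at the
# center, 13 mGy at the periphery, 40 mm table feed, 40 mm beam width
# (i.e., pitch = 1.0).
print(round(ctdi_vol(10.0, 13.0, 40.0, 40.0), 2))  # 12.0
```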

Another parameter that is often used to characterize dose from a CT scan is called the dose length product (DLP). The DLP is simply defined as
DLP = CTDIvol × scan length
where the scan length is the product of the total number of scans and the scan width. Given that the intention of the DLP is to provide information about the total exposure over the entire volume of the scan, an approximation of the effective dose from the scan can be made using a DLP to effective dose conversion factor. The effective dose is given as
E = k × DLP
where k is the conversion factor (typically in units of mSv × mGy-1 × cm-1) that varies depending on the region of the body being scanned. The conversion factor k is derived from Monte Carlo calculations in a computational anthropomorphic phantom.
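Putting the last two relations together gives a minimal sketch of a DLP-based effective dose estimate. The conversion factor of 0.014 mSv per mGy·cm used below is a commonly tabulated adult-chest value, and the scan parameters are hypothetical; neither comes from the text.

```python
def dose_length_product(ctdi_vol_mGy, scan_length_cm):
    """DLP (mGy*cm) = CTDIvol * scan length."""
    return ctdi_vol_mGy * scan_length_cm

def effective_dose_mSv(dlp_mGy_cm, k_mSv_per_mGy_cm):
    """Approximate effective dose E = k * DLP for a given body region."""
    return k_mSv_per_mGy_cm * dlp_mGy_cm

# Hypothetical chest scan: CTDIvol = 12 mGy over a 30 cm scan length,
# with k ~ 0.014 mSv/(mGy*cm) for an adult chest.
dlp = dose_length_product(12.0, 30.0)            # 360.0 mGy*cm
print(round(effective_dose_mSv(dlp, 0.014), 2))  # 5.04
```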

Biologic Effects of Ionizing Radiation
Biologic effects from ionizing radiation are generally classified as stochastic or deterministic. Stochastic effects are injuries that result from damage to one or only a few cells; they include hereditary effects and cancer. Deterministic effects result from damage to a large collection of cells, leading to injury of tissues or of entire organs and systems in the body; their incidence and severity therefore increase as a function of dose once a certain threshold for the effect has been reached. Tissue reactions classified as deterministic effects include skin burns, hair loss, loss of thyroid function, and cataracts.

Cancer
The most studied radiation-induced stochastic effect is cancer. Most of the data on radiation-induced cancer risks come from Japanese atomic bomb survivors through the Life Span Study (LSS), although other exposed populations have been studied as well, including patients receiving medical treatments, occupationally exposed groups, and environmentally exposed groups. Given these data, a clear linear relationship has been established between cancer induction and absorbed dose at high doses. Exceptions to this relationship have been found for leukemia and nonmelanoma skin cancer in atomic bomb survivor data and for bone cancer in radium dial painters. It is also well known that damage to a single cell or a small number of cells can result in the induction of cancer even at very low doses. However, the exact relationship between absorbed dose and cancer induction in humans at the low doses associated with diagnostic imaging procedures has been the subject of intense debate. For radiation protection purposes, it is generally assumed that a linear no-threshold (LNT) relationship exists between dose and effect, although evidence for a variety of other dose-effect relationships exists, as illustrated in Figure 2.1. If the LNT model holds for low doses, a general rule of thumb of a fatal cancer risk of 5% per sievert of effective dose for a working adult has been proposed. For more detailed risk calculation methods, one should refer to the Biological Effects of Ionizing Radiation (BEIR) VII report.20
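Under the LNT assumption, the 5% per sievert rule of thumb translates into a simple nominal-risk estimate. This is a population-averaged figure, never a prediction for an individual patient, and the 10 mSv examination in the example is hypothetical.

```python
def nominal_fatal_cancer_risk(effective_dose_mSv, risk_per_Sv=0.05):
    """Nominal fatal cancer risk under the LNT rule of thumb of about
    5% per sievert of effective dose for a working adult."""
    return (effective_dose_mSv / 1000.0) * risk_per_Sv

# A hypothetical 10 mSv examination: nominal risk of about 1 in 2000.
print(round(nominal_fatal_cancer_risk(10.0), 6))  # 0.0005
```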

FIG. 2.1 • Illustration of different cancer risk versus dose relationships at low doses that have been derived from human and animal studies.19 Curve 1 is known as the linear no-threshold (LNT) model, in which the risk of cancer formation is assumed to be directly proportional to dose. Most radiation safety advisory bodies recommend the LNT model for solid cancer formation induced by low doses of ionizing radiation. Curve 2 is the linear-quadratic (LQ) model; there is significant evidence that the LQ model best represents radiation-induced leukemia risk. Several investigations in humans and animals have also demonstrated a hormesis effect, as shown in curve 3, where radiation exposures below some dose threshold have produced a positive benefit. Finally, supralinear responses, as shown in curve 4, in which hypersensitivity at low doses is expected, have also been documented.

Skin Burns

Skin reactions from ionizing radiation have been well documented, particularly from external beam radiation therapy. Factors that impact the severity of the skin reaction include total dose, the time interval between incremental exposures (known as dose fractionation), and the size of the irradiated area on the patient. The most sensitive site on the patient is the anterior portion of the neck, followed by the flexor surfaces of the extremities, the trunk, the back, the extensor surfaces of the extremities, the back of the neck, the scalp, and the palms of the hands and soles of the feet, in that order.21 Skin reactions include damage to the epidermis, dermis, and subcutaneous tissue. Skin reactions from diagnostic imaging can be classified by severity following the NCI Skin Reaction Grade, as shown in Table 2.1.21 This classification also considers the approximate time of onset of the effects. Prompt reactions occur within 2 weeks following exposure, early reactions occur 2 to 8 weeks after exposure, midterm reactions occur 6 to 52 weeks after exposure, and long-term reactions occur more than 40 weeks after exposure.

Cataracts
Severe adverse effects to the eye from radiation were reported within years of the discovery of X-rays,22 and cataract formation was one of the earliest effects observed among atomic bomb survivors.23,24 It is now well known that the subcapsular lens epithelium, particularly where it differentiates into lens fibers, is susceptible to radiation damage. The development of radiation-induced cataracts depends on radiation dose, dose rate, and the age of the lens25 and is a known late effect of radiation exposure.26,27,28,29 While current guidelines place the threshold dose for cataract formation at 2 to 5 Gy, recent studies indicate that the dose could be less than 0.5 Gy on the basis of evidence from various exposure situations.30,31 There is also strong evidence that cataract risk is better described by an LNT model.30 While limited data are available on cataract formation resulting from diagnostic exposures, studies have indicated that very low doses can lead to cataract formation 25 years or more after exposure.32


TABLE 2.1 • NCI Skin Reaction Grades and Approximate Time to Onset of Skin Effects21

| Skin Dose Range (Gy)a | NCI Grade | Prompt (<2 wk) | Early (2-8 wk) | Midterm (6-52 wk) | Long Term (>40 wk) |
|---|---|---|---|---|---|
| 0-2 | NA | Observable effects not expected | Observable effects not expected | Observable effects not expected | Observable effects not expected |
| 2-5 | 1 | Transient erythema | Epilation | Observable effects not expected | Observable effects not expected |
| 5-10 | 1-2 | Transient erythema | Erythema, epilation | Prolonged erythema, permanent partial epilation | Possible dermal atrophy or induration |
| 10-15 | 2-3 | Transient erythema | Erythema, epilation, possible dry or moist desquamation | Prolonged erythema, permanent epilation | Telangiectasia, dermal atrophy or induration |
| >15 | 3-4 | Transient erythema; possible edema and acute ulceration | Erythema, epilation, moist desquamation | Dermal atrophy, secondary ulceration due to failure of moist desquamation to heal, dermal necrosis | Telangiectasia, dermal atrophy or induration, possible late skin breakdown, wound progression into deeper lesion |

a Acute dose to single site.
