The passage of radiation, such as x-rays and gamma rays, through a given material leads to ionizations and excitations that can be used to quantify the amount of energy deposited. This property allows measurement of the intensity of a radiation beam or of small amounts of radionuclides, including those within the patient. The appropriate choice of detection approach depends on the purpose. In some cases, the efficient detection of minute amounts of the radionuclide is essential, whereas in other cases the accurate determination of the energy or location of the radiation deposited is most important. A variety of approaches to radiation detection are used, including those that allow for in vivo imaging of radiopharmaceuticals.
Radiation detection
Consider the model of a basic radiation detector, as shown in Fig. 2.1. The detector acts as a transducer that converts radiation energy to electronic charge. Applying a voltage across the detector yields a measurable electronic current. Radiation detectors typically operate in either of two modes, current mode or pulse mode. Detectors that operate in current mode measure the average current generated within the detector over some characteristic integration time. This average current is typically proportional to the exposure rate to which the detector is subjected or the amount of radioactivity within the range of the detector. In pulse mode, each individual detection is processed with respect to the peak current (or pulse height) for that event. This pulse height is proportional to the energy deposited in the detection event. The histogram of pulse heights is referred to as the pulse-height spectrum. It is also referred to as the energy spectrum because it plots a histogram of the energy deposited within the detector.
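As a simple illustration of pulse-mode processing, the following Python sketch builds a pulse-height (energy) spectrum as a histogram of per-event pulse heights; the pulse heights and the energy calibration factor are simulated, hypothetical values.

```python
import numpy as np

# Hypothetical list of per-event pulse heights (arbitrary units) from a
# detector operating in pulse mode.
rng = np.random.default_rng(0)
pulse_heights = rng.normal(loc=100.0, scale=4.2, size=10_000)  # simulated photopeak events

# Assumed energy calibration: a pulse height of 100 corresponds to 140 keV.
kev_per_unit = 140.0 / 100.0
energies_kev = pulse_heights * kev_per_unit

# The pulse-height (energy) spectrum is a histogram of the deposited energies.
counts, bin_edges = np.histogram(energies_kev, bins=256, range=(0, 200))
peak_bin = counts.argmax()
print(f"Photopeak near {bin_edges[peak_bin]:.1f} keV with {counts[peak_bin]} counts")
```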
Certain properties of radiation detectors characterize their operation. Some are applicable to all detectors, whereas others are used for detectors that operate in pulse mode. These characterizations are not only useful for describing the operation but can also give insight into the benefits and limitations of the particular detector.
The detection efficiency depends on several factors, including the intrinsic and extrinsic efficiency of the detector. The intrinsic efficiency is defined as the fraction of the incident radiation particles that interact with the detector. It depends on the type and energy of the radiation and the material and thickness of the detector. For photons, the intrinsic efficiency, $D_I$, is given to first order by:
$D_I = 1 - e^{-\mu x}$

where $\mu$ is the linear attenuation coefficient of the detector material at the energy of the incident photons and $x$ is the thickness of the detector.
The extrinsic efficiency is the fraction of photons or particles emitted from the source that strike the detector. It depends on the size and shape of the detector and the distance of the source from the detector. If the detector is a considerable distance from the source (i.e., a distance that is >5 times the size of the detector), the extrinsic efficiency, $D_E$, is given by:
$D_E = A / (4 \pi d^2)$

where $A$ is the area of the detector face presented to the source and $d$ is the distance from the source to the detector.
The total detection efficiency, $D_T$, is the product of the intrinsic and extrinsic efficiencies:

$D_T = D_I \times D_E$
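A minimal numerical sketch of these relationships is given below, assuming illustrative values for the attenuation coefficient, detector thickness, detector area, and source distance; none of these numbers come from the text.

```python
import math

# Illustrative (assumed) values: a NaI crystal viewed by a distant point source.
mu = 2.4          # cm^-1, assumed linear attenuation coefficient at the photon energy
x = 0.95          # cm, detector thickness
area = 10.0       # cm^2, detector face area presented to the source
d = 30.0          # cm, source-to-detector distance (>5x the detector size)

intrinsic = 1.0 - math.exp(-mu * x)          # D_I = 1 - e^(-mu x)
extrinsic = area / (4.0 * math.pi * d ** 2)  # D_E = A / (4 pi d^2)
total = intrinsic * extrinsic                # D_T = D_I x D_E

print(f"D_I = {intrinsic:.3f}, D_E = {extrinsic:.5f}, D_T = {total:.5f}")
```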
In pulse mode, the pulse height is proportional to the energy deposited within the detector. However, the uncertainty in the energy estimation, referred to as the energy resolution, depends on the type of detector used and the energy of the incident radiation. For a photon radiation source of a particular energy, the feature associated with that energy is referred to as the photopeak, as shown in Fig. 2.2. The width of the photopeak, characterized by its full width at half maximum (FWHM) divided by the photon energy and expressed as a percentage, is used as a measure of the energy resolution of the detector.
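As a hedged illustration, the sketch below estimates the FWHM and the percent energy resolution from a simulated photopeak; a 140-keV peak with roughly 10% resolution, typical of NaI, is assumed.

```python
import numpy as np

# Simulate a photopeak: 140-keV photons measured with ~10% FWHM (NaI-like assumption).
rng = np.random.default_rng(1)
sigma = 0.10 * 140.0 / 2.355            # FWHM = 2.355 * sigma for a Gaussian
measured = rng.normal(140.0, sigma, 50_000)

counts, edges = np.histogram(measured, bins=200, range=(100, 180))
centers = 0.5 * (edges[:-1] + edges[1:])

peak = counts.max()
above_half = centers[counts >= peak / 2.0]   # bins at or above half the peak height
fwhm = above_half.max() - above_half.min()
resolution = 100.0 * fwhm / 140.0            # FWHM normalized by photon energy, in percent

print(f"FWHM = {fwhm:.1f} keV, energy resolution = {resolution:.1f}%")
```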
When the detector is subjected to a radiation beam of low intensity, the count rate is proportional to the beam intensity. However, the amount of time the detector takes to process an event limits the maximum possible count rate. Two models describe the count rate limitations: nonparalyzable and paralyzable. In the nonparalyzable model, each event takes a fixed amount of time to process, referred to as the dead time, and the count rate saturates at the reciprocal of the dead time. For example, if the dead time is 4 μs, the count rate will saturate at 250,000 counts per second. In the paralyzable model, events arriving during the dead time extend it, so the observed count rate not only saturates but can "paralyze"; that is, it falls off at very high true count rates. Gamma cameras, for example, are paralyzable systems.
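The standard dead-time expressions, which the text does not give explicitly, are m = n/(1 + nτ) for a nonparalyzable detector and m = n·e^(−nτ) for a paralyzable one, where n is the true event rate, m the observed count rate, and τ the dead time. The sketch below evaluates both models for the 4-μs example above.

```python
import math

def observed_nonparalyzable(true_rate, dead_time):
    """Observed count rate for the nonparalyzable model: m = n / (1 + n*tau)."""
    return true_rate / (1.0 + true_rate * dead_time)

def observed_paralyzable(true_rate, dead_time):
    """Observed count rate for the paralyzable model: m = n * exp(-n*tau)."""
    return true_rate * math.exp(-true_rate * dead_time)

tau = 4e-6  # 4-microsecond dead time, as in the example above
for n in (1e4, 1e5, 2.5e5, 1e6, 5e6):
    print(f"true {n:9.0f} cps -> nonparalyzable {observed_nonparalyzable(n, tau):9.0f} cps, "
          f"paralyzable {observed_paralyzable(n, tau):9.0f} cps")
```

The nonparalyzable rate approaches 250,000 counts per second, whereas the paralyzable rate peaks and then decreases as the true rate continues to rise.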
The three basic types of radiation detectors used in nuclear medicine are gas detectors, scintillators, and semiconductors. These three operate on different principles and are typically used for different purposes.
Gas detectors are used every day in nuclear medicine for assaying the amount of radiopharmaceutical to be administered and for surveying packages and work areas for contamination. However, because of the low density of the gas, even when it is under pressure, the sensitivity of gas detectors is not high enough for clinical counting and imaging applications.
A gas radiation detector is filled with a volume of gas that acts as the sensitive material of the detector. In some cases it is air, and in others it is an inert gas such as argon or xenon, depending on the particular detector. Electrodes are located at either end of the sensitive volume. The detector circuit also contains a variable voltage supply and a current detector. As radiation passes through the sensitive volume, it causes ionization in the gas. If a voltage is applied across the volume, the resulting ions (electrons and positive ions) will start to drift, causing a measurable current in the circuit. The current will last until all of the charge that was liberated in the event is collected at the electrodes. The resulting burst of current is referred to as a pulse and is associated with a particular detection event. If only the average current is measured, this device operates in current mode. If the individual events are analyzed, the device is operating in pulse mode.
Fig. 2.3 shows the relationship between the charge collected in the gas detector and the voltage applied across the gas volume. With no voltage, no electric field exists within the volume to cause the ions liberated in a detection event to drift, and thus no current is present and no charge is collected. As the voltage is increased, the ions start to drift, and a current results. However, the electric field may not be sufficient to keep the electrons and positive ions from recombining, and thus not all of the originally liberated ions are collected. This portion of Fig. 2.3 is referred to as the recombination region. As the voltage is increased further, a level is reached at which the strength of the electric field is sufficient for the collection of all of the liberated ions (no recombination). This level is referred to as the saturation voltage, and the resulting plateau in Fig. 2.3 is the ionization chamber region. When operating in this region, the amount of charge collected is proportional to the amount of ionization caused in the detector and thereby to the energy deposited within the detector. Ionization detectors or chambers typically operate in current mode and are the detectors of choice for determining the radiation beam intensity level at a particular location. They can directly measure this intensity level as either exposure in roentgens (R) or air kerma in rad. Dose calibrators and the ionization meters used to monitor the output of an x-ray device or the exposure level from a patient who has received a radiopharmaceutical are examples of ionization (or ion) chambers used in nuclear medicine.
If the voltage is increased further, the drifting electrons within the device can attain sufficient energy to cause further ionizations, leading to a cascade event. This can cause substantially more ionization than with an ionization chamber. The total ionization is proportional to the amount of ionization initially liberated; therefore these devices are referred to as proportional counters or chambers. Proportional counters, which usually operate in pulse mode, are not typically used in nuclear medicine. If the voltage is increased still further, the drifting electrons attain enough energy to cause extensive excitations and ionizations throughout the gas. The excitations can lead to the emission of ultraviolet radiation, which also can generate further ionizations and excitations. This leads to a terminal event in which the accumulated ionization begins to shield the electric field and the cascade finally stops. This is referred to as the Geiger-Müller process. In the Geiger-Müller device, every event leads to the same magnitude of response, irrespective of the energy or the type of incident radiation. Thus the Geiger-Müller meter does not directly measure exposure, although it can be calibrated in a selected energy range to milliroentgens per hour (mR/hr). However, the estimate of exposure rate in other energy ranges may not be accurate. The Geiger-Müller survey meter is excellent at detecting small levels of radioactive contamination and thus is often used to survey radiopharmaceutical packages that are delivered and work areas within the nuclear medicine clinic at the end of the day.
Scintillation detectors
Some crystalline materials emit a large number of light photons upon the absorption of ionizing radiation. This process is referred to as scintillation, and these materials are referred to as scintillators. As radiation interacts within the scintillator, a large number of excitations and ionizations occur. On de-excitation, the number of light photons emitted is directly proportional to the amount of energy deposited within the scintillator. In some cases, a small impurity may be added to the crystal to enhance the emission of light and minimize the absorption of light within the crystal. Several essential properties of scintillating materials can be characterized, including density, effective Z number (effective number of protons per atom), amount of light emitted per unit energy, and response time. The density and effective Z number are determining factors in the detection efficiency because they affect the linear attenuation coefficient of the scintillation material. The amount of emitted light affects both energy and, in the gamma camera, spatial resolution. Resolution is determined by the statistical variation of the collected light photons, which depends on the number of emitted photons. Finally, the response time affects the temporal resolution of the scintillator. The most common scintillation crystal used in nuclear medicine is thallium-doped sodium iodide (NaI), whereas lutetium oxyorthosilicate (LSO) and lutetium yttrium oxyorthosilicate (LYSO) are most commonly used in positron emission tomography (PET).
Once the light is emitted in a scintillation detector, it must be collected and converted to an electrical signal. The most commonly used device for this purpose is the photomultiplier tube (PMT). Light photons from the scintillator enter through the photomultiplier entrance window and strike the photocathode; a certain fraction of these (approximately 20%) lead to the emission of photoelectrons, which move toward the first dynode. For each electron reaching the first dynode, approximately a million electrons will eventually reach the anode of the photomultiplier tube. Thus the photomultiplier tube provides high gain and low noise amplification at a reasonable cost. Other solid-state light-detection approaches are now being introduced into nuclear medicine devices. In avalanche photodiodes (APDs), the impinging light photons lead to the liberation of electrons that then drift within the photodiode, yielding an electron avalanche. The gain of the APD is not as high as with the PMT (several hundred compared with about a million), but the detection efficiency is substantially higher (approximately 80%). A second solid-state approach is the silicon photomultiplier tube (SiPMT). This device consists of hundreds of very small APD channels that operate like small Geiger-Müller detectors; that is, each detection is a terminal event. The signal from the SiPMT is the number of channels that respond to a particular detection event in the scintillator. SiPMTs have moderate detection efficiency (approximately 50%) and operate at low voltages. One further advantage of APDs and SiPMTs compared with PMTs is that they can operate within a magnetic field. Thus the development of positron emission tomography/magnetic resonance (PET/MR) scanners has involved the use of either APDs or SiPMTs.
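As a back-of-the-envelope illustration of photomultiplier gain, the sketch below assumes roughly 10 dynodes, each multiplying the electron number by about 4 (both numbers are illustrative assumptions), which reproduces the approximately millionfold amplification mentioned above.

```python
# Illustrative PMT gain estimate: gain = delta ** n_dynodes
delta = 4          # assumed secondary-electron multiplication factor per dynode
n_dynodes = 10     # assumed number of dynodes
gain = delta ** n_dynodes
print(f"Approximate PMT gain: {gain:.1e}")   # ~1e6 electrons at the anode per photoelectron
```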
Solid-state technology is used to detect the light from a scintillation detector and also can be used to directly detect gamma rays. The detection of radiation within a semiconductor detector liberates a large number of electrons, resulting in high energy resolution. The lithium-drifted germanium (GeLi) semiconductor detector has approximately 1% energy resolution, compared with the approximately 10% energy resolution of a sodium iodide scintillation detector. However, thermal energy can lead to a measurable current in some semiconductor detectors such as GeLi, even in the absence of radiation, and thus these semiconductor detectors must be operated at cryogenic temperatures. On the other hand, semiconductor detectors such as cadmium telluride (CdTe) or cadmium zinc telluride (CZT) can operate at room temperature. CdTe and CZT do not have the excellent energy resolution of GeLi, but at approximately 5%, it is still significantly better than that of sodium iodide.
The pulse-height spectrum corresponding to the detection of the 140-keV gamma rays from technetium-99m (Tc-99m) is illustrated in Fig. 2.4. The photopeak corresponds to events in which the entire energy of the incident photon is absorbed within the detector. These are the events of primary interest in most counting experiments, and thus the good events are those within an energy acceptance window about the photopeak. Other events correspond to photons that scatter within the detector material and deposit only part of their energy; the deposited energy ranges from very low values for very-small-angle scatter up to a maximum for 180-degree scatter (the point in the spectrum referred to as the Compton edge). Events below the Compton edge correspond to these scattered events. In some cases, photons can undergo multiple scatters and result in events between the Compton edge and the photopeak. Photons scattered within the patient and then detected may also result in events in this energy region. Finally, the pulse-height spectrum will be blurred depending on the energy resolution of the detector. Thus in Fig. 2.4, the photopeak has approximately a 10% spread because of the energy resolution associated with NaI, rather than the narrow spike that might be expected from the emission of a monoenergetic gamma ray.
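The Compton edge can be computed from the Compton scattering formula. The sketch below does this for the 140-keV Tc-99m gamma ray and also prints a ±10% photopeak acceptance window; the window width is an assumed, commonly used value rather than one specified in the text.

```python
import math

def scattered_energy(e_kev, angle_deg):
    """Energy (keV) of a Compton-scattered photon for incident energy e_kev."""
    return e_kev / (1.0 + (e_kev / 511.0) * (1.0 - math.cos(math.radians(angle_deg))))

e0 = 140.0                                        # Tc-99m gamma-ray energy (keV)
compton_edge = e0 - scattered_energy(e0, 180.0)   # maximum energy deposited in a single scatter
window = (0.9 * e0, 1.1 * e0)                     # assumed +/-10% photopeak acceptance window

print(f"Compton edge: {compton_edge:.1f} keV")                        # ~49.6 keV
print(f"Photopeak window: {window[0]:.0f} to {window[1]:.0f} keV")
```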
Ancillary Nuclear Medicine Equipment
Besides the imaging equipment in the nuclear medicine clinic, additional ancillary equipment may be necessary from either a medical or regulatory point of view or to otherwise enhance the operation of the clinic. This equipment will be reviewed, including the quality control required for proper operation.
As previously discussed, the two basic radiation meters commonly used in the nuclear medicine clinic are the Geiger-Müller (GM) meter and the ionization chamber. Both are gas detectors, although they operate differently. With the GM meter, all detections lead to a terminal event of the same magnitude—a “click.” The device is excellent for detecting small amounts of contamination. It is routinely used to determine whether there is contamination on packages of radiopharmaceutical that are delivered to the clinic and to test working surfaces and the hands and feet of workers for contamination. GM meters often are equipped with a test source of cesium-137, with a very small amount of radioactivity, that is affixed to the side of the meter. On calibration, the probe is placed against the source, and the resulting exposure rate is recorded. The probe is tested daily using the source to ensure that the meter’s reading is the same as at the time of calibration. The GM meter should be calibrated on an annual basis.
The ionization chamber meter (ion chamber) operates in current mode and assesses the amount of ionization within an internal volume of gas (often air) and thus can directly measure exposure or air kerma rate. The ion chamber is used to evaluate the exposure rate at various locations within the clinic. For example, it could be used to measure the exposure rate in an uncontrolled area adjacent to the radiopharmaceutical hot laboratory. The ion chamber is also used to evaluate the exposure rate at a distance from a patient who has received radionuclide therapy (e.g., iodine-131 for thyroid cancer) to determine that the patient can be released without exposing the general public to unacceptable radiation levels. The ion chamber also should be annually calibrated.
The dose calibrator is an ionization chamber used to assay the amount of activity in vials and syringes. This includes the assay of individual doses before administration to patients, as required by regulation. The dose calibrator operates over a very wide range of activities, from tens of microcuries to a curie (hundreds of kBq to tens of GBq). The device is also equipped with variable settings for each radionuclide to be measured, typically with about 10 buttons for ready selection of the radionuclides commonly used in the clinic. In addition, buttons are available for user-defined radionuclide selection. Other radionuclides can be selected by entering the appropriate code for that radionuclide into the system.
The dose calibrator is used to assay the activity administered to the patient, and thus a comprehensive quality control program is necessary. Regulations specify that the dose calibrator quality control program must meet the manufacturer’s recommendations or national standards. Typically, the program comprises four basic quality control tests: geometry, accuracy, linearity, and constancy.
The geometry protocol tests that the dose calibrator provides the same reading for the same amount of activity irrespective of the volume or orientation of the sample. A reading of a certain amount of activity in a 0.5-mL volume is obtained. The volume is then increased by augmenting the sample with amounts of nonradioactive water or saline and taking additional readings. The subsequent readings should not vary from the original reading by more than 10%. The geometry test is performed during acceptance testing and after a major repair or move of the equipment to another location.
For accuracy, calibrated sources (typically cobalt-57 and cesium-137) are assayed; the resultant reading cannot vary by more than 10% from the calibrated activity decay corrected to the day of the test. The accuracy test should be performed during acceptance testing, annually thereafter, and after a major repair or move.
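A minimal sketch of the decay correction underlying the accuracy test, assuming a cesium-137 reference source with a half-life of about 30.1 years; the calibrated activity, dates, and measured reading are hypothetical.

```python
import math
from datetime import date

def decay_corrected(a0_mbq, half_life_days, calibration_date, test_date):
    """Activity on test_date, decay-corrected from the calibrated activity a0_mbq."""
    elapsed = (test_date - calibration_date).days
    return a0_mbq * math.exp(-math.log(2.0) * elapsed / half_life_days)

# Hypothetical Cs-137 reference source (half-life approximately 30.1 years).
expected = decay_corrected(7.4, 30.1 * 365.25, date(2020, 1, 1), date(2024, 1, 1))
measured = 6.9   # hypothetical dose calibrator reading (MBq)

deviation = abs(measured - expected) / expected
print(f"Expected {expected:.2f} MBq, measured {measured:.2f} MBq, "
      f"deviation {100 * deviation:.1f}% (limit 10%)")
```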
The linearity protocol tests that the dose calibrator operates appropriately over the wide activity range to which it is applied. The device is tested from 10 μCi (370 kBq) to a level higher than that routinely used in the clinic and perhaps as high as 1 Ci (37 GBq). The test starts with a sample of Tc-99m at the highest activity to be tested (e.g., tens of gigabecquerels). The activity readings are then varied by either allowing the source to radioactively decay over several days or using a set of lead shields of varying thicknesses until a reading close to 370 kBq is obtained. Each reading should not vary by more than 10% from the line drawn through the calculated activity values. The linearity test should be performed during acceptance testing, quarterly thereafter, and after a major repair or move.
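A sketch of a decay-based linearity check, assuming the approximately 6-hour half-life of Tc-99m; the starting activity and the individual readings are hypothetical.

```python
T_HALF_H = 6.0          # Tc-99m half-life in hours (approximate)
a0 = 37_000.0           # starting activity reading in MBq (hypothetical, about 1 Ci)

# Hypothetical readings (MBq) taken as the source decays over several days.
readings = {0: 37_000.0, 24: 2_300.0, 48: 145.0, 72: 9.4, 96: 0.60}

for hours, measured in readings.items():
    expected = a0 * 0.5 ** (hours / T_HALF_H)          # calculated activity at this time
    deviation = abs(measured - expected) / expected
    print(f"t = {hours:3d} h: expected {expected:10.2f} MBq, "
          f"measured {measured:10.2f} MBq, deviation {100 * deviation:4.1f}% (limit 10%)")
```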
The constancy protocol tests the reproducibility of the readings compared with a decay-corrected estimate from a reference reading obtained from the dose calibrator on a particular day. Today’s constancy reading cannot vary from the decay-corrected reference reading by more than 10%. The constancy test differs from the accuracy test in that it evaluates the precision of the readings from day to day rather than their accuracy. The constancy test should be performed on every day that the device is used to assay a dose to be administered to a patient.
Two nonimaging scintillation devices, the well counter and the thyroid probe, are routinely used in the nuclear medicine clinic. The well counter is used for both radiation protection and clinical protocols. The thyroid probe can provide clinical studies at a fraction of the equipment cost and space requirements of nuclear imaging equipment. However, these devices also require comprehensive quality control programs.
The well counter consists of an NaI crystal with a hole in it, allowing test tubes and other samples to be placed within the device for counting. A sample placed in the counter is practically surrounded by the detector, with a geometrical efficiency in excess of 90%. Thus the well counter can measure very small amounts of radioactivity, on the order of a kilobecquerel. The well counter should not be confused with the dose calibrator, which is a gas-filled ionization chamber that can measure activities up to 37 GBq. The well counter is used to test packages of radiopharmaceuticals to ensure that no radioactivity has been spilled on the outside of the package or leaked from the inside. The device also can be used to measure removable activity from working surfaces where radioactivity has been handled or from sealed sources such as calibration sources to ensure that the radioactivity is not leaking out.
The well counter can also be used for the assay of biological samples for radioactivity for a variety of clinical evaluations. For example, after the administration of Tc-99m diethylenetriaminepentaacetic acid (DTPA), blood samples can be counted at several time points (e.g., at 1, 2, and 3 hours) to estimate the patient’s glomerular filtration rate (GFR). The amount of radioactivity in a 0.2-mL blood sample will be very small, and thus the well counter is the appropriate instrument for these measurements. By making these measurements and the measurements of standards of known activity concentration (kilobecquerels per milliliter), the patient’s GFR can be estimated.
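As a hedged illustration only, the sketch below follows a simple slope-intercept approach: the blood samples are converted to activity concentration using the counted standard, a single exponential is fit, and clearance is estimated as the injected activity times the rate constant divided by the extrapolated zero-time concentration. Clinical GFR protocols use specific sample times and empirical corrections not shown here, and every numerical value is hypothetical.

```python
import numpy as np

# Hypothetical well-counter data for a Tc-99m DTPA clearance study.
injected_mbq = 40.0                                  # administered activity (MBq)
times_h = np.array([1.0, 2.0, 3.0])                  # blood sample times (hours)
sample_cps = np.array([970.0, 675.0, 470.0])         # net counts/s per 0.2-mL blood sample

# Standard of known activity concentration counted in the same geometry.
std_cps_per_ml = 26_000.0                            # counts/s per mL of standard
std_conc_mbq_per_ml = 0.010                          # MBq/mL of standard
cps_per_mbq = std_cps_per_ml / std_conc_mbq_per_ml   # well-counter sensitivity (cps/MBq)

conc = (sample_cps / 0.2) / cps_per_mbq              # blood concentration (MBq/mL)

# Fit a single exponential C(t) = C0 * exp(-k t) to the blood concentrations.
slope, intercept = np.polyfit(times_h, np.log(conc), 1)
k = -slope                                           # clearance rate constant (per hour)
c0 = np.exp(intercept)                               # extrapolated concentration at t = 0

clearance_ml_per_h = injected_mbq * k / c0           # slope-intercept clearance estimate
print(f"Estimated clearance (uncorrected): {clearance_ml_per_h / 60.0:.0f} mL/min")
```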
The thyroid probe consists of an NaI crystal on a stand with the associated counting electronics. The patient is administered a small amount of radioactive iodine. The probe is placed at a certain distance from the thyroid, and a count is obtained. In addition, a count is acquired of a known standard at the same distance. The thyroid uptake of iodine can be estimated from these measurements.
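A minimal sketch of the uptake calculation, assuming background-corrected counts and that the neck and the standard are counted at the same distance and close enough in time that decay can be neglected; all counts are hypothetical.

```python
# Hypothetical thyroid-probe counts (counts acquired over a fixed counting interval).
neck_counts = 18_500          # counts over the thyroid
neck_background = 2_300       # patient background counts (e.g., over the thigh)
standard_counts = 61_000      # counts of the standard containing the administered activity
room_background = 900         # room background counts

uptake = 100.0 * (neck_counts - neck_background) / (standard_counts - room_background)
print(f"Thyroid uptake: {uptake:.1f}%")
```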
The quality control program for both the well counter and the thyroid probe includes the energy calibration, the energy resolution, the sensitivity, and the chi-square test. For the energy calibration, the energy window is set for the calibration source of a particular radionuclide—for example, the 662-keV peak of Cs-137. The amplifier gain is varied until the maximum count is found, which corresponds to the alignment of the window with the 662-keV energy peak. In addition, the counts in a series of narrow energy windows across the peak can be measured to estimate the energy resolution. To estimate the sensitivity, a standard window can be set and a known calibration source counted; the result is normalized by the number of nuclear transformations to give counts per transformation (or counts per second per becquerel). Finally, the chi-square test evaluates the operation of the counter by comparing the uncertainty of the counts to that expected from the Poisson distribution.
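A sketch of the chi-square test, comparing the scatter of repeated counts of a check source with that expected from Poisson statistics; the repeated counts are simulated, and the acceptance limits quoted are typical table values for 9 degrees of freedom.

```python
import numpy as np

# Simulated repeated counts of a long-lived check source: 10 one-minute counts
# with a mean of about 10,000 counts (illustrative values).
rng = np.random.default_rng(2)
counts = rng.poisson(lam=10_000, size=10)

mean = counts.mean()
chi2 = np.sum((counts - mean) ** 2) / mean      # chi-square statistic
dof = len(counts) - 1                           # degrees of freedom

# For 9 degrees of freedom, chi-square values between roughly 3.3 and 16.9
# correspond to p-values between 0.95 and 0.05 (standard chi-square table),
# indicating scatter consistent with Poisson counting statistics.
print(f"chi-square = {chi2:.1f} (expected ~{dof}); acceptable range ~3.3 to 16.9")
```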
The Patient as a Radioactive Source
In nuclear medicine, the patient is administered a radiopharmaceutical that distributes according to a specific physiological or functional pathway. The patient is then imaged using external radiation detectors to determine the in vivo distribution and dynamics of the radiopharmaceutical, from which the patient’s physiology can be inferred. This essential information aids the patient’s doctor in diagnosis, prognosis, staging, and treatment. The equipment used to acquire these data will be described in the sections ahead. Single-photon emission computed tomography (SPECT) and PET are described in the next chapter. However, before examining how the instrumentation operates, it is instructive to understand the nature of the signal itself—that is, the radiation being emitted from within the patient.
The radiopharmaceutical is administered to the patient most commonly by intravenous injection but also in some cases through other injection routes, such as intraarterial, intraperitoneal, or subdermal. In other cases, the radiopharmaceutical may be introduced through the gastrointestinal tract or through the breathing of a radioactive gas or aerosol. After administration, the path and rate of uptake depend on the particular radiopharmaceutical, the route of administration, and the patient’s individual physiology. However, the characteristics and parameters associated with the radiopharmaceutical in vivo distribution and dynamics are of considerable clinical importance. In some cases, the enhanced uptake of the radiopharmaceutical in certain tissues (e.g., the uptake of fluorodeoxyglucose [FDG] in tumors) may be of most clinical importance, whereas in other cases it may be the lack of uptake (e.g., the absence of Tc-99m sestamibi in infarcted myocardium). The first case is referred to as a hot-spot imaging task and the latter as a cold-spot task. In other situations, it may be the rate of uptake (wash in) or clearance (wash out) that is considered the essential characteristic of the study. In a Tc-99m mercaptoacetyltriglycine (MAG3) renal study, fast wash in may indicate a well-perfused kidney, and delayed clearance may indicate renal obstruction. In the Tc-99m DTPA counting protocol described previously, a slow clearance of the radiopharmaceutical from the blood would indicate a reduced GFR. In some cases, the ability to discern uptake in a particular structure that is adjacent to other nonspecific uptake may require the ability to spatially resolve the two structures, whereas other tasks may not require such specific resolution. The choice of instrumentation, acquisition protocol, and data-processing approach fundamentally depend on the clinical task at hand.
To characterize the rate, location, and magnitude of radiopharmaceutical uptake within the patient, the emitted radiation must be detected, in most cases, by detectors external to the patient’s body. Some instruments are specially designed for internal use—for example, intraoperative radiopharmaceutical imaging—but in most cases, the imaging device is located outside the body while detecting radiation emitted from within it. This requirement limits the useful emitted radiations for nuclear medicine imaging to energetic photons—that is, gamma rays and x-rays. The amount of overlying tissue between the internally distributed radiopharmaceutical and the radiation detector may vary from several centimeters to as much as 20 to 30 cm. Alpha and beta particles will not be of use in most cases because their ranges in tissue are limited to a few millimeters, and thus they will not exit the body and cannot be measured by external radiation detectors. Even x-rays and gamma rays must have energies in excess of 50 keV to penetrate 10 cm of tissue. On the other hand, once the radiation exits the patient, it is best that the radiation not be so energetic as to be difficult to detect with reasonable-size detectors. Thus the radiation types optimal for most nuclear medicine imaging applications are x-rays and gamma rays in the 50- to 600-keV energy range, depending on the equipment and collimation being used.
Consider a situation in which a radiopharmaceutical labeled with Tc-99m leads to a point source at some depth within the patient’s body. The 140-keV gamma rays will be emitted isotropically from the point source. Therefore it would be advantageous to place the radiation detector close to the source or to place several detectors around the source to collect as many of the emitted photons as possible. In fact, acquiring data from several angles may allow the source to be better localized. Those emitted photons that exit the body without interaction and are subsequently detected will yield the highest quality spatial information. Conversely, those photons that scatter within the patient compromise spatial information. Photons that undergo very-small-angle scatter will perhaps not be of much consequence, but those that undergo scatter at larger angles will not be of much use. Noting that the Compton-scattered photons have less energy than the incident photons, and that small-angle scatter leads to less energy loss than large-angle scatter, energy discrimination (i.e., only allowing photons to be counted within a narrow energy window about the photopeak energy) will lead to the elimination of a significant number of scattered photons from the nuclear medicine image. In contrast to the case of a point source, a more challenging clinical case with regard to scatter may be the imaging of a cold-spot feature, such as an infarction in a myocardial perfusion scan or a renal scar in a Tc-99m DMSA scan. In these cases, photons scattered from the neighboring tissue may be mispositioned into the cold spot, leading to a loss in image contrast and an inability to properly discern the extent of the feature. It must also be kept in mind that in a true clinical case, the distribution of the radiopharmaceutical is unknown, and background levels in other tissues may compromise the situation. The pulse-height spectrum from a patient is shown in Fig. 2.5.
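Continuing the Compton arithmetic from the discussion of the pulse-height spectrum, the sketch below estimates the smallest scattering angle in the patient whose scattered photon falls below an assumed ±10% energy window around 140 keV and is therefore rejected; the window width is an illustrative assumption.

```python
import math

def scattered_energy(e_kev, angle_deg):
    """Energy (keV) of a Compton-scattered photon for incident energy e_kev."""
    return e_kev / (1.0 + (e_kev / 511.0) * (1.0 - math.cos(math.radians(angle_deg))))

e0 = 140.0
lower_window_edge = 0.9 * e0     # assumed lower edge of a +/-10% acceptance window

# Find the smallest scattering angle whose scattered energy falls below the window edge.
angle = next(a for a in range(1, 181)
             if scattered_energy(e0, a) < lower_window_edge)
print(f"Scatters through about {angle} degrees or more fall below "
      f"{lower_window_edge:.0f} keV and are rejected by the window.")
```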