Chapter 13 Quality control
General
The process of measurement, testing or inspection devised to ensure that the function and performance of a device achieve a required level is referred to as quality control (QC). The requirement for routine quality control is recognized in the statutory Ionising Radiation (Medical Exposure) Regulations (IRMER) [1]. The level required may be set out in Standards agreed by a national or international agency. For example, IEC 60601 [2], the general standard for medical electrical equipment, was set by the International Electrotechnical Commission (IEC), which works alongside the International Organization for Standardization (ISO). IEC 60601 was adopted by the European Union and thus became a Standard for the UK. Many parts of this standard concern the safety of radiotherapy treatment machines and simulators, and one part relates specifically to the performance requirements and quality control of megavoltage treatment machines [3, 4]. Standards may also be set with respect to what is considered to be good practice, or they may be based upon local experience. For example, radiotherapy physicists in the UK have identified good practice in many aspects of radiotherapy and published this in a document [5] referred to as IPEM81. An example of a locally set tolerance may be that the field size on one megavoltage treatment machine can be held within ±1.5 mm, whereas another type of machine can achieve ±1 mm for the same amount of work.
Effective quality control needs to fulfill two roles. One is to demonstrate the correct performance of a system and the other is to detect deterioration of the system in order to take corrective action. Demonstration and detection are, in fact, two sides of the same coin. The boundary between each side is set in accordance with appropriate limits chosen with regard to the distribution of a measured parameter as shown in Figure 13.1 for the example of a 10 × 10 field.

Figure 13.1 Illustration of limits chosen to encompass what is considered to be satisfactory performance within the distribution of measurements of a parameter. For example, one side of a field size dimension, such as the X jaw of a 10 × 10 field.
Quality control immediately following a repair of equipment demonstrates that performance has been restored or detects problems with the repair.
In radiotherapy, quality control has become increasingly significant over the past 20 years. It has been recognized that, in order to achieve and maintain high standards of safe and effective dose delivery, there is a need not only for quality assurance systems which ensure safe, effective and traceable processes, but also for effective quality control checks to demonstrate and ensure satisfactory operation of equipment. In 1985, the ICRP [6] recognized the role that quality control played in the protection of the patient in radiotherapy. The hazards of radiation to normal tissue were well known, and the risk of oncogenesis [7–9] further complicates the use of radiation in cancer treatment. These concerns were reinforced by accidents [10], some of which were due to safety failures, others due to performance failures and, of course, some due to human error. It is clear that the absence of, or deficiencies in, quality control had an adverse effect in some of these accidents. Many accidents highlighted the need for standards to be established in both safety and performance.
The acceptable variation in dose delivery has been identified from clinical experience and radiobiological studies to be ±5% [11, 12]. This gives some perspective on the seriousness of, for example, the Exeter accident in 1988 [13] when a 25% overdose was delivered to many patients between February and July of that year. Systems of quality control checks endeavour to ensure that treatment equipment operates both safely and effectively.
A commitment to quality control
For quality control to be effective, there must be a commitment to examine aspects of a machine’s operation. It is self-evident that, provided an aspect does not show up in clinical practice and is never measured, it never appears to be a problem. The saying ‘leave well alone’ comes to mind.
The Exeter overdose was apparent in skin reactions and raised concerns. However, that in itself did not prompt a measurement check, yet this was an overdose of 25%.
… it is seen that even a person whose treatment had started as early as mid February would not necessarily have shown any abnormal signs until about mid May [13].
An underdose may well go on for a much longer period of time and the incident at Stoke [14], when an underdose of approximately 20% continued for several years, serves as a demonstration of this.
At Exeter, there were clinical concerns by the end of May and the beginning of June 1988. On July 4, the calibration of February 12 was still thought to be correct by the Physics Department and it was not re-measured. No further measurement would have taken place until August. The IPEM dosimetry survey [15] measurement, which revealed the error, was done on July 12.
Safety, position and dose
There are two aspects to dose delivery: first, the physical amount of dose delivered and, second, whether it is delivered in the correct position. Both aspects are significant and, in addition to safety systems, are the key to satisfactory radiotherapy performance.
When we consider the many and various components within the radiotherapy process, these two aspects identify where uncertainty in the treatment delivery can occur. For example, the calibration of CT number for dose prediction is a dose aspect, while the movement accuracy of the scanner and the processing of the image matrix are positional aspects.
Frequency, tolerances and failure trends
The classical assumption for performance deterioration is a linear change with time. Consider a tolerance of ±3% in dose delivered with action being taken at that level to restore the performance to the required value. Figure 13.2 illustrates such a change in performance with time.

Figure 13.2 Change in performance represented by a linear change of a parameter over a time period t after which corrective action is taken to restore the performance.
Considering the effect of the length of the time period t:
If t was one day then, depending upon when a patient was treated, and assuming they attended at the same time each day, their dose will differ from the required dose by between 0% and −3% consistently over the full course of treatment.
If t was 5 days (a weekly check) then, for all treatments of duration greater than or equal to 5 days, the variation from the required dose will be approximately −1.5%. For treatments of 4 days or less, the variation will be between 0% and −3% depending upon the day and the number of days treated. For example, a 3-day treatment on days 1, 2 and 3 will have a mean difference of −(0.6% + 1.2% + 1.8%)/3 = −1.2%. However, an 8-day treatment on days 1 to 8 will have a mean difference of −1.575%.
If t was 20 days (a monthly check) then, for treatments of duration greater than or equal to 20 days, the variation from the required dose will be about −1.5%. For treatments of 19 days or less, the variation will be between 0% and −3% depending upon the day and the number of days treated.
Hence the length of t can have a variable effect on the dose delivered to the patient.
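The arithmetic above can be reproduced with a short sketch. The following Python snippet is a minimal illustration, assuming the parameter drifts linearly from zero to the −3% tolerance over the check interval of t days, so that the deviation on day k of each cycle is −3% × k/t, and is restored to zero after each check; the function names and treatment schedules are illustrative only.

def daily_deviation(day: int, t: int, tolerance: float = 3.0) -> float:
    """Deviation (%) on a given treatment day, assuming a linear drift from
    0 to -tolerance over a check interval of t days, restored after each check."""
    day_in_cycle = ((day - 1) % t) + 1
    return -tolerance * day_in_cycle / t

def mean_deviation(treatment_days, t: int, tolerance: float = 3.0) -> float:
    """Mean deviation (%) over a course of treatment."""
    deviations = [daily_deviation(d, t, tolerance) for d in treatment_days]
    return sum(deviations) / len(deviations)

# Worked examples matching the text, with weekly checks (t = 5 working days):
print(mean_deviation(range(1, 4), t=5))      # 3-day treatment, days 1-3: -1.2%
print(mean_deviation(range(1, 9), t=5))      # 8-day treatment, days 1-8: -1.575%
# Monthly checks (t = 20 working days), 20-day treatment: about -1.5%
print(mean_deviation(range(1, 21), t=20))
# A long interval, t = 5T (cf. Figure 13.3): the final fractions approach -3%
print(mean_deviation(range(81, 101), t=100))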
If t is long, say several times as long as the total course treatment time T, which is typically between 3 and 6 weeks, the effect compared with having t equal to T is shown in Figure 13.3.

Figure 13.3 Change in performance represented by a linear change of a parameter over a time period when t=T, (upper) and t=5T, (lower).
This shows that the final batch of patients could be receiving a dose difference of up to almost double that for a period t which is about as long as a fractionation regimen. The rate of deterioration itself may not be controllable; there is, therefore, a need to choose a tighter tolerance for this parameter.
Indeed, the performance deterioration is unlikely to be linear and more likely to follow a non-linear trend, as in Figure 13.4.

Figure 13.4 Change in performance represented by a non-linear change of a parameter over a time period t after which corrective action is taken to restore the performance.
This has significance for the interval between checks, although that needs to be balanced against resources. For example, frequent recording of parameters on a machine will perhaps pick up the deterioration in a parameter as it begins. However, to do this effectively without any automated recording and analysis system requires sufficient staff to run up a machine, note the parameters, and use the machine in different modes and energies in order to exercise many of its settings.
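As an illustration of what a simple automated recording and analysis system might do, the sketch below fits a straight-line trend to hypothetical daily readings of a single parameter and estimates how long before the drift reaches a −3% tolerance; the readings, function names and the straight-line assumption are illustrative only.

def linear_trend(values):
    """Least-squares slope and intercept for equally spaced daily readings."""
    n = len(values)
    x_mean = (n - 1) / 2
    y_mean = sum(values) / n
    slope = sum((x - x_mean) * (y - y_mean) for x, y in enumerate(values))
    slope /= sum((x - x_mean) ** 2 for x in range(n))
    return slope, y_mean - slope * x_mean

def days_until_tolerance(readings, tolerance=3.0):
    """Estimated days before the drift reaches -tolerance, or None if not drifting down."""
    slope, intercept = linear_trend(readings)
    if slope >= 0:
        return None
    latest = intercept + slope * (len(readings) - 1)
    return (-tolerance - latest) / slope

# Hypothetical daily output deviations (%) recorded automatically
readings = [0.0, -0.1, -0.3, -0.4, -0.6, -0.7, -0.9]
print(days_until_tolerance(readings))  # rough warning of when to intervene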
When a parameter is satisfactorily checked and it lies within tolerance it demonstrates compliance. If at the next check the parameter is found to be outside of tolerance then the assumption will be made that it has deteriorated linearly since the previous check. However, the worst case of deterioration will be as in Figure 13.5.

Figure 13.5 A worst-case non-linear change of a parameter over a time period t, after which corrective action is taken to restore the performance.
By identifying parameters with such a trend in deterioration, increased sampling or adaptive sampling can be justified.
Measurement and uncertainty
When a measurement is made, the ‘best estimate’ of the parameter is determined and not its exact value.
The best estimate of a parameter is normally the mean of a number of measurements, and the spread in the measurements gives an indication of the variation in the best estimate due to measurement conditions, measurement equipment, experimental set-up and other factors that contribute to the variation of the measurements upon which the best estimate is based.
Typically, the best estimate is thought of in terms of a Gaussian distribution with a mean, M, and standard deviation, σ, representing the best estimate and the variation respectively (Figure 13.6).
The question that follows is: how certain can we be that the best estimate is close to the true value of the parameter? This can be quantified in terms of the distribution. For example, 95% of the area under the Gaussian distribution lies within plus and minus two standard deviations of the mean, ±2σ. Hence, there is a 5% probability that the true value lies outside ±2σ of the best estimate, M.
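As a minimal sketch of this idea, using hypothetical repeated readings of a machine output measurement (the values are illustrative only):

import statistics

# Hypothetical repeated readings of a machine output measurement
readings = [99.6, 100.2, 99.9, 100.1, 99.8, 100.0, 100.3, 99.7]

M = statistics.mean(readings)        # best estimate of the parameter
sigma = statistics.stdev(readings)   # spread of the measurements

# Roughly 95% of a Gaussian lies within +/- 2 standard deviations of the mean
print(f"M = {M:.2f}, 95% interval {M - 2 * sigma:.2f} to {M + 2 * sigma:.2f}")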
Null hypothesis
This is an essential aspect of all statistical analysis. The null hypothesis (H0) is the hypothesis that the measurement M is not different from the reference value R. Consider a measurement with a best estimate mean M and standard deviation σM. If the null hypothesis applies with 95% confidence then R lies within the boundaries of M ±2σM.
Put another way, the null hypothesis asks: is the best estimate M which has been measured really different from the reference value R?
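The comparison can be sketched as follows, assuming σM is taken here as the standard deviation of the best estimate (the standard error of the mean) and that a coverage factor of 2 approximates the 95% level; the readings and reference value are hypothetical.

import statistics

def null_hypothesis_holds(measurements, reference, k=2.0):
    """Retain H0 (no real difference from the reference) if the reference lies
    within M +/- k * sigma_M, where M is the best estimate (mean) and sigma_M
    its standard deviation, taken here as the standard error of the mean."""
    M = statistics.mean(measurements)
    sigma_M = statistics.stdev(measurements) / len(measurements) ** 0.5
    return abs(M - reference) <= k * sigma_M

# Hypothetical output readings compared with a reference value of 100.0
readings = [100.4, 100.6, 100.3, 100.7, 100.5, 100.4]
print(null_hypothesis_holds(readings, reference=100.0))  # False: a real difference is indicated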
