Chapter 12 Networking, data and image handling and computing in radiotherapy





Introduction




Computers permeate every aspect of modern radiotherapy. With their application, we can deliver ever more intricate treatments. However, every technological innovation comes at a cost – in equipment, maintenance and added complexity. With care, these costs can be kept in balance and benefits to the patient achieved.


Computing in radiotherapy has developed as computers have grown more powerful. In 1965, Gordon Moore, co-founder of the computing giant Intel, predicted that microchip complexity would increase exponentially for at least 10 years [1]. In fact, Moore’s law still holds today, with processors doubling in complexity approximately every 2 years. Moore’s law typifies the growth in all aspects of computing. Hard disk drive capacity nearly doubles each year while the cost per gigabyte nearly halves [2].


This growth has consequences for radiotherapy. The useful lifetime of any item of computer software or hardware is fairly short. In the NHS, a lifetime of 5 years is used for IT equipment, compared to 10 years for a linear accelerator [3]. This growth also needs to be considered when planning for long-term needs. It will be more costly to purchase 5 years’ data storage immediately than to buy half now and half in 2 years’ time, by which point the price per gigabyte will have fallen to roughly a quarter of today’s.
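The arithmetic behind this purchasing argument can be sketched as follows; the prices and capacity are illustrative figures, not quotations, and assume the trend above of the price per gigabyte halving each year.

```python
# Sketch of the storage purchasing argument, assuming the price per
# gigabyte halves every year. All figures are illustrative.

price_now = 1.0   # price per gigabyte today (arbitrary units)
need = 1000       # total gigabytes needed over the next 5 years

# Strategy A: buy all 5 years' storage immediately.
cost_all_now = need * price_now

# Strategy B: buy half now and half in 2 years' time, when the price
# has halved twice, i.e. fallen to a quarter of today's price.
price_in_2_years = price_now / 2 ** 2
cost_split = (need / 2) * price_now + (need / 2) * price_in_2_years

print(cost_all_now)  # 1000.0
print(cost_split)    # 625.0
```

Under these assumptions, the split purchase costs 37.5% less than buying everything up front.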



History


For many decades, radiotherapy was developed without computers. Isodose planning and linear accelerators both pre-date the use of computers. In the 1960s, the first investigations were made into using computers for radiotherapy. Such use required access to one of the small number of large ‘mainframe’ computers, probably at a university, since personal computers were not invented until the 1970s. The first widespread use of computers was for calculating isodose distributions, replacing laborious manual calculations. In the late 1970s, the first use was made of computed tomography (CT) data for radiotherapy planning. This required a magnetic tape reader connected to the planning system to read the CT scanner tapes.


As networking technology became available, it was possible to connect computers directly together. However, CT scanners still used different file formats. In 1985, the American College of Radiology and the National Electrical Manufacturers Association created a standard for medical images called ACR-NEMA. In the third version, published in 1993, the standard was extended to specify how systems should communicate directly. To mark this change, the standard was given a new name: ‘Digital Imaging and Communications in Medicine’, or DICOM 3.0.


In the 1980s, the first record and verify systems appeared. Later, it became possible to transmit plans from the planning computer to the record and verify system, driven by the use of multileaf collimators in the 1990s. It was also in the 1990s that electronic portal imaging devices were developed. Accompanying them was a need to obtain reference images in electronic form, whether digital simulator images or digitally reconstructed radiographs.


At the turn of the millennium, radiotherapy centres were filled with computer systems transferring data. Managing these could be difficult and expectations frequently exceeded the realities of what could be achieved. These frustrations led to the recent development of radiotherapy, or oncology, information systems. These act as a central hub of the network: receiving and organizing data from various sources. This coordination of data is going to be vital for the era of image guided radiotherapy.



Benefits and hazards of computerization


The more advanced techniques that we practise would not be possible at all without the assistance of computers: intensity modulated radiotherapy, for example, is unfeasible without computers and data communication. Even simple techniques are enhanced by the use of computers. If data are transferred directly between computers, there is less chance of human error. The use of record and verify systems has certainly reduced the number of random human errors at the point of treatment delivery [4].


Ultimately, the hazards of computerization relate to our increased dependence on computer systems and the faulty assumption that they are reliable. Computers and networks do fail. A radiotherapy department that is reliant on a system can be crippled if that system is the victim of component failure, a virus, or even theft.


As systems become more complicated, they are harder to check for errors. It should never be assumed that the computer is correct. Furthermore, computers do not remove the risk of human error altogether. Proper use of a system relies on operators performing tasks correctly. If they do not then incorrect information can be passed down the line and used at a later stage. Where errors do occur in computerized systems they may affect a series of patients or a single patient for every fraction of their treatment [5].


In order to reduce these risks, we need to reverse the assumption that the computer is reliable. Key checks should be made both during system testing and for individual treatments. Is the isocentre correct? Is the patient reference position correct? Where there are multiple data sets, has the correct one been used for the treatment? When every plan had to be calculated and delivered manually, each person involved had to understand the technique and the treatment fully. Now that the treatment can be delivered at the touch of a button, that level of understanding is critical for detecting errors.



Networking


Networks are commonly described as having a number of layers. In one of the simpler descriptions, the TCP/IP model, four layers are described [6]. In order to understand these layers, we draw a comparison with the postal system (Table 12.1).


Table 12.1 The four network layers of the TCP/IP model

Layer | Network | Postal
Link | The infrastructure of cables, switches, modems and so on which allows information to be moved from one place to another | The infrastructure of vehicles and sorting offices which allows post to be moved around
Network | The addressing system which allows a single computer to be identified and data to be routed to it | The addressing system which allows post to be directed to the correct destination
Transport | The process of packing data and sending it into the network, and of receiving data from the network | Taking letters to the post office to send and picking up received letters
Application | Defining the data to be sent and interpreting any data received | Writing and reading your post

Because the network is layered in this way, any changes that occur in one layer do not affect the others, so you do not need to write a new application because you have bought a new modem.
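The layering works by encapsulation: each layer wraps the data from the layer above with its own header, and the receiver unwraps them in reverse order. A minimal sketch of the idea is shown below; the header fields and values are simplified stand-ins for illustration, not the real TCP/IP wire formats.

```python
# Illustrative sketch of encapsulation across network layers.
# Each layer prepends its own header to the data from the layer above.
import struct

message = b"patient plan data"  # application layer: the data itself

# Transport layer: prepend a header carrying source and destination
# "ports" (the numbers here are arbitrary examples).
transport = struct.pack("!HH", 5000, 104) + message

# Network layer: prepend source and destination addresses,
# each a 32-bit number (example private addresses).
network = struct.pack("!II", 0xC0A80101, 0xC0A80102) + transport

# The receiving side unwraps each layer in reverse order.
src, dst = struct.unpack("!II", network[:8])
sport, dport = struct.unpack("!HH", network[8:12])
payload = network[12:]

print(payload)  # b'patient plan data'
```

Because each layer only reads and writes its own header, a change inside one layer (a new cable, a new addressing scheme) leaves the code for the other layers untouched.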




Network layer: addressing


When you type ‘www.estro.be’ into a web browser, how does your computer connect to the computer that contains the homepage of the European Society for Therapeutic Radiology and Oncology (ESTRO)? Each domain name is registered to a computer address called an IP address, which (in the common IPv4 scheme) consists of four numbers between 0 and 255. Your computer first sends a request to a directory service, the Domain Name System (DNS), to find out the correct IP address. It then sends a request to that IP address to retrieve the web page.
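The four numbers of an IPv4 address are really the four bytes of a single 32-bit number, which is what actually travels on the wire. A small sketch, using an arbitrary example address:

```python
# An IPv4 dotted-quad address is four bytes of one 32-bit number.
# The address below is just an example.
octets = [int(part) for part in "192.168.1.10".split(".")]
assert all(0 <= o <= 255 for o in octets)

# Pack the four numbers into a single 32-bit value.
value = (octets[0] << 24) | (octets[1] << 16) | (octets[2] << 8) | octets[3]
print(value)       # 3232235786
print(hex(value))  # 0xc0a8010a  -> the four octets c0, a8, 01, 0a
```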


Inside a private network, such as a hospital, computers are still referenced using an IP address. Special ranges of numbers are reserved for use in private networks to avoid confusion with Internet traffic. On many Windows computers, it is possible to see the network configuration by typing the command ‘ipconfig’ at a command prompt. If you do this, and your computer is attached to a network, you will see something like that shown in Figure 12.3.
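The reserved private ranges are 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16 (defined in RFC 1918), and Python's standard ipaddress module can check whether an address falls inside them. The addresses below are examples only:

```python
# Check which example addresses fall in the RFC 1918 private ranges:
# 10.0.0.0/8, 172.16.0.0/12 and 192.168.0.0/16.
import ipaddress

examples = ["10.0.0.5", "172.16.20.1", "192.168.1.10", "8.8.8.8"]
results = {addr: ipaddress.ip_address(addr).is_private for addr in examples}

for addr, private in results.items():
    print(addr, "private" if private else "public")
# 10.0.0.5 private
# 172.16.20.1 private
# 192.168.1.10 private
# 8.8.8.8 public
```

Addresses in these ranges are never routed on the public Internet, so every hospital can reuse them internally without clashing with anyone else.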

