Medical Imaging Informatics








5.0 INTRODUCTION

Medical informatics represents the process of collecting, analyzing, storing, and communicating information that is crucial to the provision and delivery of patient care. Medical imaging informatics is a subfield that addresses aspects of image generation, processing, management, transfer, storage, distribution, display, perception, privacy, and security. This chapter highlights communications ontologies and standards; computers and networking; picture archiving and communications systems; the life cycle of a radiology examination within and outside the radiology department; and business considerations.


5.1 ONTOLOGIES, STANDARDS, PROFILES



  • 1. Ontologies



    • a. A collection of terms and their relationships to represent concepts in this application—medicine



      • (i) Common vocabularies and standardization of terms for representation of knowledge.


      • (ii) Benefits include enhancing interoperability between information systems, sharing of structured content, and integrating knowledge and data.


    • b. SNOMED-CT: Systematized Nomenclature of Medicine—clinical terms



      • (i) Standard, multilingual vocabulary of clinical terminology used by physicians and providers


      • (ii) Supported by the National Library of Medicine


      • (iii) Designated as the national standard for additional categories of information in the EHR


      • (iv) Enables semantic interoperability and supports exchange of validated health information between providers, researchers, and others in the healthcare environment


    • c. ICD: International Statistical Classification of Diseases and Related Health Problems



      • (i) ICD is in its 10th revision (ICD-10) and is sponsored by the World Health Organization.


      • (ii) Manual with codes for diseases, signs, symptoms, abnormal findings, and external causes of injury.


      • (iii) In the United States, ICD-10-CM (Clinical Modification) and ICD-10-PCS (Procedure Coding System), under the Centers for Medicare and Medicaid Services (CMS), assign codes for diagnoses of conditions and diseases and for procedures (69,000 diagnosis and 70,000 procedure codes).


    • d. CPT: current procedural terminology—describes procedures performed on the patient



      • (i) Manual published by the American Medical Association.


      • (ii) Used to bill physicians’ services performed in a hospital or other place of service.


      • (iii) CPT codes are updated frequently; often human coding teams or automated software assist in the verification and validation of codes for specific procedures for reimbursement.


    • e. RadLex: radiology lexicon—sponsored by the Radiological Society of North America



      • (i) Radiology-specific terms and vocabulary for anatomy, procedures, and protocols


      • (ii) RadLex playbook assigns RPID (RadLex Playbook Identifier) tags to the terms


    • f. LOINC: Logical Observation Identifiers Names and Codes—sponsored by the Regenstrief Institute



      • (i) An ontology widely adopted across many medical domains.


      • (ii) RadLex is being harmonized into the LOINC coding schema.



  • 2. Standards Organizations



    • a. ANSI: American National Standards Institute—coordinates standards development in the United States.



      • (i) Accredits Standards Development Organizations (SDOs) and designates technical advisory groups to the International Organization for Standardization (ISO)


    • b. In healthcare, the two most important SDOs are Health Level 7 (HL7) and the National Electrical Manufacturers Association (NEMA).


  • 3. Internet Standards



    • a. The Internet Engineering Task Force (IETF) of the Internet Society develops protocol standards.


    • b. TCP/IP: Transmission Control Protocol/Internet Protocol—links devices worldwide.


    • c. HTTP: HyperText Transfer Protocol—application protocol for distributed, collaborative hypermedia information systems; the foundation of data communication for the World Wide Web.


    • d. HTML: HyperText Markup Language—standard for documents to be displayed in a web browser.


    • e. URL: Uniform Resource Locator—specifies syntax and semantics for location/access via the Internet.


    • f. NTP: Network Time Protocol; SMTP: Simple Mail Transfer Protocol; IMAP: Internet Message Access Protocol; MIME: Multipurpose Internet Mail Extensions—provide standardized time synchronization and the basis for e-mail and other interactive transactions on the Internet.


    • g. TLS: Transport Layer Security and SSL: Secure Sockets Layer—define cryptographic mechanisms for secure communication.


    • h. XML: eXtensible Markup Language—encodes structured data and serializes it for communication.


  • 4. DICOM—Digital Imaging and COmmunications in Medicine



    • a. Standards-based protocols for exchanging, storing, and viewing medical images.


    • b. Managed by MITA (Medical Imaging & Technology Alliance)—a division of NEMA.


    • c. The structure of the DICOM standard is divided into parts 1 to 21 (see textbook, Pg. 112, Table 5-1).


    • d. DICOM is an open, public standard defined and regulated by public communities.


    • e. DICOM Conformance Statement: a document, required of vendors, that specifies the DICOM services and data types an implementation can provide; conformance alone does not guarantee interoperability.


    • f. DICOMweb: standard for web-based medical imaging using RESTful architectural styles for hypermedia systems, with interfaces designed for simple, lightweight, and fast interactions.


  • 5. HL7—Health Level 7



    • a. Standard for the exchange, integration, sharing, and retrieval of electronic health information.


    • b. HL7 International is the ANSI-accredited organization for developing standards.


    • c. Interoperability between information systems using HL7 is achieved through messages and documents.


    • d. Common message types: ORU—provides results; ORM—provides orders; ADT—admission, discharge, transfer interactions.


    • e. Common segment types in a message: OBR—observation requests; OBX—observation results.


    • f. HL7 version 2 and HL7 version 3 are common HL7 implementations.


    • g. HL7 FHIR (pronounced “fire”—Fast Healthcare Interoperability Resources) uses application programming interface (API) methods and file formats such as XML and JSON to interact with systems with fast, lightweight access.
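The message and segment types above (ORU, PID, OBR, OBX) can be sketched with a few lines of Python. The ORU message below is invented for illustration—hypothetical patient, accession, and system names—not drawn from a real interface.

```python
# Minimal sketch of parsing an HL7 v2 ORU (results) message.
# Segments are separated by carriage returns; fields by '|'.
SAMPLE_ORU = "\r".join([
    "MSH|^~\\&|RIS|HOSP|PACS|HOSP|202301151200||ORU^R01|123|P|2.3",
    "PID|1||MRN12345||DOE^JANE",
    "OBR|1||ACC987|71020^CHEST XRAY",
    "OBX|1|TX|IMP^Impression||No acute disease",
])

def parse_segments(message: str) -> dict:
    """Group an HL7 v2 message's segments by segment ID (MSH, PID, ...)."""
    segments = {}
    for line in message.split("\r"):
        fields = line.split("|")
        segments.setdefault(fields[0], []).append(fields)
    return segments

segs = parse_segments(SAMPLE_ORU)
print(segs["MSH"][0][8])   # message type field: ORU^R01
print(segs["OBX"][0][5])   # observation value: No acute disease
```

A real interface engine also handles encoding characters, repetitions, and escape sequences; this sketch only shows the segment/field structure.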


  • 6. IHE—Integrating the Healthcare Enterprise



    • a. Does not generate Information Technology Standards but promotes their use to ensure interoperability.


    • b. IHE International consists of over 150 member organizations.


    • c. Organized in “domains” including radiology, cardiology, pathology, etc.


    • d. Planning committee: strategize the direction and coordination of activities from stakeholders.


    • e. Technical committee: develops Integration Profiles (IP) to describe specific strategies, standards (e.g., DICOM and HL7), and solutions used to solve informatics workflow and interoperability challenges.


    • f. Profile testing occurs in a Connectathon—a vendor-neutral, monitored testing environment.


    • g. IHE conformance to a specific IP can be used as a contractual obligation to achieve interoperability.


5.2 COMPUTERS AND NETWORKING



  • 1. Computer Hardware (Workstation)



    • a. Components include the motherboard, the central processing unit (CPU), the graphics processing unit (GPU), random access memory (RAM), network card, video display card, storage devices (solid-state and spinning disk drives), and peripherals (keyboard, mouse, microphone, video camera, printers).



    • b. Configuration (typical for workstations): 32 to 64 GB RAM; multi-core CPU operating at 1 to 5 GHz clock speed; GPU processing for fast, parallel image processing; terabyte local disk storage; gigabit/s network speed; high-resolution displays (3 to 5 megapixel, large format, portrait orientation).


    • c. Thin client workstations: can be less capable, depend on server-side rendering to provide services.


  • 2. Software and Application Programming Interface



    • a. Software programs consist of sequences of instructions executed by a computer.


    • b. Application programs perform a specific function, for example, email, word processing, and web browsing.


    • c. System software comprises the files and programs that make up the computer’s operating system (OS).



      • (i) Microsoft Windows, macOS, Linux Ubuntu.


      • (ii) System files manage memory, input/output devices, system performance, and error messages.


      • (iii) Firmware: operational software embedded in a chip to provide start-up instructions, such as a BIOS (Basic Input/Output System) on a motherboard to wake up hardware and boot the OS.


    • d. Programming language translators (compilers and interpreters) allow programmers to translate human-readable, high-level source code (C++, Java, Python) into machine-language instructions for a CPU.


    • e. Utility software sits between the OS and application software for diagnostic and maintenance tasks (e.g., antivirus, disk partition, file compression/decompression, firewall algorithms).


    • f. Application Programming Interface (API)—defines the way to request services from an application



      • (i) Private APIs have specifications for a company’s products and services that are not shared.


      • (ii) Public (open) APIs can be used by a third party without restriction.


      • (iii) Local APIs provide database access, memory management, and security—for example, Microsoft .NET.


      • (iv) Web APIs expose resources addressed and accessed using the HTTP protocol.


      • (v) REST (Representational State Transfer) architectural pattern for creating web services; a RESTful service implements that pattern using a set of simple, well-defined operations—FHIR and DICOMweb use the RESTful API to request and receive independent data access.
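As a sketch of the RESTful style used by DICOMweb, the helper below builds a QIDO-RS study-search URL—a plain HTTP GET with DICOM attributes as query parameters, returning results as application/dicom+json. The base endpoint and match values are hypothetical.

```python
from urllib.parse import urlencode

def qido_studies_url(base_url: str, **match) -> str:
    """Build a QIDO-RS study-level search URL (DICOMweb).

    Each keyword argument becomes a DICOM attribute match in the
    query string; the base URL is a hypothetical PACS endpoint.
    """
    return f"{base_url}/studies?{urlencode(match)}"

url = qido_studies_url("https://pacs.example.org/dicomweb",
                       PatientID="MRN12345", ModalitiesInStudy="CT")
print(url)
# https://pacs.example.org/dicomweb/studies?PatientID=MRN12345&ModalitiesInStudy=CT
```

The same resource-oriented pattern underlies FHIR searches (e.g., GET …/Patient?identifier=…), which is why both standards pair naturally with lightweight web clients.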


  • 3. Networks and Gateways



    • a. Local area network (LAN) connects computers within a department, building, or buildings.


    • b. Wide area network (WAN) connects computers at large distances—consisting of multiple LANs with long-distance communications links—the largest WAN is the Internet itself.


    • c. Networks permit transfer of information between computers using hardware and software components.



      • (i) Hardware connections include copper wiring, coaxial cable, fiber-optic cable, and wireless links, such as the radio-wave and microwave links used by Bluetooth and Wi-Fi.


      • (ii) Software operates between the user application program and hardware communications link, necessitating network protocols for communication and provision of services.


      • (iii) Computers and switching devices (nodes) share communications pathways on a network; protocols divide information into packets that contain header information identifying the destination, and communications pathways between the nodes are called links.


    • d. Network bandwidth is the maximum data transfer rate of a link or connection; throughput is the rate actually achieved.



      • (i) Megabits/second (Mbps) or gigabits/second (Gbps); 100 Mbps to 10 Gbps are typical rates.


      • (ii) Must also accommodate overhead from protocols (packet framing, addressing, etc.).


      • (iii) Latency is the time delay of a transmission between two nodes.
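The interplay of bandwidth, protocol overhead, and latency above can be sketched as a rough transfer-time estimate; the 10% overhead and 1 ms latency figures are illustrative assumptions, not measured values.

```python
def transfer_time_s(size_bytes: float, bandwidth_bps: float,
                    overhead_fraction: float = 0.1,
                    latency_s: float = 0.001) -> float:
    """Estimate the time to move a payload over one link.

    Protocol overhead (packet framing, addressing, etc.) reduces the
    effective throughput below the nominal bandwidth; latency adds a
    fixed delay per transfer. Both defaults are illustrative.
    """
    effective_bps = bandwidth_bps * (1.0 - overhead_fraction)
    return size_bytes * 8 / effective_bps + latency_s

# A 10-MB image over a nominal 100 Mbps link:
t = transfer_time_s(10e6, 100e6)
print(f"{t:.2f} s")   # 0.89 s
```

Doubling the link to 1 Gbps cuts the serialization term tenfold but leaves latency unchanged—why latency, not bandwidth, often dominates for many small transfers.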


    • e. Network layer model: the ISO-OSI (Open Systems Interconnection) seven-layer stack (Fig. 5-1).



      • (i) Conceptual framework to describe the functions of a network system.


      • (ii) Communications begin at the upper (7th) layer by an applications program, passing information to the next lower layer in the stack, with each layer adding information (e.g., addresses).


      • (iii) The lowest (layer 1) is the physical layer that sends information to the destination computer.


      • (iv) The destination computer passes information back up the stack, removing information as it moves upward and reaches the intended application.


    • f. Model of network interconnection of medical imaging with a DICOM process (Fig. 5-2).



      • (i) Transmission Control Protocol—Internet Protocol (TCP-IP) is the packet-based protocol used by the Internet for information transfer operating at the lower layers of the ISO-OSI stack.


      • (ii) DICOM interactions occur at higher levels (session and presentation).


      • (iii) Application entity (image modality or server) is at the upper layer (7) for image handling.













    • g. LANs use Ethernet—a “connection-based” protocol.



      • (i) Star topology is most often used for point-to-point connections (Fig. 5-3).


      • (ii) Ethernet bandwidth is specified as 100 Mbit/s, 1 Gbit/s, or 10 Gbit/s rates between LAN segments.


      • (iii) Connections require fiber-optic, twisted-pair copper, Wi-Fi, or cellular signal transmission.


    • h. Routers (smart switches) connect local networks using the internet protocol (IP) and operate at the network protocol stack level to route messages to the destination address.



      • (i) Packet switching is performed by the router.


      • (ii) Each computer and router are assigned IP addresses.


    • i. Network address protocols (Fig. 5-4).



      • (i) IPv4 dot-decimal notation: 32-bit address with 8-bit encoding per segment, for example, 152.79.110.12.


      • (ii) IPv6 hexadecimal notation: 128-bit address—much larger address space.


      • (iii) Host names are convenient ways to designate a specific computer on the network.


      • (iv) Domain Name System (DNS) is an Internet service consisting of servers that translate host names into IP addresses.
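The two address formats above can be explored with Python's standard `ipaddress` module; the IPv4 example comes from the outline, and 2001:db8::1 is from the IPv6 range reserved for documentation.

```python
import ipaddress

# IPv4: 32-bit address, dot-decimal with 8 bits per segment.
v4 = ipaddress.ip_address("152.79.110.12")
# IPv6: 128-bit address written in hexadecimal groups.
v6 = ipaddress.ip_address("2001:db8::1")

print(v4.version, v4.packed.hex())  # 4 984f6e0c  (the four 8-bit segments)
print(int(v4))                      # the same address as one 32-bit integer
print(v6.version)                   # 6
```

Host-name-to-address translation (the DNS step in (iv)) is a separate lookup, e.g., `socket.gethostbyname(...)`, which requires network access and so is omitted here.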













    • j. Virtual Private Network (VPN) provides encryption and authentication that allows use of the public Internet WAN in a safe and secure manner.


    • k. MultiProtocol Label Switching (MPLS) routes traffic between nodes along predefined label-switched paths in wide-area networks.


  • 4. Servers



    • a. A computer on a network that provides a service to other computers on the network


    • b. Applications servers, database servers, web servers, and cloud servers are commonly implemented.


    • c. VM (virtual machine) environments house servers flexibly, whereby CPUs, GPUs, RAM, operating system, and file space can be allocated to meet server needs in an efficient manner.


    • d. Cloud server runs in a VM environment that is hosted and delivered on a cloud-computing platform via the Internet with remote access.


    • e. Cloud-based PACS with server-side rendering requires several computers for data extraction, DICOM conversion, image rendering, storage, and load balancing.


    • f. Client-server relationships: thick client and thin client.



      • (i) Thick client—the client computer provides most information processing and requires substantial CPU and GPU processing—the major function of the server is to store information.


      • (ii) Thin client—information processing is provided by the server—the client serves to display the information only and therefore does not require much processing or storage capability.


  • 5. Cloud Computing



    • a. Using a network of remote computers and software hosted on the Internet to deliver a service


    • b. Examples: Google email, Dropbox storage provider, some PACS vendors


    • c. Provisioned services: SaaS—software as a service; IaaS—infrastructure as a service; PaaS—platform as a service


    • d. Advantages: reduced operating costs; accessibility to files and data with an Internet connection; recovery of local computer files that are lost or damaged; automated synchronization; increased layers of security; faster information deployment with flexible sizing


    • e. Disadvantages: dependence on an Internet connection with restricted upload/download bandwidth; physical storage devices often needed locally for high-performance access; customer support often lacking; concern about privacy and who owns the data.


    • f. Major Providers: Amazon Web Services (AWS); ServerSpace; Microsoft Azure; Google Cloud.


  • 6. Active Directory



    • a. LDAP—Lightweight Directory Access Protocol—accesses and maintains directory information services over an IP network


    • b. AD—Active Directory—is a service developed by Microsoft for Windows domain networks and servers


    • c. Authenticates and authorizes all users and computers, and enforces security policies on the network


  • 7. Internet of Things (IoT)



    • a. IoT encompasses connected devices on the Internet that have a unique identifier (UID).


    • b. Connected devices provide information and access about their environment and the way they are used.


    • c. In healthcare, IoT applications include remote monitoring of smart sensors, integration of dialysis machines, and all imaging modalities in a radiology department, as examples.


    • d. Device use requires regular patching and updating to reduce risk from outside threats (hackers) and maintenance of security.



5.3 PICTURE ARCHIVING AND COMMUNICATIONS SYSTEM



  • 1. PACS Infrastructure



    • a. A collection of software, interfaces, display workstations, and databases for storage, transfer, and display of medical images (Fig. 5-5)







    • b. “Mini-PACS” provide modality-specific capabilities for handling images not available in a large PACS.



      • (i) Examples include mini-PACS for mammography, nuclear medicine, and ultrasound.


    • c. “Federated PACS” includes independent PACS functionality at different sites and sharing of DICOM images and information through a software federation manager.


    • d. “Web-server PACS” provides remote access to images for referring clinicians and healthcare staff.


    • e. Emergency backup server and business continuity procedures are important infrastructure needs.


    • f. VPN—virtual private network—provides security for devices and software and access by authorized users only through the implementation of an “enterprise firewall.”


  • 2. Image Distribution



    • a. Networks are a key to the distribution of images and associated information in a PACS.


    • b. Bandwidth requirements are dependent on the imaging modalities and their composite workloads.


    • c. Network design must consider both peak and average bandwidth requirements and tolerable delays.


    • d. Segmentation of networks with bandwidths sufficient to meet peak demand are often required.


  • 3. Image Size and Image Compression



    • a. Medical image acquisition in radiology involves massive amounts of data (Table 5-1).


    • b. Image compression reduces image size for efficient network transmission and storage.



      • (i) Various compression-storage/retrieval-decompression algorithms are used.


      • (ii) Redundancies in data can be used to reduce the number of bytes in an image or image series.


    • c. Reversible compression, also known as lossless, preserves the original image data after decompression.



      • (i) Typical compression ratios are 3:1 to 5:1 for most images.










    • d. Irreversible compression, also known as lossy, results in loss of information and detail fidelity.



      • (i) Compression ratios that are “acceptable” depend on the image type (DX, MG, CT, MR…).


      • (ii) More complex single images tolerate lower acceptable compression ratios (up to approximately 10:1).


      • (iii) Data series and video clips (e.g., ultrasound) can achieve approximately 25:1 ratios with acceptable image quality.


      • (iv) Too much compression renders images nondiagnostic (see textbook, Fig. 5-8).


    • e. The FDA, under the Mammography Quality Standards Act (MQSA), prohibits irreversible compression of digital mammography images for retention, transmission, or final interpretation.



      • (i) Exceptions to this rule include images from previous studies if deemed of acceptable quality.


      • (ii) Digital breast tomosynthesis studies use irreversible compression with little loss of fidelity, because of similarities of image content from adjacent reconstructed slices.
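A minimal sketch of reversible (lossless) compression using Python's standard `zlib` on a synthetic 16-bit gradient "image": smooth synthetic data is far more redundant than clinical images, so the printed ratio will exceed the 3:1 to 5:1 typical of practice; the point is only that decompression restores the data exactly.

```python
import zlib

# Build a synthetic 512 x 512, 16-bit "image" with a smooth gradient.
width = height = 512
image = bytearray()
for y in range(height):
    for x in range(width):
        value = (x + y) % 4096               # smooth 12-bit ramp
        image += value.to_bytes(2, "little")

compressed = zlib.compress(bytes(image), level=6)
ratio = len(image) / len(compressed)
print(f"compression ratio {ratio:.1f}:1")

# Reversible: the original bytes are recovered bit-for-bit.
assert zlib.decompress(compressed) == bytes(image)
```

Irreversible (lossy) schemes such as JPEG instead discard detail the transform deems less visible, which is why their ratios—and their acceptability—depend on the image type.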


  • 4. Archive and Storage



    • a. Archive is a location containing records, documents, and objects of historical importance.



      • (i) For radiology PACS, the archive is a long-term storage of medical images in DICOM format.


      • (ii) For an enterprise PACS, the archive is a vendor neutral archive (VNA) that stores all documents, nonimage data (e.g., EKG traces), non-DICOM (e.g., jpeg), and DICOM images.


      • (iii) Vendors use reversible compression to efficiently store data; proprietary, nonstandard compression schemes require vendor-provided proprietary decompression.


    • b. Archive protection from failures and disasters.



      • (i) RAID—redundant array of independent disks (level 5)—to protect from disk failure


      • (ii) Backup mirror copies—on site and geographically remote, to protect from environmental issues


    • c. Archive and storage management software algorithms.



      • (i) Central versus distributed archives.


      • (ii) On-demand management provides instantly available images.


      • (iii) Pre-fetching algorithms retrieve previous comparison studies to a local disk from slow media.


    • d. Hierarchical storage management.



      • (i) On-line—instantaneous access to images


      • (ii) Near-line—storage on less accessible, less expensive media with electronic retrieval—used with pre-fetching algorithms


      • (iii) Off-line—storage that isn’t accessible without human intervention (e.g., disks on shelves)


    • e. Archives and storage in the cloud environment provide great expansion and extensibility.



  • 5. DICOM, HL7, and IHE



    • a. DICOM—represents standards that provide the ability to transfer images and related information between devices (e.g., image acquisition and image storage).



      • (i) Specifies standard formats, services, and messages between devices


      • (ii) Specifies standards for workflow, storage of images on removable media, and consistency and presentation of displayed images


      • (iii) Specifies standard services performed on information objects, such as storage, query and retrieve, storage commitment, and media storage


    • b. DICOM Information Model.



      • (i) Information Object Definition (IOD) entities include modalities (e.g., CT—computed tomography; DX—digital x-radiography; MG—mammography; MR—magnetic resonance imaging; NM—nuclear medicine; PT—positron emission tomography).


      • (ii) Service-Object Pair (SOP) is the union of an IOD and a DICOM Message Service Element (DIMSE)—for example, “Store a CT study.”


      • (iii) Service Class is a collection of related SOPs with a specific definition of a service supported by cooperating devices to perform an action on a specific IOD class; for example, Service Class User (SCU)—invokes operations; Service Class Provider (SCP)—performs operations.


      • (iv) Unique Identifier (UID) is a value (usually a long string of numbers segmented by periods) that identifies items such as a particular SOP class or a DICOM transfer syntax.
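The UID format above follows simple rules—numeric components separated by periods, no leading zeros within a component, and at most 64 characters—which a small validator can sketch:

```python
import re

# One or more numeric components joined by '.', each either "0" or
# a digit string without a leading zero.
_UID_RE = re.compile(r"^(0|[1-9]\d*)(\.(0|[1-9]\d*))*$")

def is_valid_dicom_uid(uid: str) -> bool:
    """Check DICOM UID syntax (not whether the UID is registered)."""
    return len(uid) <= 64 and bool(_UID_RE.match(uid))

print(is_valid_dicom_uid("1.2.840.10008.5.1.4.1.1.2"))  # True (CT Image Storage SOP class)
print(is_valid_dicom_uid("1.2.04"))                     # False: leading zero
```

Syntax validity says nothing about meaning; the well-known UIDs (SOP classes, transfer syntaxes) are assigned in the DICOM standard's registry.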


    • c. Common DICOM vocabulary terms and acronyms (Table 5-2).


    • d. DICOM Modality WorkList (MWL)—provides patient demographics for technologist selection.


    • e. Performed Procedure Step (PPS)—indicates the status of a procedure.


    • f. Grayscale Softcopy Presentation State (GSPS)—captures and stores image adjustments, measurements, and notes for a specific patient.


    • g. Hanging protocols: instructions on how to present images according to radiologist’s preference on a workstation display; uses DICOM image metadata including anatomic laterality and projection, current versus previous studies, modality, acquisition protocols, display format, and other parameters.










    • h. HL7 is the standard for messaging contextual data between the various information systems—EMR, RIS, and LIS (Laboratory Information System)—and PACS for patient demographics, scheduling, reporting, etc.



      • (i) The modality worklist (MWL) broker is the translation device for HL7 messages (patient demographics, etc.) to transfer to the PACS and modalities using DICOM.


    • i. IHE provides integration profiles that use DICOM and HL7 standards to ensure interoperability among different systems and enhance functionality for improved/efficient patient care.


  • 6. Downtime Procedures, Data Integrity, Other Policies



    • a. PACS is a critical part of the medical management of patients, which relies on the availability of images.



      • (i) A well-established set of policies and procedures is necessary to ensure business continuity.


    • b. Downtime procedures must account for scheduled and unscheduled events.



      • (i) Planned downtime: software upgrades, bug fixes, security patches, and preventive maintenance—schedule more time than needed, and have a roll-back plan in case of an unsuccessful attempt.


      • (ii) Unplanned downtime: mitigated by redundancy for critical systems and assessing system architecture for points of failure, having built-in fault tolerance


    • c. Data integrity procedures should be in place—for example, when the MWL is unavailable, manual entry of patient demographic data should be indicated; if images fail to send from a modality, they should be manually sent to the PACS.


  • 7. PACS Quality Control



    • a. PACS maintenance and quality control challenges



      • (i) Verification of automated subsystem components—MWL, radiologist worklist, speed of image access and display, hanging protocols, GSPS, image manipulation, digital speech recognition, prompt transmission of diagnostic reports, accuracy of data and measurements


      • (ii) Ensuring adequate display capabilities and appropriate environmental viewing conditions


    • b. Image display technical considerations



      • (i) Considers the human visual system and optimization of information transfer to the viewer.


      • (ii) Medical diagnostic display types: liquid crystal display (LCD) with backlight or organic light emitting diode (OLED)—most displays are color LCD at the present time.


      • (iii) Display bit depth—most medical grade monitors provide 10 to 12 bits of gray scale (1,024 to 4,096 shades of gray) per pixel; consumer grade displays typically 8 bits per pixel—determined by the video card hardware and firmware.


      • (iv) Display size: medical diagnosis—54 cm diagonal (21.3 inch) with 1,536 pixels (horizontal) and 2,048 pixels (vertical) in portrait mode, with pixel dimension of ~0.2 mm—a “3-megapixel” (MP) monitor that optimizes human visual acuity at a distance of 50 to 60 cm (arm’s length).


      • (v) Mammography display: 5 MP monitor (2,000 × 2,500 pixels) with pixel dimension ~0.16 mm.


      • (vi) Large-format displays—76 cm diagonal (30 inch)—provide “seamless” virtual portrait displays.


      • (vii) Luminance is the rate of light energy emitted or reflected from a surface per unit area per solid angle, measured in candela/m2 (cd/m2); perceived brightness is not proportional to luminance.


      • (viii) Display luminance: medical diagnostic displays—350 cd/m2 or higher; mammography—450 cd/m2 or higher (often up to 1,000 to 1,200 cd/m2).


      • (ix) Other display categories: modality, clinical specialist, EHR applications—with decreasing specifications (e.g., 2 MP, luminance less than 200 cd/m2) and cost, respectively.
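The pixel dimensions quoted above follow from simple geometry. A quick check, assuming square pixels:

```python
import math

def pixel_pitch_mm(diag_cm: float, px_h: int, px_v: int) -> float:
    """Pixel pitch from the diagonal size and pixel matrix.

    The diagonal in pixels is the hypotenuse of the matrix;
    dividing the physical diagonal by it gives the pitch.
    """
    diag_px = math.hypot(px_h, px_v)
    return diag_cm * 10 / diag_px

# 3-MP diagnostic display: 54 cm diagonal, 1,536 x 2,048 matrix.
print(round(pixel_pitch_mm(54, 1536, 2048), 2))   # 0.21 (mm)
```

The 1,536 × 2,048 matrix has a diagonal of exactly 2,560 pixels (a 3-4-5 triangle scaled by 512), so 540 mm / 2,560 ≈ 0.21 mm, consistent with the ~0.2 mm figure above.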


    • c. Image perception and viewing conditions.



      • (i) Assessment of perceived contrast and spatial resolution of the human viewer.


      • (ii) Determined respectively by experiments of contrast signals at a fixed spatial frequency, and high contrast signals at variable spatial frequencies (detail) that are just detectable.


      • (iii) Illuminance is environmental light impinging on a surface, units of lux; 20 to 50 lux is optimal.


      • (iv) Human perception of contrast signals is nonlinear with luminance and has a limited range of fixed visual adaptation for a given luminance level (Fig. 5-6).


      • (v) Luminance ratio is the detection range of threshold contrast—depends on contrast generated in the display as determined by digital driving levels (DDL) and just-noticeable differences (JND) detected by the viewer; to optimize contrast across gray scale, luminance ratio is approximately 250 to 350:1.


      • (vi) Display function describes the display luminance produced as a function of the digital signal, often called a digital driving level (DDL).


    • d. DICOM GSDF—Grayscale Standard Display Function, DICOM Part 14



      • (i) Standardizes the display of image contrast for gray scale images


      • (ii) Provides perceptual linearization, where equal differences in pixel values received by the display system are perceived as equal by the human visualization system








      • (iii) Just-noticeable differences (JND) and JND index—DICOM GSDF defines the JND index as the input value to the GSDF such that one step in the JND index results in a luminance difference that is a just-noticeable difference, as described by the Barten curve (Fig. 5-7).


      • (iv) Measured display function is modified to the “Barten curve” using a Look-Up-Table (LUT) that is used by the video card connected to the display to achieve conformance.







    • e. Display calibration



      • (i) A calibrated photometer and software are used for display measurement—the software generates DDLs to produce the display luminance, which is measured by the photometer.


      • (ii) The luminance values are recorded in a stepwise fashion for the display function measurement.


      • (iii) Software calculates values in a LUT that causes the display system to conform to the GSDF.


      • (iv) The LUT is downloaded to the display card so that DDLs will be translated (Fig. 5-8).


      • (v) Annual evaluation (minimum) for GSDF, luminance, and luminance ratio are recommended.


      • (vi) Qualitative evaluation using SMPTE (Society of Motion Picture and Television Engineers) or AAPM OIQ (overall image quality) patterns for 5% contrast in darkest and brightest areas (Fig. 5-9).
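The calibration steps above—measure the display function, then compute a LUT that makes the output conform to a target curve—can be sketched as follows. The "measured" (gamma-like) and "target" (log-spaced) curves below are illustrative stand-ins, not Part 14 GSDF data; a real calibration would use photometer readings and the Barten-curve GSDF values.

```python
def build_calibration_lut(measured, target):
    """Map each requested DDL to the DDL actually sent to the display.

    measured[d] = luminance (cd/m2) the display produces for DDL d;
    target[d]   = luminance we want DDL d to produce.
    For each target luminance, choose the DDL whose measured
    luminance is closest (a real LUT would interpolate).
    """
    lut = []
    for want in target:
        best = min(range(len(measured)), key=lambda d: abs(measured[d] - want))
        lut.append(best)
    return lut

# Illustrative 8-entry curves spanning 1 to 350 cd/m2: a gamma-2.2
# display versus a target that spaces luminance evenly in log space.
n = 8
lmin, lmax = 1.0, 350.0
measured = [lmin + (lmax - lmin) * (d / (n - 1)) ** 2.2 for d in range(n)]
target = [lmin * (lmax / lmin) ** (d / (n - 1)) for d in range(n)]

lut = build_calibration_lut(measured, target)
print(lut)  # a monotonic remapping of DDLs toward the target curve
```

In practice the LUT has as many entries as DDLs (e.g., 256 or 1,024), is computed by the calibration software from photometer measurements, and is loaded into the video card so every DDL is translated before reaching the panel.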