Real-Time Volumetric Free-Hand Ultrasound Imaging for Large-Sized Organs: A Study of Imaging the Whole Spine





Abstract


Objectives


Three-dimensional (3D) ultrasound imaging can overcome the limitations of conventional two-dimensional (2D) ultrasound imaging in structural observation and measurement. However, conducting volumetric ultrasound imaging for large-sized organs still faces difficulties including long acquisition time, inevitable patient movement, and 3D feature recognition. In this study, we proposed a real-time volumetric free-hand ultrasound imaging system designed to address these issues and applied it to the clinical diagnosis of scoliosis.


Methods


This study employed an incremental imaging method coupled with algorithmic acceleration to enable real-time processing and visualization of the large amounts of data generated when scanning large-sized organs. Furthermore, to deal with the difficulty of image feature recognition, we proposed two tissue segmentation algorithms to reconstruct and visualize the spinal anatomy in 3D space by approximating the depth at which the bone structures are located and segmenting the ultrasound images at different depths.


Results


We validated the adaptability of our system by deploying it on multiple models of ultrasound equipment and conducting experiments with different types of ultrasound probes. We also conducted experiments on six scoliosis patients and 10 healthy volunteers to evaluate the performance of our proposed method. Ultrasound imaging of a volunteer's spine from shoulder to crotch (more than 500 mm) was performed in 2 minutes, and the 3D imaging results displayed in real time were compared with the corresponding X-ray images, yielding a correlation coefficient of 0.96 in spinal curvature.


Conclusion


Our proposed volumetric ultrasound imaging system holds potential for clinical application to other large-sized organs.


Introduction


Ultrasound is extensively utilized in medical imaging due to its notable advantages of being rapid, convenient, and cost-effective. Compared to the conventional B-mode ultrasound, three-dimensional (3D) ultrasound has the capacity to overcome limitations in structural observation and measurement [ ]. By merging multiple cross-sectional images, it enables comprehensive and detailed visualization of anatomical structures, leading to enhanced accuracy and precision in diagnosing various medical conditions. Therefore, researchers and medical practitioners have been exploring and implementing volumetric ultrasound imaging in clinical areas such as the carotid arteries [ – ], thyroid [ , ], kidney [ , ], heart [ – ], and breast [ ].


However, for large-sized organs such as the liver, muscle, and spine [ ], real-time 3D ultrasound imaging is rather challenging. One of the primary difficulties is the increased time and data volume that needs to be processed in real-time due to the organ’s size. This requires not only high computational performance but also poses challenges for patients who must maintain immobility for longer periods during the imaging process. Furthermore, the presence of abundant soft tissue surrounding the organ can introduce interference in the identification of organ features, ultimately impacting the physician’s observation of the 3D image [ ]. To address the above challenges, we designed a real-time volumetric ultrasound imaging system and tested its performance in applying it to the clinical diagnosis of scoliosis.


Scoliosis is a medical condition characterized by a 3D spinal deformity involving lateral deviation and axial rotation of the vertebral column [ ]. Adolescent idiopathic scoliosis (AIS) is the most prevalent form of scoliosis, affecting about 5% of children in China [ ]. Regular diagnostic imaging, specifically radiography (commonly known as X-ray photography) [ ], is essential for both clinical diagnosis and monitoring of patients with AIS [ ]. However, Simony et al. [ ] discovered that the AIS patients they examined had an overall cancer rate of 4.3%, five times higher than that of the age-matched Danish population. Considering patients' health, doctors today consciously reduce the frequency of X-ray usage. Nevertheless, whether undergoing treatment or annual physical examination, adolescents with scoliosis still undergo X-ray examinations at least every six months until skeletal maturity [ ]. In contrast to X-ray photography, 3D ultrasound imaging offers advantages for scoliosis examinations. First, it can visualize all types of structures in the spine; second, it is a radiation-free imaging technique, eliminating the potential risk of X-ray radiation exposure [ ]. Additionally, 3D ultrasound imaging provides a 3D view of the spine, enabling better visualization of spinal structures. This enhanced visualization facilitates accurate measurement of the ultrasound curve angle (UCA), a crucial parameter in assessing scoliosis [ , ].


The advantages above have motivated researchers to develop systems for 3D ultrasound imaging of the spine. Purnama et al. [ ] employed a segmented scanning and stitching method to address the challenges posed by the large amount of data and long acquisition time. However, this approach compromised real-time imaging capability, and their final 3D imaging lacked an effective way to visualize the spinal anatomy. Jiang et al. [ ] presented a new method for real-time imaging of the spine. In that study, a rendering method was utilized to generate coronal images directly from the raw image dataset, eliminating the need to reconstruct the 3D volume and resulting in significant time savings. Additionally, the image information was captured from a fixed depth below the skin to obtain the spinal anatomy. Victorova et al. [ ] used robotic manipulators to automate 3D ultrasound assessment for scoliosis. However, this system requires the operator to first manually define the path for the robot to follow. There are also many studies based on the Scolioscan (TMIL, Shenzhen, China) [ – ]. It is important to note, however, that this system uses a custom-designed linear probe with a frequency of 4–10 MHz and a width of 10 cm [ ]. While the wide probe allows data acquisition to be accomplished with a single-column scan, we found that the large probe struggled to conform well to the patient's uneven back when collecting patient data. Moreover, for some obese people, ultrasound probes with a high center frequency have poor imaging quality in deep tissues, which may affect the imaging of the spinal anatomy.


Currently, there are two primary approaches for presenting structural information of the spine. The first method involves performing bone surface feature extraction either manually or using trained neural networks after the data acquisition is completed [ – ]. However, manual operation is time-consuming, and neural networks may be influenced by the presence of soft tissues surrounding the bone structure, resulting in unsatisfactory spinal anatomy results. The second method is based on a simple projection method to obtain an anatomical map of the spine by extracting high-brightness or fixed-depth information [ , ]. This method is simple to implement and can therefore be applied to the real-time visualization of spinal anatomy. However, it ignores the interference of some tissue above the bones and the fact that the thoracic and lumbar vertebrae are at different depths from the skin surface.


In this study, we proposed a new approach to deal with the above challenges. Unlike the customized large linear-array probe utilized in the Scolioscan system, our system accommodates different ultrasound probes through generic fixing molds. The ability to use various types of probes ensures that the system has the required data acquisition depth and skin fit. However, using small ultrasound probes increases the imaging time for the same region. To compensate for this, we introduced a novel body immobilization solution that provides more directional restriction than other studies did [ ]. We also introduced a real-time algorithm for optical marker recognition, leading to faster localization and a higher sampling frame rate, which enables quicker scanning and reduces data acquisition time. To address the difficulty of processing a substantial volume of data in real-time, we implemented an incremental imaging method coupled with algorithmic acceleration to ensure that the acquired image data can be reconstructed and visualized in real-time. Additionally, when acquiring the spinal anatomy for clinical diagnosis, intercepting information at a fixed depth requires the probe to remain in the same orientation throughout the scan. We therefore proposed a feature extraction method that dynamically recognizes the depth of cut, allowing free movement during the scanning process. This not only simplifies the requirements for the physician but also minimizes the impact on the final imaging results of inconsistencies in the depth of the thoracic and lumbar spine relative to the skin. Overall, the advances we made in real-time processing, optical localization, system adaptability, and anatomical feature extraction contribute to the application of our real-time volumetric free-hand ultrasound imaging system, particularly for large-sized organs.


Method


Data acquisition


Our proposed system uses a camera to localize an optical marker tied to the ultrasound probe to acquire the spatial position of the ultrasound probe. The ultrasound machine display screen is transmitted in real-time to a computer display through a frame grabber, and we obtain the corresponding B-mode image data by taking screenshots.


In the first stage, we selected 10 healthy volunteers for 3D ultrasound imaging of the spine to analyze the reliability of the imaging system. In the second stage, we recruited six scoliosis patients at the Department of Spine Surgery, Affiliated Drum Tower Hospital, Medical School of Nanjing University, China. The study was approved by the institutional ethics committee. All participants were fully informed about the study procedures and provided written informed consent before participating. Two system developers performed 3D ultrasound imaging of the spine on the participants, who also underwent a whole-spine radiographic examination within two weeks.


Calibration and positioning of ultrasound probes


Because acquiring a B-mode image involves a more complex processing pipeline than optical localization, there is a delay between the screenshot and the ultrasound probe localization data, so the two cannot be matched directly. To address this mismatch between two-dimensional (2D) ultrasound images and spatial localization data, we employed the temporal calibration method proposed by Treece et al. [ ]. This method determines the system delay by comparing the vertical motion of a line segment in the 2D image with the vertical motion of the ultrasound probe captured by the localization system.
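The core of such a temporal calibration is finding the lag that best aligns the two vertical-motion signals. The sketch below illustrates the principle with a simple cross-correlation; it is not the exact procedure of Treece et al., and the function and signal names are illustrative.

```python
import numpy as np

def estimate_delay(image_motion, probe_motion):
    """Estimate the lag (in frames) between the vertical motion observed
    in the B-mode images and the vertical motion reported by the optical
    localization system. Both signals are sampled at the same frame rate;
    the result is positive when the image signal trails the tracker."""
    a = image_motion - np.mean(image_motion)
    b = probe_motion - np.mean(probe_motion)
    corr = np.correlate(a, b, mode="full")    # correlation at every lag
    lags = np.arange(-len(b) + 1, len(a))     # lag axis for 'full' mode
    return lags[np.argmax(corr)]

# Synthetic check: the image signal is the tracker signal delayed by 5 frames.
rng = np.random.default_rng(0)
probe = rng.standard_normal(300)
image = np.roll(probe, 5)
print(estimate_delay(image, probe))  # → 5
```

Once the lag is known, each screenshot can simply be paired with the localization sample recorded that many frames earlier.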


The spatial localization of the ultrasound probe as well as the calibration work relies on the optical calibration technique proposed previously by our research group [ ]. By combining Augmented Reality University of Cordoba (ArUco) code with circular arrays, the optical marker can improve the accuracy of optical localization to 0.02 mm. The positional relationship between the optical marker and the ultrasound image is based on a multi-layer N-line calibration technique. Ultimately, the entire system enables spatial localization of 2D ultrasound images with an accuracy of 1 mm.


Furthermore, we optimized the optical localization process to enhance the data acquisition frame rate and minimize sampling time. This was achieved by recording the rough area of the optical marker in the previous frame obtained from the camera. We then crop the currently acquired image to that rough area before detection. By employing this approach, we only need to search one-third or even a smaller fraction of the original image, thus reducing the optical marker localization time to 30 ms.
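The search-area cropping described above can be sketched as follows. This is a minimal illustration, not the actual implementation: `detect_marker` is a hypothetical stand-in for the ArUco-based detection step, and the margin is an assumed tuning parameter.

```python
import numpy as np

def track_marker(frame, detect_marker, prev_bbox=None, margin=60):
    """Search for the optical marker only inside an expanded version of
    the bounding box found in the previous frame; fall back to a
    full-frame search if there is no prior box or the cropped search
    fails. `detect_marker(image)` returns a bounding box (x, y, w, h)
    in the coordinates of the image it is given, or None."""
    h, w = frame.shape[:2]
    if prev_bbox is not None:
        x, y, bw, bh = prev_bbox
        x0, y0 = max(0, x - margin), max(0, y - margin)
        x1, y1 = min(w, x + bw + margin), min(h, y + bh + margin)
        found = detect_marker(frame[y0:y1, x0:x1])
        if found is not None:
            fx, fy, fw, fh = found
            return (fx + x0, fy + y0, fw, fh)  # map back to full-frame coords
    return detect_marker(frame)               # full-frame fallback

# Stand-in detector: bounding box of saturated pixels.
def fake_detect(img):
    ys, xs = np.nonzero(img == 255)
    if len(xs) == 0:
        return None
    return (int(xs.min()), int(ys.min()),
            int(xs.max() - xs.min() + 1), int(ys.max() - ys.min() + 1))

frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:120, 200:230] = 255                  # synthetic "marker"
bbox = track_marker(frame, fake_detect)        # full-frame search first
print(track_marker(frame, fake_detect, prev_bbox=bbox))  # → (200, 100, 30, 20)
```

Since the detector only ever examines the cropped region after the first frame, per-frame cost drops roughly in proportion to the crop size.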


Real-time 3D imaging


After carefully reviewing the findings of Solberg et al. [ ] and taking into account factors such as time consumption and imaging effectiveness, we decided to utilize the pixel-nearest neighbor (PNN) algorithm for the reconstruction process [ ]. This algorithm was selected based on its proven capabilities and suitability for our needs, ensuring desirable results in terms of both reconstruction speed and geometry measurements of large-sized targets.


To achieve real-time 3D imaging, this study employed an incremental imaging method with algorithmic acceleration using the CUDA Toolkit (CUDA, NVIDIA, USA) and multithreaded processing. Typically, when the amount of accumulated data is large, reconstructing and visualizing all of the data at 1 mm resolution takes more than 3 minutes. The incremental reconstruction method splits the image reconstruction process so that only the newly acquired data are reconstructed each time, reducing the amount of data in a single reconstruction pass. Moreover, in the visualization process, only the areas that need to be updated are processed, which reduces the rendering workload and saves substantial computational time. We use multithreaded processing to carry out data acquisition, reconstruction, and rendering simultaneously, giving the system real-time imaging capability. During the spine scanning process, the operator held the ultrasound probe with its long axis perpendicular to the direction of the spine and scanned three columns side by side from the cervical to the lumbar spine. Figure 1 demonstrates the real-time 3D imaging of the spine at 40 s, 80 s, and 120 s of data acquisition time. Once data acquisition was completed, real-time imaging ceased, and the entire image was automatically saved to the computer as slices along the x-axis.
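The bin-filling step at the heart of incremental PNN reconstruction can be sketched as below. This is a simplified, single-threaded illustration (the actual system uses CUDA and multithreading); the function name, data layout, and the idea of returning the touched voxels as the region to re-render are illustrative assumptions.

```python
import numpy as np

def insert_frame(volume, coords_img, values, pose, origin, voxel_size):
    """Pixel-nearest-neighbour (PNN) bin filling for one new B-mode frame.

    volume     : 3-D uint8 array holding the running reconstruction
    coords_img : (N, 2) pixel positions in the image plane, in mm
    values     : (N,) grey values of those pixels
    pose       : 4x4 matrix mapping image coordinates to world coordinates
    origin     : world coordinate (mm) of voxel (0, 0, 0)
    voxel_size : edge length of a voxel in mm
    Returns the voxel indices touched by this frame, i.e. the only
    region the renderer needs to update."""
    n = coords_img.shape[0]
    homog = np.column_stack([coords_img, np.zeros(n), np.ones(n)])
    world = (pose @ homog.T).T[:, :3]                  # image plane -> world
    idx = np.round((world - origin) / voxel_size).astype(int)
    ok = np.all((idx >= 0) & (idx < np.array(volume.shape)), axis=1)
    idx = idx[ok]
    volume[idx[:, 0], idx[:, 1], idx[:, 2]] = values[ok]
    return np.unique(idx, axis=0)

# Identity pose, 1 mm voxels: two pixels land in two voxels.
volume = np.zeros((64, 64, 64), dtype=np.uint8)
coords = np.array([[0.0, 0.0], [2.0, 3.0]])
vals = np.array([80, 200], dtype=np.uint8)
touched = insert_frame(volume, coords, vals, np.eye(4), np.zeros(3), 1.0)
print(volume[2, 3, 0], len(touched))  # → 200 2
```

Because each call writes only the new frame into the volume and reports which voxels changed, reconstruction and rendering cost stays proportional to the newly acquired data rather than to the whole accumulated volume.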




Figure 1


Real-time 3D imaging of the spine at 40 s, 80 s, and 120 s of data acquisition time.


Feature extraction


When performing spine data acquisition on the human body, we found that some high-brightness tissues appear above the spine and that the depth of the bone surface varies greatly at different locations along the spine, as shown in Figure 2 . These factors made it difficult to show the anatomy of the spine. To address this problem, we proposed two new tissue segmentation algorithms. One is used for real-time 3D ultrasound imaging and solves the problem that the spine of the same person lies at different depths at different locations. The other is used for the 3D visualization process after data acquisition and is simpler to use, requiring no parameter adjustment.




Figure 2


(a) Tissue with imaging results similar to bone. SP, spinous process; TP, transverse process; SAP, superior articular process; IAP, inferior articular process. (b) Projection on the sagittal plane of the raw 3D spine image. D1, D2, and D3 represent the distance from the skin to the bone structure located at the shoulder, chest, and lumbar region.


In the first tissue segmentation algorithm, employed in real-time imaging, we utilize the anatomical structure of the human back, as shown in Figure 3 . By calculating the spatial distance between the current ultrasound image and the neck, where the initial B-mode ultrasound image is located, we dynamically adjust the image's depth of cut based on the acquired probe localization data. The procedure of the tissue segmentation algorithm is described in detail in Table 1 .




Figure 3


Schematic of the tissue segmentation algorithm 1. (a) The blue line indicates the probe trajectory, and the red line illustrates the ultrasound image at the corresponding position. (b) Position of the B-mode image.


Table 1

The procedure of algorithm 1








Algorithm 1 Tissue segmentation algorithm used in real-time imaging

Input: the screenshots Img, the length of the back L, and the transformation matrices T_W^Cam and T_T^W.
Output: the visualization of the spinal anatomy.

Step 1. Extract the B-mode image Img_B from Img.
Step 2. Calculate the position P_c1 of the upper-left pixel of the initial B-mode image in the camera coordinate system (P_0 is the origin coordinate, which we take as the camera position): P_c1 = T_W1^Cam T_T^W P_0.
Step 3. Calculate the position P_cn of the upper-left pixel of the n-th B-mode image in the camera coordinate system: P_cn = T_Wn^Cam T_T^W P_0.
Step 4. Calculate the position of the n-th B-mode image relative to the middle of the back along the x-axis (the spine direction) and obtain the corresponding depth of cut Cut_n: Cut_n = K |P_c1.x − P_cn.x − L/2| + D.
Step 5. Cut the image Img_B at the corresponding depth of cut to obtain the image Img_C.
Step 6. Reconstruct the image Img_C using T_W^Cam and T_T^W: Spine = T_W^Cam T_T^W Img_C.
Step 7. Visualize the spinal anatomy using the Visualization Toolkit (VTK, Kitware, USA).
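Steps 4 and 5 of Algorithm 1 can be sketched in code as follows. The constants K and D and the interpretation of the cut (zeroing out rows above the cut depth) are illustrative assumptions; positions are in millimetres.

```python
import numpy as np

def depth_of_cut(delta_x, back_length, k, d):
    """Step 4 of Algorithm 1: Cut_n = K * |P_c1.x - P_cn.x - L/2| + D.
    delta_x is P_c1.x - P_cn.x, the distance travelled along the spine
    direction (mm); k and d are empirically tuned slope and offset."""
    return k * abs(delta_x - back_length / 2) + d

def cut_image(img_b, cut_mm, mm_per_pixel):
    """Step 5: suppress everything above the computed cut depth, leaving
    the rows that contain the bone surface (one plausible reading of
    'cutting' the image)."""
    row = int(round(cut_mm / mm_per_pixel))
    cropped = img_b.copy()
    cropped[:row, :] = 0
    return cropped

# Per the formula, the cut is shallowest mid-back and deepest at the ends.
print(depth_of_cut(250.0, 500.0, k=0.05, d=15.0))  # mid-back → 15.0
print(depth_of_cut(0.0, 500.0, k=0.05, d=15.0))    # start of scan → 27.5
```

Each incoming frame's depth of cut thus depends only on how far along the back it was acquired, so the operator is free to tilt the probe during the scan.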

May 10, 2025 | Posted by in ULTRASONOGRAPHY | Comments Off on Real-Time Volumetric Free-Hand Ultrasound Imaging for Large-Sized Organs: A Study of Imaging the Whole Spine
