Copyright © 2011 NVIDIA Corporation and Wen-mei W. Hwu. All rights reserved.
Introduction
The State of GPU Computing in Medical Imaging
The use of GPU computing in medical imaging has exploded over the last few years. Early uses centered on using the texture-mapping capabilities of GPUs to do volume visualization of the 3-D datasets routinely created by modern medical imaging equipment. As people gained more experience with GPU computing and as GPUs became more capable, activity shifted to more sophisticated postprocessing techniques to better support medical diagnosis and research. Many of the visualization and image-processing techniques utilized in medical imaging, such as registration, segmentation, and classification, share methodologies with other disciplines, such as computer vision, and there is a significant amount of cross-pollination between those communities.
A flurry of recent activity has been in using GPU computing to do the initial reconstruction of measurement data into three-dimensional volume sets suitable for visualization and further processing. Before the advent of GPU computing, image reconstruction was done using digital signal processors (DSPs), vector processors, application-specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), and so on. The cost-effectiveness, speed, and programmability of GPU computing are major driving forces in the shift away from these more expensive, less flexible technologies toward image reconstruction on GPUs.
The processing techniques of medical imaging often involve repetitive calculations on large multidimensional arrays of data, which are highly suited to the parallel-processing capabilities of GPUs. In fact, the speed and power of GPU computing have facilitated the clinical use of algorithms that previously were relegated to research laboratories due to their immense computational load. The principal challenge to using GPU computing in medical imaging is managing the massive amounts of data involved, which often overwhelm the memory capacity of existing GPU-based compute engines.
In This Section
The first four chapters deal with tomographic image reconstruction, in which the volumetric data is reconstructed from projection data collected by the measurement instrument at various angles around the object being scanned (i.e., the patient). Measurement instruments that utilize tomographic reconstruction include X-ray CT (computed tomography), where the projection data are attenuated X-ray beams passed through the object, and PET (positron emission tomography), where the projection data are decay events from radioactive isotopes.
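The geometry behind tomographic reconstruction can be illustrated with a toy sketch. The following pure-Python function performs unfiltered backprojection of parallel-beam data onto a small pixel grid; the function name, detector convention, and nearest-neighbor interpolation are illustrative assumptions, and real systems apply filtering (or the iterative methods of the chapters below) to obtain usable images.

```python
import math

def backproject(projections, angles, n):
    """Unfiltered backprojection of parallel-beam data onto an n-by-n grid.

    projections[a][d] is the line integral measured at angle angles[a]
    (radians) in detector bin d; bins are centered on the grid.
    """
    n_det = len(projections[0])
    image = [[0.0] * n for _ in range(n)]
    for a, theta in enumerate(angles):
        c, s = math.cos(theta), math.sin(theta)
        for iy in range(n):
            for ix in range(n):
                # Pixel coordinates centered on the grid.
                x = ix - (n - 1) / 2.0
                y = iy - (n - 1) / 2.0
                # Signed distance of (x, y) from the detector's central ray.
                t = x * c + y * s
                d = round(t + (n_det - 1) / 2.0)
                if 0 <= d < n_det:
                    # Smear this projection sample back along its ray.
                    image[iy][ix] += projections[a][d]
    return image
```

Smearing each projection back across the image along its measurement direction is also why reconstruction maps so well onto GPUs: every pixel can be processed independently.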
Chapter 40, written by Dana Schaa, Benjamin Brown, Byunghyun Jang, Perhaad Mistry, Rodrigo Dominguez, David Kaeli, and Richard Moore, describes the 3-D reconstruction of breast tissue using conventional X-ray images taken from a limited number of angles. This chapter provides a nice high-level description of the iterative reconstruction technique.
Chapter 41, written by Abderrahim Benquassmi, Eric Fontaine, and Hsien-Hsin S. Lee, continues the discussion of iterative reconstruction techniques for improving the quality of CT reconstruction from dedicated CT measurement instruments.
Chapter 42, written by Guillem Pratx, Jing-Yu Cui, Sven Prevrhal, and Craig S. Levin, adds a new twist to the iterative reconstruction story by introducing methods for managing reconstruction of data generated from random events, such as the positron decay events measured in PET imaging. In this case, the projection rays are randomly distributed rather than regularly spaced.
Chapter 43, written by Wei Xu and Klaus Mueller, considers how to optimally select the parameter settings for reconstruction. In essence, these parameter settings drive algorithms such as those introduced in the first three chapters.
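The iterative techniques surveyed in these chapters share a common skeleton: forward-project the current volume estimate, compare the simulated projections against the measured data, and back-project the residual as a correction. A minimal CPU sketch of one such scheme follows (a Landweber/SIRT-style update; the system matrix, step size, and iteration count are illustrative assumptions, not details drawn from any chapter):

```python
import numpy as np

def sirt(A, b, iterations=100, lam=None):
    """Landweber/SIRT-style iterative reconstruction sketch.

    A: system matrix (rays x voxels); b: measured projection data.
    Repeatedly forward-projects the estimate, compares it with the
    measurements, and back-projects the residual as a correction.
    """
    m, n = A.shape
    if lam is None:
        # Step sizes below 2 / ||A||^2 guarantee convergence.
        lam = 1.0 / (np.linalg.norm(A, 2) ** 2)
    x = np.zeros(n)
    for _ in range(iterations):
        residual = b - A @ x            # simulated vs. measured data
        x = x + lam * (A.T @ residual)  # back-project the mismatch
    return x
```

In a real scanner the matrix A is far too large to store explicitly, so the forward and backward projections are computed on the fly; both steps are embarrassingly parallel over rays and voxels, which is what makes this family of algorithms attractive on GPUs.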
The next two chapters deal with reconstruction of magnetic resonance (MR) images, where data collected in the frequency space is transformed into spatial data.
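For fully sampled Cartesian acquisitions, the heart of that transform is an inverse 2-D FFT. Below is a minimal sketch using NumPy, assuming the common convention that the DC sample sits at the center of k-space; clinical reconstructions layer coil combination, regridding for non-Cartesian trajectories, and distortion corrections (such as those described in the next two chapters) on top of this core.

```python
import numpy as np

def reconstruct_mr(kspace):
    """Reconstruct an MR image magnitude from fully sampled Cartesian k-space.

    kspace: 2-D complex array of frequency-domain samples with the DC
    component at the center of the array (the usual scanner convention).
    """
    # Shift DC to the array origin, apply the inverse 2-D FFT,
    # then shift back so the object is centered in the image.
    img = np.fft.fftshift(np.fft.ifft2(np.fft.ifftshift(kspace)))
    return np.abs(img)
```

The FFT's regular, data-independent access pattern is one reason MR reconstruction was among the earliest medical imaging tasks moved to GPUs.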
Chapter 44, written by Yue Zhuo, Xiao-Long Wu, Justin P. Haldar, Thibault Marin, Wen-mei Hwu, Zhi-Pei Liang, and Bradley P. Sutton, describes how GPU computing has allowed them to improve the quality of MR reconstructions by correcting for distortions introduced by the measurement process, while still keeping reconstruction speeds at a clinically acceptable level.
Chapter 45, written by Mark Murphy and Miki Lustig, discusses methods for improving the scanning and reconstruction speed in creating MR images, making MR imaging more useful in pediatric medicine, where the restlessness of the young patients often inhibits its use.
The following three chapters describe the use of GPU computing to manipulate images after they have been collected and reconstructed.
Seamlessly adding GPU computing capabilities to the popular Insight Toolkit (ITK) is the subject of Chapter 46, written by Won-Ki Jeong, Hanspeter Pfister, and Massimiliano Fatica. Because ITK is used in many disciplines to segment, register, and massage image data, the methods introduced by this chapter have applicability well beyond medical imaging.
Chapter 47, written by James Shackleford, Nagarajan Kandasamy, and Gregory Sharp, details how they accelerated a B-spline-based deformable registration algorithm through GPU computing.
Many algorithms for processing medical images depend on well-constructed atlases. In one sense, atlases provide generalized reference images against which other images are compared. Chapter 48, written by Linh Ha, Jens Kruger, Claudio Silva, and Sarang Joshi, discusses how GPU computing can be used to create such atlases.
Chapter 49, written by Won-Ki Jeong, Hanspeter Pfister, Johanna Beyer, and Markus Hadwiger, combines multiple techniques to help solve the tricky problem of tracking and reconstructing 3-D neural circuits (axons) in electron microscope images of brain tissue. From the standpoint of GPU computing, this is a particularly interesting problem in that such pathways are somewhat random, making their reconstruction difficult to parallelize for computation on a GPU. In addition, the electron microscope datasets are extremely large and difficult to manage on a GPU.
Unlike other chapters in this section, Chapter 50, written by Andreu Badal and Aldo Badano, does not deal with actual image data, but describes how to simulate the image acquisition process. Although such simulations are primarily used by researchers trying to improve the image acquisition process, they potentially could be used as part of clinical image processing, for example, in the iterative reconstruction algorithms introduced in the beginning chapters of this section.