The role of mobile ubiquitous computing has grown significantly over the past few years. Current mobile devices, such as smartphones and tablets, have a number of unique characteristics that make them suitable platforms for medical applications. Their configuration, computing capability, and display quality and resolution are comparable to those of desktop counterparts available a few years ago. Yet their portability and always-on connectivity allow a medical doctor or health care provider to conduct the diagnostic process and follow-up without being constrained to a workstation computer in the hospital facility.
We introduce a pipeline for medical visualization of Computed Tomography (CT) and Magnetic Resonance Imaging (MRI) data that is uniform across a variety of applications, such as CT angiography, virtual colonoscopy, and brain imaging. We concentrate on two main architectures for volumetric rendering of medical data: rendering the data entirely on the mobile device, once the data has been transmitted to it, and a thin-client architecture, in which the entire data set resides on a remote server, where the image is rendered and then streamed to the client mobile device.
The need for maintaining a system with dual rendering pipelines arises from the nature of the applications. While the mobile market offers devices whose computing power would, several decades ago, have sufficed to launch a rocket to the moon, rendering performance and wireless bandwidth remain concerns for medical volume rendering. Here we must weigh the size of the medical imaging data produced by current CT and MRI scanners against the complexity of the volumetric rendering algorithms. For example, CT Angiography (CTA) data reaches a resolution of 512^3 per time step and spans the time domain to capture the beating heart. At 16-bit precision, given 10 snapshots from a 64-slice CT scanner, a single data set can easily reach 2.5 gigabytes, and up to 6 gigabytes for a 320-slice CT scanner. This explosion in data size makes transferring the data to the mobile device unrealistic, and even if that were achieved, the rendering performance of the device would remain a bottleneck.
In our system, we retain rendering on the mobile device for processing smaller data sets under conditions of low connectivity or a complete absence of network access. In the thin-client architecture, we utilize the display and interaction capabilities of the mobile device while performing interactive volume rendering on servers capable of handling large data sets. Upon the user's request, the volume is rendered on the server and encoded into an H.264 video stream. We chose this format because it is widely supported on mobile devices and, depending on the profile, can be hardware accelerated, which allows faster compression on the server and lowers the power requirements on the mobile device while still allowing higher-resolution video to be streamed. The choice of low-latency CPU- and GPU-based encoders is particularly important for our system to accommodate interactivity.
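The server side of the thin-client loop can be sketched as follows. The `Frame`, `StubEncoder`, and `serveFrame` names are hypothetical placeholders for illustration only; a production server would fill the frame with a GPU volume ray caster and replace the stub with a low-latency H.264 encoder such as x264 or a hardware encoder:

```cpp
#include <cstdint>
#include <vector>

// Hypothetical placeholder types -- not our actual server API.
struct Frame {
    std::vector<std::uint8_t> rgba;  // rendered pixels
    int width, height;
};

struct StubEncoder {
    int encoded = 0;
    // Stand-in for an H.264 encode call; the ~50:1 ratio is illustrative.
    std::vector<std::uint8_t> encode(const Frame& f) {
        ++encoded;
        return std::vector<std::uint8_t>(f.rgba.size() / 50);
    }
};

// One iteration of the server loop: render the volume for the current
// camera state, encode the frame, and return the packet that the
// network layer streams to the mobile client.
std::vector<std::uint8_t> serveFrame(StubEncoder& enc, int w, int h) {
    Frame f{std::vector<std::uint8_t>(static_cast<std::size_t>(w) * h * 4),
            w, h};
    // (volume ray casting would fill f.rgba here)
    return enc.encode(f);
}
```

Keeping this loop synchronous with client interaction events is what makes the low-latency encoder choice critical: each camera update must complete a render-encode-stream round trip before the next frame is useful.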
As mobile devices establish new modes of interaction, we explore and develop 3D user interfaces for interacting with the volume-rendered visualization, including touch-based interaction for improved exploration of the data.
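One common building block for such touch navigation is mapping a one-finger drag to a trackball (arcball) rotation of the volume. The sketch below is a generic illustration of that technique, not our actual UI code; it projects normalized touch coordinates onto a virtual sphere and derives the rotation angle of a drag:

```cpp
#include <cmath>

struct Vec3 { float x, y, z; };

// Project a touch point in normalized device coordinates ([-1,1] per axis)
// onto the virtual trackball sphere; points outside the sphere land on its rim.
Vec3 toSphere(float x, float y) {
    float d2 = x * x + y * y;
    if (d2 <= 1.0f)
        return {x, y, std::sqrt(1.0f - d2)};  // on the sphere
    float d = std::sqrt(d2);
    return {x / d, y / d, 0.0f};              // on the rim
}

// Rotation angle (radians) induced by a drag from (x0,y0) to (x1,y1).
// The rotation axis would be the cross product of the two sphere points,
// typically fed into a quaternion applied to the camera.
float dragAngle(float x0, float y0, float x1, float y1) {
    Vec3 a = toSphere(x0, y0), b = toSphere(x1, y1);
    float dot = a.x * b.x + a.y * b.y + a.z * b.z;
    if (dot > 1.0f) dot = 1.0f;
    if (dot < -1.0f) dot = -1.0f;
    return std::acos(dot);
}
```

For example, a drag from the screen center to the right edge rotates the volume by 90 degrees, which gives a direct, predictable feel on a touch display.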
In this work, we describe our implementation and compare the results of the two approaches to volumetric rendering on a mobile device. We also demonstrate an application of our work to CTA visualization on a commodity tablet device.
Volumetric rendering (ray casting, 3D texture slicing), Phong shading, lighting, 1D and 2D transfer functions, touch navigation
C++, OpenGL, Qt
Microsoft Surface Pro
Kaloian Petkov, Charilaos Papadopoulos, Siddhesh Shirsat, Xin Zhao, Ji Hwan Park, Arie Kaufman, Ronald Cha