We introduce a novel remote volume rendering pipeline for medical visualization, aimed at mHealth (mobile health) applications. The need for such a pipeline stems from the large size of the medical imaging data produced by current CT and MRI scanners, combined with the complexity of volumetric rendering algorithms. For example, a typical CT Angiography (CTA) scan easily reaches a resolution of 512^3 voxels, and time-varying acquisitions, such as those capturing a beating heart, can exceed 6 gigabytes. This explosion in data size makes transfers to mobile devices challenging, and even when the transfer problem is resolved, the rendering performance of the device remains a bottleneck. To address this, we propose a thin-client architecture in which the data resides entirely on a remote server, where each image is rendered and then streamed to the client mobile device. We exploit the display and interaction capabilities of the mobile device while performing interactive volume rendering on a server capable of handling large datasets. Specifically, upon user interaction the volume is rendered on the server and encoded into an H.264 video stream. H.264 is ubiquitously hardware accelerated, resulting in faster compression and lower power consumption. The choice of low-latency CPU- and GPU-based encoders is particularly important to the interactive nature of our system. We demonstrate a prototype of our framework with various medical datasets on commodity tablet devices.
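The core of the server side is a render-on-demand loop: an interaction event from the client triggers a volume render and an H.264 encode, and the resulting packet is streamed back. A minimal C++ sketch of that loop is below; `renderVolume`, `encodeH264`, and the frame/packet structs are hypothetical stubs standing in for the actual OpenGL ray caster, NVENC/CPU encoder, and LIVE555 RTSP sender.

```cpp
// Sketch of the server-side render/encode loop (all names are illustrative
// stubs, not the actual system's API).
#include <cassert>
#include <cstdint>
#include <queue>
#include <vector>

struct CameraEvent { float yaw, pitch, zoom; };   // interaction sent by the client
struct Frame  { std::vector<uint8_t> pixels; };   // rendered RGBA frame
struct Packet { std::vector<uint8_t> nal; };      // encoded H.264 NAL unit

// Stub: ray-cast the volume from the current camera
// (real system: GPU ray caster via OpenGL).
Frame renderVolume(const CameraEvent& cam) {
    Frame f;
    f.pixels.assign(512 * 512 * 4, static_cast<uint8_t>(cam.zoom)); // placeholder image
    return f;
}

// Stub: compress one frame (real system: NVENC or a CPU encoder in
// low-latency mode, so no frame look-ahead delays the stream).
Packet encodeH264(const Frame& f) {
    Packet p;
    p.nal.assign(f.pixels.begin(), f.pixels.begin() + 64); // pretend-compressed payload
    return p;
}

// Key thin-client property: the server renders and encodes only in response
// to user interaction, so an idle viewer costs no GPU time or bandwidth.
int framesStreamed(std::queue<CameraEvent> events) {
    int sent = 0;
    while (!events.empty()) {
        Frame f = renderVolume(events.front());
        events.pop();
        Packet p = encodeH264(f);
        if (!p.nal.empty()) ++sent;               // stand-in for streaming the packet
    }
    return sent;
}
```

One frame per interaction event keeps end-to-end latency low: the client never waits on frames it did not request, and the encoder's low-latency configuration avoids buffering multiple frames before emitting a packet.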
C++, OpenGL, Qt, LIVE555, NVCUVENC/NVENC
Server: Dell Precision T7600 workstation with dual 6-core CPUs, 64GB of memory, and an NVIDIA Quadro K5000 GPU
Clients:
- Samsung 700T (2013)
- Microsoft Surface Pro (2013)
- IBM ThinkPad X41 (almost 10 years old!)
This work was done in collaboration with Kaloian Petkov, Charilaos Papadopoulos, Xin Zhao, and Ji Hwan Park, under the supervision of Professor Arie Kaufman and Ronald Cha (Samsung).