All projects will be conducted in the well-equipped Visualization Lab. You will work closely with a Ph.D. student or a post-doctoral associate. Future RA support is available to strong students on all of the projects.
Real walking offers a higher sense of immersive presence for virtual reality (VR) applications than alternative locomotion methods such as walking-in-place and external control gadgets, but it must take into account differing room sizes, wall shapes, and surrounding objects in the virtual and real worlds. Despite perceptual studies of impossible spaces and redirected walking, there are no general methods to match a given pair of virtual and real scenes.
We propose a system to match a given pair of virtual and physical worlds for immersive VR navigation. We first compute a planar map between the virtual and physical floor plans that minimizes angular and distal distortions while conforming to the virtual environment goals and physical environment constraints. Our key idea is to design maps that are globally surjective, to allow proper folding of large virtual scenes into smaller real scenes, but locally injective, to avoid locomotion ambiguity and intersection with virtual objects. From these maps we derive altered renderings that guide user navigation within the physical environment while retaining visual fidelity to the virtual environment. The key to this step is to warp the virtual-world appearance onto the real-world geometry with sufficient quality and performance. We evaluate our method through a formative user study, and demonstrate applications in gaming, architecture walkthrough, and medical imaging.
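The local-injectivity requirement above has a simple geometric reading on a triangulated floor plan: a map is fold-free wherever every mapped triangle keeps its orientation. The following is a minimal illustrative sketch of that check, not the paper's actual optimization; the map functions here are made-up examples.

```python
# Minimal sketch (not the paper's solver): a planar map is locally
# injective on a triangulated floor plan when every mapped triangle
# keeps a positive signed area (no local fold-over). Global folding
# of a large virtual scene into a small real one is still permitted.

def signed_area(a, b, c):
    """Twice the signed area of triangle (a, b, c)."""
    return (b[0] - a[0]) * (c[1] - a[1]) - (c[0] - a[0]) * (b[1] - a[1])

def locally_injective(vertices, triangles, f):
    """True if the map f preserves orientation on every triangle."""
    mapped = [f(v) for v in vertices]
    return all(signed_area(*(mapped[i] for i in tri)) > 0 for tri in triangles)

# Example: a pure contraction (shrinking a large virtual room into a
# smaller real one) stays locally injective; a reflection does not.
verts = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0), (1.0, 1.0)]
tris = [(0, 1, 2), (1, 3, 2)]
shrink = lambda p: (0.5 * p[0], 0.5 * p[1])
flip = lambda p: (-p[0], p[1])
print(locally_injective(verts, tris, shrink))  # True
print(locally_injective(verts, tris, flip))    # False
```

In the paper's setting this test would be one constraint inside a larger distortion-minimizing optimization, not a standalone check.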
Plan and requirements: We are planning to submit a SIGGRAPH 2017 Emerging Technologies live demo (link) based on our previous technical paper. Students are expected to have a sufficient background in C++/OpenGL. Experience with 3D modeling software (Unity, Blender, etc.) or motion-capture hardware/software is a plus.
Sun, Q., Wei, L.Y. and Kaufman, A., 2016. Mapping virtual and physical reality. ACM Transactions on Graphics (TOG), 35(4), p.64. (PDF)
Accurate segmentation of abdominal organs from medical images is an essential part of surgical planning and computer-aided disease diagnosis. Many existing algorithms are specialized for the segmentation of healthy organs. Cystic pancreas segmentation is especially challenging due to its low-contrast boundaries, variability in shape and location, and the stage of the pancreatic cancer. For this project, which is a continuation of our recent work, students will develop an algorithm for automatic segmentation of the pancreas and pancreatic masses from abdominal CT scans.
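To make the segmentation task concrete, here is a toy seeded region-growing sketch, one of the simplest classical segmentation ideas, shown in 2D on a made-up intensity grid. It is illustrative only and is not the published pancreas pipeline, which has to cope with exactly the low-contrast boundaries that defeat a fixed intensity tolerance like this one.

```python
# Toy seeded region growing (illustrative only, not the published
# pancreas method): starting from a seed pixel, grow the region over
# 4-connected neighbors whose intensity stays within a tolerance of
# the seed's intensity. The same idea extends to 3D CT volumes with
# 6-connected voxel neighborhoods.
from collections import deque

def region_grow(image, seed, tol):
    """image: 2D list of intensities; seed: (row, col); tol: max |diff|."""
    rows, cols = len(image), len(image[0])
    base = image[seed[0]][seed[1]]
    seen = {seed}
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols and (nr, nc) not in seen
                    and abs(image[nr][nc] - base) <= tol):
                seen.add((nr, nc))
                queue.append((nr, nc))
    return seen

# A bright 2x2 "organ" on a dark background:
img = [[0, 0, 0, 0],
       [0, 9, 9, 0],
       [0, 9, 9, 0],
       [0, 0, 0, 0]]
print(sorted(region_grow(img, (1, 1), tol=1)))
# [(1, 1), (1, 2), (2, 1), (2, 2)]
```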
Requirements: Strong background in C++ and Python; experience in image processing and machine learning.
Dmitriev, K., Gutenko, I., Nadeem, S. and Kaufman, A., 2016. Pancreas and cyst segmentation. In SPIE Medical Imaging (p. 97842C). International Society for Optics and Photonics. (PDF)
Pancreatic cancer became the third leading cause of cancer-related mortality in the United States in 2015. A significant fraction of pancreatic cancer cases is thought to originate from curable precancerous cystic lesions, which can be detected during a CT screening. The main task of this project is to develop a classification algorithm for (already pre-segmented) cysts, using methods from image processing, computer vision, and machine learning.
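As a minimal illustration of what "classification" means here, the sketch below runs a nearest-centroid classifier over hand-crafted feature vectors. The cyst-type labels are real categories, but the feature vectors and the choice of classifier are invented for illustration; the project would choose its own features and a stronger model.

```python
# Hedged sketch: nearest-centroid classification over hand-crafted
# features (e.g., intensity statistics of a pre-segmented cyst). The
# feature vectors below are made up for illustration only.

def centroid(vectors):
    """Component-wise mean of a list of equal-length feature vectors."""
    n = len(vectors)
    return [sum(v[i] for v in vectors) / n for i in range(len(vectors[0]))]

def nearest_centroid(train, x):
    """train: {label: [feature vectors]}. Returns the label whose class
    centroid is closest to x in squared Euclidean distance."""
    def dist2(a, b):
        return sum((ai - bi) ** 2 for ai, bi in zip(a, b))
    cents = {label: centroid(vs) for label, vs in train.items()}
    return min(cents, key=lambda label: dist2(cents[label], x))

train = {
    "serous":   [[0.2, 0.1], [0.3, 0.2]],   # hypothetical feature vectors
    "mucinous": [[0.8, 0.7], [0.9, 0.6]],
}
print(nearest_centroid(train, [0.85, 0.65]))  # mucinous
```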
Requirements: Strong background in C++ and Python; experience in image processing and machine learning.
Visualizing on an everyday desktop screen means reducing the multi-dimensional data you wish to visualize onto a two-dimensional display system. New hardware such as 3D displays, 3D projectors, and VR and AR head-mounted displays aims to overcome this limitation. However, this hardware poses limitations of its own: purchase cost, nausea, and limited field of view, to name but a few.
AR headsets, in particular, pose two major challenges: a limited field of view and the low opacity (rather translucent quality) of augmented objects. The aim of this project is to tackle these two obstacles, in combination with a standard display screen, to produce a 3D interactive desktop. The main idea is to develop a system where, when a user is visualizing multi-dimensional data (for simplicity, let's assume a 3D cube), the object information is communicated to the AR headset (a Microsoft HoloLens, for example). The headset then identifies and locates the object on the screen and places the augmented reconstruction of the object at its exact position. The user may then interact with the object through the headset's interaction commands.
The major points of this project are:
Virtual colonoscopy (VC) is a non-invasive screening technique for colorectal cancer. A patient undergoes a CT scan, and the colon is digitally cleansed and reconstructed from the CT images. A radiologist can then virtually navigate through this reconstructed colon looking for polyps, the precursors of cancer. For an improved visual appearance, volume rendering of the CT data is preferred over rendering a triangular mesh, yielding a smoother and more accurate view of the colon surface.
Currently, VC systems are displayed on a conventional desktop screen. Our current work advances VC into immersive environments, developing an immersive VC (iVC) that gives the user a greater field of view and field of regard, which should lead to increased accuracy and decreased interrogation time. To accomplish this, high-resolution imagery must be generated at high frame rates, in stereo, to provide smooth motion when flying through the colon. To this end, we seek to increase the speed of the volume rendering. Using Omegalib, we are developing a hybrid visualization framework that performs fast volume rendering with mesh-assisted empty-space skipping.
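The payoff of empty-space skipping can be seen in a toy 1D version of the ray-marching loop. The sketch below is illustrative only: the actual system does this on the GPU, using a mesh of the colon wall to decide which regions a ray can skip, whereas here "empty" is simply a block of zero samples, and the transfer function is a toy.

```python
# Toy 1D front-to-back compositing with empty-space skipping
# (illustrative; the real renderer is GPU-based and mesh-assisted).
# Blocks whose samples are all zero are skipped wholesale, and the ray
# terminates early once it is nearly opaque.

def composite(samples, block_size, step_opacity):
    """samples: scalar values along one ray. Returns (color, samples_taken)."""
    color, alpha, taken = 0.0, 0.0, 0
    for start in range(0, len(samples), block_size):
        block = samples[start:start + block_size]
        if max(block) == 0.0:          # empty block: skip it entirely
            continue
        for s in block:
            taken += 1
            a = step_opacity * s       # toy transfer function
            color += (1.0 - alpha) * a * s
            alpha += (1.0 - alpha) * a
            if alpha > 0.99:           # early ray termination
                return color, taken
    return color, taken

# Eight empty samples before the colon wall: two whole blocks skipped,
# so only the last block's four samples are ever evaluated.
ray = [0.0] * 8 + [0.5, 1.0, 0.5, 0.0]
print(composite(ray, block_size=4, step_opacity=0.5))
```

Skipping the two empty blocks reduces the per-ray work from 12 samples to 4 here; over millions of rays per stereo frame, that kind of saving is where the frame-rate headroom comes from.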
The major points of this project are:
The Reality Deck is the world's first immersive gigapixel display. Composed of 416 LCD panels, it offers a combined resolution of more than 1.5 billion pixels. These displays are driven by a cluster of 18 render nodes with dual hex-core CPUs and 4 GPUs. Developing applications for distributed immersive systems is a complex and involved process: at the very least, application input must be captured centrally so that the application state remains consistent across nodes. However, Omegalib and other VR libraries simplify this process, providing an abstract C++/Python interface for implementing applications.
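The "capture input centrally" requirement boils down to giving every node the same totally ordered event stream. The sketch below shows that idea in miniature; it is not Omegalib's actual API, and the master/render classes and the camera state are invented for illustration.

```python
# Sketch of the consistency idea (not Omegalib's API): a master node
# stamps every input event with a sequence number; each render node
# applies events strictly in sequence order, so every replica of the
# application state ends up identical even if delivery order differs.

class MasterNode:
    def __init__(self):
        self.seq = 0
    def stamp(self, event):
        self.seq += 1
        return (self.seq, event)

class RenderNode:
    def __init__(self):
        self.camera_x = 0.0      # toy replicated application state
        self.pending = {}
        self.next_seq = 1
    def receive(self, stamped):
        seq, event = stamped
        self.pending[seq] = event
        while self.next_seq in self.pending:   # apply strictly in order
            self.camera_x += self.pending.pop(self.next_seq)
            self.next_seq += 1

master = MasterNode()
nodes = [RenderNode() for _ in range(3)]
events = [master.stamp(dx) for dx in (1.0, -0.5, 2.0)]
# Deliver the same events to each node in a different order:
for node, order in zip(nodes, ([0, 1, 2], [2, 0, 1], [1, 2, 0])):
    for i in order:
        node.receive(events[i])
print([n.camera_x for n in nodes])  # all identical: [2.5, 2.5, 2.5]
```

In a real deployment the stamped events would travel over the cluster interconnect, but the ordering logic is the whole trick: identical inputs in identical order yield identical state on all 18 nodes.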
Most hierarchical segmentation methods have been developed by the computer vision community for 2D images of the real world. These methods are used to identify homogeneous regions (like super-pixels) in an image that make up higher level features. Such methods can be used for visual exploration of volumetric data by identifying features as a collection of these homogeneous regions.
This project entails extending these methods to 3D data like CT or MRI scans, and evaluating their effectiveness in identifying 3D features. This will include identifying popular and state-of-the-art methods by reading the literature, implementing these methods if an implementation is not readily available, and evaluating their effectiveness on 3D data using different parameter settings and different sources of data.
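To pin down what "hierarchical" means here, the toy sketch below greedily merges the most similar adjacent regions of a pre-partitioned volume, building a coarse-to-fine hierarchy. It is not any specific published method; the region means, adjacency graph, merge rule, and stopping threshold are all invented for illustration.

```python
# Toy hierarchical region merging (not a specific published method):
# regions start as small homogeneous units (super-pixel/voxel-like),
# and the most similar adjacent pair is merged repeatedly until no
# adjacent pair is similar enough. The merge log is the hierarchy.

def build_hierarchy(means, edges, stop_diff):
    """means: {region_id: mean intensity}; edges: adjacent region pairs.
    Returns (merge log, surviving region means)."""
    means = dict(means)
    edges = {frozenset(e) for e in edges}
    merges = []
    while True:
        candidates = [(abs(means[a] - means[b]), a, b)
                      for a, b in (sorted(e) for e in edges)
                      if a in means and b in means]
        if not candidates:
            break
        diff, a, b = min(candidates)
        if diff > stop_diff:            # nothing similar enough remains
            break
        means[a] = (means[a] + means[b]) / 2.0   # toy merge rule
        del means[b]
        edges = {frozenset((a if r == b else r) for r in e) for e in edges}
        edges = {e for e in edges if len(e) == 2}
        merges.append((a, b))
    return merges, means

# Four regions in a row: two dark, two bright.
means = {1: 0.1, 2: 0.2, 3: 0.8, 4: 0.9}
edges = {(1, 2), (2, 3), (3, 4)}
merges, final = build_hierarchy(means, edges, stop_diff=0.3)
print(sorted(final))  # two coarse regions remain: [1, 3]
```

Extending such methods to CT/MRI mainly means replacing the toy adjacency and similarity terms with 3D voxel neighborhoods and volume statistics, which is where the project's evaluation questions live.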
This project requires proficiency in C++, Qt, and OpenGL, and the ability to write reliable, bug-free code. Students should be willing to read and survey the latest papers on this topic and to implement multiple techniques to achieve the goals mentioned above.
Colon cancer is the third deadliest cancer in the US and claims more than 750,000 lives every year around the world. Optical colonoscopy is the most prevalent screening tool for colon cancer, with more than 15 million colonoscopies performed every year in the US alone. The biggest problem with this modality is that no tools or techniques currently exist to extract useful information from these colon inspection videos. This makes life difficult for endoscopists when documenting the procedure and following up on earlier cases (due to recall bias). For this project, students will use deep learning approaches, along with tools such as Spark, to develop geometric feature (depth, normal) extraction and anomaly detection (polyps, therapeutics, etc.) techniques for optical colonoscopy videos.
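As a rough intuition for "geometric feature extraction" from a single video frame, the sketch below computes per-pixel image gradients, a crude hand-crafted cue related to surface orientation. This stands in for, and is far simpler than, the learned convolutional features a trained network would produce; the image and functions here are invented for illustration.

```python
# Illustration only (not the project's deep networks): central-difference
# image gradients give a per-pixel direction-of-change cue; convolutional
# networks learn much richer versions of such geometric features.

def gradients(img):
    """img: 2D list of intensities; returns (gx, gy) gradient images,
    using central differences with clamped borders."""
    h, w = len(img), len(img[0])
    gx = [[0.0] * w for _ in range(h)]
    gy = [[0.0] * w for _ in range(h)]
    for y in range(h):
        for x in range(w):
            gx[y][x] = (img[y][min(x + 1, w - 1)] - img[y][max(x - 1, 0)]) / 2.0
            gy[y][x] = (img[min(y + 1, h - 1)][x] - img[max(y - 1, 0)][x]) / 2.0
    return gx, gy

# A left-to-right intensity ramp: the gradient points along +x everywhere.
ramp = [[float(x) for x in range(4)] for _ in range(3)]
gx, gy = gradients(ramp)
print(gx[1][1], gy[1][1])  # 1.0 0.0
```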
Requirements: A background in computer vision and in deep learning frameworks such as Torch, TensorFlow, or Caffe is mandatory for this project.
Nadeem, S. and Kaufman, A., 2016. Computer-aided detection of polyps in optical colonoscopy images. In SPIE Medical Imaging. International Society for Optics and Photonics.
Virtual colonoscopy (VC), also known as CT colonography (CTC), employs CT scanning and advanced visualization technologies to evaluate the entire colon for colorectal polyps, the precursors of cancer. VC has entered a new era: it is now widely recognized as a highly sensitive and specific test for identifying polyps in the colon, and more than 2 million patients were screened during the last year. One limitation of VC is the reading time, which typically runs 30 to 60 minutes per patient.
Various CAD software packages have been developed to help doctors find polyps and suspicious areas; however, no CAD software has yet proved beneficial in practical use, mainly due to the high rate of false positives.
We recently conducted a preliminary study on leveraging the crowd to detect polyp and polyp-free (benign) segments in a given VC dataset, with sensitivity and specificity comparable to those of radiologists. The main goal of this project is to improve the current interface, including allowing users to mark the locations of polyps in video segments and logging user behavior.
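The simplest form of crowd consensus is a majority vote per video segment, sketched below. The C2A paper's analytics go well beyond this; the label names and the margin-as-confidence idea here are illustrative assumptions.

```python
# Minimal majority-vote sketch of crowd consensus (the published C2A
# analytics are more involved): each worker labels a video segment as
# containing a polyp or being clear; the consensus is the majority
# label, with the vote fraction serving as a crude confidence.
from collections import Counter

def consensus(votes):
    """votes: list of 'polyp' / 'clear' labels from crowd workers."""
    counts = Counter(votes)
    label, top = counts.most_common(1)[0]
    return label, top / len(votes)

print(consensus(["polyp", "polyp", "clear", "polyp", "clear"]))
# ('polyp', 0.6)
```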
Park, J.H., Nadeem, S., Mirhosseini, S. and Kaufman, A., 2016. C2A: Crowd consensus analytics for virtual colonoscopy. In IEEE VAST 2016.