
Multiple Cue Tracking

Deformable models are uniquely suited to non-rigid tracking, both in 2D and in 3D. We developed an articulated dynamic hand model driven by multiple cues, including an extended optical flow constraint, which permits tracking of a number of different hand motions with shading changes, rotations, and occlusions of significant parts of the hand.
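
The idea behind the extended constraint can be sketched as follows (our own notation, under a Lambertian shading assumption; the papers below give the exact formulation): rather than assuming that image brightness is conserved along the flow, the right-hand side models the brightness change caused by the surface rotating relative to the light source.

```latex
% Classical brightness-constancy constraint:  I_x u + I_y v + I_t = 0.
% Extended constraint: the temporal brightness change is not forced to zero
% but modeled, e.g. for a Lambertian surface I = rho (n . l):
\[
I_x u + I_y v + I_t \;=\; \frac{dI}{dt},
\qquad
\frac{dI}{dt} \;\approx\; \rho\,\frac{d\mathbf{n}}{dt}\cdot\mathbf{l},
\]
% where rho is the albedo, n the surface normal predicted by the hand model,
% and l the light direction, so shading changes no longer violate the constraint.
```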

We have been working on the integration of multiple cues for the tracking of deformable shapes using a probabilistic framework, in collaboration with researchers at the Catalan Polytechnic University. Appearance and geometric object features can be integrated without assuming that they are independent (which can be very limiting) through a cascaded particle filter framework. Using a novel colorspace feature, our system simultaneously adapts online the colorspace in which image points are represented, the color distributions, and the deformable contour of the object. We plan to extend this framework to take into account motion and 3D shape cues.
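
As an illustration of the cascading idea, the sketch below (not the actual system; the propagation and likelihood models are placeholders, and the cue names are only examples) processes the cues in sequence and resamples the joint particle set after each cue, so every later cue is drawn conditioned on the cues already weighted rather than treated as independent.

```python
# Minimal sketch of a cascaded particle filter over dependent cues.
# Dynamics and observation models are placeholders for illustration only.
import numpy as np

rng = np.random.default_rng(0)
N = 200  # number of particles

def propagate(x, noise=0.05):
    """Placeholder dynamics: random walk around the previous state."""
    return x + rng.normal(0.0, noise, size=x.shape)

def likelihood(x, obs, sigma=0.5):
    """Placeholder observation model: Gaussian around the measurement."""
    return np.exp(-0.5 * np.sum((x - obs) ** 2, axis=-1) / sigma**2)

def cascaded_step(particles, observations):
    """One frame: cues are processed in order; after each cue the whole joint
    particle set is resampled, so later cues are sampled conditioned on the
    earlier ones instead of being assumed independent."""
    for cue, obs in observations.items():
        particles[cue] = propagate(particles[cue])
        w = likelihood(particles[cue], obs)
        w /= w.sum()
        idx = rng.choice(N, size=N, p=w)          # resample the joint set
        particles = {name: x[idx] for name, x in particles.items()}
    return particles

# Toy usage with two dependent cues ("colorspace" and "contour"), one frame.
particles = {"colorspace": rng.normal(size=(N, 2)),
             "contour":    rng.normal(size=(N, 4))}
observations = {"colorspace": np.array([0.3, -0.1]),
                "contour":    np.array([0.1, 0.2, -0.3, 0.0])}
particles = cascaded_step(particles, observations)
print({name: x.mean(axis=0).round(2) for name, x in particles.items()})
```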

Integration of Conditionally Dependent Object Features for Robust Figure/Background Segmentation

We propose a new technique for fusing multiple cues to robustly segment an object from the background in video sequences that suffer from abrupt changes of both illumination and position of the target. Robustness is achieved by integrating appearance and geometric object features and by describing them with particle filters. Previous approaches either assume independence of the object cues or apply the particle filter formulation to only one of the features and assume a smooth change in the rest, which can prove very limiting, especially when the state of some features needs to be updated using other cues or when their dynamics follow non-linear and unpredictable paths. Our technique offers a general framework to model the probabilistic relationship between features. The proposed method is analytically justified and applied to develop a robust tracking system that simultaneously adapts online the colorspace in which the image points are represented, the color distributions, and the contour of the object. Results with synthetic data and real video sequences demonstrate the robustness and versatility of our method.
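
The dependency structure referred to above can be written, in our own notation (a sketch; the paper gives the exact factorization), as a chain over the colorspace parameters W, the color distributions C, and the contour S, with each factor approximated by a particle filter whose samples condition the next stage:

```latex
% Chain-rule factorization of the joint posterior over the three features,
% given the image observations z_{1:t}; no independence assumption is made.
\[
p(W_t, C_t, S_t \mid z_{1:t})
  \;=\;
  p(S_t \mid C_t, W_t, z_{1:t})\;
  p(C_t \mid W_t, z_{1:t})\;
  p(W_t \mid z_{1:t}),
\]
% as opposed to the independence approximation p(W_t) p(C_t) p(S_t).
```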

Using Multiple Cues for Hand Tracking and Model Refinement

We present a model-based approach to the integration of multiple cues for tracking high-degree-of-freedom articulated motions and for model refinement, and apply it to the problem of hand tracking from a single camera sequence. Hand tracking is particularly challenging because of occlusions, shading variations, and the high dimensionality of the motion. The novelty of our approach is in the combination of multiple sources of information, coming from edges, optical flow, and shading, in order to refine the model during tracking. We first use a previously formulated generalized version of the gradient-based optical flow constraint that includes shading flow, i.e., the variation of the shading of the object as it rotates with respect to the light source. Using this model we track the hand's complex articulated motion in the presence of shading changes. We use a forward recursive dynamic model to track the motion in response to data-derived 3D forces applied to the model. However, due to an inaccurate initial shape, the generalized optical flow constraint is violated. In this paper we use the error in the generalized optical flow equation to compute generalized forces that correct the model shape at each step. The effectiveness of our approach is demonstrated with experiments on a number of different hand motions with shading changes, rotations and occlusions of significant parts of the hand.
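
To make the force-driven formulation concrete, the following is a sketch in our own notation of the usual physics-based deformable-model update (the paper's exact dynamic equations may differ): the state q collects the joint and shape parameters, and the image cues enter as generalized forces.

```latex
% Lagrangian dynamics of the articulated deformable hand model, driven by
% generalized forces assembled from the image cues.
\[
M\ddot{q} + D\dot{q} + Kq \;=\; f_q,
\qquad
f_q \;=\; \sum_i L_i^{\top} f_i ,
\]
% Here the f_i are 3D forces derived from edges, optical flow and shading,
% and the L_i are model Jacobians mapping them onto the generalized
% coordinates q.  The residual of the generalized optical flow equation
% contributes additional forces on the shape parameters, which is what
% corrects the model geometry at each step.
```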
