Presented at Computer Vision and Pattern Recognition (CVPR) 2013
Kiwon Yun, Yifan Peng, Dimitris Samaras, Gregory J. Zelinsky, Tamara L. Berg
Stony Brook University
We posit that user behavior during natural viewing of images contains an abundance of information about the content of images as well as information related to user intent and user-defined content importance. In this paper, we conduct experiments to better understand the relationship between images, the eye movements people make while viewing images, and how people construct natural language to describe images. We explore these relationships in the context of two commonly used computer vision datasets. We then further relate human cues with the outputs of current visual recognition systems and demonstrate prototype applications for gaze-enabled detection and annotation.
Introduction
User behavior while freely viewing images contains an abundance of information about user intent and depicted scene content.
Humans can provide: eye movements made while freely viewing an image and natural language descriptions of its content.
Computer vision recognition algorithms can provide: automatic estimates of depicted scene content, such as object detections and annotations.
We conduct several experiments to better understand the relationship between gaze, description, and image content. From these exploratory analyses, we build prototype applications for gaze-enabled object detection and annotation.
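As a rough illustration of what a gaze-enabled detection pipeline can look like (a minimal sketch, not the method used in the paper), the code below re-scores candidate detector boxes by the fraction of recorded fixations that fall inside each box. The data structures, the weighting rule, and the blending parameter are all illustrative assumptions.

# Minimal sketch (not the paper's method): combine detector confidence with
# fixation evidence by boosting boxes that attract many fixations.
# All names, weights, and the scoring rule below are illustrative assumptions.

from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class Detection:
    box: Tuple[float, float, float, float]  # (x1, y1, x2, y2) in image pixels
    score: float                            # raw detector confidence

def fixations_in_box(box, fixations):
    """Count fixation points (x, y) that land inside the box."""
    x1, y1, x2, y2 = box
    return sum(1 for (x, y) in fixations if x1 <= x <= x2 and y1 <= y <= y2)

def gaze_rescore(detections: List[Detection],
                 fixations: List[Tuple[float, float]],
                 gaze_weight: float = 0.5) -> List[Detection]:
    """Blend detector confidence with the fraction of fixations each box covers."""
    total = max(len(fixations), 1)
    rescored = []
    for det in detections:
        gaze_support = fixations_in_box(det.box, fixations) / total
        new_score = (1.0 - gaze_weight) * det.score + gaze_weight * gaze_support
        rescored.append(Detection(det.box, new_score))
    return sorted(rescored, key=lambda d: d.score, reverse=True)

# Example usage with made-up numbers: the fixated box overtakes the other.
dets = [Detection((10, 10, 120, 200), 0.40), Detection((300, 50, 420, 220), 0.45)]
fix = [(60, 100), (70, 120), (80, 90), (350, 130)]
print(gaze_rescore(dets, fix))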
SBU Gaze-Detection-Description Dataset
This work was supported in part by NSF Awards IIS-1161876, IIS-1054133, IIS-1111047, IIS-0959979 and the SUBSAMPLE Project of the DIGITEO Institute, France. We thank J. Maxfield, Hossein Adeli, and J. Weiss for data pre-processing and useful discussions.