Most computer-vision-based hand gesture
recognition systems are either confined to a fixed
set of static gestures or able only to track 2D
global hand motion.
In order to recognize natural hand gestures such as
those in American sign language, we need to track
articulated hand motion in real time. The task is
challenging due to the high degrees of freedom of the
hand, self-occlusion, variable views, and lighting.
This book focuses on automatic recovery of 3D hand
motion from one or more views.
The problem of hand tracking is formulated as
Bayesian filtering in the framework of
analysis-by-synthesis. We propose an Eigen Dynamic
Analysis model and a new feature called likelihood edge.
To automatically initialize the tracker and
recover from loss of track, we propose a bottom-up
posture recognition algorithm that collectively
matches the local features in a single image with those in the
image database. Through quantitative and visual
experimental results, we demonstrate the
effectiveness of our approach and point out its
limitations.
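To make the Bayesian-filtering formulation concrete, the sketch below shows one step of a generic particle filter in an analysis-by-synthesis loop: each particle (a candidate hand state) is propagated through a dynamic model, then weighted by how well an observation *synthesized* from that state matches the actual image measurement. This is a minimal illustration, not the book's actual tracker; the single-angle state, random-walk dynamics, the `synthesize` observation model, and all noise parameters are simplifying assumptions chosen for clarity.

```python
import math
import random

def particle_filter_step(particles, observation, synthesize,
                         noise_std=0.05, obs_std=0.1):
    """One predict-update-resample step of a Bayesian (particle) filter.

    `synthesize` maps a state to a predicted observation, which is the
    analysis-by-synthesis idea: hypothesize a state, render what it would
    look like, and compare against the real measurement.
    (Illustrative sketch only; the state here is a single scalar.)
    """
    # Predict: propagate each particle through a random-walk dynamic model.
    predicted = [p + random.gauss(0.0, noise_std) for p in particles]
    # Update: weight each particle by the likelihood of the observation
    # under its synthesized prediction (Gaussian observation noise assumed).
    weights = [math.exp(-((synthesize(p) - observation) ** 2)
                        / (2.0 * obs_std ** 2)) for p in predicted]
    total = sum(weights) or 1.0
    weights = [w / total for w in weights]
    # Resample: draw a new particle set in proportion to the weights.
    return random.choices(predicted, weights=weights, k=len(predicted))

# Usage: track a hidden joint angle whose observed image feature is its sine.
random.seed(0)
true_angle = 0.8
particles = [random.uniform(0.0, 1.5) for _ in range(500)]
for _ in range(30):
    obs = math.sin(true_angle) + random.gauss(0.0, 0.02)
    particles = particle_filter_step(particles, obs, math.sin)
estimate = sum(particles) / len(particles)
```

A full articulated-hand tracker replaces the scalar state with a high-dimensional joint-angle vector and the toy `synthesize` function with rendering of a 3D hand model, but the predict-weight-resample structure is the same.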