Gesture interfaces have long been pursued in the
context of portable computing and immersive
environments. However, such interfaces have been
difficult to realize, in part due to the lack of
frameworks for their design and implementation.
Analogous to a game engine used to build computer
games, a gesture interface engine is a framework
that can be used to build gesture interfaces
systematically, conveniently, and efficiently.
Rather than working in a low-level, high-dimensional
joint-angle space, we describe and recognize handposes
in a lexical space, in which each handpose is
decomposed into elements of a finger state alphabet.
A handpose is defined by finger spelling, i.e., by
specifying the pose of each finger and the
interrelation between any two fingers. Dynamic
gestures are defined as handpose transformation,
hand translation, and hand rotation. The alphabet and the
underlying grammar, designed by the authors, form a
simple but expressive gesture notation system called
GeLex.
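To make the idea of finger spelling concrete, the sketch below represents a handpose as discrete symbols drawn from a per-finger state alphabet plus pairwise finger relations. The state names, the Handpose structure, and the matches function are illustrative assumptions of ours for exposition only, not the actual GeLex alphabet or grammar defined later in the paper.

```python
from dataclasses import dataclass
from enum import Enum, auto
from typing import Dict, Tuple

# Hypothetical finger-state alphabet; the real GeLex alphabet is
# defined by the authors and may differ in both symbols and granularity.
class FingerState(Enum):
    EXTENDED = auto()   # finger fully straightened
    BENT = auto()       # finger partially flexed
    CLOSED = auto()     # finger folded into the palm

# Hypothetical pairwise relations between two fingers.
class PairRelation(Enum):
    APART = auto()      # no contact between the two fingers
    TOUCHING = auto()   # fingertips or sides in contact
    CROSSED = auto()    # one finger crossed over the other

FINGERS = ("thumb", "index", "middle", "ring", "pinky")

@dataclass
class Handpose:
    """A handpose 'spelled' in the lexical space: one alphabet symbol
    per finger plus the relation between selected finger pairs."""
    finger_states: Dict[str, FingerState]
    pair_relations: Dict[Tuple[str, str], PairRelation]

# Example: a pinching pose spelled finger by finger.
pinch = Handpose(
    finger_states={
        "thumb": FingerState.BENT,
        "index": FingerState.BENT,
        "middle": FingerState.CLOSED,
        "ring": FingerState.CLOSED,
        "pinky": FingerState.CLOSED,
    },
    pair_relations={("thumb", "index"): PairRelation.TOUCHING},
)

def matches(observed: Handpose, template: Handpose) -> bool:
    """Recognition in the lexical space reduces to comparing discrete
    symbols rather than high-dimensional joint angles."""
    return (observed.finger_states == template.finger_states
            and all(observed.pair_relations.get(pair) == relation
                    for pair, relation in template.pair_relations.items()))
```

Under this kind of representation, a dynamic gesture could be modeled as a sequence of such handposes together with hand translation and rotation parameters; again, that modeling choice is our assumption here, while the paper's own grammar specifies how GeLex composes these elements.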