Abstract:
The ARS offers tracking and estimation of the position, orientation, and full articulation of the human body from marker-less visual observations obtained by a camera, for example an RGBD camera. An ARS may provide hypotheses of the 3D configuration of body parts or the entire body from a single depth frame. The ARS may also propagate estimations of the 3D configuration of body parts and the body by mapping or comparing data from the previous frame to the current frame. The ARS may further compare the estimations and the hypotheses to provide a solution for the current frame. An ARS may select, merge, refine, and/or otherwise combine data from the estimations and the hypotheses to produce a final estimation corresponding to the 3D skeletal data, and may apply the final estimation data to capture parameters associated with a moving or still body.
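The select/merge step described above can be sketched as follows. This is a minimal illustrative sketch, not the ARS's actual method: the `Pose` type, the joint-distance scoring function, the selection thresholds, and the blending rule are all assumptions introduced for illustration.

```python
# Hypothetical sketch of the per-frame fusion step: a single-frame
# hypothesis and an estimate propagated from the previous frame are
# scored against the current observation, and the better configuration
# is selected outright, or the two are merged per joint.
from dataclasses import dataclass

@dataclass
class Pose:
    joints: list  # list of (x, y, z) 3D joint positions

def score(pose, observed):
    """Lower is better: mean squared distance to the observed joints."""
    return sum(
        (px - ox) ** 2 + (py - oy) ** 2 + (pz - oz) ** 2
        for (px, py, pz), (ox, oy, oz) in zip(pose.joints, observed.joints)
    ) / len(pose.joints)

def fuse(hypothesis, propagated, observed):
    """Select or merge the hypothesis and the propagated estimate."""
    s_h = score(hypothesis, observed)
    s_p = score(propagated, observed)
    if s_h < s_p / 2:   # hypothesis fits the frame clearly better
        return hypothesis
    if s_p < s_h / 2:   # propagated estimate fits clearly better
        return propagated
    # Otherwise merge: per-joint linear blend weighted toward the
    # configuration with the lower (better) score.
    w = s_p / (s_h + s_p)  # weight given to the hypothesis
    return Pose([
        tuple(w * h + (1 - w) * p for h, p in zip(jh, jp))
        for jh, jp in zip(hypothesis.joints, propagated.joints)
    ])
```

The thresholds here stand in for whatever model-fitting criterion the ARS actually uses; the point is only the structure: two candidate 3D configurations per frame, compared against the observation, then selected or combined into one final estimate.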
Abstract:
The Gesture Recognition Apparatuses, Methods And Systems For Human-machine Interaction ("GRA") disclose vision-based gesture recognition. The GRA can be implemented in any application involving tracking, detection, and/or recognition of gestures or of motion in general. The disclosed methods and systems consider a gestural vocabulary of a predefined number of user-specified static and/or dynamic hand gestures that are mapped through a database to convey messages. In one implementation, the disclosed systems and methods support gesture recognition by detecting and tracking body parts, such as arms, hands, and fingers, and by performing spatio-temporal segmentation and recognition of the set of predefined gestures based on data acquired by an RGBD sensor. In one implementation, a model of the hand is employed to detect hand and finger candidates. At a higher level, hand posture models are defined and serve as building blocks to recognize gestures based on the temporal evolution of the detected postures.
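The posture-to-gesture layer described above can be sketched as follows. This is an illustrative sketch only: the posture labels, the example vocabulary, and the sequence-matching rule are hypothetical stand-ins, not the GRA's actual gesture set or matching procedure.

```python
# Hypothetical sketch: per-frame hand posture labels serve as building
# blocks, and a gesture is recognized when the temporal sequence of
# postures matches an entry in a predefined vocabulary (a database
# mapping posture sequences to messages).
GESTURE_VOCABULARY = {
    ("open", "closed"): "grab",
    ("closed", "open"): "release",
    ("point", "left", "right"): "swipe",
}

def collapse(postures):
    """Temporal segmentation: merge runs of the same detected posture."""
    out = []
    for p in postures:
        if not out or out[-1] != p:
            out.append(p)
    return out

def recognize(postures):
    """Return the gesture whose posture sequence matches, else None."""
    return GESTURE_VOCABULARY.get(tuple(collapse(postures)))
```

Collapsing consecutive duplicate labels stands in for the spatio-temporal segmentation step: a posture held across many frames counts once, so recognition depends on the evolution of postures rather than their per-frame duration.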