Abstract:
PROBLEM TO BE SOLVED: To provide a method, apparatus and system for model-based playfield registration. SOLUTION: An input video image is processed. The processing of the video image includes extracting key points related to the video image. It is then determined whether enough key points related to the video image have been extracted; when enough key points have been extracted, a direct estimation of the video image is performed, and a homography matrix of a final video image is generated based on the direct estimation.
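As a rough illustration of the direct-estimation step, the sketch below (not taken from the patent; the function names and the four-point threshold are assumptions) matches key points extracted from a frame against known playfield-model points and estimates the homography with OpenCV only when enough correspondences are available.

```python
import numpy as np
import cv2

MIN_KEY_POINTS = 4  # assumed threshold: a homography needs at least 4 correspondences

def estimate_playfield_homography(frame_points, model_points):
    """Direct estimation of the frame-to-model homography (illustrative sketch).

    frame_points, model_points: (N, 2) arrays of corresponding key points, e.g.
    playfield line intersections detected in the video image and their known
    positions in the playfield model.
    Returns the 3x3 homography matrix, or None if too few points were extracted.
    """
    if len(frame_points) < MIN_KEY_POINTS:
        return None  # not enough key points; a different estimation path would be needed
    H, inlier_mask = cv2.findHomography(
        np.asarray(frame_points, dtype=np.float32),
        np.asarray(model_points, dtype=np.float32),
        method=cv2.RANSAC,
        ransacReprojThreshold=3.0,
    )
    return H
```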
Abstract:
A video communication system that replaces actual live images of the participating users with animated avatars. A method may include selecting an avatar, initiating communication, capturing an image, detecting a face in the image, extracting features from the face, converting the facial features to avatar parameters, and transmitting at least one of the avatar selection or avatar parameters. The system is configured to allow a user to select a new avatar during active communication with a remote user.
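A minimal sketch of the capture/detect/extract/convert pipeline is shown below, assuming an OpenCV Haar-cascade detector and a placeholder conversion to "avatar parameters"; the feature set, parameter names, and the `send()` transport are illustrative assumptions, not the patented implementation.

```python
import cv2

# Haar-cascade face detector shipped with OpenCV (assumed detector choice).
face_detector = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml"
)

def frame_to_avatar_parameters(frame):
    """Detect a face in a captured frame and package simple features as avatar parameters."""
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = face_detector.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    if len(faces) == 0:
        return None  # no face detected in this frame
    x, y, w, h = faces[0]
    fh, fw = gray.shape
    # Placeholder "features": normalized face position and size. A real system would
    # extract facial landmarks and map them to the avatar rig's parameters.
    return {
        "face_center": ((x + w / 2) / fw, (y + h / 2) / fh),
        "face_scale": w / fw,
    }

def run_capture_loop(send):
    """Capture frames and transmit avatar parameters via a caller-supplied send()."""
    cap = cv2.VideoCapture(0)
    try:
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            params = frame_to_avatar_parameters(frame)
            if params is not None:
                send(params)  # e.g. over the active communication channel
    finally:
        cap.release()
```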
Abstract:
In response to a gestural command, a video currently being watched can be identified by extracting at least one decoded frame from a television transmission. The frame can be transmitted to a separate mobile device for requesting an image search and for receiving the search results. The search results can be used to obtain more information. The user's social networking friends can also be contacted to obtain more information about the clip.
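One small step of this flow, extracting a single decoded frame so it can be handed to an image-search request, might look like the sketch below; the stream source, output file name, and search hand-off are assumptions for illustration only.

```python
import cv2

def grab_frame_for_search(stream_source, out_path="frame_for_search.jpg"):
    """Grab one decoded frame from a video source and save it as a JPEG that
    could be uploaded in an image-search request (illustrative only)."""
    cap = cv2.VideoCapture(stream_source)
    ok, frame = cap.read()
    cap.release()
    if not ok:
        raise RuntimeError("could not decode a frame from the source")
    cv2.imwrite(out_path, frame)
    return out_path
```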
Abstract:
A mechanism for facilitating real-time multi-view detection of objects in multi-camera environments is described, according to one embodiment. A method of embodiments, as described herein, comprises: mapping first lines associated with objects onto a ground plane; and forming clusters of second lines corresponding to the first lines, such that an intersection point within a cluster represents a position of an object on the ground plane.
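The sketch below illustrates the geometric idea under assumed inputs (per-camera image-to-ground homographies and two image points per object line); how lines are grouped into clusters is not shown, and the function names are hypothetical, not the patented algorithm.

```python
import numpy as np

def project_to_ground(H_cam_to_ground, image_point):
    """Project an image point onto the ground plane with the camera's homography."""
    p = H_cam_to_ground @ np.array([image_point[0], image_point[1], 1.0])
    return p[:2] / p[2]

def image_line_to_ground_line(H_cam_to_ground, p0, p1):
    """Map a line (two image points, e.g. the vertical axis of a detected object)
    onto the ground plane; returns a point on the line and its unit direction."""
    a = project_to_ground(H_cam_to_ground, p0)
    b = project_to_ground(H_cam_to_ground, p1)
    d = b - a
    return a, d / np.linalg.norm(d)

def intersect_line_cluster(lines):
    """Least-squares intersection of a cluster of ground-plane lines (one per camera
    view of the same object); the intersection approximates the object's position
    on the ground plane. `lines` is a list of (point, unit_direction) pairs."""
    A = np.zeros((2, 2))
    b = np.zeros(2)
    for point, direction in lines:
        # Project onto the space normal to the line direction.
        P = np.eye(2) - np.outer(direction, direction)
        A += P
        b += P @ point
    return np.linalg.solve(A, b)  # assumes the lines are not all parallel
```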
Abstract:
Embodiments are generally directed to methods and apparatuses for determining a front-body orientation. One embodiment of a method for determining a three-dimensional (3D) orientation of a player's front body comprises: detecting each of a plurality of players in each of a plurality of frames captured by a plurality of cameras; for each of the plurality of cameras, tracking each of the plurality of players between consecutive frames captured by that camera; and associating the plurality of frames captured by the plurality of cameras to generate the 3D orientation of each of the plurality of players.
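As one small illustration of the final step, once per-camera observations of a player have been associated and torso keypoints triangulated to 3D, a front-body direction could be derived as in the sketch below; the keypoint choice and sign convention are assumptions, not the claimed method.

```python
import numpy as np

def front_body_orientation(left_shoulder, right_shoulder, mid_hip):
    """Estimate a 3D front-body direction from triangulated torso keypoints
    (illustrative only). The facing direction is taken as the normal of the
    torso plane spanned by the shoulder axis and the spine axis."""
    l, r, h = (np.asarray(p, dtype=float) for p in (left_shoulder, right_shoulder, mid_hip))
    shoulder_axis = r - l              # from left shoulder to right shoulder
    spine_axis = 0.5 * (l + r) - h     # from the hips up to the shoulder midpoint
    normal = np.cross(spine_axis, shoulder_axis)
    return normal / np.linalg.norm(normal)
```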
Abstract:
Video analysis may be used to determine who is watching television and their level of interest in the current programming. Lists of favorite programs may be derived for each of a plurality of viewers of programming on the same television receiver.
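A simple aggregation of such per-viewer interest samples into favorite-program lists might look like the sketch below; the log format, field names, and scoring are assumptions for illustration.

```python
from collections import defaultdict

def derive_favorites(viewing_log, top_n=5):
    """Derive a per-viewer list of favorite programs from (viewer_id, program,
    interest_score) samples produced by the video analysis (illustrative only)."""
    scores = defaultdict(lambda: defaultdict(float))
    for viewer_id, program, interest in viewing_log:
        scores[viewer_id][program] += interest
    return {
        viewer: [
            program
            for program, _ in sorted(progs.items(), key=lambda kv: kv[1], reverse=True)[:top_n]
        ]
        for viewer, progs in scores.items()
    }
```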
Abstract:
Apparatuses, methods and storage medium associated with animating and rendering an avatar are disclosed herein. In embodiments, an apparatus may include an avatar animation engine configured to receive a plurality of facial motion parameters and a plurality of head gesture parameters, respectively associated with a face and a head of a user. The plurality of facial motion parameters may depict facial action movements of the face, and the plurality of head gesture parameters may depict head pose gestures of the head. Further, the avatar animation engine may be configured to drive an avatar model with facial and skeleton animations, using the facial motion parameters and the head gesture parameters, to animate an avatar that replicates a facial expression of the user, including the impact of head pose rotation of the user. Other embodiments may be described and/or claimed.
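A minimal sketch of driving an avatar with such parameters is shown below, assuming the facial motion parameters are blendshape weights and the head gesture is a rigid rotation; the data layout and function name are assumptions, not the disclosed engine.

```python
import numpy as np

def animate_avatar(neutral_mesh, blendshape_deltas, facial_params, head_rotation):
    """Drive an avatar head mesh with facial motion parameters and a head pose
    (illustrative blendshape + rigid-rotation sketch).

    neutral_mesh:       (V, 3) neutral vertex positions
    blendshape_deltas:  (K, V, 3) per-blendshape vertex offsets
    facial_params:      (K,) blendshape weights depicting facial action movements
    head_rotation:      (3, 3) rotation matrix depicting the head pose gesture
    """
    # Facial animation: linear combination of blendshape offsets.
    deformed = neutral_mesh + np.tensordot(facial_params, blendshape_deltas, axes=1)
    # Skeleton animation: rigid head rotation, replicating the head pose rotation on the avatar.
    return deformed @ head_rotation.T
```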