Abstract:
A method of using stereo vision to interface with a computer is provided. The method includes capturing a stereo image, and processing the stereo image to determine position information of an object in the stereo image. The object is controlled by a user. The method also includes communicating the position information to the computer to allow the user to interact with a computer application.
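The position determination described above reduces, in the simplest rectified-stereo case, to triangulation from the disparity between the two views. The following Python sketch is illustrative only; the focal length, baseline, principal point, and pixel coordinates are assumed values, not parameters from this abstract.

```python
# Minimal stereo triangulation sketch for a tracked object (assumptions:
# rectified camera pair, known focal length in pixels and baseline in meters).

def triangulate(x_left, x_right, y, focal_px, baseline_m, cx, cy):
    """Return (X, Y, Z) in meters for a point matched in both images."""
    disparity = x_left - x_right              # pixels; larger = closer
    if disparity <= 0:
        raise ValueError("point must have positive disparity")
    z = focal_px * baseline_m / disparity     # depth from similar triangles
    x = (x_left - cx) * z / focal_px          # back-project into camera frame
    y_m = (y - cy) * z / focal_px
    return (x, y_m, z)

# Hypothetical example: object centroid at x=420 px (left) and 380 px (right).
print(triangulate(420, 380, 250, focal_px=700, baseline_m=0.1, cx=320, cy=240))
```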
Abstract:
A multiple camera tracking system for interfacing with an application program running on a computer is provided. The tracking system includes two or more video cameras that are arranged to provide different viewpoints of a region of interest and are operable to produce a series of video images. A processor is operable to receive the series of video images and detect objects appearing in the region of interest. The processor executes a process to generate a background data set from the video images, generate an image data set for each received video image, compare each image data set to the background data set to produce a difference map for each image data set, detect a relative position of an object of interest within each difference map, produce an absolute position of the object of interest from the relative positions, and map the absolute position to a position indicator associated with the application program.
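A minimal sketch of the per-camera steps named above (background data set, difference map, relative position) might look like the following; the median background model, the threshold, and the synthetic frames are assumptions for illustration, and the final triangulation step is only noted in a comment.

```python
import numpy as np

def background_from(frames):
    """Generate a background data set as the per-pixel median of sample frames."""
    return np.median(np.stack(frames), axis=0)

def relative_position(frame, background, threshold=30):
    """Compare an image data set to the background: difference map -> centroid."""
    diff = np.abs(frame.astype(float) - background) > threshold
    ys, xs = np.nonzero(diff)
    if xs.size == 0:
        return None                      # no object of interest detected
    return (xs.mean(), ys.mean())        # relative position in this camera's view

# Synthetic demo: a bright object enters an otherwise static scene.
rng = np.random.default_rng(0)
frames = [rng.integers(0, 10, (48, 64)) for _ in range(5)]
bg = background_from(frames)
frame = frames[0].copy()
frame[20:25, 30:35] = 255
print(relative_position(frame, bg))      # ~ (32.0, 22.0)

# With two calibrated viewpoints, the two relative positions would be
# combined into an absolute position and mapped to a position indicator.
```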
Abstract:
A camera tracker, in which an image captured by a camera oriented to capture images across a surface is accessed. The region in which an object detected within the accessed image is positioned is determined from among multiple defined regions within a field of view of the camera. User input is determined based on the determined region, and an application is controlled based on the determined user input.
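Determining user input from the region containing the detected object can be sketched as a simple lookup over region boundaries. Everything below, including the region boundaries and command names, is a hypothetical illustration rather than detail from the abstract.

```python
# Defined regions within the camera's field of view, as (x_min, x_max, input).
REGIONS = [
    (0, 213, "left_button"),
    (213, 426, "middle_button"),
    (426, 640, "right_button"),
]

def user_input_for(object_x):
    """Return the user input for the region containing the detected object."""
    for x_min, x_max, command in REGIONS:
        if x_min <= object_x < x_max:
            return command
    return None

print(user_input_for(500))        # -> "right_button"
```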
Abstract:
One or more elements are initially displayed on a display component of an electronic device. After the one or more elements have been displayed on the display component of the electronic device, an image of a user of the electronic device is captured, and an orientation of the electronic device relative to the user is determined based on the captured image of the user of the electronic device. Thereafter, an orientation of at least one of the displayed elements is adjusted relative to the display component of the electronic device based on the determined orientation of the electronic device relative to the user.
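One plausible reading of the orientation step is to estimate device roll from the angle of the line between the user's eyes in the captured image and counter-rotate the displayed elements. The sketch below assumes eye coordinates from some face detector; all names and values are made up.

```python
import math

def device_roll_degrees(left_eye, right_eye):
    """Angle of the inter-eye line; 0 when the device is upright to the user."""
    dx = right_eye[0] - left_eye[0]
    dy = right_eye[1] - left_eye[1]
    return math.degrees(math.atan2(dy, dx))

def adjusted_element_rotation(left_eye, right_eye):
    """Rotate the element opposite to the device's roll so it stays upright."""
    return -device_roll_degrees(left_eye, right_eye)

# Hypothetical eye positions from a captured image; device tilted ~18 degrees.
print(adjusted_element_rotation((100, 140), (220, 100)))
```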
Abstract:
An enhanced control, in which a guide line is defined relative to an object in a user interface, and items aligned with the guide line are displayed without obscuring the object. A selected item is output based on receiving a selection of one of the displayed items.
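The layout described here, items aligned with a guide line that leaves the object visible, can be sketched as spacing the items along an arc defined relative to the object. The arc geometry and item list below are illustrative assumptions.

```python
import math

def layout_on_arc(items, center, radius, start_deg=150, end_deg=30):
    """Spread items along an arc above `center` so they never cover it."""
    n = len(items)
    placed = []
    for i, item in enumerate(items):
        t = i / (n - 1) if n > 1 else 0.5
        ang = math.radians(start_deg + t * (end_deg - start_deg))
        x = center[0] + radius * math.cos(ang)
        y = center[1] - radius * math.sin(ang)   # screen y grows downward
        placed.append((item, round(x), round(y)))
    return placed

# Four items placed on a guide line defined relative to an object at (400, 300).
for entry in layout_on_arc(["A", "B", "C", "D"], center=(400, 300), radius=120):
    print(entry)
```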
Abstract:
Techniques are disclosed for determining a user's motion in relation to displayed images. According to one general aspect, a first captured image is accessed. The first captured image includes (1) a first displayed image produced at a first point in time, and (2) a user. A second captured image is accessed. The second captured image includes (1) a second displayed image produced at a second point in time, and (2) the user. First information indicating motion associated with one or more objects in the first and second displayed images is accessed. Second information indicating both motion of the user and the motion associated with the one or more objects in the first and second displayed images is determined.
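The relationship between the two pieces of information can be illustrated with simple vector arithmetic: if the second information captures the combined motion and the first captures the displayed objects' own motion, the difference is attributable to the user. The sketch below uses made-up two-component motion vectors standing in for, e.g., optical-flow estimates.

```python
def user_motion(combined_motion, display_motion):
    """Second information (combined) minus first information (display), per axis."""
    return tuple(c - d for c, d in zip(combined_motion, display_motion))

# Example: the captures show a total shift of (12, -3) px while the displayed
# objects are known to have moved (10, 0) px between the two points in time.
print(user_motion((12, -3), (10, 0)))   # -> (2, -3): the user's contribution
```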
Abstract:
An element is initially displayed on an interactive touch-screen display device with an initial orientation relative to the interactive touch-screen display device. One or more images of a user of the interactive touch-screen display device are captured. The user is determined to be interacting with the element displayed on the interactive touch-screen display device. In addition, an orientation of the user relative to the interactive touch-screen display device is determined based on at least one captured image of the user of the interactive touch-screen display device. Thereafter, in response to determining that the user is interacting with the displayed element, the initial orientation of the displayed element relative to the interactive touch-screen display device is automatically adjusted based on the determined orientation of the user relative to the interactive touch-screen display device.
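This differs from the earlier orientation example mainly in gating the adjustment on detected interaction; a minimal sketch of that gating, with hypothetical names, follows.

```python
def maybe_adjust(element_rotation, user_rotation, is_interacting):
    """Re-orient the element toward the user only once interaction is detected."""
    return user_rotation if is_interacting else element_rotation

# The element keeps its initial orientation until the user interacts with it.
print(maybe_adjust(element_rotation=0, user_rotation=90, is_interacting=False))
print(maybe_adjust(element_rotation=0, user_rotation=90, is_interacting=True))
```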
Abstract:
Motion of a user is detected via a camera, and a dynamic virtual representation of the user is generated on a display, where the user's detected motion causes the dynamic virtual representation to interact with virtual objects on the display. The magnitude and direction of the user's detected motion are calculated to determine the magnitude and direction of a force applied by the dynamic virtual representation to a virtual object. Further arrangements include fluid simulations of water or smoke, in order to enhance the user experience.
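The force calculation might reduce to scaling the tracked point's frame-to-frame velocity, as in this sketch; the gain and time step are invented tuning constants, not values from the abstract.

```python
import math

def force_from_motion(prev_pos, curr_pos, dt=1 / 30, gain=0.5):
    """Scale the tracked user point's velocity into an applied force vector."""
    vx = (curr_pos[0] - prev_pos[0]) / dt
    vy = (curr_pos[1] - prev_pos[1]) / dt
    fx, fy = gain * vx, gain * vy
    magnitude = math.hypot(fx, fy)
    direction = math.degrees(math.atan2(fy, fx))
    return (fx, fy), magnitude, direction

# Hypothetical hand positions in two consecutive frames.
print(force_from_motion((100, 200), (130, 180)))
```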
Abstract:
An electronic media device may be controlled based on personalized media preferences of users experiencing content using the electronic media device. Users experiencing content using the electronic media device may be automatically identified and the electronic media device may be automatically controlled based on media preferences associated with the identified users.
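One way to picture the control step is as selecting content from the intersection of the identified users' stored preferences. The identification mechanism, user names, and preference data below are all hypothetical.

```python
PREFERENCES = {
    "alice": {"news", "drama", "documentary"},
    "bob": {"sports", "drama", "comedy"},
}

def allowed_genres(identified_users):
    """Genres acceptable to every identified user currently watching."""
    sets = [PREFERENCES[u] for u in identified_users if u in PREFERENCES]
    return set.intersection(*sets) if sets else set()

# Both users are recognized in front of the device, e.g. by face matching.
print(allowed_genres(["alice", "bob"]))   # -> {'drama'}
```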
Abstract:
An enhanced interface for voice and video communications, in which a gesture of a user is recognized from a sequence of camera images, and a user interface is provided that includes a control and a representation of the user. The process also includes causing the representation to interact with the control based on the recognized gesture, and controlling a telecommunication session based on the interaction.
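The interaction-to-control mapping can be sketched as dispatching on which control the user's representation engages; the gesture names, controls, and session actions below are illustrative assumptions.

```python
CONTROLS = {
    "answer_button": "answer_call",
    "mute_button": "toggle_mute",
    "end_button": "end_call",
}

def control_session(recognized_gesture, control_under_hand):
    """Trigger the control's action when an engagement gesture is recognized."""
    if recognized_gesture == "engage" and control_under_hand in CONTROLS:
        return CONTROLS[control_under_hand]
    return None

print(control_session("engage", "answer_button"))   # -> "answer_call"
```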