Abstract:
Disclosed is a unique system and method that facilitates gesture-based interaction with a user interface. The system involves an object sensing configuration that includes a sensing plane located vertically or horizontally between at least two imaging components on one side and a user on the other. The imaging components can acquire input images taken of a view of and through the sensing plane. The images can include objects that are on the sensing plane and/or in the background scene, as well as the user as he interacts with the sensing plane. By processing the input images, one output image can be returned that shows the user the objects that are in contact with the plane. Thus, objects located at a particular depth can be readily determined. Any other objects located beyond that depth can be “removed” and not seen in the output image.
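A minimal sketch of one way such an output image could be formed, assuming the two camera views have already been rectified to the sensing plane; the array names, the pixel-wise minimum fusion, and the threshold value are this sketch's assumptions, not the disclosed implementation.

```python
import numpy as np

def objects_on_plane(view_a: np.ndarray, view_b: np.ndarray,
                     threshold: float = 0.5) -> np.ndarray:
    """Return a mask of pixels where both plane-rectified views agree.

    An object touching the plane projects to the same rectified pixel in
    both views; background objects exhibit parallax between the views and
    are suppressed by the pixel-wise minimum.
    """
    fused = np.minimum(view_a, view_b)        # agreement only on the plane
    return (fused > threshold).astype(np.uint8)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.random((240, 320))
    b = rng.random((240, 320))
    a[100:120, 150:170] = b[100:120, 150:170] = 0.9   # simulated object on the plane
    mask = objects_on_plane(a, b)
    print("pixels in contact with the plane:", int(mask.sum()))
```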
Abstract:
The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, a gesture recognition subsystem employing a wireless pointing device, and a pointing analysis subsystem also employing the pointing device are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs.
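An illustrative sketch of the command/referent decomposition described above, in which each modality contributes a score for each candidate and the two are fused; the class names, score values, and additive fusion rule are assumptions of this sketch rather than the patented integration scheme.

```python
from dataclasses import dataclass, field
from typing import Dict, Optional

@dataclass
class Evidence:
    # Probability mass each modality assigns to each candidate.
    speech: Dict[str, float] = field(default_factory=dict)
    pointing: Dict[str, float] = field(default_factory=dict)

def fuse(evidence: Evidence, gate: float = 0.6) -> Optional[str]:
    """Pick the candidate with the highest combined score, if confident enough."""
    candidates = set(evidence.speech) | set(evidence.pointing)
    scored = {c: evidence.speech.get(c, 0.0) + evidence.pointing.get(c, 0.0)
              for c in candidates}
    if not scored:
        return None
    best = max(scored, key=scored.get)
    return best if scored[best] >= gate else None

# A referent can come from pointing, speech, or both; a command from a
# button press, a gesture, or a speech event.
referent = fuse(Evidence(speech={"lamp": 0.4}, pointing={"lamp": 0.5, "tv": 0.2}))
command = fuse(Evidence(speech={"turn on": 0.8}))
if referent and command:
    print(f"{command} -> {referent}")         # dispatch to the networked component
```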
Abstract:
Output of a computer system is manipulated using a physical object disposed adjacent to an interactive display surface. A painting application produces an image in response to an object disposed adjacent to the interactive display surface. During each of a plurality of capture intervals, a set of points corresponding to the object is detected when the object is disposed adjacent to the interactive display surface. An image is projected onto the interactive display surface representing the set of points and is filled with a color or pattern. As successive sets of points are accumulated during each of a plurality of capture intervals, a composite image is displayed. An object can thus be used, for example, to “draw,” “paint,” or “stamp” images on the display surface. These images manifest characteristics of the object and its interaction and movement relative to the interactive display surface in a realistic manner.
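A minimal sketch of accumulating per-interval point sets into a composite “painted” image, assuming a NumPy canvas and (row, column) contact points; the capture loop and fill logic are illustrative assumptions, not the application's actual rendering pipeline.

```python
import numpy as np

def paint(canvas: np.ndarray, points: list, color: int = 255) -> np.ndarray:
    """Fill every detected contact point with the chosen color."""
    for r, c in points:
        canvas[r, c] = color
    return canvas

canvas = np.zeros((100, 100), dtype=np.uint8)
for interval in range(3):                      # successive capture intervals
    detected = [(10 + interval, 20 + interval), (11 + interval, 20 + interval)]
    paint(canvas, detected)                    # points accumulate into a composite
print("painted pixels:", int((canvas > 0).sum()))
```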
Abstract:
An interactive table has a display surface on which a physical object is disposed. A camera within the interactive table responds to infrared (IR) light reflected from the physical object, enabling a location of the physical object on the display surface to be determined, so that the physical object appears to be part of a virtual environment displayed thereon. The physical object can be passive or active. An active object performs an active function, e.g., it can be self-propelled to move about on the display surface, or emit light or sound, or vibrate. The active object can be controlled by a user or the processor. The interactive table can project an image through a physical object on the display surface so the image appears part of the object. A virtual entity is preferably displayed at a position (and a size) chosen to avoid visual interference with any physical object on the display surface.
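A hypothetical placement routine for the last behavior described above: choose a position for a virtual entity so that its bounding box does not overlap any detected physical object. The box format, grid search, and step size are assumptions made for this sketch.

```python
from typing import List, Optional, Tuple

Box = Tuple[int, int, int, int]               # (x, y, width, height)

def overlaps(a: Box, b: Box) -> bool:
    ax, ay, aw, ah = a
    bx, by, bw, bh = b
    return ax < bx + bw and bx < ax + aw and ay < by + bh and by < ay + ah

def place_entity(surface: Box, size: Tuple[int, int],
                 objects: List[Box], step: int = 10) -> Optional[Box]:
    """Grid-search the surface for a spot clear of all physical objects."""
    sx, sy, sw, sh = surface
    w, h = size
    for y in range(sy, sy + sh - h + 1, step):
        for x in range(sx, sx + sw - w + 1, step):
            candidate = (x, y, w, h)
            if not any(overlaps(candidate, o) for o in objects):
                return candidate
    return None                                # no clear spot at this size

print(place_entity((0, 0, 300, 200), (40, 40), [(0, 0, 120, 200)]))
```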
Abstract:
A patterned object that is placed on or adjacent to a display surface of an interactive display is detected by matching an image produced using infrared light reflected from the patterned object with one of a set of templates associated with the patterned object. The templates are created for each of a plurality of incremental rotations of the patterned object on a display surface. To implement the comparison, a sum of template data values corresponding to the intensities of the reflected light is calculated for the image of the patterned object and for each of the templates. These sums are compared to determine a rotated template that matches the patterned object within a predefined threshold, thus determining that the patterned object has been placed on or near the display surface.
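A minimal sketch of the sum comparison described above, assuming grayscale NumPy arrays; using scipy.ndimage.rotate to generate the incrementally rotated templates, the 15-degree step, and the threshold value are assumptions of this sketch.

```python
import numpy as np
from scipy import ndimage

def build_templates(base: np.ndarray, step_deg: int = 15) -> list:
    """Pre-compute the template at each incremental rotation."""
    return [ndimage.rotate(base, angle, reshape=False, order=1)
            for angle in range(0, 360, step_deg)]

def match_rotation(image: np.ndarray, templates: list,
                   threshold: float = 500.0):
    """Return (index, difference) of the closest template sum, or None."""
    image_sum = float(image.sum())
    diffs = [abs(image_sum - float(t.sum())) for t in templates]
    best = int(np.argmin(diffs))
    return (best, diffs[best]) if diffs[best] < threshold else None

base = np.zeros((32, 32)); base[8:24, 14:18] = 200.0   # simple bar pattern
templates = build_templates(base)
print(match_rotation(templates[3], templates))          # should match within threshold
```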
Abstract:
A position of a three-dimensional (3D) object relative to a display surface of an interactive display system is detected based upon the intensity of infrared (IR) light reflected from the object and received by an IR video camera disposed under the display surface. As the object approaches the display surface, a “hover” connected component is defined by pixels in the image produced by the IR video camera that have an intensity greater than a predefined hover threshold and are immediately adjacent to another pixel also having an intensity greater than the hover threshold. When the object contacts the display surface, a “touch” connected component is defined by pixels in the image having an intensity greater than a touch threshold, which is greater than the hover threshold. Connected components determined for an object at different heights above the surface are associated with a common label if their bounding areas overlap.
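A sketch of the two-threshold connected-component idea described above, using scipy.ndimage.label for component labeling; the threshold values and the bounding-box overlap test used for association are illustrative assumptions.

```python
import numpy as np
from scipy import ndimage

HOVER, TOUCH = 60, 160                         # assumed intensity thresholds

def components(ir_image: np.ndarray, threshold: int):
    """Label pixels above the threshold and return their bounding slices."""
    labels, _count = ndimage.label(ir_image > threshold)
    return ndimage.find_objects(labels)        # one (row, col) slice pair per component

def boxes_overlap(a, b) -> bool:
    return all(s1.start < s2.stop and s2.start < s1.stop
               for s1, s2 in zip(a, b))

ir = np.zeros((64, 64), dtype=np.uint8)
ir[20:30, 20:30] = 100                         # hovering hand, above the hover threshold
ir[23:27, 23:27] = 200                         # fingertip in contact, above the touch threshold
hover_boxes = components(ir, HOVER)
touch_boxes = components(ir, TOUCH)
# Associate each touch component with the hover component whose bounding area overlaps it.
for t in touch_boxes:
    linked = [h for h in hover_boxes if boxes_overlap(t, h)]
    print("touch component linked to", len(linked), "hover component(s)")
```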
Abstract:
Text objects having a primary data portion, in which are stored text characters and associated encoding information, and an annotation portion, in which is stored attribute information such as style and language identifiers, are described. The encoding information is stored within a run header in the primary data portion, and both the run header and the attribute header refer to the text characters to thereby define a text run. Also described are operations for manipulating the text objects of the invention and for creating and deleting annotations. The operations for manipulating the text objects of the invention include installing text within a text object, copying text in a text object, replacing text in a text object, writing text in a text object, and imaging text in a text object for display.
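A loose Python rendering of the layout described above; the class and field names, and the install operation shown, are assumptions of this sketch rather than the patented format.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class RunHeader:
    encoding: str          # encoding information stored in the primary data portion
    start: int             # first character of the text run it describes
    length: int

@dataclass
class Annotation:
    attribute: str         # e.g. a style or language identifier
    value: str
    start: int             # the run of characters the attribute applies to
    length: int

@dataclass
class TextObject:
    characters: str = ""                       # primary data portion
    runs: List[RunHeader] = field(default_factory=list)
    annotations: List[Annotation] = field(default_factory=list)

    def install(self, text: str, encoding: str = "utf-8") -> None:
        """Install text, recording its encoding in a run header."""
        self.runs.append(RunHeader(encoding, len(self.characters), len(text)))
        self.characters += text

obj = TextObject()
obj.install("Bonjour")
obj.annotations.append(Annotation("language", "fr", 0, 7))
print(obj.characters, obj.runs[0].encoding, obj.annotations[0].value)
```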
Abstract:
A shared ride system comprising a SIPV, the SIPV further comprising at least an integrated safety device, wherein the safety device is a coupled safety helmet.