Abstract:
A directed flux motor is described that utilizes the directed magnetic flux of at least one magnet through ferrous material to drive separate planetary gear sets, providing up to six actuated shafts grouped three to a side of the motor. The flux motor also utilizes an interwoven magnet configuration, which reduces the overall size of the motor. The design permits simple changes to the torque-to-speed ratio of the gearing contained within the motor, as well as simple configurations for any number of output shafts up to six. These changes allow for improved manufacturability and reliability in the design.
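As a worked illustration of the torque-to-speed trade-off mentioned above (the tooth counts, input values, and efficiency figure below are assumptions for illustration, not values from the patent), a single planetary stage with the ring gear fixed reduces speed and multiplies torque by the same ratio:

    # Minimal sketch: torque/speed scaling through one planetary gear stage.
    # All numbers are hypothetical; the patent does not publish tooth counts.

    def planetary_ratio(sun_teeth: int, ring_teeth: int) -> float:
        """Reduction ratio with the sun as input, the ring gear fixed,
        and the carrier as output: ratio = 1 + ring/sun."""
        return 1.0 + ring_teeth / sun_teeth

    def output_speed_torque(rpm_in: float, torque_in: float,
                            ratio: float, efficiency: float = 0.95):
        # Speed divides by the ratio; torque multiplies, less gear losses.
        return rpm_in / ratio, torque_in * ratio * efficiency

    ratio = planetary_ratio(sun_teeth=24, ring_teeth=72)      # 4:1 reduction
    rpm_out, torque_out = output_speed_torque(300.0, 1.2, ratio)
    print(f"{ratio}:1 -> {rpm_out:.0f} rpm, {torque_out:.2f} N*m")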
Abstract:
Techniques for auto-generating a target's visual representation may reduce or eliminate the manual input required to generate that representation. For example, a system having a capture device may detect various features of a user in the physical space and make feature selections from a library of visual representation feature options based on the detected features. The system can automatically apply those selections to the visual representation of the user. Alternatively, the system may make selections that narrow the set of feature options from which the user chooses. The system may apply the selections in real time, and may likewise update the features selected and applied to the target's visual representation in real time.
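A minimal sketch of the automatic selection step, assuming detected features arrive as normalized measurements and the library stores a nominal value per option (the feature names and values below are invented for illustration, not the patent's library):

    # Pick, for each detected feature, the library option whose nominal
    # value is nearest the measurement taken by the capture device.

    FEATURE_LIBRARY = {
        "hair_length": {"short": 0.2, "medium": 0.5, "long": 0.8},
        "build": {"slim": 0.3, "average": 0.5, "broad": 0.7},
    }

    def auto_select_features(detected: dict) -> dict:
        """Map each detected measurement to the closest library option."""
        selection = {}
        for feature, value in detected.items():
            options = FEATURE_LIBRARY[feature]
            selection[feature] = min(options, key=lambda n: abs(options[n] - value))
        return selection

    print(auto_select_features({"hair_length": 0.75, "build": 0.45}))
    # -> {'hair_length': 'long', 'build': 'average'}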
Abstract:
Techniques may comprise identifying surfaces, textures, and object dimensions from unorganized point clouds derived from a capture device, such as a depth sensing device. Employing target digitization may comprise surface extraction, identifying points in a point cloud, labeling surfaces, computing object properties, tracking changes in object properties over time, and increasing confidence in the object boundaries and identity as additional frames are captured. If the point cloud data includes an object, a model of the object may be generated. Feedback of the model associated with a particular object may be generated and provided in real time to the user. Further, the model of the object may be tracked in response to any movement of the object in the physical space, such that the model may be adjusted to mimic changes or movement of the object, or to increase the fidelity of the target's characteristics.
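The per-frame confidence-building loop might look like the following sketch, which assumes each frame yields a point cloud as an N x 3 array; the data structures and thresholds are illustrative, not the patented method:

    import numpy as np

    class TrackedObject:
        """Tracks one object's centroid and a confidence that grows as
        additional frames agree on where the object is."""

        def __init__(self):
            self.centroid = None
            self.confidence = 0.0

        def update(self, points: np.ndarray):
            centroid = points.mean(axis=0)
            if self.centroid is None or np.linalg.norm(centroid - self.centroid) < 0.05:
                # Consistent with prior frames: raise confidence toward 1.0.
                self.confidence = min(1.0, self.confidence + 0.1)
            else:
                # The object moved or was mislabeled: back off.
                self.confidence = max(0.0, self.confidence - 0.2)
            self.centroid = centroid

    obj = TrackedObject()
    for _ in range(5):  # five frames of a stationary object plus sensor noise
        cloud = np.random.normal(loc=[0.0, 1.0, 2.0], scale=0.01, size=(500, 3))
        obj.update(cloud)
    print(obj.centroid, obj.confidence)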
Abstract:
The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, a gesture recognition subsystem employing a wireless pointing device, and a pointing analysis subsystem also employing the pointing device are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and referent pair. The referent can be identified by pointing at the component or an object associated with it with the pointing device, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs.
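A hedged sketch of the command-and-referent decomposition; the modality inputs and the tie-breaking rule below are placeholders rather than the patent's actual integration logic:

    def fuse(speech_referent, pointed_referent, speech_command, button_command):
        """Resolve a (command, referent) pair from whichever modalities fired."""
        referent = pointed_referent or speech_referent   # assume pointing wins ties
        command = button_command or speech_command
        if referent and command:
            return command, referent
        return None   # incomplete: wait for the missing half of the pair

    print(fuse(speech_referent="lamp", pointed_referent="lamp",
               speech_command=None, button_command="toggle_power"))
    # -> ('toggle_power', 'lamp')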
Abstract:
Described herein is an apparatus that includes a curved display surface that has an interior and an exterior. The curved display surface is configured to display images thereon. The apparatus also includes an emitter that emits light through the interior of the curved display surface. A detector component analyzes light reflected from the curved display surface to detect a position on the curved display surface where a first member is in physical contact with the exterior of the curved display surface.
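One common way such a detector component can localize a contact point is to threshold the reflected-light image and take the centroid of the bright region, as in the sketch below (the threshold and frame data are assumptions, and mapping the image coordinates onto the curved surface is omitted):

    import numpy as np

    def detect_touch(ir_frame: np.ndarray, threshold: int = 200):
        """Return (row, col) of the brightest contact region, or None."""
        ys, xs = np.nonzero(ir_frame > threshold)
        if ys.size == 0:
            return None
        return ys.mean(), xs.mean()   # centroid of pixels lit by the fingertip

    frame = np.zeros((120, 160), dtype=np.uint8)
    frame[40:44, 80:84] = 255        # simulated reflection at a finger contact
    print(detect_touch(frame))       # -> approximately (41.5, 81.5)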
Abstract:
A system and process for selecting objects in a ubiquitous computing environment where various electronic devices are controlled by a computer via a network connection and the objects are selected by a user pointing to them with a wireless RF pointer. Using a combination of electronic sensors onboard the pointer and external calibrated cameras, a host computer equipped with an RF transceiver decodes the orientation sensor values transmitted to it by the pointer and computes the orientation and 3D position of the pointer. This information, along with a model defining the location of each object in the environment that is associated with a controllable electronic component, is used to determine what object a user is pointing at so as to select that object for further control actions.
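A sketch under stated assumptions: the host computer already holds the pointer's 3D position and a unit pointing direction, and the environment model is a table of object centers; the object names and angular tolerance are invented for illustration:

    import numpy as np

    OBJECTS = {"lamp": np.array([2.0, 1.0, 0.0]),
               "tv":   np.array([0.0, 1.2, 3.0])}

    def select_object(pointer_pos, pointer_dir, max_angle_deg=5.0):
        """Return the modeled object closest to the pointing ray, if any."""
        best, best_angle = None, np.radians(max_angle_deg)
        for name, center in OBJECTS.items():
            to_obj = center - pointer_pos
            to_obj = to_obj / np.linalg.norm(to_obj)
            # Angle between the pointing direction and the object direction.
            angle = np.arccos(np.clip(np.dot(pointer_dir, to_obj), -1.0, 1.0))
            if angle < best_angle:
                best, best_angle = name, angle
        return best

    print(select_object(np.array([0.0, 1.0, 0.0]),
                        np.array([1.0, 0.0, 0.0])))   # -> 'lamp'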
Abstract:
A unique system and method is provided that facilitates pixel-accurate targeting with respect to multi-touch sensitive displays when selecting or viewing content with a cursor. In particular, the system and method can track dual inputs from a primary finger and a secondary finger, for example. The primary finger can control movement of the cursor while the secondary finger can adjust the control-display ratio of the screen. As a result, cursor steering and selection of an assistance mode can be performed concurrently. In addition, the system and method can stabilize the cursor position at the top middle point of the user's finger in order to mitigate clicking errors when making a selection.
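A minimal sketch of the dual-input scheme, assuming the secondary finger's slide adjusts a scalar control-display (CD) ratio that scales the primary finger's motion; the ratio bounds and step values are illustrative:

    class DualFingerCursor:
        def __init__(self):
            self.x, self.y = 0.0, 0.0
            self.cd_ratio = 1.0          # 1.0 = direct 1:1 mapping

        def secondary_adjust(self, delta: float):
            # Secondary finger slides to scale the ratio between 0.1x and 4x.
            self.cd_ratio = min(4.0, max(0.1, self.cd_ratio + delta))

        def primary_move(self, dx: float, dy: float):
            # Primary finger steers the cursor; displacement is scaled by CD.
            self.x += dx * self.cd_ratio
            self.y += dy * self.cd_ratio

    c = DualFingerCursor()
    c.secondary_adjust(-0.8)   # slow the cursor for pixel-accurate targeting
    c.primary_move(10, 0)      # 10 px of finger travel -> 2 px of cursor travel
    print(c.x, c.cd_ratio)     # -> 2.0 0.2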
Abstract:
A web-based hosted solution is described through which application developers create, manage, and monitor application usage analytics online. Preferably, an application under test is one of: application software, a script-enabled web application, or a rich Internet application (RIA). During the development process, a usage monitoring API is integrated into the application and the application is deployed. As users interact with the application, a log file is generated, typically in one of two ways. If the application is able to write to a local file system (on the user's machine), usage information is gathered in a log file local to the deployed application and then dispatched to an upload server for processing in a batch manner. If the application is not able to write to the user machine's local file system, the usage information is sent to a remote logging server, preferably on a just-in-time basis, and the log file is generated on the logging server. In either case, the usage information that is tracked preferably comprises “features,” “faults,” and “failures” of the application, independent of platform, location, and number of deployed application instances.
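The two dispatch paths might be sketched as follows; the event fields, log path, and server URL are placeholders, not the product's actual API:

    import json, time, urllib.request

    LOG_PATH = "usage.log"                                   # hypothetical
    LOGGING_SERVER = "https://logging.example.com/events"    # hypothetical

    def record(event: dict, can_write_local: bool):
        event["ts"] = time.time()
        line = json.dumps(event)
        if can_write_local:
            # Path 1: append to a local log file, uploaded later in a batch.
            with open(LOG_PATH, "a") as f:
                f.write(line + "\n")
        else:
            # Path 2: ship the event to the remote logging server just in time.
            req = urllib.request.Request(
                LOGGING_SERVER, data=line.encode(),
                headers={"Content-Type": "application/json"})
            urllib.request.urlopen(req)

    record({"kind": "feature", "name": "export_pdf"}, can_write_local=True)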
Abstract:
A light pointer is selectively activated to direct a light beam onto an interactive display surface, forming a pattern of light that is detected by a light sensor disposed within an interactive display table. The waveband of the light produced by the light pointer is selected to correspond to a waveband to which the light sensor responds, enabling the light sensor to detect the position of the pattern on the interactive display surface, as well as characteristics that enable the location and orientation of the light pointer to be determined. Specifically, the shape and size of the pattern, and the intensity of the light forming it, are detected by the light sensor and processed to determine the orientation of the light pointer and its distance from the interactive display surface. The pattern may comprise various shapes, such as circles, arrows, and crosshairs.
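As an illustration of how the detected pattern can yield pose information, assume the pointer projects a circular pattern with a known beam divergence: the pattern's diameter then grows with distance, and its elongation into an ellipse indicates tilt (the constants below are assumptions, not values from the patent):

    import math

    BEAM_HALF_ANGLE = math.radians(2.0)   # assumed divergence of the light beam

    def pointer_distance(pattern_diameter_m: float) -> float:
        """Distance from the surface, from the detected pattern's diameter."""
        return (pattern_diameter_m / 2.0) / math.tan(BEAM_HALF_ANGLE)

    def pointer_tilt(major_axis_m: float, minor_axis_m: float) -> float:
        """Tilt from perpendicular: a circle stretches into an ellipse."""
        return math.degrees(math.acos(minor_axis_m / major_axis_m))

    print(f"{pointer_distance(0.035):.2f} m, "
          f"{pointer_tilt(0.040, 0.035):.1f} deg")
    # -> about 0.50 m away, tilted about 29.0 deg from perpendicular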