Abstract:
A system described herein provides six degrees of freedom with respect to a three-dimensional object rendered on a multi-touch display through utilization of three touch points. Multiple axes of rotation are established based at least in part upon location of a first touch point and a second touch point on a multi-touch display. Movement of a third touch point controls appearance of rotation of the three-dimensional object about two axes, and rotational movement of the first touch point relative to the second touch point controls appearance of rotation of the three-dimensional object about a third axis.
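The rotation mapping described in this abstract can be illustrated with a short sketch. The Python fragment below is only a minimal illustration under assumed conventions (screen-space axes lying in the view plane, an arbitrary sensitivity constant, and numpy for the math); it is not the patented implementation, and all function and parameter names are hypothetical.

    # Minimal illustrative sketch, not the disclosed implementation.
    # Assumes the two anchor touches are not coincident and that the object's
    # orientation is tracked as a 3x3 rotation matrix R.
    import numpy as np

    def axis_angle_matrix(axis, angle):
        """Rodrigues' formula: rotation about a (non-zero) axis by `angle` radians."""
        axis = axis / np.linalg.norm(axis)
        K = np.array([[0.0, -axis[2], axis[1]],
                      [axis[2], 0.0, -axis[0]],
                      [-axis[1], axis[0], 0.0]])
        return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

    def rotate_with_third_touch(R, p1, p2, third_delta, sensitivity=0.01):
        """Axes are established from the first two touch points; dragging the
        third touch point rotates the object about those two in-plane axes."""
        a1 = np.array([p2[0] - p1[0], p2[1] - p1[1], 0.0])   # axis along p1 -> p2
        a2 = np.array([-a1[1], a1[0], 0.0])                   # perpendicular in-plane axis
        R = axis_angle_matrix(a1, sensitivity * third_delta[1]) @ R
        R = axis_angle_matrix(a2, sensitivity * third_delta[0]) @ R
        return R

    def rotate_with_twist(R, p1_old, p2_old, p1_new, p2_new):
        """Rotating the first touch point around the second spins the object
        about the third (view) axis by the change in the p2 -> p1 angle."""
        old = np.arctan2(p1_old[1] - p2_old[1], p1_old[0] - p2_old[0])
        new = np.arctan2(p1_new[1] - p2_new[1], p1_new[0] - p2_new[0])
        return axis_angle_matrix(np.array([0.0, 0.0, 1.0]), new - old) @ R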
Abstract:
A security system architecture and method of operation that combines a local security network with a control panel and sensors, a central monitoring station (CMS), and a separate operator computer server that provides a web portal for both the homeowner and the CMS, and that maintains a persistent connection between the control panel and the CMS, allowing failsafe dual-path signaling. This dual-path signaling technique is extended to provide an effective “smash and grab alarm”, and various approaches to dual-path signal management are disclosed, including handshaking, persistent domain monitoring, relayed Operator-to-CMS signaling, etc. Improved processes for remotely accessing video are also disclosed, along with an improved process for remote control panel configuration and control panel interfacing with home automation appliances.
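As a rough illustration of the smash-and-grab idea built on dual-path signaling, the sketch below shows an operator-side monitor that relays an alarm to the CMS when the panel's persistent connection goes silent while an entry event is still unresolved. It is only a hedged sketch: the class, callback, timeouts, and event names are assumptions, not the disclosed protocol.

    # Illustrative sketch only, not the disclosed architecture. Assumes the
    # operator server receives heartbeat and event messages from the panel over
    # a persistent connection and can relay alarms to the CMS on a second path.
    import time

    ENTRY_DELAY = 30  # seconds allowed for a disarm after an entry event (illustrative)

    class PanelMonitor:
        def __init__(self, notify_cms, heartbeat_timeout=60):
            self.notify_cms = notify_cms          # callback that relays to the CMS
            self.heartbeat_timeout = heartbeat_timeout
            self.last_heartbeat = time.time()
            self.pending_entry = None             # time of an unresolved entry event

        def on_heartbeat(self):
            self.last_heartbeat = time.time()

        def on_event(self, event):
            if event == "entry_detected":
                self.pending_entry = time.time()
            elif event == "disarmed":
                self.pending_entry = None         # legitimate entry, cancel pending alarm
            elif event == "alarm":
                self.notify_cms("alarm")          # normal path: panel reported the alarm

        def poll(self):
            """Run periodically. If the panel goes silent while an entry event is
            still pending, report an alarm via the second path instead of waiting
            for the (possibly destroyed) panel."""
            now = time.time()
            panel_silent = now - self.last_heartbeat > self.heartbeat_timeout
            entry_expired = (self.pending_entry is not None
                             and now - self.pending_entry > ENTRY_DELAY)
            if self.pending_entry is not None and (panel_silent or entry_expired):
                self.notify_cms("smash_and_grab_alarm")
                self.pending_entry = None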
Abstract:
Physical movement of a human subject may be guided by a visual cue. A physical environment may be observed to identify a current position of a body portion of the human subject. A model path of travel may be obtained for the body portion of the human subject. The visual cue may be projected onto the human subject and/or into a field of view of the human subject. The visual cue may indicate the model path of travel for the body portion of the human subject.
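A minimal sketch of how a visual cue might be derived from a model path and an observed body-part position is shown below; the waypoint representation, threshold, and use of numpy are assumptions made for illustration, not the described system.

    # Illustrative sketch only. Given a model path of travel for a body portion
    # and its currently observed position, pick the waypoint to highlight and
    # the correction direction a projected cue would indicate.
    import numpy as np

    def next_cue(model_path, current_pos, reach_threshold=0.05):
        """model_path: (N, 3) array of waypoints; current_pos: (3,) observed position.
        Returns the waypoint to indicate and the direction to move toward it."""
        path = np.asarray(model_path, dtype=float)
        pos = np.asarray(current_pos, dtype=float)
        distances = np.linalg.norm(path - pos, axis=1)
        nearest = int(np.argmin(distances))
        # Aim at the next waypoint once the nearest one has been reached.
        target = min(nearest + 1, len(path) - 1) if distances[nearest] < reach_threshold else nearest
        direction = path[target] - pos
        return path[target], direction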
Abstract:
Data captured with respect to a human may be analyzed and applied to a visual representation of a user such that the visual representation begins to reflect the behavioral characteristics of the user. For example, a system may have a capture device that captures data about the user in the physical space. The system may identify the user's characteristics, tendencies, voice patterns, behaviors, gestures, etc. Over time, the system may learn a user's tendencies and intelligently apply animations to the user's avatar such that the avatar behaves and responds in accordance with the identified behaviors of the user. The animations applied to the avatar may be animations selected from a library of pre-packaged animations, or the animations may be entered and recorded by the user into the avatar's avatar library.
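The learn-and-apply loop can be sketched roughly as follows. This is only an illustrative Python fragment under assumed names (context keys, an "idle_default" animation, a dictionary-based library); the actual system's data structures are not specified in the abstract.

    # Illustrative sketch, not the disclosed system: learn which behavior a user
    # tends to perform in a given context, then apply a matching animation to
    # the avatar, preferring user-recorded animations over pre-packaged ones.
    from collections import Counter, defaultdict

    class AvatarBehaviorModel:
        def __init__(self, prepackaged_library):
            self.library = dict(prepackaged_library)      # animation name -> animation data
            self.history = defaultdict(Counter)           # context -> Counter of behaviors

        def observe(self, context, behavior, recording=None):
            """Record a captured behavior; optionally store the user's own recording."""
            self.history[context][behavior] += 1
            if recording is not None:
                self.library[behavior] = recording        # user-recorded animation

        def animation_for(self, context):
            """Return the animation the avatar should play in this context:
            the user's most frequent learned behavior if known, else a default."""
            if self.history[context]:
                behavior, _ = self.history[context].most_common(1)[0]
                if behavior in self.library:
                    return self.library[behavior]
            return self.library.get("idle_default")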
Abstract:
Embodiments of a depth sensing camera with a wide field of view are disclosed. In one example, a depth sensing camera comprises an illumination light projection subsystem; an image detection subsystem comprising an image sensor and one or more lenses and configured to acquire image data having a wide-angle field of view; a logic subsystem configured to execute instructions; and a data-holding subsystem comprising stored instructions executable by the logic subsystem to control projection of illumination light and to determine depth values from image data acquired via the image sensor.
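One way such stored instructions might turn wide-angle image data into depth values is sketched below, assuming an equidistant fisheye lens model and a raw per-pixel range image; the lens model, parameter names, and the range-image input are assumptions, since the abstract does not state how depth is computed.

    # Illustrative sketch under assumed lens and input models.
    import numpy as np

    def depth_from_range(range_image, fx, cx, cy):
        """range_image: (H, W) distances measured along each pixel's view ray.
        Returns the z-component (depth) of each point for an equidistant fisheye
        projection, where the angle off the optical axis is theta = r_pixels / fx."""
        h, w = range_image.shape
        ys, xs = np.mgrid[0:h, 0:w]
        r = np.hypot(xs - cx, ys - cy)        # radial pixel distance from the center
        theta = r / fx                        # equidistant model: angle off the axis
        return range_image * np.cos(theta)    # project each ray length onto the axis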
Abstract:
A system is disclosed for detecting or confirming gestures performed by a user by identifying a vector formed by non-adjacent joints and identifying the angle the vector forms with a reference point. Thus, the system skips one or more intermediate joints between an end joint and a proximal joint closer to the body core of a user. Skipping one or more intermediate joints results in a more reliable indication of the position or movement performed by the user, and consequently a more reliable indication of a given gesture.
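A minimal sketch of the non-adjacent-joint idea is shown below, assuming a skeleton given as a dictionary of 3-D joint positions with y pointing up; the joint names, reference direction, and angular tolerance are illustrative, not values from the disclosure.

    # Illustrative sketch, not the patented detector: form a vector between two
    # non-adjacent joints (e.g., shoulder to hand, skipping the elbow) and test
    # the angle it makes with a reference direction to confirm a gesture.
    import numpy as np

    def joint_vector_angle(skeleton, proximal="shoulder_right", end="hand_right",
                           reference=(0.0, 1.0, 0.0)):
        """skeleton: dict mapping joint names to 3-D positions.
        Returns the angle (degrees) between the proximal->end vector and the reference."""
        v = np.asarray(skeleton[end], float) - np.asarray(skeleton[proximal], float)
        ref = np.asarray(reference, float)
        cos = np.dot(v, ref) / (np.linalg.norm(v) * np.linalg.norm(ref))
        return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

    def is_arm_raised(skeleton, tolerance_deg=20.0):
        """Confirm a 'raise arm' gesture if the shoulder-to-hand vector is within
        `tolerance_deg` of straight up, regardless of how the elbow is bent."""
        return joint_vector_angle(skeleton) < tolerance_deg

Because the intermediate elbow joint is skipped, the shoulder-to-hand vector changes little whether the arm is bent or straight, which is what makes the angle test a more reliable indication of the gesture than one built from adjacent joints.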
Abstract:
An impeller comprising: a top shroud; a bottom shroud; and a plurality of vanes extending from the top shroud to the bottom shroud, each said vane including a top edge that is in contact with the top shroud at a radially inner portion of the vane and a bottom edge that is in contact with the bottom shroud at a radially outer portion of the vane, such that the bottom edge of each vane at its radially inner portion is not in contact with or adjacent the bottom shroud, and the top edge of each vane at its radially outer portion is not in contact with or adjacent the top shroud.
Abstract:
Using facial recognition and gesture/body posture recognition techniques, a system can naturally convey the emotions and attitudes of a user via the user's visual representation. Techniques may comprise customizing a visual representation of a user based on detectable characteristics, deducing a user's temperament from the detectable characteristics, and applying attributes indicative of the temperament to the visual representation in real time. Techniques may also comprise processing changes to the user's characteristics in the physical space and updating the visual representation in real time. For example, the system may track a user's facial expressions and body movements to identify a temperament and then apply attributes indicative of that temperament to the visual representation. Thus, a visual representation of a user, such as an avatar or fanciful character, can reflect the user's expressions and moods in real time.
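A hedged sketch of the deduce-and-apply step is given below; the feature names, thresholds, temperament labels, and attribute table are all assumptions made for illustration and are not taken from the disclosure.

    # Illustrative sketch only: deduce a coarse temperament label from detected
    # facial-expression and posture features, then map it to avatar attributes
    # on every captured frame.
    def deduce_temperament(features):
        """features: dict with e.g. 'smile' and 'shoulder_slump' values in [0, 1]."""
        if features.get("smile", 0.0) > 0.6:
            return "happy"
        if features.get("shoulder_slump", 0.0) > 0.6:
            return "dejected"
        return "neutral"

    TEMPERAMENT_ATTRIBUTES = {
        "happy":    {"expression": "grin",  "posture": "upright", "animation": "bounce"},
        "dejected": {"expression": "frown", "posture": "slumped", "animation": "sigh"},
        "neutral":  {"expression": "rest",  "posture": "relaxed", "animation": "idle"},
    }

    def update_avatar(avatar, features):
        """Apply attributes indicative of the deduced temperament to the avatar
        in real time (called once per captured frame)."""
        for attribute, value in TEMPERAMENT_ATTRIBUTES[deduce_temperament(features)].items():
            avatar[attribute] = value
        return avatar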