Abstract:
PROBLEM TO BE SOLVED: To enable audible list traversal through an interface technology. SOLUTION: A system includes logic, such as hardware and/or code, to implement a user interface for traversing long sorted lists via audible mapping of the lists, using sensor-based gesture recognition, audio and tactile feedback, and button selection while in motion. Such user interface modalities are physically small, enabling a user to be truly mobile by reducing the cognitive load required to operate the device. The user interface may be divided across multiple worn devices, such as a mobile device, watch, earpiece, and ring. COPYRIGHT: (C)2010,JPO&INPIT
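The audible mapping described above can be illustrated with a minimal sketch: each position in a sorted list is mapped to a pitch, so the user hears where they are as they traverse. The function name, pitch bounds, and linear mapping are all illustrative assumptions, not the patent's actual method.

```python
# Hypothetical sketch: map a position in a long sorted list to an audible
# pitch, so traversal position can be conveyed without a display.
def index_to_pitch_hz(index, list_length, low_hz=220.0, high_hz=880.0):
    """Linearly map a list position to a pitch between two bounds (Hz)."""
    if list_length <= 1:
        return low_hz
    fraction = index / (list_length - 1)  # 0.0 at the start, 1.0 at the end
    return low_hz + fraction * (high_hz - low_hz)
```

A real system would feed the returned frequency to a tone generator on the earpiece; gesture input would drive `index` up and down the list.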
Abstract:
PROBLEM TO BE SOLVED: To provide an improved user interface. SOLUTION: An embodiment includes logic, such as hardware and/or code, to map content of a device such as a mobile device, a laptop, a desktop, or a server to a two-dimensional field or table, and to map a pose or movement of a user to coordinates within the table to provide the user with access to the content. The embodiment utilizes a wireless peripheral such as a watch, a ring, or a headset connected to a mobile Internet device that includes an audible user interface and an auditory mapper for accessing the content. The audible user interface is communicatively coupled with the peripherals to receive pose data describing motion or movement of one or more of the peripherals, and to provide feedback of audible items and, in some embodiments, other feedback. COPYRIGHT: (C)2010,JPO&INPIT
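The pose-to-coordinates mapping described above can be sketched as a quantization of two pose angles onto a grid. The angle ranges, axis assignments, and function names below are illustrative assumptions only.

```python
# Hypothetical sketch: map a wrist pose (yaw/pitch in degrees) to a cell
# of a rows x cols content table, clamping to the table edges.
def pose_to_cell(yaw_deg, pitch_deg, rows, cols,
                 yaw_range=(-45.0, 45.0), pitch_range=(-30.0, 30.0)):
    """Quantize two pose angles onto a (row, col) cell of a 2-D table."""
    def scale(value, lo, hi, n):
        t = (value - lo) / (hi - lo)          # normalize to [0, 1]
        return min(n - 1, max(0, int(t * n)))  # quantize and clamp
    col = scale(yaw_deg, *yaw_range, cols)
    row = scale(pitch_deg, *pitch_range, rows)
    return row, col
```

Each cell would then be announced through the auditory mapper (for example, by speaking the item it holds).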
Abstract:
Embodiments of the invention describe a system to efficiently execute gesture recognition algorithms. Embodiments of the invention describe a power-efficient staged gesture recognition pipeline including multimodal interaction detection, context-based optimized recognition, and context-based optimized training and continuous learning. Embodiments of the invention further describe a system to accommodate many types of algorithms depending on the type of gesture that is needed in any particular situation. Examples of recognition algorithms include, but are not limited to, Hidden Markov Models (HMM) for complex dynamic gestures (e.g., writing a number in the air), Decision Trees (DT) for static poses, peak detection for coarse shake/whack gestures, or inertial methods (INS) for pitch/roll detection.
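Of the algorithm families listed, peak detection for coarse shake gestures is the simplest to illustrate. The sketch below is a generic threshold-crossing peak counter, with hypothetical threshold and peak-count values; it stands in for, and does not reproduce, the patent's staged pipeline.

```python
# Hypothetical sketch: detect a "shake" gesture by counting upward
# threshold crossings in a stream of accelerometer magnitudes (in g).
def detect_shake(accel_magnitudes, threshold=2.5, min_peaks=3):
    """Return True if the stream contains at least min_peaks peaks."""
    peaks = 0
    above = False  # are we currently above the threshold?
    for m in accel_magnitudes:
        if m > threshold and not above:
            peaks += 1       # count only the upward crossing, not every sample
            above = True
        elif m <= threshold:
            above = False
    return peaks >= min_peaks
```

In a staged pipeline, a cheap detector like this would gate the more expensive HMM or decision-tree stages, which is where the power savings come from.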
Abstract:
Through status awareness, a handheld communications device may determine the location, activity, and/or physical or emotional state of the user. This information may in turn be used for various purposes, such as 1) determining how to alert the user of an incoming communication, 2) determining what format to use for communicating with the user, and 3) determining how to present the user's status to another person's communication device.
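The first purpose above, choosing how to alert the user, amounts to a decision function over the inferred status. The status keys and alert modes below are illustrative assumptions; the patent does not specify them.

```python
# Hypothetical sketch: choose an alert mode from an inferred user status.
def choose_alert_mode(status):
    """Map a status dict (location/activity/state) to an alert mode."""
    if status.get("state") == "sleeping":
        return "silent"
    if status.get("activity") == "meeting":
        return "vibrate"
    if status.get("location") == "theater":
        return "silent"
    return "ring"  # default when nothing suggests otherwise
```

The same status dict could drive the other two purposes: selecting a communication format and summarizing the user's status to a caller's device.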
Abstract:
Systems and techniques for user input via elastic deformation of a material are described herein. The morphology of an elastic material may be observed with a sensor. The observations may include a first and a second morphological sample of the elastic material. The first and second morphological samples may be compared against each other to ascertain a variance. The variance may be filtered to produce an output. The output may be translated into a user input parameter. A device action corresponding to the user input parameter may be invoked.
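The sample-compare-filter-translate chain above can be sketched end to end. Representing a morphological sample as a list of per-point sensor readings, with a noise floor and gain that are purely illustrative:

```python
# Hypothetical sketch of the described pipeline: compare two morphology
# samples, filter out sub-noise variance, and translate the result into
# a scalar user input parameter (e.g. a scroll amount).
def deformation_input(sample_a, sample_b, noise_floor=0.05, gain=10.0):
    """Turn the variance between two morphology samples into an input value."""
    variance = [b - a for a, b in zip(sample_a, sample_b)]       # compare
    filtered = [v if abs(v) > noise_floor else 0.0 for v in variance]  # filter
    return sum(filtered) * gain                                   # translate
```

A device action would then be invoked from the returned parameter, e.g. scrolling by that many pixels.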
Abstract:
An apparatus may include a memory to store a recorded video. The apparatus may further include an interface to receive at least one set of sensor information based on sensor data that is recorded concurrently with the recorded video, and a video clip creation module to identify a sensor event from the at least one set of sensor information and to generate a video clip based upon the sensor event. The video clip comprises video content from the recorded video that is synchronized to the sensor event.
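Generating a clip synchronized to a sensor event reduces, at its simplest, to computing a time window around the event's timestamp and clamping it to the video. The window lengths and function name below are illustrative assumptions.

```python
# Hypothetical sketch: compute a clip window [start, end] in seconds
# around a sensor event, clamped to the bounds of the recorded video.
def clip_window(event_time_s, pre_s=5.0, post_s=5.0, video_length_s=None):
    """Return (start, end) of a clip centered on a sensor event."""
    start = max(0.0, event_time_s - pre_s)   # never before the video starts
    end = event_time_s + post_s
    if video_length_s is not None:
        end = min(end, video_length_s)       # never past the video's end
    return start, end
```

The clip creation module would then extract the frames in that window from the stored recording.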
Abstract:
Various systems and methods for a wearable input device are described herein. A textile-based wearable system for providing user input to a device comprises a first sensor integrated into the textile-based wearable system, the first sensor to produce a first distortion value representing a distortion of the first sensor. The system also includes an interface module to detect the first distortion value, the first distortion value measured with respect to an initial position, and to transmit the first distortion value to the device, the device having a user interface to be modified responsive to receiving the first distortion value.
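Measuring distortion "with respect to an initial position" can be sketched as a simple baseline subtraction with a deadband so fabric drift does not register as input. The deadband value and names are illustrative assumptions.

```python
# Hypothetical sketch: compute a textile sensor's distortion value
# relative to its initial (rest) reading, suppressing small drift.
def distortion_value(raw_reading, initial_reading, deadband=2):
    """Distortion relative to the rest position; 0 inside the deadband."""
    delta = raw_reading - initial_reading
    return 0 if abs(delta) < deadband else delta
```

The interface module would transmit the nonzero values to the paired device, which updates its user interface accordingly.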
Abstract:
Various systems and methods for transmitting a message to a secondary computing device are described herein. An apparatus comprises a context processing module, a context-aware message mode module, and a message retrieval module. The context processing module retrieves a context of a user of a primary computing device. The context-aware message mode module identifies a message mode for communicating with a secondary computing device of the user based on the context. The message retrieval module receives a communication message at the primary computing device, determines that the communication message is to be transmitted to the secondary computing device of the user based on the message mode, and, based on the determining, translates the communication message into a translated message according to the message mode and transmits the translated message to the secondary computing device from the primary computing device.
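The mode-select-then-translate flow above can be sketched in a few lines. The context keys, mode names, and the 140-character truncation are illustrative assumptions about what "translating" for a small secondary device might mean.

```python
# Hypothetical sketch: pick a message mode from the user's context,
# translate the message for that mode, and address the secondary device.
def route_message(message, context):
    """Return a translated message envelope for the secondary device."""
    mode = "text" if context.get("driving") else "full"
    if mode == "text":
        body = message["body"][:140]  # shorten for a glanceable display
    else:
        body = message["body"]
    return {"mode": mode, "body": body, "target": "secondary_device"}
```

In the apparatus described, the context processing and mode modules would supply `context` and `mode`, and the message retrieval module would perform the translation and transmission.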
Abstract:
Apparatuses, systems, and/or methods may provide power management. An attachment portion may secure a device to a user. In one example, a device includes a wearable wristwatch with a wristband attachment portion. A context corresponding to a user state may be determined from context data such as sensor context data, database context data, companion context data, and/or user context data. The context may be used to set a power mode applicable to a portion of the device to manage power.
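The context-to-power-mode step above is a classification from context data to a mode. The context keys, thresholds, and mode names in this sketch are illustrative assumptions, not the patent's actual scheme.

```python
# Hypothetical sketch: map a user-state context to a power mode for
# part of a wearable device (e.g. its display or sensor subsystem).
def select_power_mode(context):
    """Return a power mode string derived from context data."""
    if context.get("user_state") == "asleep":
        return "deep_sleep"      # nothing to display; power down aggressively
    if context.get("battery_pct", 100) < 15:
        return "low_power"       # conserve regardless of activity
    if context.get("activity") == "workout":
        return "sensor_priority" # keep sensors hot, dim everything else
    return "normal"
```

A device manager would apply the returned mode only to the relevant portion of the device, as the abstract describes.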
Abstract:
Technologies for time-delayed augmented reality (AR) presentations include determining a location of each of a plurality of AR user systems located within a presentation site, and determining a time delay of an AR sensory event of an AR presentation to be presented at the presentation site for each AR user system, based on the location of the corresponding AR user system within the presentation site. The AR sensory event is presented to each AR user system based on the determined time delay associated with the corresponding AR user system. Each AR user system generates the AR sensory event based on a timing parameter that defines the time delay for the corresponding AR user system, such that generation of the AR sensory event is time-delayed based on the location of the AR user system within the presentation site.
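One natural way to derive a location-based delay, assumed here purely for illustration, is the acoustic travel time from the event's origin to the user, so that a simulated sound reaches distant users later, as a real one would. The 2-D positions and function name are assumptions.

```python
# Hypothetical sketch: delay an AR sensory event by the acoustic travel
# time from its origin to the user's position within the site.
SPEED_OF_SOUND_M_S = 343.0

def ar_event_delay_s(user_position, event_origin):
    """Time delay (s) for a user at user_position, both as (x, y) in meters."""
    dx = user_position[0] - event_origin[0]
    dy = user_position[1] - event_origin[1]
    distance = (dx * dx + dy * dy) ** 0.5  # Euclidean distance in meters
    return distance / SPEED_OF_SOUND_M_S
```

Each AR user system would receive this value as its timing parameter and schedule generation of the sensory event accordingly.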