Abstract:
A device includes a sound wave reception unit that receives, through a sound wave reception device, a sound wave output from a mobile device; a control information acquisition unit that acquires control information associated with operation of the device from the received sound wave; and an operation performance unit that performs the operation based on the control information. The control information acquisition unit determines, from an audible sound wave frequency band and a non-audible sound wave frequency band, the frequency band to which at least one frequency identified from a given frame within the received sound wave corresponds, determines partial information based on that frequency band and the at least one identified frequency, and acquires the control information corresponding to the received sound wave based on each piece of the determined partial information.
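A minimal sketch of the band-aware decoding idea, assuming one dominant frequency per frame, a 20 kHz boundary between the audible and non-audible bands, and hypothetical per-band lookup tables that map a carrier frequency to a partial-information symbol (none of these values come from the abstract):

```python
# Assumed boundary between the audible and non-audible (ultrasonic) bands.
AUDIBLE_LIMIT_HZ = 20_000

# Hypothetical frequency-to-symbol tables, one per band.
AUDIBLE_SYMBOLS = {1_000: "0", 2_000: "1"}
ULTRASONIC_SYMBOLS = {21_000: "A", 22_000: "B"}

def decode_frame(frequency_hz: float) -> str:
    """Return the partial information carried by one frame."""
    table = AUDIBLE_SYMBOLS if frequency_hz < AUDIBLE_LIMIT_HZ else ULTRASONIC_SYMBOLS
    # Snap to the nearest known carrier frequency within the chosen band.
    nearest = min(table, key=lambda f: abs(f - frequency_hz))
    return table[nearest]

def acquire_control_information(frame_frequencies: list[float]) -> str:
    """Concatenate per-frame partial information into a control code."""
    return "".join(decode_frame(f) for f in frame_frequencies)

# Example: frames alternating between audible and ultrasonic carriers.
print(acquire_control_information([1_000, 21_000, 2_000, 22_000]))  # "0A1B"
```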
Abstract:
A system includes a voice converter that converts a first voice command into a first electrical command and a command library having library contents. A language responsiveness module (LRM) stores the first electrical command in a temporary set when a first control command cannot be determined from the library contents. A voice prompt module receives a second voice command when the first control command cannot be determined from the library contents. The voice converter converts the second voice command into a second electrical command corresponding to the second voice command. The LRM compares the second electrical command to the command library, determines a second control command corresponding to the second electrical command in response to that comparison, and stores the first voice command in the command library after determining the second control command corresponding to the second voice command.
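A minimal sketch of this learn-by-prompting flow, assuming the electrical commands are plain strings and the library is a dictionary; the class name, starting library contents, and prompt text are illustrative only:

```python
class LanguageResponsivenessModule:
    def __init__(self):
        self.command_library = {"lights on": "CTRL_LIGHTS_ON"}
        self.temporary_set = []  # unrecognized commands awaiting a match

    def handle(self, electrical_command: str) -> str | None:
        control = self.command_library.get(electrical_command)
        if control is not None:
            # If an earlier command is pending, learn it as a synonym.
            if self.temporary_set:
                pending = self.temporary_set.pop()
                self.command_library[pending] = control
            return control
        # No control command could be determined: store it and prompt.
        self.temporary_set.append(electrical_command)
        print("Command not recognized; please rephrase.")  # voice prompt stand-in
        return None

lrm = LanguageResponsivenessModule()
assert lrm.handle("illuminate the room") is None              # first command unknown
assert lrm.handle("lights on") == "CTRL_LIGHTS_ON"            # second command matches
assert lrm.handle("illuminate the room") == "CTRL_LIGHTS_ON"  # first command now learned
```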
Abstract:
In some implementations, a non-transitory storage device on a smartphone, having a multi-sensory-recognition engine stored thereon, is coupled to a microcontroller; a device controller for an electrical device is wirelessly coupled to the microcontroller; and at least one of a plurality of sensors is operably coupled to the microcontroller.
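A minimal sketch of how such a coupling might behave at run time, assuming a simple poll-recognize-forward loop; every class, method, and threshold here is hypothetical:

```python
class Sensor:
    def __init__(self, name, reading):
        self.name, self.reading = name, reading
    def read(self):
        return self.name, self.reading

class RecognitionEngine:
    def recognize(self, samples):
        # Toy rule: report motion if the motion sensor reads above 0.5.
        return "MOTION" if samples.get("motion", 0) > 0.5 else None

class DeviceController:
    def send(self, event):
        print(f"wireless command -> {event}")

def microcontroller_loop(sensors, engine, controller):
    samples = dict(sensor.read() for sensor in sensors)  # poll the coupled sensors
    event = engine.recognize(samples)                    # multi-sensory recognition
    if event:
        controller.send(event)                           # forward to the device controller

microcontroller_loop([Sensor("motion", 0.9), Sensor("light", 0.2)],
                     RecognitionEngine(), DeviceController())
```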
Abstract:
A system may include a personal electronic device having a housing and an ultra-wideband transceiver disposed within the housing. The system may also include a stereophonic system for sending audio information to a right ear and a left ear of a user, the stereophonic system having a second ultra-wideband transceiver. The ultra-wideband transceiver of the personal electronic device and the second ultra-wideband transceiver of the stereophonic system are adapted to provide audio communications therebetween. The stereophonic system may include a switch for switching among a plurality of personal electronic devices. The switch may be implemented in software or hardware.
Abstract:
A remote controller includes a housing, a direction sensor, a microphone, a controller, and a wireless transmitter. A control method of the remote controller includes detecting an angle between an axis of the remote controller and a vertical axis, enabling the microphone of the remote controller when the angle is within a predetermined range in order to generate a voice signal according to a voice command, generating a first control signal according to the voice signal, and transmitting the first control signal wirelessly.
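A minimal sketch of the angle-gated microphone logic, assuming the remote's axis direction comes from an accelerometer-style vector and using an illustrative 0-30 degree activation range and command vocabulary:

```python
import math

ANGLE_RANGE_DEG = (0.0, 30.0)  # assumed "predetermined range"

def angle_to_vertical(ax: float, ay: float, az: float) -> float:
    """Angle in degrees between the remote's axis vector and vertical (z)."""
    norm = math.sqrt(ax * ax + ay * ay + az * az)
    return math.degrees(math.acos(az / norm))

def control_signal(ax, ay, az, capture_voice) -> str | None:
    angle = angle_to_vertical(ax, ay, az)
    if not (ANGLE_RANGE_DEG[0] <= angle <= ANGLE_RANGE_DEG[1]):
        return None                       # microphone stays disabled
    voice = capture_voice()               # microphone enabled: record the command
    commands = {"volume up": "VOL_UP", "power off": "PWR_OFF"}
    return commands.get(voice)            # control signal to transmit wirelessly

# Remote held nearly upright, user says "volume up".
print(control_signal(0.1, 0.0, 0.99, lambda: "volume up"))  # VOL_UP
```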
Abstract:
A control unit (101), such as a remote control device, includes a profile selector (104). The profile selector (104), which may be a single profile selector button integrated into the side or top of a remote control, allows quick and simple selection of an operating mode or user profile. The control unit (101) includes an indicator (107) that provides indicia of the currently selected mode or profile. Examples of indicators include multicolored lights and display devices. Where multicolored lights are used as the indicator (107), actuation of the profile selector (104) causes the indicator (107) to change from a first color to a second color.
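A minimal sketch of a single-button profile selector that advances the profile and the indicator color on each actuation; the profile names and colors are illustrative only:

```python
from itertools import cycle

class ControlUnit:
    def __init__(self):
        # Each profile pairs with an indicator color (multicolored light).
        self._profiles = cycle([("TV", "red"), ("DVD", "green"), ("Audio", "blue")])
        self.profile, self.indicator_color = next(self._profiles)

    def press_profile_selector(self):
        """Single-button actuation: advance the profile, change the indicator color."""
        self.profile, self.indicator_color = next(self._profiles)
        return self.profile, self.indicator_color

unit = ControlUnit()
print(unit.profile, unit.indicator_color)  # TV red
print(unit.press_profile_selector())       # ('DVD', 'green')
```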
Abstract:
A method and system for operating a remotely controlled device may use multimodal remote control commands that include a gesture command and a speech command. The gesture command may be interpreted from a gesture performed by a user, while the speech command may be interpreted from speech utterances made by the user. The gesture and speech utterances may be simultaneously received by the remotely controlled device in response to displaying a user interface configured to receive multimodal commands.
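A minimal sketch of combining a simultaneously received gesture command and speech command into one action, using an assumed gesture and speech vocabulary:

```python
GESTURES = {"swipe_left": "PREVIOUS", "swipe_right": "NEXT"}
SPEECH = {"channel": "CHANNEL", "volume": "VOLUME"}

def interpret_multimodal(gesture: str, utterance: str) -> str | None:
    """Combine a gesture command and a speech command into one action."""
    g, s = GESTURES.get(gesture), SPEECH.get(utterance)
    if g and s:
        return f"{s}_{g}"   # e.g. CHANNEL_NEXT
    return None             # incomplete multimodal command: ignore

print(interpret_multimodal("swipe_right", "channel"))  # CHANNEL_NEXT
```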
Abstract:
The present invention is directed toward a system and process that controls a group of networked electronic components using a multimodal integration scheme in which inputs from a speech recognition subsystem, a gesture recognition subsystem employing a wireless pointing device, and a pointing analysis subsystem also employing the pointing device are combined to determine what component a user wants to control and what control action is desired. In this multimodal integration scheme, the desired action concerning an electronic component is decomposed into a command and a referent pair. The referent can be identified using the pointing device to identify the component by pointing at the component or an object associated with it, by using speech recognition, or both. The command may be specified by pressing a button on the pointing device, by a gesture performed with the pointing device, by a speech recognition event, or by any combination of these inputs.
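A minimal sketch of the command/referent decomposition, assuming each modality has already been reduced to a candidate referent or command string; the precedence rules here are illustrative, not taken from the source:

```python
def resolve_action(pointing_target=None, speech_referent=None,
                   speech_command=None, button_command=None,
                   gesture_command=None):
    # Referent: the component the user wants to control, from pointing
    # and/or speech recognition (pointing wins if both are present).
    referent = pointing_target or speech_referent
    # Command: from a button press, a pointing-device gesture, or speech.
    command = button_command or gesture_command or speech_command
    if referent and command:
        return (command, referent)
    return None  # not enough modalities resolved yet

# User points at the lamp and says "turn on".
print(resolve_action(pointing_target="lamp", speech_command="turn_on"))
# ('turn_on', 'lamp')
```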
Abstract:
A media client receives, from a remote control device, a signal to launch a selected interactive television application and sends, to the remote control device, a client program for reprogramming buttons on the remote control device. The media client sends, to the remote control device, a script for button functions of the remote control device, which are based on the selected interactive television application. The remote control device executes the script on the client program to reprogram the button functions. The media client presents, on a display device, a button map that corresponds to the script, and receives, from the remote control device, a signal based on the script.
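A minimal sketch of script-driven button reprogramming, assuming the script is a simple button-to-function map; the button names and signal strings are illustrative only:

```python
class RemoteControl:
    def __init__(self):
        self.button_functions = {}   # filled in by the client program

    def execute_script(self, script: dict):
        """Client program applies the script's button-to-function map."""
        self.button_functions.update(script)

    def press(self, button: str) -> str | None:
        """Return the signal the media client receives for this button."""
        return self.button_functions.get(button)

# Media client launches an interactive application and pushes a matching script.
quiz_script = {"red": "ANSWER_A", "green": "ANSWER_B", "yellow": "ANSWER_C"}
remote = RemoteControl()
remote.execute_script(quiz_script)
print(remote.press("green"))  # ANSWER_B, sent back to the media client
```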