Abstract:
An HVAC controller may be controlled in response to a natural language audio message that is not recognizable by the HVAC controller as a command, where the natural language audio message is translated into a command recognizable by the HVAC controller. Voice recognition software identifies a trigger phrase included in the natural language audio message and in response the HVAC controller may perform an action. The voice recognition software may be used to create a natural language text based message from a recorded voice message or streamed voice message, where the natural language text based message is translated into the command recognizable by the HVAC controller. In response to the command, the HVAC controller may perform an action and/or respond with a natural language text based, audio, or video message. A user may communicate with the thermostat via the thermostat and/or a remote device.
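The trigger-phrase lookup and translation step described above can be sketched as follows. The phrases and command names here are hypothetical illustrations, not the patent's actual vocabulary.

```python
# Map hypothetical trigger phrases to (command, argument) pairs that an
# HVAC controller could recognize. Phrases and commands are illustrative.
TRIGGER_PHRASES = {
    "make it warmer": ("SET_TEMP_DELTA", +2),
    "make it cooler": ("SET_TEMP_DELTA", -2),
    "turn off the air": ("SYSTEM_OFF", None),
}

def translate(natural_language_text: str):
    """Translate a transcribed natural-language message into an HVAC
    command, or return None if no trigger phrase is recognized."""
    text = natural_language_text.lower()
    for phrase, command in TRIGGER_PHRASES.items():
        if phrase in text:
            return command
    return None
```

In this sketch, voice recognition software would first transcribe the recorded or streamed voice message into text, which is then passed to `translate`.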
Abstract:
Surveillance systems and methods 230 can include detecting a number of interactions within a building 232, determining an event based on the number of interactions 234, and sending a message to a number of contacts relating to the event 236. The system can also comprise a computing device hub including instructions to: detect a number of interactions with a first area; receive a number of detected interactions from sensors within a second area; determine a response based on the number of detected interactions from the first and second areas, wherein the response includes altering a number of environmental settings; determine a number of contacts based on the response; and send a message to the number of contacts. The interactions are detected by sensors, which can range from temperature sensors to motion sensors. The contacts could be the emergency services in case of a break-in or fire, or even the homeowner. The message sent could be a recorded audio message. The change to the environmental settings could be to the temperature settings of the building or area.
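The interaction-to-event-to-contacts flow can be sketched as below. The event rules, sensor names, and contact lists are hypothetical assumptions for illustration only.

```python
def determine_event(interactions):
    """Classify an event from detected sensor interactions.
    Rules here are illustrative, not the patent's actual logic."""
    kinds = {i["sensor"] for i in interactions}
    if "smoke" in kinds or any(
        i["sensor"] == "temperature" and i["value"] > 60 for i in interactions
    ):
        return "fire"
    if "motion" in kinds and not any(
        i.get("authorized", False) for i in interactions if i["sensor"] == "motion"
    ):
        return "break_in"
    return "normal"

# Hypothetical mapping from event type to contacts to notify.
CONTACTS = {
    "fire": ["fire_department", "homeowner"],
    "break_in": ["police", "homeowner"],
    "normal": [],
}

def respond(interactions):
    """Determine the event and the contacts to message for it."""
    event = determine_event(interactions)
    return event, CONTACTS[event]
```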
Abstract:
Sound input received from microphones 118-1-M is compared to a spatial audio database 114 in order to discriminate a speech command 116-2 from background noise 116-1, e.g. according to a threshold, from which an instruction may be determined to a particular confidence level based on the likelihood that it matches a command according to an Automatic Speech Recognition engine (228, fig. 2). The array of microphones may employ a spatio-temporal filter and feedback to modify the beam angle and beam width, such that the database contains background noises collected from an area in a spatial format (i.e. the angular information 118-P).
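A minimal sketch of the threshold-based discrimination step, assuming the spatial database stores a per-angle background noise energy level; the angles, energies, and threshold value are illustrative assumptions.

```python
# Hypothetical spatial audio database: angle (degrees) -> background
# noise energy previously collected from that direction.
background_db = {0: 0.10, 45: 0.25, 90: 0.15}

def is_speech_candidate(angle_deg: int, energy: float,
                        threshold: float = 0.2) -> bool:
    """Treat a sound as a speech-command candidate when its energy
    exceeds the stored background level for its direction by more
    than the threshold margin."""
    noise_floor = background_db.get(angle_deg, 0.0)
    return energy - noise_floor > threshold
```

Sounds passing this gate would then be handed to the ASR engine, which assigns a confidence level to any matched command.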
Abstract:
A first mobile device includes a location processor, a communication processor, and a display, and a second mobile device includes a location processor and a communication processor. The first mobile device is configured to wirelessly communicate with the second mobile device, and the first mobile device is configured to display a superimposed icon representing a location of the second mobile device as viewed from the perspective of the first mobile device when the first mobile device is pointed in the direction of the second mobile device.
Abstract:
In a speech recognition system 110, each of an array of microphones (112, fig. 1A) captures a signal from a respective segregated area 122-N for separate recognition processing of a spoken command, which may further be separated from background noise. A beamforming algorithm may be used to spatially segregate, e.g., a room into different angular portions. The Automatic Speech Recognition (ASR) engine may be located remotely across a network (fig. 2), as may the Digital Signal Processor (DSP) itself (fig. 3).
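The spatial segregation by beamforming can be illustrated with a basic delay-and-sum beamformer for a uniform linear array, steered toward one angular sector at a time. The microphone spacing, sample rate, and circular-shift simplification are assumptions for this sketch, not details from the patent.

```python
import math

def delay_and_sum(signals, angle_deg, mic_spacing=0.05,
                  fs=16000, speed_of_sound=343.0):
    """Steer a uniform linear array toward angle_deg (degrees from
    broadside) by delaying and averaging the microphone signals.
    signals: list of equal-length sample lists, one per microphone.
    Uses a circular shift for simplicity; parameters are illustrative."""
    num_mics = len(signals)
    n = len(signals[0])
    out = [0.0] * n
    for m, sig in enumerate(signals):
        # Arrival-time delay at mic m relative to mic 0 for this angle.
        delay = m * mic_spacing * math.sin(math.radians(angle_deg)) / speed_of_sound
        shift = int(round(delay * fs))
        for i in range(n):
            out[i] += sig[(i + shift) % n]
    return [v / num_mics for v in out]
```

Running the same beamformer at several steering angles partitions the room into the angular portions described above, each fed to its own recognition pass.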
Abstract:
Voice commands received at a number of microphones 222-N are determined (via, e.g., acoustic voice models) to a calculated confidence level (e.g. a percentage likelihood of a match with a command vocabulary entry) at an Automatic Speech Recognition engine 232, from which feedback information (e.g. speaker location) is returned 236 to an adaptive beamformer 226 in order to modify the beam pattern (e.g. width and direction) of the microphone array. The ASR may reside on a remote server (432) in a cloud computing network (462), and residual echo suppression 228 and adaptive noise cancellation 230 may also be employed.
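The feedback loop from ASR to beamformer can be sketched as follows: when recognition confidence is high, the beam narrows toward the estimated speaker direction; when it is low, the beam widens to search. The field names, thresholds, and width limits are hypothetical.

```python
def adapt_beam(beam, asr_feedback, hi=0.8, lo=0.4):
    """Adjust the beam pattern (angle and width, in degrees) from ASR
    feedback. Thresholds and step sizes are illustrative assumptions."""
    direction = asr_feedback.get("speaker_angle", beam["angle"])
    confidence = asr_feedback["confidence"]
    new_beam = dict(beam, angle=direction)
    if confidence >= hi:
        # Confident match: focus more tightly on the speaker.
        new_beam["width"] = max(10, beam["width"] - 10)
    elif confidence <= lo:
        # Poor match: widen the beam to search for the speaker.
        new_beam["width"] = min(180, beam["width"] + 20)
    return new_beam
```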