Abstract:
Voice command recognition and natural language recognition are carried out using an accelerometer (14) that senses vibrations of one or more bones of a user and receives no audio input. Because word recognition relies solely on the accelerometer signal produced by bone conduction as the user speaks, no acoustic microphone is needed or used to collect data for word recognition. A housing (10) contains both the accelerometer (14) and a processor (16). The accelerometer is preferably a MEMS accelerometer capable of sensing the vibrations present in the bone of a user as the user speaks. A machine learning algorithm is applied to the collected data to correctly recognize words spoken by a person with significant difficulty producing audible speech.
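The abstract describes a pipeline of accelerometer sampling, feature extraction, and machine-learning classification; a minimal Python sketch of that flow under stated assumptions follows. The sampling rate, window length, spectral features, k-nearest-neighbours classifier, and the synthetic vibration frames are all illustrative stand-ins, not the patent's implementation.

```python
# Hypothetical sketch: classifying words from bone-conduction accelerometer
# frames only (no audio input). All names, parameters, and the synthetic
# signal below are illustrative assumptions.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

FS = 4000      # assumed accelerometer sampling rate (Hz)
FRAME = 1024   # samples per analysis window

def spectral_features(frame: np.ndarray) -> np.ndarray:
    """Log-magnitude spectrum of one vibration frame."""
    spectrum = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
    return np.log1p(spectrum)

rng = np.random.default_rng(0)

def fake_frame(pitch_hz: float) -> np.ndarray:
    """Synthetic stand-in for a recorded bone-vibration frame."""
    t = np.arange(FRAME) / FS
    return np.sin(2 * np.pi * pitch_hz * t) + 0.1 * rng.standard_normal(FRAME)

# Train on a few labelled frames per word, then classify a new frame.
words = {"yes": 150.0, "no": 220.0}
X, y = [], []
for word, hz in words.items():
    for _ in range(20):
        X.append(spectral_features(fake_frame(hz)))
        y.append(word)

clf = KNeighborsClassifier(n_neighbors=3).fit(X, y)
print(clf.predict([spectral_features(fake_frame(150.0))]))  # -> ['yes']
```

In practice the classifier would be trained on frames captured from the MEMS accelerometer in the housing (10) and run on the processor (16); the nearest-neighbours model here merely stands in for whatever machine learning algorithm the patent contemplates.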
Abstract:
A capacitive touch screen of, e.g., a mobile communications device such as a smartphone or tablet is operated by producing a capacitance map (CM) of capacitance values for the screen (S), wherein the capacitance values are indicative of locations of the screen exposed to touch by a user, and by identifying those locations by comparing the capacitance values against settings (2041 to 2045) of sensing thresholds. Descriptor processing (200) is applied to the capacitance map (CM) to extract a set of descriptors indicative of said screen (S) being in one of a plurality of different operating conditions. A set of rules is applied (202) to these descriptors to identify one of the operating conditions (Untouched water, Wet fingers float, Wet fingers grip, Dry fingers float, Dry fingers grip, Dry stylus grip, Dry stylus float), and the setting (2041 to 2045) of sensing thresholds is selected as a function of the operating condition thus identified.
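The described flow (capacitance map, descriptor extraction, rule-based condition identification, threshold selection) can be sketched as follows. The descriptor choices, rule cutoffs, threshold values, and the reduced subset of operating conditions handled here are illustrative assumptions; the abstract names the blocks (200, 202, 2041 to 2045) but not their internals.

```python
# Hypothetical sketch of the descriptor-plus-rules flow. Only a few of the
# listed operating conditions are covered, for brevity; all numeric values
# are assumptions.
import numpy as np

THRESHOLD_SETTINGS = {        # stand-ins for sensing-threshold settings 2041..2045
    "untouched_water":   80,
    "wet_fingers_float": 120,
    "wet_fingers_grip":  100,
    "dry_fingers_float": 60,
    "dry_fingers_grip":  50,
}

def descriptors(cm: np.ndarray) -> dict:
    """Block 200: reduce the capacitance map CM to a few scalar descriptors."""
    return {
        "max": float(cm.max()),
        "negative_area": int((cm < 0).sum()),  # floating water often reads negative
        "active_area": int((cm > 30).sum()),
    }

def classify(d: dict) -> str:
    """Block 202: rule set mapping descriptors to an operating condition."""
    if d["negative_area"] > 50 and d["max"] < 40:
        return "untouched_water"
    if d["negative_area"] > 50:
        return "wet_fingers_grip" if d["active_area"] > 200 else "wet_fingers_float"
    return "dry_fingers_grip" if d["active_area"] > 200 else "dry_fingers_float"

cm = np.zeros((16, 28))        # one frame of the capacitance map (CM)
cm[4:7, 5:8] = 90              # simulated dry finger touch on screen (S)
condition = classify(descriptors(cm))
threshold = THRESHOLD_SETTINGS[condition]
print(condition, threshold)    # -> dry_fingers_float 60
```

The selected threshold would then be fed back into the touch-location comparison, so that wet, dry, grip, and float conditions each use a sensing threshold appropriate to the screen's current state.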