Abstract:
An ear-worn electronic device comprises at least one microphone configured to sense sound in an acoustic environment, an acoustic transducer, and a non-volatile memory configured to store a plurality of parameter value sets, each of the parameter value sets associated with a different acoustic environment. A control input is configured to receive a control input signal produced by at least one of a user-actuatable control of the ear-worn electronic device and an external electronic device communicatively coupled to the ear-worn electronic device in response to a user action. A processor is operably coupled to the microphone, the acoustic transducer, the non-volatile memory, and the control input. The processor is configured to classify the acoustic environment using the sensed sound and apply, in response to the control input signal, one of the parameter value sets appropriate for the classification.
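The selection flow this abstract describes — continuous classification of the acoustic environment, with a stored parameter value set applied only when the user-initiated control input arrives — could be sketched as follows. All names and values (`ENV_PARAMS`, `classify_environment`, `HearingDevice`, the toy features) are illustrative assumptions, not from the patent.

```python
# Parameter value sets as stored in non-volatile memory,
# one per acoustic environment class (values are made up).
ENV_PARAMS = {
    "quiet":  {"gain_db": 10, "noise_reduction": 0.1},
    "speech": {"gain_db": 15, "noise_reduction": 0.3},
    "noisy":  {"gain_db": 12, "noise_reduction": 0.8},
}

def classify_environment(rms_level: float, speech_ratio: float) -> str:
    """Toy stand-in for the device's environment classifier."""
    if rms_level < 0.2:
        return "quiet"
    return "speech" if speech_ratio > 0.5 else "noisy"

class HearingDevice:
    def __init__(self):
        self.classification = "quiet"
        self.active_params = ENV_PARAMS["quiet"]

    def on_sound(self, rms_level: float, speech_ratio: float) -> None:
        # Classification runs continuously on the sensed sound...
        self.classification = classify_environment(rms_level, speech_ratio)

    def on_control_input(self) -> dict:
        # ...but a new parameter value set is applied only in response
        # to the control input signal produced by the user action.
        self.active_params = ENV_PARAMS[self.classification]
        return self.active_params
```

The point of the split between `on_sound` and `on_control_input` is that reclassification alone does not change what the user hears; the user's action gates when the appropriate set takes effect.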
Abstract:
One or more processing circuits recognize, in an audio signal, spoken content in a user's own speech signal using speech recognition and natural language understanding. The spoken content describes a listening difficulty of the user. Based on the spoken content, the one or more processing circuits generate one or more actions for hearing devices and feedback for the user. The one or more actions attempt to resolve the listening difficulty. Additionally, the one or more processing circuits convert the feedback to verbal feedback using speech synthesis and transmit the one or more actions and the verbal feedback to the hearing devices via a body-worn device. The hearing devices are configured to perform the one or more actions and play back the verbal feedback to the user.
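The pipeline in this abstract (recognize spoken content, map a listening difficulty to actions and feedback, synthesize the feedback, transmit both) can be sketched end to end with each stage stubbed out. The real system would use actual speech recognition, NLU, and speech synthesis engines; every name and mapping below is a hypothetical placeholder so only the data flow is shown.

```python
# Toy mapping from a recognized listening difficulty to
# (actions for the hearing devices, feedback text for the user).
DIFFICULTY_ACTIONS = {
    "too noisy": (["increase_noise_reduction"], "Reducing background noise."),
    "too quiet": (["increase_gain"], "Turning the volume up."),
}

def understand(spoken_content: str) -> str:
    # Stand-in for NLU: match a known difficulty phrase in the utterance.
    for phrase in DIFFICULTY_ACTIONS:
        if phrase in spoken_content.lower():
            return phrase
    return "unknown"

def synthesize(text: str) -> bytes:
    # Stand-in for speech synthesis; a real system would return audio.
    return text.encode("utf-8")

def handle_own_speech(spoken_content: str):
    """Generate actions and verbal feedback from the user's own speech."""
    difficulty = understand(spoken_content)
    actions, feedback = DIFFICULTY_ACTIONS.get(
        difficulty, ([], "Sorry, I did not understand."))
    verbal_feedback = synthesize(feedback)
    # A body-worn device would transmit both to the hearing devices here.
    return actions, verbal_feedback
```

For example, `handle_own_speech("It is too noisy in here")` would yield the noise-reduction action together with the synthesized confirmation to be played back to the user.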
Abstract:
The signal processing functions of a hearing aid as described above necessarily cause some delay between the time the audio signal is received by the microphone or wireless receiver and the time that the audio is actually produced by the output transducer. In some situations, signal processing incorporating longer delays may be better able to improve signal-to-noise ratio (SNR) or other functional parameters for a hearing aid wearer, but a balance should be struck between these positive effects of delay and other negative effects. The techniques described herein address the problem of balancing the positive and negative effects of delay.
Abstract:
Disclosed herein, among other things, are apparatus and methods for annoyance perception and modeling for hearing-impaired listeners. One aspect of the present subject matter includes a method for improving noise cancellation for a wearer of a hearing assistance device having an adaptive filter. In various embodiments, the method includes calculating an annoyance measure or other perceptual measure based on a residual signal in an ear of the wearer, the wearer's hearing loss, and the wearer's preference. A spectral weighting function is estimated based on a ratio of the annoyance measure or other perceptual measure and spectral energy. The spectral weighting function is incorporated into a cost function for an update of the adaptive filter. The method includes minimizing the annoyance or other perceptual measure based cost function to achieve perceptually motivated adaptive noise cancellation, in various embodiments.
Abstract:
An ear-worn electronic device comprises a microphone configured to sense sound in an acoustic environment, an acoustic transducer, and a non-volatile memory configured to store a plurality of parameter value sets, each of the parameter value sets associated with a different acoustic environment. A control input is configured to receive a control input signal produced by at least one of a user-actuatable control of the ear-worn electronic device and an external electronic device communicatively coupled to the ear-worn electronic device in response to a user action. A processor is configured to classify the acoustic environment using the sensed sound and determine a listening intent preference of the user. The processor is configured to apply, in response to the control input signal, one of the parameter value sets appropriate for the classification and the listening intent preference of the user.
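The addition in this abstract over plain environment classification is that the applied parameter value set is keyed on both the classification and the user's listening intent preference. A sketch of that two-key lookup, with all class names, intent labels, and values as illustrative assumptions:

```python
# Stored parameter value sets, keyed by (environment, listening intent).
# Same environment, different intent -> different processing.
PARAM_SETS = {
    ("noisy", "focus_speech"): {"beamformer": "narrow", "noise_reduction": 0.9},
    ("noisy", "awareness"):    {"beamformer": "omni",   "noise_reduction": 0.4},
    ("quiet", "focus_speech"): {"beamformer": "omni",   "noise_reduction": 0.1},
    ("quiet", "awareness"):    {"beamformer": "omni",   "noise_reduction": 0.0},
}

def select_params(environment: str, intent: str) -> dict:
    """Pick the stored set appropriate for both the classification
    and the listening intent preference of the user."""
    return PARAM_SETS[(environment, intent)]
```

In a noisy environment, for instance, an intent to focus on speech selects narrow beamforming with strong noise reduction, while an awareness intent keeps an omnidirectional pickup.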
Abstract:
Various embodiments of a monitoring system are disclosed. The monitoring system includes first and second sensors each adapted to detect a characteristic of a subject of the system and generate data representative of the characteristic of the subject, and a controller operatively connected to the first and second sensors. The controller is adapted to receive data representative of first and second characteristics of the subject from the first and second sensors, and determine statistics for first and second condition substates of the subject over a monitoring time period based upon the data received from the first and second sensors. The controller is further adapted to compare the statistics of the first and second condition substates, confirm the first condition substate if it is substantially similar to the second condition substate, and determine the statistics of an overall condition state of the subject based upon the confirmed first condition substate.
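The confirmation logic above can be sketched with a hypothetical reading in which each sensor yields per-substate statistics (say, minutes spent in each substate over the monitoring period): a substate statistic from the first sensor is confirmed only if the second sensor's statistic for it is substantially similar, and only confirmed substates feed the overall condition state. The tolerance and fusion rule are assumptions for illustration.

```python
def confirm_substates(stats_a, stats_b, tolerance=0.2):
    """Keep substates whose two sensor estimates agree within a
    relative `tolerance`, fusing the agreeing estimates."""
    confirmed = {}
    for substate, a in stats_a.items():
        b = stats_b.get(substate)
        if b is not None and abs(a - b) <= tolerance * max(a, b):
            confirmed[substate] = (a + b) / 2.0
    return confirmed

def overall_state(confirmed):
    # Overall condition statistic from confirmed substates only.
    return sum(confirmed.values())
```

A substate the two sensors disagree on (here, one whose estimates differ by more than 20%) is simply excluded, so a single faulty or noisy sensor cannot skew the overall condition state on its own.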
Abstract:
A set of one or more processing circuits obtains eye movement-related eardrum oscillation (EMREO)-related measurements from one or more EMREO sensors of a hearing instrument. The EMREO sensors are located in an ear canal of a user of the hearing instrument and are configured to detect signals indicative of EMREOs of an eardrum of the user of the hearing instrument. The one or more processing circuits may perform an action based on the EMREO-related measurements.
Abstract:
A hearing aid includes a sound classification module to classify environmental sound sensed by a microphone. The sound classification module executes an advanced sound classification algorithm. The hearing aid then processes the sound according to the classification.