Abstract:
A system comprises an ear-worn electronic device configured to be worn by a wearer. The ear-worn electronic device comprises a processor and memory coupled to the processor. The memory is configured to store an annoying sound dictionary representative of a plurality of annoying sounds pre-identified by the wearer. A microphone is coupled to the processor and configured to monitor an acoustic environment of the wearer. A speaker or a receiver is coupled to the processor. The processor is configured to identify different background noises present in the acoustic environment, determine which of the background noises correspond to one or more of the plurality of annoying sounds, and attenuate the one or more annoying sounds in an output signal provided to the speaker or receiver.
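The dictionary-matching and attenuation behavior described in this abstract can be illustrated in outline. Below is a minimal sketch, assuming a simple spectral-template dictionary and frame-wise matching; the class and parameter names (AnnoyingSoundDictionary, match_threshold, gain_db) are hypothetical, and the abstract does not specify how the device actually identifies or attenuates sounds.

```python
# Illustrative sketch (not the patented implementation): match background noise
# in short audio frames against a wearer-defined "annoying sound" dictionary of
# spectral templates, then attenuate frames that match. Names are hypothetical.
import numpy as np

class AnnoyingSoundDictionary:
    def __init__(self):
        self.templates = {}  # label -> normalized magnitude-spectrum template

    def add(self, label, example_frame):
        """Store a wearer-identified annoying sound as a spectral template."""
        spec = np.abs(np.fft.rfft(example_frame * np.hanning(len(example_frame))))
        self.templates[label] = spec / (np.linalg.norm(spec) + 1e-12)

    def best_match(self, frame):
        """Return the best-matching label and its cosine similarity score."""
        spec = np.abs(np.fft.rfft(frame * np.hanning(len(frame))))
        spec /= (np.linalg.norm(spec) + 1e-12)
        scores = {lbl: float(spec @ tmpl) for lbl, tmpl in self.templates.items()}
        if not scores:
            return None, 0.0
        label = max(scores, key=scores.get)
        return label, scores[label]

def attenuate_annoying(frames, dictionary, match_threshold=0.85, gain_db=-18.0):
    """Attenuate frames whose spectrum matches a pre-identified annoying sound."""
    gain = 10.0 ** (gain_db / 20.0)
    out = []
    for frame in frames:
        _, score = dictionary.best_match(frame)
        out.append(frame * gain if score >= match_threshold else frame)
    return out
```

In practice the device would more likely classify sounds with a trained model and attenuate only the matching spectral components of the output signal rather than scaling the whole frame.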
Abstract:
A hearing system includes one or more hearing devices configured to be worn by a user. Each hearing device includes a signal source that provides an input electrical signal representing a sound of a virtual source. A filter implements a head-related transfer function (HRTF) to add spatialization cues associated with a virtual location of the virtual source to the input electrical signal and outputs a filtered electrical signal that includes the spatialization cues. A speaker of the hearing device converts the filtered electrical signal into an acoustic signal and plays the acoustic signal to the user. The system includes motion tracking circuitry that tracks motion of the user as the user moves toward a perceived location, the location at which the user perceives the virtual source to be. HRTF individualization circuitry determines a difference between the virtual location and the perceived location in response to the motion of the user and individualizes the HRTF based on the difference.
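The individualization loop can be illustrated with a highly simplified azimuth-only model. The sketch below substitutes crude ITD/ILD cues for measured HRTF filter pairs and reduces "individualization" to correcting an azimuth offset by the difference between the virtual angle and the angle toward which the user actually turned; all names, constants, and the spherical-head approximation are illustrative assumptions, not the patented method.

```python
# Minimal sketch, assuming an azimuth-only HRTF model (ITD + ILD cues only).
import numpy as np

FS = 48000
HEAD_RADIUS_M = 0.0875      # nominal head radius for the spherical-head model
SPEED_OF_SOUND = 343.0

class SimpleHRTF:
    def __init__(self):
        self.azimuth_offset_deg = 0.0   # individualized correction term

    def render(self, mono, virtual_azimuth_deg):
        """Return (left, right) signals with crude ITD/ILD cues for the azimuth."""
        az = np.radians(virtual_azimuth_deg + self.azimuth_offset_deg)
        itd_s = HEAD_RADIUS_M / SPEED_OF_SOUND * (az + np.sin(az))  # Woodworth ITD
        delay = int(round(abs(itd_s) * FS))
        ild = 10.0 ** (-6.0 * abs(np.sin(az)) / 20.0)               # ~6 dB max ILD
        near = mono
        far = np.concatenate([np.zeros(delay), mono])[: len(mono)] * ild
        return (far, near) if az > 0 else (near, far)   # positive azimuth = right

    def individualize(self, virtual_azimuth_deg, perceived_azimuth_deg, rate=0.5):
        """Nudge the azimuth mapping toward where the user actually localized the source."""
        error = virtual_azimuth_deg - perceived_azimuth_deg
        self.azimuth_offset_deg += rate * error

# Usage: render a cue at +30 degrees; motion tracking reports the user turned to +20.
hrtf = SimpleHRTF()
tone = np.sin(2 * np.pi * 500 * np.arange(FS) / FS)
left, right = hrtf.render(tone, virtual_azimuth_deg=30.0)
hrtf.individualize(virtual_azimuth_deg=30.0, perceived_azimuth_deg=20.0)
```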
Abstract:
A hearing device comprises a processor configured to generate a virtual auditory display comprising a sound field, a plurality of disparate sound field zones, and a plurality of quiet zones that provide acoustic contrast between the sound field zones. The sound field zones and the quiet zones remain positionally stationary within the sound field. One or more sensors are configured to sense a plurality of inputs from a wearer of the hearing device. The processor is configured to facilitate movement of the wearer within the sound field in response to a navigation input received from the one or more sensors. The processor is also configured to select one of the sound field zones, in response to a selection input received from the one or more sensors, for playback via a speaker or for actuation of a hearing device function.
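The zone bookkeeping behind such a display can be sketched without any audio rendering: a stationary ring of sound field zones separated by quiet gaps, a cursor that moves through the field in response to navigation inputs, and a selection input that triggers the selected zone's action. The Zone and VirtualAuditoryDisplay names and the callback-based actions below are hypothetical.

```python
# Illustrative sketch of zone bookkeeping only (no audio rendering).
from dataclasses import dataclass
from typing import Callable, Optional

@dataclass
class Zone:
    label: str
    start_deg: float      # zones remain positionally stationary within the field
    end_deg: float
    on_select: Optional[Callable[[], None]] = None   # playback or device function

class VirtualAuditoryDisplay:
    def __init__(self, zones, quiet_gap_deg=10.0):
        self.zones = zones
        self.quiet_gap_deg = quiet_gap_deg   # acoustic contrast between zones
        self.cursor_deg = 0.0                # wearer's position within the field

    def navigate(self, delta_deg):
        """Move the wearer within the sound field (e.g. from a head-turn input)."""
        self.cursor_deg = (self.cursor_deg + delta_deg) % 360.0

    def current_zone(self):
        for z in self.zones:
            if z.start_deg + self.quiet_gap_deg <= self.cursor_deg <= z.end_deg - self.quiet_gap_deg:
                return z
        return None   # cursor is inside a quiet zone

    def select(self):
        """Act on a selection input (e.g. a tap or nod) for the current zone."""
        zone = self.current_zone()
        if zone and zone.on_select:
            zone.on_select()
        return zone

# Usage: three stationary zones; navigate by 100 degrees, then select.
vad = VirtualAuditoryDisplay([
    Zone("voicemail", 0, 120, on_select=lambda: print("play voicemail")),
    Zone("volume up", 120, 240, on_select=lambda: print("raise volume")),
    Zone("noise program", 240, 360, on_select=lambda: print("switch program")),
])
vad.navigate(100.0)
vad.select()   # prints "play voicemail"
```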
Abstract:
The present subject matter can improve the robustness of acoustic feedback cancellation in the presence of strong acoustic disturbances. In various embodiments, an optimization criterion determined to enhance the robustness of an adaptive feedback canceller in an audio device against disturbances in an incoming audio signal can be applied such that the adaptive feedback canceller remains in a converged state in the presence of the disturbances.
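One way such a robust criterion might look is sketched below: an NLMS feedback canceller whose error term is passed through a limiter, so that large error spikes caused by incoming disturbances barely move the filter coefficients and the canceller effectively remains converged. The abstract does not state the actual optimization criterion; the Huber-type limiting, the class name, and all parameters here are assumptions.

```python
# Minimal sketch: NLMS adaptive feedback canceller with a Huber-type limited
# error in its update, as one possible robustness-enhancing criterion.
import numpy as np

class RobustFeedbackCanceller:
    def __init__(self, taps=64, mu=0.01, huber_delta=0.05):
        self.w = np.zeros(taps)       # estimate of the acoustic feedback path
        self.mu = mu                  # NLMS step size
        self.delta = huber_delta      # error magnitude beyond which updates are limited

    def process(self, mic_sample, receiver_history):
        """receiver_history: the most recent `taps` receiver samples, newest first."""
        x = np.asarray(receiver_history, dtype=float)
        feedback_estimate = float(self.w @ x)
        e = mic_sample - feedback_estimate        # feedback-compensated signal

        # Huber-type limiting: proportional for small errors, clipped for large
        # ones, so a strong disturbance in the incoming signal cannot drag the
        # filter away from its converged state.
        e_lim = float(np.clip(e, -self.delta, self.delta))

        norm = float(x @ x) + 1e-12
        self.w += self.mu * e_lim * x / norm      # robust NLMS update
        return e                                  # pass the unclipped signal onward
```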