Neural network model for cochlear mechanics and processing
Abstract:
A method and hearing device (100) for emulating cochlear processing of auditory stimuli are disclosed, in which a multilayer convolutional encoder-decoder neural network (10) sequentially compresses and then decompresses a time-domain input comprising a plurality of samples. At least one nonlinear unit applies a nonlinear transformation that mimics the level-dependent cochlear filter tuning associated with cochlear mechanics and outer hair cells; further described modules cover inner-hair-cell and auditory-nerve-fiber processing. A plurality of shortcut connections (15) directly forwards inputs between convolutional layers of the encoder (11) and the decoder (12). An output layer (14) generates, for each input to the neural network, N output sequences of cochlear response parameters corresponding to N emulated cochlear filters with N different center frequencies, spanning the cochlear tonotopic place-frequency map. A transducer (105) of the hearing device converts the output sequences generated by the neural network (10) into auditory-stimulus-dependent audible time-varying pressure signals, or into basilar-membrane vibrations, inner-hair-cell potentials, auditory-nerve firing patterns or population codings thereof, for auditory or augmented-hearing applications.
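The architecture described above (a strided convolutional encoder, a compressive nonlinearity, a decoder with shortcut connections, and an N-channel output layer) can be sketched as follows. This is a minimal NumPy illustration of the general encoder-decoder-with-skip pattern, not the patented implementation; all kernel sizes, weights, and the choice of tanh as the compressive nonlinearity are illustrative assumptions.

```python
import numpy as np

def conv1d(x, w, stride=1):
    """Valid 1-D convolution of signal x with kernel w (illustrative helper)."""
    k = len(w)
    out_len = (len(x) - k) // stride + 1
    return np.array([np.dot(x[i * stride:i * stride + k], w)
                     for i in range(out_len)])

def upsample(x, factor=2):
    """Nearest-neighbour upsampling, a simple stand-in for a transposed convolution."""
    return np.repeat(x, factor)

rng = np.random.default_rng(0)
T, K, N = 64, 8, 4          # frame length, kernel size, number of cochlear channels (assumed)

x = rng.standard_normal(T)  # time-domain audio frame

# Encoder: a strided convolution compresses the input; tanh stands in for the
# level-dependent (compressive) cochlear nonlinearity of the nonlinear unit.
w_enc = rng.standard_normal(K) * 0.1
h = np.tanh(conv1d(x, w_enc, stride=2))

# Decoder: upsample the compressed representation back toward the input rate.
w_dec = rng.standard_normal(K) * 0.1
d = conv1d(upsample(h, 2), w_dec)

# Shortcut connection: forward (a cropped copy of) the input directly to the
# decoder output, analogous to the encoder-to-decoder skip paths (15).
d = d + x[:len(d)]

# Output layer: N parallel filters, one per emulated cochlear channel /
# center frequency along the tonotopic map.
w_out = rng.standard_normal((N, K)) * 0.1
outputs = np.stack([conv1d(d, w_out[n]) for n in range(N)])
print(outputs.shape)        # one response sequence per emulated cochlear filter
```

A trained version of such a network would learn the encoder, decoder, and per-channel output kernels from cochlear-model or physiological data; here random weights only demonstrate the data flow and shapes.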