-
Publication Number: US12272370B2
Publication Date: 2025-04-08
Application Number: US18376438
Filing Date: 2023-10-03
Applicant: Apple Inc.
Inventor: Carlos M. Avendano , John Woodruff , Jonathan Huang , Mehrez Souden , Andreas Koutrouvelis
IPC: H04R29/00 , G06N20/00 , G10L21/0232 , G10L21/028 , H04R1/10
Abstract: Implementations of the subject technology provide systems and methods for providing audio source separation for audio input, such as for audio devices having limited power and/or computing resources. The subject technology may allow an audio device to leverage processing and/or power resources of a companion device that is communicatively coupled to the audio device. The companion device may identify a noise condition of the audio device, select a source separation model based on the noise condition, and provide the source separation model to the audio device. In this way, the audio device can provide audio source separation functionality using a relatively small footprint source separation model that is specific to the noise condition in which the audio device is operated.
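The abstract above describes a companion device that classifies the ambient noise condition and pushes a matching compact source-separation model to the audio device. The sketch below illustrates that selection step only; the noise-classifier heuristic, registry keys, and function names are assumptions for illustration, not Apple's implementation.

```python
# Hypothetical sketch: companion-side selection of a compact source-separation
# model keyed on the audio device's current noise condition.
import numpy as np

MODEL_REGISTRY = {
    "babble": "ss_model_babble_v1.bin",
    "wind":   "ss_model_wind_v1.bin",
    "quiet":  "ss_model_generic_v1.bin",
}

def classify_noise_condition(noise_frame: np.ndarray, sample_rate: int = 16000) -> str:
    """Crude spectral heuristic standing in for a learned noise classifier."""
    spectrum = np.abs(np.fft.rfft(noise_frame))
    freqs = np.fft.rfftfreq(len(noise_frame), d=1.0 / sample_rate)
    total = spectrum.sum() + 1e-9
    if total < 1.0:
        return "quiet"
    low_ratio = spectrum[freqs < 300].sum() / total
    return "wind" if low_ratio > 0.6 else "babble"

def select_model_for_device(noise_frame: np.ndarray) -> str:
    """Companion device picks the small-footprint model to send to the earbuds."""
    condition = classify_noise_condition(noise_frame)
    return MODEL_REGISTRY[condition]

# Example: a noisy frame captured by the audio device and relayed to the companion.
rng = np.random.default_rng(0)
frame = rng.normal(scale=0.1, size=16000)
print(select_model_for_device(frame))
```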
-
Publication Number: US11490218B1
Publication Date: 2022-11-01
Application Number: US17134097
Filing Date: 2020-12-24
Applicant: Apple Inc.
Inventor: Symeon Delikaris Manias , Mehrez Souden
Abstract: A device for reproducing spatial audio using a machine learning model may include at least one processor configured to receive multiple audio signals corresponding to a sound scene captured by respective microphones of a device. The at least one processor may be further configured to provide the multiple audio signals to a machine learning model, the machine learning model having been trained based at least in part on a target rendering configuration. The at least one processor may be further configured to provide, responsive to providing the multiple audio signals to the machine learning model, multichannel audio signals that comprise a spatial reproduction of the sound scene in accordance with the target rendering configuration.
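As a rough illustration of the inference path described above, the sketch below maps N microphone signals to an M-channel target rendering. A fixed mixing matrix stands in for the trained machine learning model; all shapes, channel counts, and names are assumptions.

```python
# Illustrative only: a trained network maps N microphone signals to an M-channel
# target rendering (e.g. 5.1). A fixed matrix stands in for the learned model.
import numpy as np

def render_spatial_audio(mic_signals: np.ndarray, mixing_model: np.ndarray) -> np.ndarray:
    """mic_signals: (num_mics, num_samples); mixing_model: (num_out_channels, num_mics)."""
    return mixing_model @ mic_signals  # (num_out_channels, num_samples)

num_mics, num_samples, num_out = 4, 48000, 6   # e.g. 4-mic capture to a 5.1 layout
rng = np.random.default_rng(1)
mics = rng.normal(size=(num_mics, num_samples))
model = rng.normal(scale=0.5, size=(num_out, num_mics))  # placeholder for trained weights
out = render_spatial_audio(mics, model)
print(out.shape)  # (6, 48000)
```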
-
Publication Number: US12141347B1
Publication Date: 2024-11-12
Application Number: US18055600
Filing Date: 2022-11-15
Applicant: Apple Inc.
Inventor: Mehrez Souden , Symeon Delikaris Manias , Ante Jukic , John Woodruff , Joshua D. Atkins
Abstract: An audio processing device may generate a plurality of microphone signals from a plurality of microphones of the audio processing device. The audio processing device may determine a gaze of a user who is wearing a playback device that is separate from the audio processing device, the gaze of the user being determined relative to the audio processing device. The audio processing device may extract speech that correlates to the gaze of the user, from the plurality of microphone signals of the audio processing device by applying the plurality of microphone signals of the audio processing device and the gaze of the user to a machine learning model. The extracted speech may be played to the user through the playback device.
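To illustrate how a gaze direction can steer speech extraction from a microphone array, the sketch below substitutes a plain delay-and-sum beamformer aimed at the gaze azimuth for the learned model named in the abstract; the array geometry, sample rate, and function names are assumptions.

```python
# Sketch under stated assumptions: a delay-and-sum beamformer steered toward the
# user's gaze stands in for the patent's machine learning model.
import numpy as np

SPEED_OF_SOUND = 343.0  # m/s

def steer_to_gaze(mic_signals, mic_positions, gaze_azimuth_rad, sample_rate=16000):
    """mic_signals: (M, T); mic_positions: (M, 2) in metres; returns (T,) enhanced signal."""
    direction = np.array([np.cos(gaze_azimuth_rad), np.sin(gaze_azimuth_rad)])
    delays = mic_positions @ direction / SPEED_OF_SOUND          # seconds, per mic
    T = mic_signals.shape[1]
    freqs = np.fft.rfftfreq(T, d=1.0 / sample_rate)
    spectra = np.fft.rfft(mic_signals, axis=1)
    # Delay each mic so wavefronts from the gaze direction line up, then average.
    aligned = spectra * np.exp(-2j * np.pi * freqs * delays[:, None])
    return np.fft.irfft(aligned.mean(axis=0), n=T)

# Example: 3-mic array, user gazing 30 degrees to the left of the device axis.
rng = np.random.default_rng(2)
mics = rng.normal(size=(3, 16000))
positions = np.array([[0.0, -0.05], [0.0, 0.0], [0.0, 0.05]])
enhanced = steer_to_gaze(mics, positions, np.deg2rad(30.0))
print(enhanced.shape)
```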
-
Publication Number: US12010490B1
Publication Date: 2024-06-11
Application Number: US18149659
Filing Date: 2023-01-03
Applicant: Apple Inc.
Inventor: Symeon Delikaris Manias , Mehrez Souden , Ante Jukic , Matthew S. Connolly , Sabine Webel , Ronald J. Guglielmone, Jr.
Abstract: An audio renderer can have a machine learning model that jointly processes audio and visual information of an audiovisual recording. The audio renderer can generate output audio channels. Sounds captured in the audiovisual recording and present in the output audio channels are spatially mapped based on the joint processing of the audio and visual information by the machine learning model. Other aspects are described.
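As a loose illustration of spatially mapping a sound using visual information, the sketch below constant-power pans mono audio according to a visually estimated horizontal source position. The patent's joint audio-visual model is replaced by this hand-written panner, and every name and constant here is hypothetical.

```python
# Illustrative stand-in for a joint audio-visual renderer: pan a captured sound
# to the horizontal position where vision localized its source in the frame.
import numpy as np

def pan_from_visual_position(mono_audio: np.ndarray, source_x: float) -> np.ndarray:
    """source_x: 0.0 = far left of the video frame, 1.0 = far right."""
    angle = source_x * (np.pi / 2.0)            # map to a 90-degree pan arc
    left, right = np.cos(angle), np.sin(angle)  # constant-power pan law
    return np.stack([left * mono_audio, right * mono_audio])  # (2, T) stereo

rng = np.random.default_rng(3)
audio = rng.normal(size=8000)
stereo = pan_from_visual_position(audio, source_x=0.8)   # source near the right edge
print(stereo.shape)
```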
-
Publication Number: US11849291B2
Publication Date: 2023-12-19
Application Number: US17322539
Filing Date: 2021-05-17
Applicant: Apple Inc.
Inventor: Mehrez Souden , Jason Wung , Ante Jukic , Ramin Pishehvar , Joshua D. Atkins
IPC: H04R3/04 , H04R3/00 , H04R5/04 , G10L25/78 , G10L21/0216 , G10L21/0208 , H04M9/08
CPC classification number: H04R3/04 , G10L21/0216 , G10L25/78 , H04R3/005 , H04R5/04 , G10L2021/02082 , G10L2021/02166 , H04M9/082
Abstract: A plurality of microphone signals can be captured with a plurality of microphones of the device. One or more echo dominant audio signals can be determined based on a pick-up beam directed towards one or more speakers of a playback device. Sound that is emitted from the one or more speakers and sensed by the plurality of microphones can be removed from the plurality of microphone signals, by using the one or more echo dominant audio signals as a reference, resulting in clean audio.
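The abstract describes using an echo-dominant, beam-derived signal as a reference for removing loudspeaker sound from the microphones. The sketch below illustrates reference-based cancellation with a textbook NLMS adaptive filter, used here as a stand-in rather than the patented method; filter length, step size, and the simulated echo path are arbitrary.

```python
# Hedged sketch: cancel the component of the mic signal that correlates with an
# echo-dominant reference, using a standard NLMS adaptive filter.
import numpy as np

def nlms_echo_cancel(mic, reference, filter_len=128, mu=0.5, eps=1e-6):
    """Subtract the reference-correlated (echo) component from the mic signal."""
    w = np.zeros(filter_len)
    out = np.zeros_like(mic)
    for n in range(len(mic)):
        x = reference[max(0, n - filter_len + 1): n + 1][::-1]  # most recent sample first
        x = np.pad(x, (0, filter_len - len(x)))
        echo_est = w @ x
        e = mic[n] - echo_est            # error = mic minus estimated echo
        out[n] = e
        w += mu * e * x / (x @ x + eps)  # normalized LMS update
    return out

rng = np.random.default_rng(4)
ref = rng.normal(size=4000)                              # echo-dominant pick-up-beam signal
near = 0.1 * rng.normal(size=4000)                       # local (desired) sound
mic = near + np.convolve(ref, [0.6, 0.3, 0.1])[:4000]    # mic hears echo plus local sound
clean = nlms_echo_cancel(mic, ref)
print(round(float(np.std(mic)), 3), round(float(np.std(clean)), 3))  # residual should shrink
```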
-
Publication Number: US11532306B2
Publication Date: 2022-12-20
Application Number: US17111132
Filing Date: 2020-12-03
Applicant: Apple Inc.
Inventor: Yoon Kim , John Bridle , Joshua D. Atkins , Feipeng Li , Mehrez Souden
IPC: G10L15/22 , H04R1/40 , G10L15/08 , G10L15/04 , H04R3/00 , G10L15/30 , G10L15/18 , G10L15/28 , G10L21/0216 , G10L25/51 , H04R27/00
Abstract: Systems and processes for operating an intelligent automated assistant are provided. In accordance with one example, a method includes, at an electronic device with one or more processors, memory, and a plurality of microphones, sampling, at each of the plurality of microphones of the electronic device, an audio signal to obtain a plurality of audio signals; processing the plurality of audio signals to obtain a plurality of audio streams; and determining, based on the plurality of audio streams, whether any of the plurality of audio signals corresponds to a spoken trigger. The method further includes, in accordance with a determination that the plurality of audio signals corresponds to the spoken trigger, initiating a session of the digital assistant; and in accordance with a determination that the plurality of audio signals does not correspond to the spoken trigger, foregoing initiating a session of the digital assistant.
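A minimal sketch of the decision logic described above: score each processed audio stream for the spoken trigger and start an assistant session only if some stream matches. The scorer below is a hypothetical energy-based placeholder, not the patent's detector, and the threshold is an assumption.

```python
# Placeholder sketch of multi-stream trigger gating for a digital assistant.
import numpy as np

TRIGGER_THRESHOLD = 0.5

def score_trigger(audio_stream: np.ndarray) -> float:
    """Placeholder: pseudo-probability that the stream contains the trigger phrase."""
    rms = np.sqrt(np.mean(audio_stream ** 2))
    return float(np.clip(rms * 5.0, 0.0, 1.0))

def should_start_assistant_session(mic_streams: list) -> bool:
    """Start a session only if some processed stream matches the spoken trigger."""
    return any(score_trigger(stream) >= TRIGGER_THRESHOLD for stream in mic_streams)

rng = np.random.default_rng(5)
streams = [rng.normal(scale=0.01, size=16000), rng.normal(scale=0.3, size=16000)]
print(should_start_assistant_session(streams))
```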
-
Publication Number: US11341988B1
Publication Date: 2022-05-24
Application Number: US16578802
Filing Date: 2019-09-23
Applicant: Apple Inc.
Inventor: Ramin Pishehvar , Feipeng Li , Ante Jukic , Mehrez Souden , Joshua D. Atkins
Abstract: A hybrid machine learning-based and DSP statistical post-processing technique is disclosed for voice activity detection. The hybrid technique may use a DNN model with a small context window to estimate the probability of speech by frames. The DSP statistical post-processing stage operates on the frame-based speech probabilities from the DNN model to smooth the probabilities and to reduce transitions between speech and non-speech states. The hybrid technique may estimate the soft decision on detected speech in each frame based on the smoothed probabilities, generate a hard decision using a threshold, detect a complete utterance that may include brief pauses, and estimate the end point of the utterance. The hybrid voice activity detection technique may incorporate a target directional probability estimator to estimate the direction of the speech source. The DSP statistical post-processing module may use the direction of the speech source to inform the estimates of the voice activity.
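The post-processing stage described above is straightforward to sketch: smooth the frame-level speech probabilities, threshold them into a hard decision, and bridge brief pauses with a hangover before declaring the end point of the utterance. The probabilities below are synthetic placeholders for DNN outputs, and all constants are assumptions.

```python
# Sketch of DSP-style post-processing over per-frame speech probabilities.
import numpy as np

def smooth_probabilities(probs, alpha=0.9):
    """First-order recursive smoothing of per-frame speech probabilities."""
    smoothed = np.empty_like(probs)
    state = 0.0
    for i, p in enumerate(probs):
        state = alpha * state + (1.0 - alpha) * p
        smoothed[i] = state
    return smoothed

def detect_utterance(probs, threshold=0.5, hangover_frames=8):
    """Return (start_frame, end_frame) of the utterance, bridging short pauses."""
    active = smooth_probabilities(probs) >= threshold
    start = end = None
    silence = 0
    for i, is_speech in enumerate(active):
        if is_speech:
            if start is None:
                start = i
            end, silence = i, 0
        elif start is not None:
            silence += 1
            if silence > hangover_frames:   # pause too long: utterance has ended
                break
    return start, end

# Synthetic frame probabilities: silence, speech, a brief pause, speech, silence.
probs = np.concatenate([np.full(10, 0.05), np.full(40, 0.95),
                        np.full(5, 0.2), np.full(20, 0.9), np.full(30, 0.05)])
print(detect_utterance(probs))
```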
-
Publication Number: US20210020189A1
Publication Date: 2021-01-21
Application Number: US16516780
Filing Date: 2019-07-19
Applicant: Apple Inc.
Inventor: Ante Jukic , Mehrez Souden , Joshua D. Atkins
Abstract: A learning based system such as a deep neural network (DNN) is disclosed to estimate a distance from a device to a speech source. The deep learning system may estimate the distance of the speech source at each time frame based on speech signals received by a compact microphone array. Supervised deep learning may be used to learn the effect of the acoustic environment on the non-linear mapping between the speech signals and the distance using multi-channel training data. The deep learning system may estimate the direct speech component that contains information about the direct signal propagation from the speech source to the microphone array and the reverberant speech signal that contains the reverberation effect and noise. The deep learning system may extract signal characteristics of the direct signal component and the reverberant signal component and estimate the distance based on the extracted signal characteristics using the learned mapping.
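As a crude illustration of why a direct/reverberant decomposition carries distance information, the sketch below splits a signal into early ("direct") and late ("reverberant") energy and maps their ratio to a distance through an assumed calibration curve. The patent instead learns this mapping with a DNN from multi-channel training data; every constant and function name here is hypothetical.

```python
# Stand-in sketch: direct-to-reverberant energy ratio mapped to distance via an
# assumed linear calibration, illustrating the feature the learned mapping exploits.
import numpy as np

def direct_reverberant_split(signal, sample_rate=16000, direct_ms=50):
    """Crudely treat everything up to `direct_ms` after the strongest peak as 'direct'."""
    peak = int(np.argmax(np.abs(signal)))
    cut = peak + int(sample_rate * direct_ms / 1000)
    direct_energy = np.sum(signal[:cut] ** 2)
    reverberant_energy = np.sum(signal[cut:] ** 2) + 1e-12
    return direct_energy, reverberant_energy

def estimate_distance_m(signal):
    """Map a direct-to-reverberant ratio (dB) to distance via an assumed calibration."""
    direct_e, reverb_e = direct_reverberant_split(signal)
    drr_db = 10.0 * np.log10(direct_e / reverb_e)
    return float(np.clip(3.0 - 0.15 * drr_db, 0.2, 10.0))  # hypothetical linear fit

# Toy signal: a strong direct peak followed by an exponentially decaying reverb tail.
rng = np.random.default_rng(6)
tail = 0.2 * rng.normal(size=8000) * np.exp(-np.arange(8000) / 2000)
sig = np.concatenate([np.zeros(100), [2.0], tail])
print(round(estimate_distance_m(sig), 2))
```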
-
Publication Number: US20240312468A1
Publication Date: 2024-09-19
Application Number: US18605688
Filing Date: 2024-03-14
Applicant: Apple Inc.
Inventor: Ismael H. Nawfal , Symeon Delikaris Manias , Mehrez Souden , Joshua D. Atkins
IPC: G10L19/008 , H04S7/00
CPC classification number: G10L19/008 , H04S7/30 , H04S2420/11
Abstract: A sound scene is represented as first order Ambisonics (FOA) audio. A processor formats each signal of the FOA audio to a stream of audio frames, provides the formatted FOA audio to a machine learning model that reformats the formatted FOA audio in a target or desired higher order Ambisonics (HOA) format, and obtains output audio of the sound scene in the desired HOA format from the machine learning model. The output audio in the desired HOA format may then be rendered according to a playback audio format of choice. Other aspects are also described and claimed.
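The framing-plus-upmix flow described above can be sketched as follows: the four FOA channels are cut into frames and passed through a placeholder linear projection standing in for the trained model that produces 2nd-order HOA (9 channels). Frame size and projection weights are assumptions.

```python
# Illustrative framing and FOA-to-HOA upmix; a random matrix stands in for the
# learned model that produces the target higher-order Ambisonics format.
import numpy as np

FRAME = 1024

def frame_signals(foa: np.ndarray) -> np.ndarray:
    """foa: (4, T) -> (num_frames, 4, FRAME), dropping the trailing partial frame."""
    num_frames = foa.shape[1] // FRAME
    return foa[:, : num_frames * FRAME].reshape(4, num_frames, FRAME).transpose(1, 0, 2)

def upmix_to_hoa(foa_frames: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Apply a (9, 4) projection per frame; a real system would use the trained model."""
    return np.einsum("oc,ncT->noT", weights, foa_frames)   # (num_frames, 9, FRAME)

rng = np.random.default_rng(7)
foa = rng.normal(size=(4, 48000))             # W, X, Y, Z channels
weights = rng.normal(scale=0.3, size=(9, 4))  # placeholder for trained upmix weights
hoa_frames = upmix_to_hoa(frame_signals(foa), weights)
print(hoa_frames.shape)  # (46, 9, 1024)
```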
-
Publication Number: US11546692B1
Publication Date: 2023-01-03
Application Number: US17370679
Filing Date: 2021-07-08
Applicant: Apple Inc.
Inventor: Symeon Delikaris Manias , Mehrez Souden , Ante Jukic , Matthew S. Connolly , Sabine Webel , Ronald J. Guglielmone, Jr.
Abstract: An audio renderer can have a machine learning model that jointly processes audio and visual information of an audiovisual recording. The audio renderer can generate output audio channels. Sounds captured in the audiovisual recording and present in the output audio channels are spatially mapped based on the joint processing of the audio and visual information by the machine learning model. Other aspects are described.