    Sound source distance estimation
    1.
    Invention Patent

    Publication No.: GB2563670A

    Publication Date: 2018-12-26

    Application No.: GB201710083

    Application Date: 2017-06-23

    Abstract: An apparatus for generating at least one distance estimate to at least one sound source within a sound scene comprising the at least one sound source, the apparatus configured to: receive at least two audio signals from a microphone array 105 located within the sound scene; receive at least one further audio signal associated with the at least one sound source; determine at least one portion of the at least two audio signals from the microphone array 105 corresponding to the at least one further audio signal associated with the at least one sound source; and determine a distance estimate to the at least one sound source based on that portion. The at least one further audio signal from the sound source preferably comes from an additional microphone 103 located close to the sound source.
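
    Below is a minimal Python sketch of one way such an estimate could be formed, assuming the close microphone sits essentially at the source and shares a sample clock with the array; the function names and synthetic test signal are illustrative, not the claimed method.

        # A rough sketch only, assuming the close mic sits essentially at the source
        # and shares a sample clock with the array: cross-correlate the two signals,
        # take the time-of-arrival lag and convert it to metres.
        import numpy as np

        SPEED_OF_SOUND = 343.0  # m/s in air at ~20 C

        def estimate_distance(close_mic, array_mic, fs):
            """Return a distance estimate in metres from one array-mic channel."""
            a = array_mic - array_mic.mean()
            v = close_mic - close_mic.mean()
            xcorr = np.correlate(a, v, mode="full")          # full cross-correlation
            lag = np.argmax(np.abs(xcorr)) - (len(v) - 1)    # peak lag in samples
            return max(lag, 0) / fs * SPEED_OF_SOUND         # array lags the close mic

        # Synthetic check: a source signal arriving 10 ms later at the array (~3.4 m).
        fs = 48_000
        src = np.random.randn(8_000)
        arr = np.concatenate([np.zeros(480), src])[:8_000] + 0.01 * np.random.randn(8_000)
        print(round(estimate_distance(src, arr, fs), 2), "m")  # ~3.43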

    Analysing audio data
    2.
    Invention Patent

    Publication No.: GB2533654A

    Publication Date: 2016-06-29

    Application No.: GB201503467

    Application Date: 2015-03-02

    Abstract: An apparatus is configured to determine one or more acoustic features of an audio track, determine the dominance of an audible characteristic in the audio track based at least partly on said one or more acoustic features, and store metadata for the audio track indicating said dominance of the audible characteristic. The apparatus may select, from a catalogue or store, one or more audio tracks whose dominance of the audible characteristic falls within a range of values based on the dominance of the audible characteristic of the audio track and, optionally, user preferences. Information may be output identifying the one or more selected tracks. The audible characteristic may be a contributing musical instrument or a genre. The determined dominance may include one or more of: overall dominance for the entire audio track, varying dominance information, and information regarding the dominance of an instrument relative to other instruments in the audio track.
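
    As a rough illustration of how such dominance metadata might be produced, the sketch below assumes a per-frame activation score for a target instrument; the instrument_activation heuristic is a placeholder, not the analysis the patent describes.

        # Illustrative only: compute an overall dominance score for one audible
        # characteristic and store it as metadata; instrument_activation is a
        # placeholder heuristic standing in for the real per-frame analysis.
        import numpy as np

        def instrument_activation(frame):
            """Placeholder: a 0..1 activation for the target instrument in one frame."""
            return float(np.clip(10.0 * np.sqrt(np.mean(frame ** 2)), 0.0, 1.0))

        def dominance_metadata(track, frame_len=2048):
            frames = [track[i:i + frame_len]
                      for i in range(0, len(track) - frame_len, frame_len)]
            per_frame = np.array([instrument_activation(f) for f in frames])
            return {
                "characteristic": "guitar",                      # hypothetical target
                "overall_dominance": float(per_frame.mean()),    # whole-track value
                "varying_dominance": per_frame.tolist(),         # time-varying curve
            }

        meta = dominance_metadata(0.05 * np.random.randn(48_000))
        print(meta["characteristic"], round(meta["overall_dominance"], 2))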

    Distributed audio capture and mixing controlling
    3.
    Invention Patent

    Publication No.: GB2557219A

    Publication Date: 2018-06-20

    Application No.: GB201620328

    Application Date: 2016-11-30

    Abstract: The application is directed towards identifying which sound sources are associated with which microphone audio signals. In particular, the application is directed towards events where each sound source a1-a3 has its own individual microphone m1-m3, the position of each sound source is known/determined, but it is not known which microphone 303 is associated with which sound source. Said identification occurs by comparing the individual microphone audio signals to an audio-focussed audio signal captured by a separate microphone array 107, said microphone array being directed towards one of the sound sources. Preferably each sound source is associated with a positioning tag p1-4 which allows its position to be determined. Preferably the comparison of audio signals comprises aligning the microphone signals with the audio-focussed signal, before cross-correlating each signal to identify the best match. Preferably the audio-focussed signal is a beam-formed audio signal.
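
    The comparison step lends itself to a short sketch: given a beam-formed signal aimed at one source, choose the close-mic signal with the strongest normalised cross-correlation peak over all lags. The function and toy signals below are assumptions for illustration.

        # Toy sketch of the matching step: given a beam-formed signal aimed at one
        # source, pick the close-mic signal with the strongest normalised
        # cross-correlation peak over all lags (alignment and scoring in one pass).
        import numpy as np

        def best_matching_mic(beamformed, close_mics):
            """Return the index of the close mic that best matches the beam."""
            a = beamformed - beamformed.mean()
            scores = []
            for sig in close_mics:
                b = sig - sig.mean()
                peak = np.max(np.abs(np.correlate(a, b, mode="full")))
                scores.append(peak / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))
            return int(np.argmax(scores))

        # Example: mic 1 carries the same content as the beam, delayed by 120 samples.
        rng = np.random.default_rng(0)
        beam = rng.standard_normal(8_000)
        mics = [rng.standard_normal(8_000),
                np.roll(beam, 120) + 0.1 * rng.standard_normal(8_000)]
        print("best match: mic", best_matching_mic(beam, mics))  # -> mic 1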

    Distributed audio capture and mixing
    4.
    Invention Patent

    Publication No.: GB2543276A

    Publication Date: 2017-04-19

    Application No.: GB201518025

    Application Date: 2015-10-12

    Abstract: Apparatus comprising a processor configured to receive a spatial audio signal associated with a microphone array 113 that provides spatial audio capture, and at least one additional audio signal associated with an additional microphone 111 such as a Lavalier microphone. The at least one additional microphone signal is delayed by a variable delay determined such that common components of the audio signals are time aligned. The apparatus receives a relative position between a first position associated with the microphone array and a second position associated with the additional microphone. At least two output audio channel signals are generated by processing and mixing the spatial audio signal and the at least one additional audio signal based on the relative position between the first position and the second position, such that the at least two output audio channel signals present an augmented audio scene. The apparatus may be used for spatial reproduction of audio signals in a theatre or lecture hall.
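
    A simplified sketch of the align-and-mix idea, under assumptions of my own (delay taken from a cross-correlation peak, relative position reduced to an azimuth, constant-power stereo panning); it is not the claimed processing chain.

        # Simplified sketch (not the claimed chain): align the close mic to the
        # spatial bed via a cross-correlation delay, then constant-power pan it
        # into a stereo bed according to the source azimuth relative to the array.
        import numpy as np

        def align_and_pan(spatial_lr, close_mic, azimuth_deg):
            """spatial_lr: (n, 2) stereo bed; close_mic: (n,) mono signal."""
            mono_bed = spatial_lr.mean(axis=1)
            xcorr = np.correlate(mono_bed, close_mic, mode="full")
            lag = np.argmax(np.abs(xcorr)) - (len(close_mic) - 1)
            delayed = np.roll(close_mic, lag)            # variable delay (circular here)
            theta = (np.clip(azimuth_deg, -90, 90) + 90) / 180 * (np.pi / 2)
            gains = np.array([np.cos(theta), np.sin(theta)])   # L, R constant-power pan
            return spatial_lr + delayed[:, None] * gains[None, :]

        rng = np.random.default_rng(1)
        close = 0.1 * rng.standard_normal(8_000)
        bed_mono = np.roll(close, 240)                   # the array hears the source later
        out = align_and_pan(np.stack([bed_mono, bed_mono], axis=1), close, azimuth_deg=30.0)
        print(out.shape)                                 # (8000, 2)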

    Spatial audio processing
    5.
    Invention Patent

    Publication No.: GB2562036A

    Publication Date: 2018-11-07

    Application No.: GB201706474

    Application Date: 2017-04-24

    Abstract: A method for providing a sound object with a spatial extent, wherein a user input 101 selects one or more spatial audio channels (52, fig 4) (e.g. frequency bands), where each spatial audio channel is for rendering at a location within the sound space (10, fig 1). In response to the user input, the method automatically changes an allocation of frequency sub-channels (53, fig 4) to the multiple spatial audio channels. The user input may comprise a grabbing hand gesture 101A associated with a first portion of the sound space and a dropping hand gesture 101B associated with a second portion of the sound space. An input audio signal (113, fig 4) may pass through a short-time Fourier transform (STFT) producing frequency sub-channels (51, fig 4). The sub-channels may be passed into an allocation module (60, fig 4), such as a filter bank, which allocates each frequency sub-channel to one of the multiple spatial audio channels 52. The spatial audio channels may be mixed (74, fig 4) before being output to different audio device channels (76, fig 4). A user interface 400 may have a visual indication 412 of the lateral extent of the sound object.
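
    A sketch of the sub-channel allocation, assuming a round-robin policy over three spatial channels; the policy, channel count and STFT size are placeholders.

        # Sketch of the allocation step, assuming a round-robin policy over three
        # spatial channels (policy, channel count and STFT size are placeholders).
        import numpy as np
        from scipy.signal import stft, istft

        def spread_over_channels(x, fs, n_spatial=3, nperseg=1024):
            """Allocate STFT frequency sub-channels of x to n_spatial spatial channels."""
            _, _, Z = stft(x, fs=fs, nperseg=nperseg)
            outputs = []
            for ch in range(n_spatial):
                Z_ch = np.zeros_like(Z)
                Z_ch[ch::n_spatial, :] = Z[ch::n_spatial, :]   # this channel's bins only
                _, y = istft(Z_ch, fs=fs, nperseg=nperseg)
                outputs.append(y)
            return np.stack(outputs)          # shape (n_spatial, n_samples)

        channels = spread_over_channels(np.random.randn(48_000), fs=48_000)
        print(channels.shape)                 # (3, ~48000): three spatial channels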

    Audio processing
    6.
    Invention Patent

    Publication No.: GB2557241A

    Publication Date: 2018-06-20

    Application No.: GB201620422

    Application Date: 2016-12-01

    Abstract: In a 3D spatial audio rendering system, a visual image (fig 1A) is analysed to detect corresponding sound objects (fig 1B), whose spatial extent (304) is then modified (e.g. increased, as in fig 2B) and rendered (404, fig 3) based on the visual analysis (e.g. based on the size of the visual object 208, fig 2A). The sound object may be separated into sub-objects to which rules are applied regarding, e.g., the spatial separation of similar frequency bins (as in a set of drums, fig 5).
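
    As a toy illustration of driving spatial extent from visual analysis, the mapping below assumes a simple linear rule from detected object width to angular extent; the patent does not fix this mapping.

        # Toy mapping only (the linear rule and 90-degree FOV are assumptions):
        # a wider detected visual object gets a proportionally wider sound object.
        def spatial_extent_deg(bbox_width_px, image_width_px, horizontal_fov_deg=90.0):
            """Angular extent to give the sound object, from its visual width."""
            return horizontal_fov_deg * bbox_width_px / image_width_px

        print(spatial_extent_deg(bbox_width_px=480, image_width_px=1920))  # 22.5 degrees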

    Distributed audio capture and mixing
    7.
    Invention Patent

    Publication No.: GB2557218A

    Publication Date: 2018-06-20

    Application No.: GB201620325

    Application Date: 2016-11-30

    Abstract: Controlling a position/orientation of an audio source 101, 103, 105 within an audio scene, based on a received current physical position/orientation of the audio source relative to a capture device 207 (said capture device comprising a microphone array), a received earlier physical position/orientation of the audio source relative to the capture device, and a received control parameter. The controllable position 501, 503, 505 of the audio source lies between the current and earlier positions/orientations. Preferably the capture device comprises a camera. The main embodiment of the invention involves panning individual audio tracks so as to better match a viewed audio scene, e.g. if a guitar is on the left of a stage and a piano is on the right, the respective audio tracks of each instrument would be suitably panned so as to match their positions. However, in some situations panning the audio tracks to match the perceived positions of the audio sources would result in a sub-optimal mix, while an optimal mix would confuse a listener because the audio sources would not match the viewed image, i.e. would lack spatial congruence.
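
    A minimal sketch of the controllable position, assuming Cartesian coordinates relative to the capture device and a control parameter in [0, 1] that blends between the earlier and current positions.

        # Minimal sketch: place the rendered position between the earlier and the
        # current source position using a control parameter in [0, 1]
        # (0 = keep the earlier position, 1 = follow the current one).
        import numpy as np

        def controlled_position(earlier_xy, current_xy, control):
            c = float(np.clip(control, 0.0, 1.0))
            return (1.0 - c) * earlier_xy + c * current_xy

        # A guitar that moved from stage-left to centre, rendered 30 % of the way.
        print(controlled_position(np.array([-3.0, 2.0]), np.array([0.0, 2.0]), 0.3))  # [-2.1  2. ]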

    Distributed audio capture and mixing controlling
    8.
    Invention Patent

    Publication No.: GB2551521A

    Publication Date: 2017-12-27

    Application No.: GB201610733

    Application Date: 2016-06-20

    Abstract: An apparatus comprises a processor to determine a position for at least one sound source 111 relative to a reference position 121, along with a position and direction for a sound source tracker 131 relative to the reference position. The sound source tracker may be a digital compass, gyroscope, beacon positioning system or headset worn by a listener. The processor selects the at least one sound source based on an analysis of the direction and position data. A control interaction from at least one controller, such as a user input, is used to process at least one audio signal associated with the selected sound source. The signal may be filtered, equalized, delayed, mixed, or have its gain adjusted. The processed signal is output to be rendered by speakers or headphones. The apparatus may be used in a live audio mixing environment where a sound engineer is located away from a reference spatial microphone array. Claims relating to selecting the audio sources based on a predetermined gesture from the sound source tracker are also disclosed.
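
    A sketch of the selection step alone, with invented example positions and tracker pose: the source whose bearing deviates least from the tracker's pointing direction is selected, after which a gain or other processing could be applied.

        # Sketch of the selection step only (positions and tracker pose are invented
        # example values): choose the source whose bearing deviates least from the
        # direction the tracker is pointing, then apply the control to that source.
        import numpy as np

        def select_source(source_xy, tracker_xy, tracker_dir_deg):
            """Return the index of the source the tracker points at."""
            v = source_xy - tracker_xy                                # (n_sources, 2)
            bearings = np.degrees(np.arctan2(v[:, 1], v[:, 0]))
            deviation = (bearings - tracker_dir_deg + 180.0) % 360.0 - 180.0
            return int(np.argmin(np.abs(deviation)))

        sources = np.array([[2.0, 0.0], [0.0, 3.0], [-2.0, 1.0]])     # relative to reference
        idx = select_source(sources, np.array([0.0, 0.0]), tracker_dir_deg=90.0)
        print("selected source:", idx)   # -> 1, the source straight ahead of the tracker
        gain_db = -3.0                   # a control interaction could then adjust its gain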

    Causing provision of virtual reality content
    9.
    Invention Patent

    Publication No.: GB2545275A

    Publication Date: 2017-06-14

    Application No.: GB201521917

    Application Date: 2015-12-11

    Abstract: Virtual or augmented reality (VR) content is provided to a user via portable equipment located at a first location L1-1 and having a first orientation O1-1, the VR content being associated with a second location L2 and a second orientation O2. The VR content is rendered for provision in dependence on the first location relative to the second location (X1-1) and the first orientation relative to the second orientation (θ1-1). The second location and orientation can be a fixed geographic point or the position of a second portable user equipment for providing a second version of the VR content. In the latter case, if the second user equipment is within the virtual field of view of the first user, content representing the second user is provided to the first user. The VR content may be derived from plural items captured by dedicated devices arranged in a two or three-dimensional array, and may comprise a portion of a cylindrical panorama. The virtual content may comprise audio content with plural sub-components which may appear to come from a single point source if the virtual distance from the user is above a threshold.
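
    A simplified sketch of the underlying geometry, assuming 2-D positions, orientations in degrees and a 90-degree virtual field of view; the helper names are illustrative.

        # Simplified 2-D geometry sketch (names and the 90-degree field of view are
        # assumptions): derive the relative location and orientation used to render
        # the content, and test whether a second user lies in the first user's view.
        import numpy as np

        def relative_pose(l1, o1_deg, l2, o2_deg):
            x = l2 - l1                                           # relative location
            theta = (o2_deg - o1_deg + 180.0) % 360.0 - 180.0     # relative orientation
            return x, theta

        def in_field_of_view(l1, o1_deg, l2, fov_deg=90.0):
            v = l2 - l1
            off_axis = (np.degrees(np.arctan2(v[1], v[0])) - o1_deg + 180.0) % 360.0 - 180.0
            return abs(off_axis) <= fov_deg / 2.0

        l1, l2 = np.array([0.0, 0.0]), np.array([3.0, 3.0])
        print(relative_pose(l1, 45.0, l2, 10.0), in_field_of_view(l1, 45.0, l2))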

    Ambience generation for spatial audio mixing featuring use of original and extended signal
    10.
    Invention Patent

    Publication No.: GB2561595A

    Publication Date: 2018-10-24

    Application No.: GB201706289

    Application Date: 2017-04-20

    Abstract: An apparatus for generating at least one audio signal associated with a sound scene is configured to receive at least one audio signal and analyse 101 the audio signal(s) to determine at least one attribute parameter such as peakiness, impulsiveness or voice activity. At least one control signal is determined based on the at least one attribute and is then used to generate a spatially extended audio signal from the at least one audio signal. The initial audio signal(s) and the spatially extended audio signal(s) are then mixed 121 in a proportion based on the control signal, to generate at least one audio signal associated with the sound scene in which the timbre of the original audio signal is preserved. The spatial extension may comprise applying at least one of: vector base amplitude panning; direct binaural panning; direct assignment to a channel output location; synthesized ambisonics; or wavefield synthesis. The technique is particularly suited to the synthesis of sound objects containing percussive or impulsive sounds, and speech. The apparatus may be used in virtual reality or computer game applications.
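
    A hedged sketch of the control-driven mix: a crest-factor measure stands in for the impulsiveness analysis 101 and a linear mixing law for the mixer 121; both are assumptions rather than the claimed algorithm.

        # Sketch of the control-driven mix: a crest-factor measure stands in for the
        # impulsiveness analysis and a linear law for the mixer; both are assumptions.
        import numpy as np

        def impulsiveness_control(x):
            """Crest factor mapped to [0, 1]; higher means more impulsive content."""
            rms = np.sqrt(np.mean(x ** 2)) + 1e-12
            crest = np.max(np.abs(x)) / rms
            return float(np.clip((crest - 3.0) / 9.0, 0.0, 1.0))   # ~3 (noise) .. ~12 (hits)

        def mix(original, extended):
            c = impulsiveness_control(original)
            # Impulsive content keeps more of the original, preserving its timbre.
            return c * original + (1.0 - c) * extended

        dry = np.zeros(4_800); dry[::480] = 1.0      # click train: very impulsive
        wet = 0.05 * np.random.randn(4_800)          # stand-in spatially extended signal
        print(round(impulsiveness_control(dry), 2))  # close to 1.0
        out = mix(dry, wet)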
