Abstract:
An apparatus is disclosed, comprising at least one processor and at least one memory including computer program code, the at least one memory and the computer program code configured to, with the at least one processor, cause the apparatus to perform the following steps. In a first step, the spatial position of a user in a real-world space is detected. In another step, one or more audio signals are provided to the user representing audio from spatially-distributed audio sources in a virtual space. In another step, responsive to detecting movement of the user's spatial position from within a first zone to within a second zone of the real-world space, the audio signals for selected ones of the audio sources are modified based on their spatial position in the virtual space.
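A minimal Python sketch of the kind of zone-triggered modification this abstract describes; the zone and source data structures, the distance test, and the attenuation factor are illustrative assumptions rather than details from the disclosure:

```python
import math

def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return math.hypot(a[0] - b[0], a[1] - b[1])

def on_zone_change(user_pos, zones, sources):
    """Find the zone containing the user's real-world position and modify
    (here: attenuate) the audio sources whose virtual position lies outside
    that zone. All structures and the 0.25 gain factor are illustrative."""
    zone = next((z for z in zones
                 if dist(user_pos, z["centre"]) <= z["radius"]), None)
    if zone is None:
        return sources
    for src in sources:
        # Selected sources: those far from the zone centre in the virtual space.
        if dist(src["virtual_pos"], zone["centre"]) > zone["radius"]:
            src["gain"] *= 0.25  # example modification: attenuate
    return sources

# Example: the user moves from a first zone into a second zone.
zones = [{"centre": (0, 0), "radius": 2.0}, {"centre": (5, 0), "radius": 2.0}]
sources = [{"virtual_pos": (5, 1), "gain": 1.0},
           {"virtual_pos": (0, 1), "gain": 1.0}]
on_zone_change((5.2, 0.3), zones, sources)
```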
Abstract:
An apparatus comprising: an input configured to receive at least two groups of at least two audio signals; a first audio former configured to generate a first formed audio signal from a first of the at least two groups of at least two audio signals; a second audio former configured to generate a second formed audio signal from the second of the at least two groups of at least two audio signals; an audio analyser configured to analyse the first formed audio signal and the second formed audio signal to determine at least one audio source and an associated audio source signal; and an audio signal synthesiser configured to generate at least one output audio signal based on the at least one audio source and the associated audio source signal.
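A schematic Python sketch of the signal flow in this abstract (two groups of signals, two audio formers, an analyser, a synthesiser); the delay-and-sum forming, the toy analysis step, and the stereo synthesis are placeholder assumptions, since the abstract does not specify these operations:

```python
import numpy as np

def delay_and_sum(group, delays):
    """Illustrative audio former: delay each channel and average.
    `group` has shape (channels, samples); `delays` are in samples."""
    out = np.zeros(group.shape[1])
    for ch, d in zip(group, delays):
        out += np.roll(ch, int(d))
    return out / len(group)

def analyse(formed_a, formed_b):
    """Toy analyser: estimate a single dominant audio source signal as the
    average of the two formed signals (a placeholder for the unspecified
    source-determination step)."""
    return {"source": "estimated_source_1",
            "signal": 0.5 * (formed_a + formed_b)}

def synthesise(source):
    """Toy synthesiser: produce a two-channel output from the source signal."""
    return np.stack([source["signal"], source["signal"]])

# Two groups of two audio signals each (random placeholders).
group1 = np.random.randn(2, 480)
group2 = np.random.randn(2, 480)
formed1 = delay_and_sum(group1, delays=[0, 2])
formed2 = delay_and_sum(group2, delays=[0, 3])
output = synthesise(analyse(formed1, formed2))
```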
Abstract:
A method comprising: determining an interactive navigation location; causing display of a perspective image (300, 310, 320, 500, 510, 520) representing a view of a portion of map information from the interactive navigation location; causing display of a video region indicator (502, 512, 522) indicating that video content, which is panoramic video content captured by a plurality of camera modules of a video capture apparatus (402), is available at a video navigation location, the video content representing a view of a portion of the map information from the video navigation location; determining that the interactive navigation location has changed such that the interactive navigation location corresponds to the video navigation location; and causing rendering of the video content, such that a viewing angle of the perspective image corresponds to the viewing angle of the panoramic video content, based at least in part on the interactive navigation location corresponding to the video navigation location.
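A hedged Python sketch of the navigation logic; the distance tolerance, the data structures, and the way the correspondence between the interactive navigation location and the video navigation location is tested are assumptions used only to illustrate the claimed behaviour:

```python
def update_view(nav_location, video_locations, view_angle, tolerance=1.0):
    """Illustrative logic: while panoramic video is available at some video
    navigation location, show a video region indicator; once the interactive
    navigation location corresponds to a video navigation location, render
    that video at the current viewing angle. Names and the distance test are
    assumptions, not taken from the claim."""
    for video in video_locations:
        dx = video["location"][0] - nav_location[0]
        dy = video["location"][1] - nav_location[1]
        if (dx * dx + dy * dy) ** 0.5 <= tolerance:
            # Interactive location corresponds to the video location:
            # render the panoramic video content at the same viewing angle.
            return {"mode": "video", "content": video["content"],
                    "view_angle": view_angle}
    # Otherwise keep the perspective image and draw the region indicators.
    return {"mode": "perspective_image", "view_angle": view_angle,
            "indicators": [v["location"] for v in video_locations]}
```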
Abstract:
Typically, cinemagraphs are still photographs in which a minor or repeated movement occurs. The invention relates to providing audio processing functionality for cinemagraphs or visual animations. The invention comprises means for carrying out the steps of: analysing at least two images to determine at least one region common to the at least two images; determining at least one parameter associated with a motion of the at least one region; determining at least one playback signal, such as an audio file, to be associated with the at least one region; and processing the at least one playback signal based on the at least one parameter. The invention allows a user to generate a cinemagraph with audio or tactile effects easily and without significant skill. Thus a user may select a region of interest 203, select an audio file 205 and select a frame for synchronizing with a beat 209. The effects may be generated and embedded as metadata including audio effect signals or links to such signals.
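A rough Python sketch of the pipeline in this abstract (a region common to two images, a motion parameter for that region, and processing of an associated playback signal); the frame-difference analysis and the gain-based processing are simplifying assumptions, not the method of the disclosure:

```python
import numpy as np

def common_region_motion(img_a, img_b, threshold=10):
    """Toy analysis: treat the pixels where the two frames differ as the
    region of interest, and use the mean absolute difference there as the
    motion parameter."""
    diff = np.abs(img_a.astype(int) - img_b.astype(int))
    region = diff > threshold
    motion = diff[region].mean() if region.any() else 0.0
    return region, motion

def process_playback_signal(audio, motion, max_motion=255.0):
    """Toy processing: scale the playback signal's amplitude by the amount
    of motion measured in the selected region."""
    gain = min(motion / max_motion, 1.0)
    return audio * gain

# Two greyscale frames and an associated audio buffer (placeholders).
frame1 = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
frame2 = np.random.randint(0, 256, (120, 160), dtype=np.uint8)
audio = np.random.randn(48000)
region, motion = common_region_motion(frame1, frame2)
processed = process_playback_signal(audio, motion)
```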
Abstract:
A method is disclosed, comprising: causing display of first visual information that is a view from a first geographical location; receiving, by the apparatus, an indication of availability of second visual information that is a view from a second geographical location, the second geographical location being in a first direction from the first geographical location; determining a position in the first visual information that corresponds with the first direction; and causing display of at least a portion of the second visual information such that the portion of the second visual information overlays the first visual information at the position in the first visual information.
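A small Python sketch of how the position corresponding to the first direction might be determined; the flat-earth bearing calculation and the assumption that the first visual information is a 360-degree view are illustrative only:

```python
import math

def bearing(from_loc, to_loc):
    """Approximate bearing in degrees from one (lat, lon) pair to another;
    a flat-earth approximation, adequate only as an illustration."""
    d_lat = to_loc[0] - from_loc[0]
    d_lon = to_loc[1] - from_loc[1]
    return math.degrees(math.atan2(d_lon, d_lat)) % 360.0

def overlay_position(first_loc, second_loc, image_width):
    """Map the direction towards the second geographical location to a
    horizontal pixel position in the first visual information, assumed here
    to be a full 360-degree view."""
    return int(bearing(first_loc, second_loc) / 360.0 * image_width)

# Example: the second view lies roughly east of the first.
x = overlay_position((60.17, 24.94), (60.17, 24.95), image_width=3840)
```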
Abstract:
An apparatus for encoding an audio signal, configured to: receive a major part of the audio components of an audio source from at least one microphone located at or directed towards the audio source; generate a first audio signal comprising the major part of the audio components of the audio source; receive a minor part of the audio components of the audio source from at least one further microphone located at or directed away from the audio source; and generate a second audio signal comprising the minor part of the audio components of the audio source.
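A minimal Python sketch of the two-signal generation; averaging the microphone channels is an assumed placeholder, as the abstract does not state how the first and second audio signals are formed:

```python
import numpy as np

def encode_two_signals(close_mics, far_mics):
    """Illustrative split: average the microphones located at or directed
    towards the source into a first signal carrying the major part of its
    audio components, and average the microphones directed away from it
    into a second signal carrying the minor part. The averaging step is an
    assumption, not the encoding method of the disclosure."""
    first_signal = np.mean(close_mics, axis=0)   # major part of the source
    second_signal = np.mean(far_mics, axis=0)    # minor part of the source
    return first_signal, second_signal

# Two close microphones and two far microphones (random placeholders).
close = np.random.randn(2, 480)
far = np.random.randn(2, 480)
first, second = encode_two_signals(close, far)
```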
Abstract:
A method of frame error concealment in encoded audio data comprises receiving encoded audio data in a plurality of frames, and using one or more saved parameter values from one or more previous frames to reconstruct a frame with a frame error. Using the saved one or more parameter values comprises deriving parameter values based at least in part on the saved one or more parameter values and applying the derived values to the frame with the frame error.
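An illustrative Python sketch of the concealment idea (saving parameter values from good frames and deriving replacement values from them for a frame with a frame error); the attenuation-based derivation rule is an assumption, since the abstract does not specify how the values are derived:

```python
def conceal(frames):
    """Keep the parameter values of correctly received frames and, when a
    frame is marked as erroneous, derive replacement values from the most
    recently saved ones (here, by simple attenuation as a placeholder)."""
    saved = None
    out = []
    for frame in frames:
        if frame.get("error") and saved is not None:
            derived = {k: 0.9 * v for k, v in saved.items()}  # derive from saved
            out.append({"params": derived, "concealed": True})
        else:
            saved = dict(frame["params"])  # save parameters of a good frame
            out.append({"params": frame["params"], "concealed": False})
    return out

# Example: the second frame carries a frame error.
frames = [{"params": {"gain": 1.0, "pitch": 220.0}},
          {"params": {}, "error": True}]
concealed = conceal(frames)
```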