Abstract:
A method comprising: positioning one or more drones in a monitored scene space to at least partially define the shape and position of a computer-implemented virtual boundary in a corresponding monitoring space; causing implementation of the computer-implemented virtual boundary in the monitoring space corresponding to the monitored scene space; and processing data received from sensors of the monitored scene space to generate a response event that responds to a change in at least part of the monitored scene space relative to the computer-implemented virtual boundary in the corresponding monitoring space.
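
A minimal sketch of the idea described above, not the patented implementation: drone positions are treated as the vertices of a polygonal virtual boundary, and a response event is generated when a sensor-tracked object crosses it. All names, the 2-D simplification, and the ray-casting test are illustrative assumptions.

```python
# Drone positions define a polygonal virtual boundary; a tracked object that
# crosses the boundary triggers a response event. Illustrative sketch only.

def point_in_polygon(point, polygon):
    """Ray-casting test: is (x, y) inside the polygon given as [(x, y), ...]?"""
    x, y = point
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y):
            x_cross = x1 + (y - y1) * (x2 - x1) / (y2 - y1)
            if x < x_cross:
                inside = not inside
    return inside

def process_sensor_track(track, boundary):
    """Yield a response event whenever the tracked object crosses the boundary."""
    previous_inside = None
    for position in track:
        inside = point_in_polygon(position, boundary)
        if previous_inside is not None and inside != previous_inside:
            yield {"event": "boundary_crossed", "position": position, "now_inside": inside}
        previous_inside = inside

# Drone positions (x, y) define the shape and position of the virtual boundary.
drone_positions = [(0.0, 0.0), (10.0, 0.0), (10.0, 8.0), (0.0, 8.0)]
observed_track = [(12.0, 4.0), (9.0, 4.0), (5.0, 4.0), (11.0, 4.0)]

for event in process_sensor_track(observed_track, drone_positions):
    print(event)
```
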
Abstract:
A method comprising: creating a visual indicator based on at least one of visual analysis or audio analysis performed for a content comprising at least one visual element, wherein the visual indicator is selectable such that upon a selection of the visual indicator, access to the content is provided.
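
A minimal sketch, under assumed interfaces, of building a selectable visual indicator from a simple analysis of a piece of content. The data classes, the toy analysis, and the selection callback are placeholders, not the claimed analysis.

```python
# Build a selectable visual indicator from toy visual/audio analysis of content;
# selecting the indicator provides access to the content. Illustrative sketch only.

from dataclasses import dataclass
from typing import Callable

@dataclass
class Content:
    content_id: str
    visual_elements: list           # e.g. frame descriptors
    audio_loudness: float = 0.0     # a toy audio-analysis result

@dataclass
class VisualIndicator:
    label: str
    thumbnail: str
    on_select: Callable[[], Content]

def create_visual_indicator(content: Content) -> VisualIndicator:
    # Toy "visual analysis": pick the first visual element as the thumbnail.
    thumbnail = content.visual_elements[0] if content.visual_elements else "default.png"
    # Toy "audio analysis": annotate the label when the audio is loud.
    label = content.content_id + (" (loud)" if content.audio_loudness > 0.7 else "")
    # Selecting the indicator provides access to the underlying content.
    return VisualIndicator(label=label, thumbnail=thumbnail,
                           on_select=lambda: content)

clip = Content("clip-42", ["frame_0001.png", "frame_0002.png"], audio_loudness=0.9)
indicator = create_visual_indicator(clip)
print(indicator.label, indicator.thumbnail)
print(indicator.on_select().content_id)   # selection gives access to the content
```
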
Abstract:
A method comprising: determining, based on a determined measure of success of a separation of an audio signal representing a sound source from a composite audio signal comprising components derived from at least two sound sources, a value of a separated-signal modification parameter, the value of the separated-signal modification parameter indicating a range of modification of a characteristic associated with the separated audio signal.
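
A minimal sketch of the idea that how much a separated source may be modified can be tied to how well the separation worked. The energy-ratio "success measure" and the dB limits below are illustrative assumptions, not the claimed measure.

```python
# Map a toy separation-success measure to a permitted gain-modification range.

import math

def separation_success(separated, residual):
    """Toy success measure: energy of the separated signal vs. the residual
    (composite minus separated), mapped to the range 0..1."""
    sep_energy = sum(s * s for s in separated)
    res_energy = sum(r * r for r in residual) + 1e-12
    ratio_db = 10.0 * math.log10(sep_energy / res_energy + 1e-12)
    return max(0.0, min(1.0, (ratio_db + 10.0) / 30.0))   # -10 dB -> 0, +20 dB -> 1

def modification_range_db(success, max_range_db=12.0):
    """Value of the separated-signal modification parameter: the better the
    separation, the wider the permitted gain-modification range."""
    return success * max_range_db

composite = [0.9, -0.4, 0.7, -0.8, 0.5]
separated = [0.8, -0.3, 0.6, -0.7, 0.4]          # estimate of one sound source
residual  = [c - s for c, s in zip(composite, separated)]

success = separation_success(separated, residual)
print(f"success: {success:.2f}, allowed gain change: +/-{modification_range_db(success):.1f} dB")
```
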
Abstract:
Certain examples of the present invention relate to a method, apparatus, system and computer program for controlling a positioning module and/or an audio capture module. Certain examples provide a method (100) comprising: associating (101) one or more positioning modules (501) with one or more audio capture modules (502); and controlling (102) one or more operations of the one or more positioning modules (501) and/or the associated one or more audio capture modules (502) in dependence upon: one or more pre-determined times (202(a)), and one or more pre-determined positions (202(b)).
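
A minimal sketch, with illustrative module names and stubbed readings, of associating a positioning module with an audio capture module and gating their operation on a pre-determined time window and a pre-determined position region.

```python
# Enable an associated audio capture module only when the positioning module
# reports a position inside a pre-determined region during a pre-determined
# time window. Illustrative sketch only.

from dataclasses import dataclass

@dataclass
class PositioningModule:
    module_id: str
    def current_position(self):
        return (3.0, 4.0)           # stub position reading

@dataclass
class AudioCaptureModule:
    module_id: str
    capturing: bool = False
    def start(self): self.capturing = True
    def stop(self): self.capturing = False

def control(positioning, capture, now, time_window, region):
    """Enable capture only inside the pre-determined time window and region."""
    t_start, t_end = time_window
    (x_min, y_min), (x_max, y_max) = region
    x, y = positioning.current_position()
    in_time = t_start <= now <= t_end
    in_region = x_min <= x <= x_max and y_min <= y <= y_max
    capture.start() if (in_time and in_region) else capture.stop()

pos = PositioningModule("pos-1")
mic = AudioCaptureModule("mic-1")              # associated with pos-1
control(pos, mic, now=120.0, time_window=(100.0, 200.0),
        region=((0.0, 0.0), (10.0, 10.0)))
print(mic.capturing)                           # True: both conditions met
```
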
Abstract:
A method comprising: remotely sensing a real acoustic environment, in which multiple audio signals are captured; and enabling automatic control of mixing of the multiple captured audio signals based on the remote sensing of the real acoustic environment in which the multiple audio signals were captured.
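
A minimal sketch of driving mix parameters from a remotely sensed description of the acoustic environment. The sensed quantities (room volume, RT60) and the mapping to mix settings are illustrative assumptions, not the claimed method.

```python
# Derive automatic mix settings for multiple captured channels from a remotely
# sensed acoustic environment. Illustrative sketch only.

def remote_sense_environment():
    """Stub for remote sensing: pretend we measured room volume (m^3) and RT60 (s)."""
    return {"room_volume_m3": 450.0, "rt60_s": 1.1}

def automatic_mix_settings(environment, n_channels):
    """Map the sensed environment to per-channel mix settings."""
    # More reverberant rooms: add less artificial reverb.
    reverb_send = max(0.0, 0.5 - 0.3 * environment["rt60_s"])
    channel_gain = 1.0 / max(1, n_channels) ** 0.5    # equal-power style scaling
    return {"reverb_send": round(reverb_send, 2),
            "channel_gains": [round(channel_gain, 3)] * n_channels}

captured_channels = ["vocal_mic", "ambience_l", "ambience_r"]
settings = automatic_mix_settings(remote_sense_environment(), len(captured_channels))
print(settings)
```
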
Abstract:
An apparatus configured to: in respect of virtual reality content comprising video imagery configured to provide a virtual reality space for viewing in virtual reality, wherein a virtual reality view presented to a user provides for viewing of the virtual reality content, the virtual reality view comprising a spatial portion of the video imagery that forms the virtual reality space and being smaller in spatial extent than the spatial extent of the video imagery of the virtual reality space, and based on one or more of: i) a viewing direction in the virtual reality space of at least one virtual reality view provided to the user; and ii) a selected object in the video imagery, provide for one or more of generation or display of causal summary content comprising selected content from the virtual reality content at least prior to a time point in the virtual reality content currently viewed by the user, the causal summary content at least focussed on an object or event appearing in the at least one virtual reality view or the selected object to show the historic occurrence of the object or event or selected object in the virtual reality content.
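
A minimal sketch, with hypothetical data structures, of assembling "causal summary content": earlier segments of the VR content in which an object seen in the current virtual reality view (or a selected object) appears. Segment boundaries and object identifiers are illustrative.

```python
# Select content segments prior to the current time point that feature the
# object the user is looking at or has selected. Illustrative sketch only.

from dataclasses import dataclass

@dataclass
class Segment:
    start_s: float
    end_s: float
    objects: frozenset       # object identifiers visible in this segment

def causal_summary(segments, current_time_s, focus_object):
    """Select segments before the current time point featuring the focus object."""
    return [seg for seg in segments
            if seg.end_s <= current_time_s and focus_object in seg.objects]

timeline = [
    Segment(0.0, 10.0, frozenset({"car", "dog"})),
    Segment(10.0, 20.0, frozenset({"car"})),
    Segment(20.0, 30.0, frozenset({"dog"})),
    Segment(30.0, 40.0, frozenset({"car", "dog"})),
]

# The user is at t = 32 s and has looked at (or selected) the "car" object.
summary = causal_summary(timeline, current_time_s=32.0, focus_object="car")
print([(seg.start_s, seg.end_s) for seg in summary])   # [(0.0, 10.0), (10.0, 20.0)]
```
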
Abstract:
A method comprising: automatically applying a selection criterion or criteria to a sound object; if the sound object satisfies the selection criterion or criteria then performing one of correct or incorrect rendering of the sound object; and if the sound object does not satisfy the selection criterion or criteria then performing the other of correct or incorrect rendering of the sound object, wherein correct rendering of the sound object comprises at least rendering the sound object at a correct position within a rendered sound scene compared to a recorded sound scene and wherein incorrect rendering of the sound object comprises at least rendering of the sound object at an incorrect position in a rendered sound scene compared to a recorded sound scene or not rendering the sound object in the rendered sound scene.
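
A minimal sketch of gating how a sound object is rendered on a selection criterion: objects that satisfy it are rendered at their recorded position, the rest at a deliberately incorrect position or not at all. The loudness-based criterion and the mirrored "incorrect" placement are illustrative choices, not the claimed criterion.

```python
# Render a sound object correctly or incorrectly depending on whether it
# satisfies a selection criterion. Illustrative sketch only.

from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class SoundObject:
    name: str
    recorded_position: Tuple[float, float, float]
    level_db: float

def selection_criterion(obj: SoundObject) -> bool:
    return obj.level_db > -30.0              # e.g. only sufficiently loud objects

def render_position(obj: SoundObject, drop_if_rejected: bool = False
                    ) -> Optional[Tuple[float, float, float]]:
    if selection_criterion(obj):
        return obj.recorded_position          # correct rendering: recorded position
    if drop_if_rejected:
        return None                           # incorrect rendering: omit entirely
    x, y, z = obj.recorded_position
    return (-x, -y, z)                        # incorrect rendering: wrong position

scene = [SoundObject("singer", (1.0, 2.0, 0.0), -12.0),
         SoundObject("crowd", (4.0, -1.0, 0.0), -45.0)]
for obj in scene:
    print(obj.name, render_position(obj))
```
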