Abstract:
A method of generating data for controlling a rendering system (9) includes obtaining data representative of a recording of at least intervals of an event, the recording having at least two components (22, 23) obtainable through different respective modalities. The data is analyzed to determine at least a dependency between a first and a second of the components (22, 23). At least the dependency is used to provide settings (30) for a system (9) for rendering in perceptible form at least one output through a first modality in dependence on at least the settings and on at least one signal for rendering in perceptible form through a second modality.
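As a minimal sketch of this kind of dependency analysis, the Python fragment below correlates two recorded components obtained through different modalities (here an audio loudness envelope and a per-frame brightness track, both assumed to be one-dimensional arrays sampled at the frame rate) and derives two simple settings, a lag and a gain, that a rendering system could use. The correlation measure, the setting names and the function estimate_dependency are illustrative assumptions, not details taken from the abstract.

import numpy as np

def estimate_dependency(audio_env, brightness):
    """Estimate a lag and a gain relating a brightness track to an audio loudness envelope."""
    a = (audio_env - audio_env.mean()) / (audio_env.std() + 1e-9)
    b = (brightness - brightness.mean()) / (brightness.std() + 1e-9)
    # Cross-correlate the normalized components to find the lag at which they align best.
    xcorr = np.correlate(b, a, mode="full")
    lag = int(np.argmax(xcorr)) - (len(a) - 1)
    # Least-squares gain mapping loudness to brightness (zero-lag approximation).
    gain = float(np.dot(audio_env, brightness) / (np.dot(audio_env, audio_env) + 1e-9))
    return {"lag_frames": lag, "gain": gain}

# The resulting settings could then drive a renderer that produces output in one
# modality (e.g. light) in dependence on a signal in another modality (e.g. audio).
settings = estimate_dependency(np.random.rand(250), np.random.rand(250))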
Abstract:
The invention relates to a method, a device and a computer program product for tracking the movement of an object or of a person. Tracking the movement of a person or an object by means of electronic video frames is conventional but fails if the person or object experiences a sudden, significant change in translational velocity. The suggested method comprises a first step of grabbing a sequence of digital video frames and thereby capturing the object or person. At the same time, measurement values of a parameter are obtained, said measurement values being indicative of the movement of the object or person being tracked in the digital video frames. In the next step the video frames are processed by means of processing logic that uses a block matching algorithm, said block matching algorithm defining a pixel block in a frame and searching for this pixel block within a search area in the next frame, whereby the location of the search area within the next frame is dynamically adapted on the basis of the measurement values. The invention provides the advantage that electronic processing of digital video frames by means of a block matching algorithm can be carried out even in cases where there are large changes in the velocity of the tracked object or person.
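The dynamically adapted block-matching step can be sketched as follows, assuming grayscale frames as 2-D numpy arrays and a sensor-derived displacement prediction (pred_dx, pred_dy) in pixels. The sum-of-absolute-differences criterion and all parameter names are assumptions made for illustration.

import numpy as np

def match_block(prev_frame, next_frame, top, left, size, pred_dx, pred_dy, radius):
    """Find the block in next_frame best matching prev_frame[top:top+size, left:left+size].

    The search area is centred on the sensor-predicted location
    (top + pred_dy, left + pred_dx) rather than on the original block location,
    so large changes in velocity still fall inside the search window.
    """
    block = prev_frame[top:top + size, left:left + size].astype(np.int32)
    best_err, best_pos = np.inf, (top, left)
    cy, cx = top + pred_dy, left + pred_dx  # dynamically adapted search-area centre
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            y, x = cy + dy, cx + dx
            if y < 0 or x < 0 or y + size > next_frame.shape[0] or x + size > next_frame.shape[1]:
                continue
            cand = next_frame[y:y + size, x:x + size].astype(np.int32)
            err = np.abs(block - cand).sum()  # sum of absolute differences
            if err < best_err:
                best_err, best_pos = err, (y, x)
    return best_pos, best_err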
Abstract:
A method of rendering views for a multi-view display device (100) is disclosed. The multi-view display device (100) comprises a number of display means (104, 110) for displaying respective views in mutually different directions relative to the multi-view display device (100). The method comprises: computing a first motion vector field on the basis of a first input image of a time sequence of input images and a second input image of the time sequence of input images; computing a first motion compensated intermediate image on the basis of the first motion vector field, the first input image and/or the second input image; and providing the first motion compensated intermediate image to a first one of the number of display means (104, 110).
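A simplified sketch of motion-compensated interpolation between two input images is given below, assuming a dense per-pixel motion vector field (mv_y, mv_x) from the first to the second image. The bidirectional averaging used here is one common choice, not necessarily the scheme of the disclosed method.

import numpy as np

def motion_compensated_intermediate(frame1, frame2, mv_y, mv_x, alpha=0.5):
    """Build an intermediate image at temporal position alpha in (0, 1)."""
    h, w = frame1.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # A point at q in frame1 moves to q + mv in frame2, so at time alpha it lies
    # at q + alpha * mv: sample frame1 backwards and frame2 forwards accordingly.
    y1 = np.clip(np.round(ys - alpha * mv_y).astype(int), 0, h - 1)
    x1 = np.clip(np.round(xs - alpha * mv_x).astype(int), 0, w - 1)
    y2 = np.clip(np.round(ys + (1 - alpha) * mv_y).astype(int), 0, h - 1)
    x2 = np.clip(np.round(xs + (1 - alpha) * mv_x).astype(int), 0, w - 1)
    return (1 - alpha) * frame1[y1, x1] + alpha * frame2[y2, x2]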
Abstract:
A motion estimation unit for estimating a motion vector for a group of pixels of an image of a series of images comprises: generating means for generating a set of motion vector candidates for the group of pixels; matching means for calculating match errors for the respective motion vector candidates of the set; selecting means for selecting a first one of the motion vector candidates as the motion vector for the group of pixels, on the basis of the match errors; and testing means for testing, on the basis of a measure related to a particular motion vector, whether the group of pixels has to be split into sub-groups of pixels for which respective further motion vectors have to be estimated in a manner similar to the estimation of the motion vector for the group of pixels.
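The candidate-based estimation with a split test might look roughly as follows. The candidate set, the sum-of-absolute-differences match error and the threshold-based split criterion are illustrative choices, since the abstract only states that the test is based on a measure related to a particular motion vector.

import numpy as np

def sad(prev, nxt, top, left, size, dy, dx):
    """Sum-of-absolute-differences match error for one candidate vector (dy, dx)."""
    y, x = top + dy, left + dx
    if y < 0 or x < 0 or y + size > nxt.shape[0] or x + size > nxt.shape[1]:
        return np.inf
    a = prev[top:top + size, left:left + size].astype(np.int32)
    b = nxt[y:y + size, x:x + size].astype(np.int32)
    return np.abs(a - b).sum()

def estimate_block(prev, nxt, top, left, size, candidates, split_threshold):
    # Typical candidates: the zero vector plus vectors of spatial/temporal neighbours.
    errors = [sad(prev, nxt, top, left, size, dy, dx) for dy, dx in candidates]
    best = int(np.argmin(errors))
    motion_vector, best_error = candidates[best], errors[best]
    # Split test: if even the best candidate matches poorly, the block likely
    # contains more than one motion and should be re-estimated per sub-block.
    split = best_error > split_threshold * size * size
    return motion_vector, split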
Abstract:
An audio stream (14) and a video stream (12) from a conventional audiovisual source (10) are processed by a processor (20). A motion processor (30) establishes at least one motion feature and outputs it to the stimulus controller (32), which generates a stimulus in the stimulus generator (34). The stimulus generator (34) may be a Galvanic Vestibular Stimulus generator.
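A rough sketch of such a processing chain is given below: one global motion feature is extracted from successive video frames and mapped to a bounded stimulus drive level. The projection-correlation estimator and the linear mapping with clipping are assumptions for illustration, not details of the disclosed stimulus controller.

import numpy as np

def global_horizontal_motion(prev_frame, next_frame, max_shift=16):
    """Estimate a global horizontal shift by correlating column-sum projections."""
    p = prev_frame.sum(axis=0).astype(np.float64)
    q = next_frame.sum(axis=0).astype(np.float64)
    best_shift, best_err = 0, np.inf
    for s in range(-max_shift, max_shift + 1):
        err = np.abs(np.roll(p, s) - q).mean()
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift  # motion feature, in pixels per frame

def stimulus_amplitude(motion_feature, gain=0.05, limit=1.0):
    """Map the motion feature to a bounded drive level for the stimulus generator."""
    return float(np.clip(gain * motion_feature, -limit, limit))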
Abstract:
A segmentation system (100) for segmenting a first image feature (214) in a first video image from an adjacent second image feature (216) in the first video image on the basis of motion and of an image property such as color or luminance. The segmentation system (100) comprises: a block-based motion estimator (102) for estimating motion vectors (218-230) for blocks of pixels; a motion segmentation unit (104) for segmenting the first video image into a first group of connected blocks of pixels (204) and a second group of connected blocks of pixels (206) on the basis of the motion vectors (218-230) of the respective blocks of pixels; and a pixel-based segmentation unit (106) for segmenting the first image feature (214) from the second image feature (216) by means of a pixel-based segmentation of a portion of the blocks of pixels of the first and second groups of connected blocks of pixels (204 and 206), on the basis of the respective values of the image property.
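The two-stage idea can be sketched as follows: blocks are first grouped by their motion vectors, and pixels in blocks that border the other group are then reassigned by luminance. The two-means motion clustering and the luminance-distance rule are illustrative stand-ins for the block-based and pixel-based segmentation units described above.

import numpy as np

def segment(frame, mv, block=8, iters=10):
    """frame: HxW luminance image; mv: (H//block, W//block, 2) block motion vectors."""
    by, bx, _ = mv.shape
    frame = frame[:by * block, :bx * block].astype(np.float64)  # crop to the block grid
    flat = mv.reshape(-1, 2).astype(np.float64)
    # Stage 1: crude 2-means clustering of the block motion vectors.
    centers = flat[[0, -1]].copy()
    for _ in range(iters):
        dist = np.linalg.norm(flat[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = flat[labels == k].mean(axis=0)
    labels = labels.reshape(by, bx)
    seg = np.kron(labels, np.ones((block, block), dtype=int))
    # Stage 2: pixel-based refinement of blocks that border the other motion segment.
    mean_lum = [frame[seg == k].mean() for k in range(2)]
    for y in range(by):
        for x in range(bx):
            nb = labels[max(0, y - 1):y + 2, max(0, x - 1):x + 2]
            if (nb != labels[y, x]).any():  # boundary block: decide per pixel
                tile = frame[y * block:(y + 1) * block, x * block:(x + 1) * block]
                closer_to_1 = np.abs(tile - mean_lum[1]) < np.abs(tile - mean_lum[0])
                seg[y * block:(y + 1) * block, x * block:(x + 1) * block] = closer_to_1.astype(int)
    return seg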
Abstract:
The present invention relates to a system and method of analyzing the movement of a user (4). More particularly, the present invention relates to a new technique for assessing a user's motor functions. It is an object of the present invention to provide a simple, robust, and low-cost technique for analyzing the movement of a user, which can be used in an unsupervised home environment. This object is achieved according to the invention by a method of analyzing the movement of a user, the method comprising the steps of causing the user to perform a coordinated movement in accordance with an instruction, generating video image data in the form of a sequence of images by video recording the user, determining in the sequence of images a degree of synchronicity of optical flow on the left body half (14) and the right body half (15) using a computer system (11) comprising computer vision technology, and assessing the user's motor functions based on the degree of synchronicity.
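A hedged sketch of the synchronicity measure is shown below: motion energy on the left and right image halves is tracked over time and the two traces are correlated. Simple frame differencing stands in here for a true optical-flow estimator, and the Pearson correlation is one possible synchronicity score; both are assumptions, not prescriptions of the method.

import numpy as np

def left_right_synchronicity(frames):
    """frames: sequence of HxW grayscale images with the user roughly centred."""
    left, right = [], []
    for prev, nxt in zip(frames[:-1], frames[1:]):
        motion = np.abs(nxt.astype(np.float64) - prev.astype(np.float64))
        w = motion.shape[1] // 2
        left.append(motion[:, :w].mean())    # motion energy, left body half
        right.append(motion[:, w:].mean())   # motion energy, right body half
    left, right = np.asarray(left), np.asarray(right)
    # Pearson correlation of the two motion-energy traces as the synchronicity score.
    return float(np.corrcoef(left, right)[0, 1])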
Abstract:
A method of combined exchange of image data and related depth data is disclosed. The method comprises: converting an input image signal representing the image data, comprising a predetermined number of input color components (R, G, B), into an output image signal comprising a luminance component and a chrominance component; combining the output signal with the related depth data into a combined signal comprising the luminance component, the chrominance component and a depth component (D) which is based on the depth data; and transmitting the combined signal over a number of channels (108-112) which is equal to the predetermined number of input color components (R, G, B).
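An illustrative packing scheme along these lines is sketched below, assuming a BT.601 RGB-to-YCbCr conversion and a simple column-interleaved Cb/Cr multiplex so that the luminance component, one chrominance channel and the depth component D occupy the same three channels as the original R, G and B. The interleaving choice is an assumption, not a detail given in the abstract.

import numpy as np

def pack_rgbd(rgb, depth):
    """rgb: HxWx3 floats in [0, 1]; depth: HxW floats in [0, 1]."""
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    y  = 0.299 * r + 0.587 * g + 0.114 * b        # luminance (BT.601)
    cb = 0.5 + (b - y) * 0.564                    # blue-difference chrominance
    cr = 0.5 + (r - y) * 0.713                    # red-difference chrominance
    # Interleave Cb and Cr by column so one channel carries the chrominance.
    chroma = np.where(np.arange(rgb.shape[1]) % 2 == 0, cb, cr)
    # Three output channels, matching the three input color components.
    return np.stack([y, chroma, depth], axis=-1)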