Abstract:
A method comprising: enabling a first user to define a message for display to at least a second user in association with a first three-dimensional scene viewed by the first user and viewed by or viewable by the second user, wherein the message comprises user-defined message content for display and message metadata, not for display, defining first three-dimensional spatial information; and enabling rendering of the user-defined message content in a second three-dimensional scene viewed by the second user, wherein the user-defined message content moves, within the second three-dimensional scene, along a three-dimensional trajectory dependent upon the first three-dimensional spatial information and three-dimensional spatial information of the second user.
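A minimal sketch of the described message structure and trajectory, under assumptions: the names, the linear interpolation, and all coordinates are illustrative, not the patented implementation. It shows the split between displayed content and non-displayed spatial metadata, and a trajectory that depends on both users' spatial information.

```python
# Illustrative sketch only: displayed content plus hidden 3D spatial metadata,
# and a trajectory derived from both users' spatial information.
from dataclasses import dataclass
from typing import Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class Message:
    content: str   # user-defined content, rendered in the scene
    origin: Vec3   # metadata (not displayed): first user's 3D spatial information

def trajectory(msg: Message, second_user_pos: Vec3, steps: int = 60):
    """Interpolate a 3D path from the message origin (first user's spatial
    information) toward the second user's position (assumed linear here)."""
    ox, oy, oz = msg.origin
    tx, ty, tz = second_user_pos
    for i in range(steps + 1):
        t = i / steps
        yield (ox + t * (tx - ox), oy + t * (ty - oy), oz + t * (tz - oz))

# Example: the content travels through the second user's scene.
msg = Message(content="Look here!", origin=(0.0, 1.5, -2.0))
for point in trajectory(msg, second_user_pos=(1.0, 1.6, 0.0), steps=3):
    print(point)
```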
Abstract:
An apparatus (e.g. a mobile phone or an Internet of Things (IoT) enabled device) receives layer weight parameters of a trained neural network (NN) and uses a subnetwork part (e.g. 32) of the NN. The subnetwork has intermediate hidden layers 24, 25 whose weights correspond to those of layers of the trained NN. The subnetwork further comprises an output layer (intermediate output 36). The intermediate output layer may be used as the output of the whole system in the event that the subsequent layers of the NN are missing. In this way a scalable neural network may be maintained even though some neural network data may be missing. The pre-training of the NN may take place on a server, with the result transmitted to the apparatus in a message sequence. Alternatively, the output layer may be trained on the apparatus from scratch, or it may be fine-tuned from a base layer on the server.
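A minimal numpy sketch of the early-exit idea, assuming fully connected layers (layer shapes and names are illustrative): when the layers after the subnetwork were not received, the intermediate output layer's result serves as the system output.

```python
# Sketch under assumptions: a forward pass that falls back to the
# intermediate output layer when later NN data is missing.
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def forward(x, base_layers, intermediate_out, later_layers=None):
    """base_layers: (W, b) pairs received from the server (the subnetwork).
    intermediate_out: output layer attached to the subnetwork.
    later_layers: may be None if that part of the NN was not received."""
    h = x
    for W, b in base_layers:
        h = relu(h @ W + b)
    if later_layers is None:        # subsequent layers missing: early exit
        W, b = intermediate_out
        return h @ W + b            # intermediate output as the final output
    for W, b in later_layers:
        h = relu(h @ W + b)
    return h

rng = np.random.default_rng(0)
base = [(rng.standard_normal((8, 16)), np.zeros(16))]
inter = (rng.standard_normal((16, 4)), np.zeros(4))
print(forward(rng.standard_normal(8), base, inter))  # later layers missing
```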
Abstract:
The invention relates to a method and technical equipment for implementing the method. The method comprises segmenting an original image and a set of downsampled images, with a set of parameters, into regions; extracting feature vectors from each segmented region; classifying the feature vectors to provide a set of class labels indicating different materials; and performing majority voting to choose the most frequently voted class as the material for each pixel.
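A hedged sketch of the final step only: per-pixel majority voting over class-label maps produced at the original and downsampled scales (assumed already upsampled back to the original resolution). The segmentation, feature extraction, and classification stages are assumed to exist upstream.

```python
# Per-pixel majority vote across label maps from multiple scales.
import numpy as np

def majority_vote(label_maps):
    """label_maps: list of HxW integer arrays of material class labels,
    one per scale. Returns the most frequently voted class per pixel."""
    stacked = np.stack(label_maps, axis=0)      # S x H x W
    n_classes = stacked.max() + 1
    votes = np.zeros((n_classes,) + stacked.shape[1:], dtype=int)
    for c in range(n_classes):
        votes[c] = (stacked == c).sum(axis=0)   # votes for class c per pixel
    return votes.argmax(axis=0)                 # most frequently voted class

maps = [np.array([[0, 1], [2, 2]]),
        np.array([[0, 1], [1, 2]]),
        np.array([[0, 2], [2, 2]])]
print(majority_vote(maps))   # [[0 1] [2 2]]
```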
Abstract:
An apparatus configured, in respect of first and second virtual reality content each configured to provide imagery for a respective first (206) and second (203) virtual reality space for viewing in virtual reality, and based on first-user-viewing-experience information defining an appearance of an object of interest (205) that appears in the first virtual reality content (206) as viewed, in virtual reality, by a first user (201), and defining a time-variant point of view from which the first user (201) viewed the object of interest (205), to provide for display, to a second user (202) who is provided with a virtual reality view of the second virtual reality content (203), imagery of the object of interest (204) superimposed into the virtual reality space of the second virtual reality content (203), such that the second user (202) is able, while viewing the second virtual reality content (203), to witness the object of interest (204) as it was viewed by the first user (201).
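A minimal sketch (all names are illustrative, not from the patent) of the "first-user-viewing-experience information": the object's appearance plus the time-variant point of view from which the first user watched it, replayed inside the second user's virtual reality space.

```python
# Illustrative data structure and replay loop for the viewing-experience info.
from dataclasses import dataclass
from typing import List, Tuple

Vec3 = Tuple[float, float, float]

@dataclass
class ViewSample:
    t: float            # timestamp
    position: Vec3      # first user's point of view at time t
    direction: Vec3     # viewing direction at time t

@dataclass
class ViewingExperience:
    object_id: str                   # object of interest in the first VR content
    appearance_frames: List[bytes]   # imagery of the object as it appeared
    point_of_view: List[ViewSample]  # time-variant point of view

def superimpose(exp: ViewingExperience, anchor: Vec3):
    """Replay the object imagery at an anchor point in the second user's
    VR space, following the recorded time-variant point of view."""
    for frame, sample in zip(exp.appearance_frames, exp.point_of_view):
        # a renderer would draw `frame` at `anchor`, as seen from sample.position
        yield sample.t, anchor, sample.position, frame

exp = ViewingExperience("statue", [b"frame0"],
                        [ViewSample(0.0, (0.0, 2.0, -3.0), (0.0, 0.0, 1.0))])
print(list(superimpose(exp, anchor=(1.0, 0.0, 0.0))))
```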
Abstract:
A method comprising: providing a first three-dimensional (3D) point cloud obtained according to a first sensing technique, a second 3D point cloud obtained according to a second sensing technique, a first radius of a sphere covering a real object underlying the first 3D point cloud, and a second radius of a sphere covering a real object underlying the second 3D point cloud; defining scales of the first 3D point cloud and the second 3D point cloud based on said first radius and second radius; searching for statistically substantially similar candidate regions between the first 3D point cloud and the second 3D point cloud using an ensemble of shape functions (ESF); and aligning the statistically substantially similar candidate regions between the first 3D point cloud and the second 3D point cloud at least in the vertical direction. The aligned candidate regions are then further aligned between the 3D point clouds based on their structural similarity.
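A hedged numpy sketch of the scale-definition step: each cloud is normalized by the covering-sphere radius of its underlying real object, making the two clouds comparable. The ESF descriptor itself is only stood in for here by a simple random point-pair distance histogram; a real ESF implementation (e.g. PCL's) would be used in practice.

```python
# Sketch under assumptions: scale normalization by covering-sphere radius,
# plus a stand-in shape histogram for comparing candidate regions.
import numpy as np

def normalize_scale(cloud: np.ndarray, radius: float) -> np.ndarray:
    """cloud: N x 3 points; radius: covering-sphere radius of the real object."""
    centered = cloud - cloud.mean(axis=0)
    return centered / radius

def shape_histogram(cloud: np.ndarray, bins: int = 64, samples: int = 2000):
    """Stand-in descriptor (not ESF): histogram of random point-pair distances."""
    rng = np.random.default_rng(0)
    i = rng.integers(0, len(cloud), samples)
    j = rng.integers(0, len(cloud), samples)
    d = np.linalg.norm(cloud[i] - cloud[j], axis=1)
    hist, _ = np.histogram(d, bins=bins, range=(0.0, 2.0), density=True)
    return hist

# Regions whose histograms are close are "statistically substantially
# similar" candidates and proceed to vertical alignment.
a = normalize_scale(np.random.rand(500, 3), radius=5.0)
b = normalize_scale(np.random.rand(500, 3), radius=12.0)
print(np.linalg.norm(shape_histogram(a) - shape_histogram(b)))
```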
Abstract:
The embodiments relate to a method and to technical equipment for implementing the method. The method includes receiving an image with location information from a client; requesting processed region data from a media server based on the location information, said processed region data including one or more images with corresponding location information; applying a first process for determining a pose of a device for the received image by means of the processed region data; if the first process fails to yield the pose of the device, applying a second process for determining the pose of the device for the received image by means of the processed region data; saving the image with the determined pose of the device to the media server; and providing the image and the pose of the device to the client for client-side rendering.
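A minimal sketch of the two-stage fallback described above; the process implementations here are illustrative stubs, not the actual interface, and the specific pose representation is an assumption.

```python
# Illustrative two-stage pose determination with fallback.
def first_process(image, region_data):
    """Primary pose estimation (e.g. matching against region images).
    Returns a pose or None on failure."""
    return None  # stub: pretend the first process failed

def second_process(image, region_data):
    """Fallback pose estimation, applied only if the first process fails."""
    return {"position": (0.0, 0.0, 0.0), "rotation": (0.0, 0.0, 0.0, 1.0)}

def determine_pose(image, region_data):
    pose = first_process(image, region_data)
    if pose is None:                   # first process failed to yield a pose
        pose = second_process(image, region_data)
    return pose

print(determine_pose(image=b"...", region_data=[]))
```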
Abstract:
An approach is provided for processing, and/or facilitating a processing of, one or more images to determine camera location information, camera pose information, or a combination thereof associated with at least one camera capturing the one or more images, wherein the camera location information, the camera pose information, or a combination thereof is represented according to a global coordinate system. The approach involves causing, at least in part, an association of the camera location information, the camera pose information, or a combination thereof with the one or more images as metadata.
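A hedged sketch of the association step: camera location and pose, expressed in a global coordinate system (here WGS84 latitude/longitude/altitude plus a quaternion, an assumed convention), attached to an image as metadata via a sidecar JSON record. The file layout and field names are illustrative.

```python
# Illustrative association of global camera location/pose with an image.
from dataclasses import dataclass, asdict
from typing import Tuple
import json

@dataclass
class CameraMeta:
    lat: float                                    # global camera location
    lon: float
    alt: float
    pose_quat: Tuple[float, float, float, float]  # global camera orientation

def attach_metadata(image_path: str, meta: CameraMeta) -> dict:
    """Associate the camera metadata with the image (sidecar-JSON style)."""
    record = {"image": image_path, "camera": asdict(meta)}
    with open(image_path + ".meta.json", "w") as f:
        json.dump(record, f, indent=2)
    return record

print(attach_metadata("shot_0001.jpg",
                      CameraMeta(60.1699, 24.9384, 12.0, (0.0, 0.0, 0.0, 1.0))))
```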