Abstract:
Systems and methods for capturing omnistereo content with a mobile device may include receiving an indication to capture a plurality of images of a scene; capturing the plurality of images using a camera associated with the mobile device; displaying on a screen of the mobile device, during capture, a representation of the plurality of images; and presenting a composite image that includes a target capture path and an indicator that provides alignment information corresponding to a source capture path traversed by the mobile device during capture of the plurality of images. The system may detect that a portion of the source capture path does not match the target capture path. The system can provide an updated indicator on the screen that may include a prompt for a user of the mobile device to adjust the mobile device to align the source capture path with the target capture path.
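The path-alignment check described above can be sketched roughly as follows. This is a hypothetical illustration, not the patented implementation: the circular target path, the tolerance value, and the prompt wording are all assumptions introduced here.

```python
import math

# Hypothetical sketch: compare sampled device positions (the source capture
# path) against a circular target capture path, and generate an alignment
# prompt whenever the deviation exceeds a tolerance. All constants are
# illustrative assumptions.

TARGET_RADIUS = 1.0   # assumed target capture path: a circle of this radius (meters)
TOLERANCE = 0.05      # assumed allowed deviation from the target path (meters)

def path_deviation(point):
    """Distance of a captured device position (x, y) from the circular target path."""
    x, y = point
    return abs(math.hypot(x, y) - TARGET_RADIUS)

def alignment_prompt(source_path):
    """Return a per-sample prompt when the source path drifts off the target path."""
    prompts = []
    for point in source_path:
        dev = path_deviation(point)
        if dev > TOLERANCE:
            direction = "inward" if math.hypot(*point) > TARGET_RADIUS else "outward"
            prompts.append(f"Move device {direction} by {dev:.2f} m")
        else:
            prompts.append(None)  # aligned: no prompt needed
    return prompts

# Example: two aligned samples, then one that drifted outward past the tolerance
print(alignment_prompt([(1.0, 0.0), (0.0, 1.02), (1.2, 0.0)]))
```

In a real capture loop the prompt would update an on-screen indicator continuously rather than returning a list, but the threshold comparison is the core of the described behavior.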
Abstract:
Systems and methods are related to a camera rig and generating stereoscopic panoramas from captured images for display in a virtual reality (VR) environment.
Abstract:
Systems and methods are described for defining a set of images based on captured images; receiving a viewing direction associated with a user of a virtual reality (VR) head-mounted display; and receiving an indication of a change in the viewing direction. The methods further include configuring a re-projection of a portion of the set of images, the re-projection based at least in part on the changed viewing direction and a field of view associated with the captured images; converting the portion from a spherical perspective projection into a planar perspective projection; rendering, by a computing device and for display in the VR head-mounted display, an updated view based on the re-projection, the updated view configured to correct distortion and provide stereo parallax in the portion; and providing, to the head-mounted display, the updated view including a stereo panoramic scene corresponding to the changed viewing direction.
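The spherical-to-planar conversion can be illustrated with standard equirectangular-to-pinhole mapping. The function below maps a pixel of a planar perspective view to spherical coordinates for a given viewing direction and field of view; the math is the conventional projection geometry and is an illustrative assumption, not the specific method claimed.

```python
import math

# Illustrative sketch: map a pixel (u, v) of a planar perspective view to
# (lon, lat) on the sphere, given a viewing direction (yaw) and a horizontal
# field of view. Sampling the spherical panorama at (lon, lat) for every
# output pixel performs the spherical-to-planar re-projection.

def planar_pixel_to_spherical(u, v, width, height, yaw, fov):
    """Return (lon, lat) in radians for planar-view pixel (u, v).

    yaw: viewing direction about the vertical axis (radians).
    fov: horizontal field of view of the planar view (radians).
    """
    # Focal length in pixels implied by the field of view
    f = (width / 2) / math.tan(fov / 2)
    # Offset from the view center
    x = u - width / 2
    y = v - height / 2
    # Ray direction: rotate the camera-space ray by the viewing direction
    lon = yaw + math.atan2(x, f)
    lat = math.atan2(-y, math.hypot(x, f))
    return lon, lat

# The center pixel of the view looks exactly along the viewing direction
lon, lat = planar_pixel_to_spherical(320, 240, 640, 480, yaw=math.pi / 4, fov=math.pi / 2)
print(lon, lat)
```

When the viewing direction changes, only `yaw` (and, in a full implementation, pitch) changes; re-evaluating this mapping per pixel yields the updated planar view.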
Abstract:
Aspects of the disclosure relate generally to providing a user with an image navigation experience. In order to do so, a reference image may be identified. A set of potential target images for the reference image may also be identified. A drag vector for user input relative to the reference image is determined. For each image of the set of target images, an associated cost is determined based at least in part on a cost function and the drag vector. A target image is selected based on the determined costs.
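The selection step above can be sketched as a minimum-cost search over candidates. The particular cost function below (angular mismatch between the drag vector and each target's direction, plus a small distance penalty) is an illustrative assumption; the abstract does not specify the cost terms.

```python
import math

# Hypothetical sketch: choose the target image whose associated cost, computed
# from the user's drag vector, is lowest. The cost terms and weights here are
# assumptions for illustration only.

def cost(drag_vector, target):
    """Cost combining misalignment with the drag direction and travel distance."""
    dx, dy = drag_vector
    drag_angle = math.atan2(dy, dx)
    diff = abs(drag_angle - target["direction"])
    diff = min(diff, 2 * math.pi - diff)  # wrap angular difference into [0, pi]
    return diff + 0.1 * target["distance"]

def select_target(drag_vector, targets):
    """Pick the candidate target image with the lowest associated cost."""
    return min(targets, key=lambda t: cost(drag_vector, t))

targets = [
    {"id": "north", "direction": math.pi / 2, "distance": 5.0},
    {"id": "east", "direction": 0.0, "distance": 3.0},
]
best = select_target((1.0, 0.05), targets)  # drag points nearly due east
print(best["id"])
```

Weighting direction more heavily than distance makes the navigation feel responsive to the gesture; a real system would tune these weights empirically.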
Abstract:
An exemplary method includes prompting a user to capture video data at a location. The location is associated with navigation directions for the user. Visual orientation and positioning information associated with the captured video data is received by one or more computing devices, and a stored data model representing a 3D geometry depicting objects associated with the location is accessed. One or more candidate change regions are detected between corresponding images from the captured video data and projections of the 3D geometry. Each candidate change region indicates an area of visual difference between the captured video data and the projections. When it is detected that a count of the one or more candidate change regions is below a threshold, the stored data model is updated with at least part of the captured video data based on the visual orientation and positioning information associated with the captured video data.
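The update decision above can be sketched as: count candidate change regions between captured frames and model projections, and accept the captured data only when that count falls below a threshold. The per-pixel difference heuristic, grayscale frames, and threshold value below are assumptions for illustration, not the claimed detection method.

```python
# Illustrative sketch of the threshold-gated model update described above.
# Frames are tiny grayscale grids (lists of rows of intensities); the diff
# limit and change threshold are illustrative assumptions.

CHANGE_THRESHOLD = 3  # assumed max candidate change regions before rejecting

def detect_change_regions(captured, projected, diff_limit=50):
    """Return pixel coordinates where a captured frame differs visibly from the projection."""
    regions = []
    for y, (row_c, row_p) in enumerate(zip(captured, projected)):
        for x, (c, p) in enumerate(zip(row_c, row_p)):
            if abs(c - p) > diff_limit:
                regions.append((x, y))
    return regions

def maybe_update_model(model, captured, projected):
    """Update the stored model with captured data only if the scene is mostly unchanged."""
    regions = detect_change_regions(captured, projected)
    if len(regions) < CHANGE_THRESHOLD:
        model["frames"].append(captured)  # orientation/positioning metadata omitted here
        return True
    return False

model = {"frames": []}
captured = [[100, 100], [100, 200]]   # one pixel differs strongly from the projection
projected = [[100, 100], [100, 100]]
updated = maybe_update_model(model, captured, projected)
print(updated)
```

Gating on a low change count keeps transient differences (e.g. a passing vehicle) from blocking updates while rejecting captures of scenes that have changed substantially.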