Abstract:
A method performed by an electronic device is described. The method includes obtaining a combined image. The combined image includes a combination of images captured from one or more image sensors. The method also includes obtaining depth information. The depth information is based on a distance measurement between a depth sensor and at least one object in the combined image. The method further includes adjusting a combined image visualization based on the depth information.
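Below is a minimal sketch (not taken from the disclosure) of how a combined-image visualization parameter could be driven by a depth measurement. The frame concatenation, the `adjust_visualization` helper, and the rendering-surface-radius heuristic are illustrative assumptions standing in for the claimed combining and adjusting steps.

```python
import numpy as np

def combine_images(images):
    """Form a combined image from frames captured by multiple sensors.

    Real systems warp and blend overlapping fields of view; simple
    horizontal concatenation stands in for that step here.
    """
    return np.concatenate(images, axis=1)

def adjust_visualization(combined, object_distance_m,
                         min_radius_m=1.0, max_radius_m=20.0):
    """Adjust a hypothetical rendering-surface radius from depth information.

    Heuristic: pull the visualization surface in toward the nearest measured
    object so that close obstacles are not stretched, clamped to a sane range.
    """
    radius = float(np.clip(object_distance_m, min_radius_m, max_radius_m))
    return {"image": combined, "surface_radius_m": radius}

# Example: two 480x640 sensor frames and a 2.5 m depth-sensor reading.
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(2)]
view = adjust_visualization(combine_images(frames), object_distance_m=2.5)
print(view["image"].shape, view["surface_radius_m"])
```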
Abstract:
The example techniques of this disclosure are directed to generating a stereoscopic view from an application designed to generate a mono view. For example, the techniques may modify source code of a vertex shader so that the modified vertex shader, when executed, generates graphics content for the images of the stereoscopic view. As another example, the techniques may modify a command that defines a viewport for the mono view into commands that define viewports for the images of the stereoscopic view.
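A rough sketch, in Python for consistency, of the two ideas the abstract names: rewriting a mono vertex shader so the clip-space position is offset per eye, and replacing one full-window viewport with two half-width viewports. The shader text, the `u_eye_offset` uniform name, and the viewport tuples are assumptions for illustration, not the disclosed implementation.

```python
MONO_VERTEX_SHADER = """
uniform mat4 u_mvp;
attribute vec4 a_position;
void main() {
    gl_Position = u_mvp * a_position;
}
"""

def make_stereo_shader(mono_source):
    """Insert a per-eye horizontal offset into the clip-space position.

    u_eye_offset would be set to +d for one eye and -d for the other
    before each of the two draw passes.
    """
    stereo = mono_source.replace(
        "uniform mat4 u_mvp;",
        "uniform mat4 u_mvp;\nuniform float u_eye_offset;")
    stereo = stereo.replace(
        "gl_Position = u_mvp * a_position;",
        "gl_Position = u_mvp * a_position;\n"
        "    gl_Position.x += u_eye_offset * gl_Position.w;")
    return stereo

def split_viewport(x, y, width, height):
    """Turn one mono viewport into left/right half-width viewports."""
    half = width // 2
    return (x, y, half, height), (x + half, y, half, height)

# Example: print the rewritten shader and the two viewports for a 1920x1080 window.
print(make_stereo_shader(MONO_VERTEX_SHADER))
print(split_viewport(0, 0, 1920, 1080))
```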
Abstract:
An apparatus includes an object detector configured to receive image data of a scene that is viewed from the apparatus and that includes an object. The image data is associated with multiple scale space representations of the scene. The object detector is configured to detect the object responsive to location data and a first scale space representation of the multiple scale space representations.
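One way to read this, sketched below under stated assumptions: location data implies an approximate object size in the image, so the detector can select the single scale-space representation whose scale best matches a fixed detection window and search only that level. The pyramid construction, the `select_level` heuristic, and the 64-pixel window are illustrative assumptions, not the claimed apparatus.

```python
import numpy as np

def build_scale_space(image, num_levels=4):
    """Build a simple image pyramid (scale-space representations) by
    halving resolution at each level (no smoothing, for brevity)."""
    levels = [image]
    for _ in range(num_levels - 1):
        levels.append(levels[-1][::2, ::2])
    return levels

def select_level(expected_size_px, window_px=64, factor=0.5):
    """Pick the pyramid level where the expected object size, implied by
    location data, best matches the detector's fixed window size."""
    level = 0
    size = float(expected_size_px)
    while size > window_px and level < 10:
        size *= factor
        level += 1
    return level

# Example: location data suggests a ~200 px object; search only the level
# where it shrinks toward the 64 px detection window.
frame = np.zeros((480, 640), dtype=np.uint8)
pyramid = build_scale_space(frame)
lvl = min(select_level(200), len(pyramid) - 1)
print("searching level", lvl, "with shape", pyramid[lvl].shape)
```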