Abstract:
The locations of pixels in a frame [220] are adjusted at a display controller [104] after the frame has been generated by a graphics processing unit (GPU) [102] or other processor and provided to the display controller. The adjustment of the pixel locations therefore occurs as close as possible to a display panel in the display system, thereby supporting rapid changes to pixel positions.
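The following is a minimal sketch of this kind of late-stage adjustment, assuming a simple integer (dx, dy) shift applied at scan-out; the function name and the border fill are illustrative, not the claimed implementation.

```python
# Hypothetical sketch: the GPU renders the frame first, and only the final
# scan-out step in the display controller applies a small per-frame shift.
import numpy as np

def adjust_pixel_locations(frame: np.ndarray, dx: int, dy: int) -> np.ndarray:
    """Shift pixel locations of an H x W x C frame by (dx, dy) at scan-out,
    filling the exposed border with black."""
    adjusted = np.zeros_like(frame)
    h, w = frame.shape[:2]
    src_x = slice(max(0, -dx), min(w, w - dx))
    src_y = slice(max(0, -dy), min(h, h - dy))
    dst_x = slice(max(0, dx), min(w, w + dx))
    dst_y = slice(max(0, dy), min(h, h + dy))
    adjusted[dst_y, dst_x] = frame[src_y, src_x]
    return adjusted

# The shift is applied as late as possible, immediately before the panel
# receives the pixels, so it can track very recent position updates.
frame = np.zeros((1080, 1920, 3), dtype=np.uint8)
shifted = adjust_pixel_locations(frame, dx=4, dy=-2)
```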
Abstract:
An apparatus includes at least one camera and a refractor configured to change a direction of light toward a lens of the camera. The refractor has a shape configured to redirect light representing an image at two focal points; the two focal points are on the same plane and at the same distance from the at least one camera, and the refractor is positioned between the two focal points and the at least one camera.
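As a rough illustration of how a refractor redirects an incoming ray toward the lens, the sketch below applies Snell's law in vector form; the index ratio and the example vectors are placeholder assumptions, not values from the abstract.

```python
# Illustrative refraction of a ray at the refractor surface (Snell's law).
import numpy as np

def refract(direction: np.ndarray, normal: np.ndarray, eta: float):
    """Return the refracted unit direction, or None on total internal reflection.
    `direction` is the incident unit vector, `normal` is the surface unit normal
    pointing against the incident ray, and `eta` is the index ratio n1 / n2."""
    cos_i = -float(np.dot(normal, direction))
    sin2_t = eta * eta * (1.0 - cos_i * cos_i)
    if sin2_t > 1.0:
        return None  # total internal reflection: no transmitted ray
    cos_t = np.sqrt(1.0 - sin2_t)
    return eta * direction + (eta * cos_i - cos_t) * normal

# Light arriving from one of the two focal points is bent toward the lens axis.
incident = np.array([0.0, -0.6, -0.8])       # unit vector hitting the refractor
surface_normal = np.array([0.0, 0.0, 1.0])   # refractor surface normal
bent = refract(incident, surface_normal, eta=1.0 / 1.5)
```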
Abstract:
A method (400) includes obtaining (410), at one or more computing devices, an input image; determining (420) a first value of a quality metric for the input image; generating (430) a first chroma subsampled representation of the input image; and generating (440) a reconstructed image based on the first chroma subsampled representation. The method also includes determining a second value of the quality metric for the reconstructed image; determining an error value based on the first value of the quality metric and the second value of the quality metric; and generating a second chroma subsampled representation of the input image based in part on the error value.
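A minimal sketch of this feedback loop follows, assuming a placeholder quality metric and a simple adjustment rule (falling back to full-resolution chroma when the error exceeds a tolerance); neither is specified by the abstract.

```python
# Measure a quality metric before and after chroma subsampling, then use the
# error to drive a second, adjusted subsampling pass.
import numpy as np

def subsample_chroma(ycbcr: np.ndarray, factor: int):
    """Keep full-resolution luma, subsample both chroma planes by `factor`."""
    y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
    return y, cb[::factor, ::factor], cr[::factor, ::factor]

def reconstruct(y, cb, cr, factor: int) -> np.ndarray:
    """Upsample chroma by replication and reassemble the image."""
    cb_up = np.repeat(np.repeat(cb, factor, axis=0), factor, axis=1)[: y.shape[0], : y.shape[1]]
    cr_up = np.repeat(np.repeat(cr, factor, axis=0), factor, axis=1)[: y.shape[0], : y.shape[1]]
    return np.stack([y, cb_up, cr_up], axis=-1)

def quality_metric(image: np.ndarray) -> float:
    """Placeholder metric: mean local contrast of the chroma planes."""
    return float(np.abs(np.diff(image[..., 1:], axis=0)).mean())

def encode_with_feedback(ycbcr: np.ndarray, tolerance: float = 0.05):
    first_value = quality_metric(ycbcr)              # (420) metric for input image
    first = subsample_chroma(ycbcr, factor=2)        # (430) e.g. 4:2:0-style subsampling
    reconstructed = reconstruct(*first, factor=2)    # (440) reconstructed image
    second_value = quality_metric(reconstructed)
    error = abs(first_value - second_value)
    # Second representation generated based in part on the error value:
    factor = 1 if error > tolerance else 2           # 1 keeps chroma at full resolution
    return subsample_chroma(ycbcr, factor=factor), error
```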
Abstract:
A method includes receiving an indication of a field of view associated with a three-dimensional (3D) image being displayed on a head-mounted display (HMD), receiving an indication of a depth of view associated with the 3D image being displayed on the HMD, selecting a first right eye image and a second right eye image based on the field of view, combining the first right eye image and the second right eye image based on the depth of view, selecting a first left eye image and a second left eye image based on the field of view, and combining the first left eye image and the second left eye image based on the depth of view.
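The sketch below illustrates one way the per-eye selection and combination could work, assuming the two candidate images per eye are chosen from pre-rendered views by field of view and then linearly blended with a weight derived from the depth of view; the selection rule and the blend are assumptions, not the claimed method.

```python
# Select two views per eye by field of view, then blend them by depth of view.
import numpy as np

def select_pair(views: list, fov_deg: float):
    """Pick the two candidate views whose nominal FOV is closest to the request.
    Each view is a dict with keys "fov" (degrees) and "image" (array)."""
    ordered = sorted(views, key=lambda v: abs(v["fov"] - fov_deg))
    return ordered[0]["image"], ordered[1]["image"]

def combine(image_a: np.ndarray, image_b: np.ndarray, depth_of_view: float) -> np.ndarray:
    """Blend two images with a weight in [0, 1] derived from the depth of view."""
    w = float(np.clip(depth_of_view, 0.0, 1.0))
    return (w * image_a + (1.0 - w) * image_b).astype(image_a.dtype)

def render_stereo(right_views, left_views, fov_deg, depth_of_view):
    right = combine(*select_pair(right_views, fov_deg), depth_of_view)
    left = combine(*select_pair(left_views, fov_deg), depth_of_view)
    return left, right
```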
Abstract:
Systems and methods are described for capturing spherical content. The systems and methods can include determining a region within a plurality of images captured with a plurality of cameras in which to transform two-dimensional data into three-dimensional data, calculating a depth value for a portion of pixels in the region, generating a spherical image, the spherical image including image data for the portion of pixels in the region, constructing, using the image data, a three-dimensional surface in three-dimensional space of a computer graphics object generated by an image processing system, generating, using the image data, a texture mapping to a surface of the computer graphics object, and transmitting the spherical image and the texture mapping for display in a head-mounted display device.
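The following is a rough sketch, under simplifying assumptions, of how depth values computed for a region of a spherical (here assumed equirectangular) image could yield both a 3D surface and a texture mapping: each pixel's longitude, latitude, and depth become a 3D point, and its normalized image coordinates become the texture (u, v). The function and parameter names are illustrative.

```python
# Convert a depth region of an equirectangular image into 3-D surface points
# plus per-point texture coordinates into the spherical image.
import numpy as np

def region_to_surface(depth: np.ndarray, region: tuple, image_size: tuple):
    """depth: H x W depth values for the region; region: (row0, col0) offset of
    the region within the spherical image; image_size: (height, width) of that
    image. Returns (points, uvs): N x 3 surface points and N x 2 texture coords."""
    height, width = image_size
    row0, col0 = region
    rows, cols = np.mgrid[0:depth.shape[0], 0:depth.shape[1]]
    u = (cols + col0) / width                 # texture coordinate in [0, 1]
    v = (rows + row0) / height
    lon = (u - 0.5) * 2.0 * np.pi             # longitude in [-pi, pi]
    lat = (0.5 - v) * np.pi                   # latitude in [-pi/2, pi/2]
    x = depth * np.cos(lat) * np.cos(lon)
    y = depth * np.cos(lat) * np.sin(lon)
    z = depth * np.sin(lat)
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    uvs = np.stack([u, v], axis=-1).reshape(-1, 2)
    return points, uvs
```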
Abstract:
A method for encoding a spherical video is disclosed. The method includes mapping a frame of the spherical video to a two-dimensional representation based on a projection. Further, in a prediction process, the method includes determining whether at least one block associated with a prediction scheme is on a boundary of the two-dimensional representation and, upon determining that the at least one block associated with the prediction scheme is on the boundary, selecting an adjacent end block as a block including at least one pixel for use during the prediction process, the adjacent end block being associated with two or more boundaries of the two-dimensional representation.
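A minimal sketch of this wrap-around selection follows, assuming an equirectangular-style mapping and a left-neighbor prediction reference; when the neighbor would fall outside the 2D representation, the block at the opposite end of the same row is used instead, so prediction can still reference pixels that are adjacent on the sphere. The block layout and the left-neighbor rule are assumptions.

```python
# Wrap-around reference-block selection for blocks on the left boundary.
def left_reference_block(block_row: int, block_col: int, blocks_per_row: int):
    """Return the (row, col) of the block whose pixels feed the prediction."""
    if block_col == 0:
        # Current block lies on the left boundary of the 2-D representation:
        # select the adjacent end block on the right boundary of the same row.
        return block_row, blocks_per_row - 1
    return block_row, block_col - 1

# In a 64-block-wide representation, the left neighbor of the first block in a
# row wraps to the last block in that row.
assert left_reference_block(5, 0, 64) == (5, 63)
assert left_reference_block(5, 10, 64) == (5, 9)
```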
Abstract:
Processing an image having a first bit depth includes performing two or more iterations of a bit depth enhancement operation that increases the bit depth of the image to a second bit depth that is higher than the first bit depth. The bit depth enhancement operation includes dividing the image into a plurality of areas, performing an edge detection operation to identify one or more areas from the plurality of areas that do not contain edge features, and applying a blur to the one or more areas from the plurality of areas that do not contain edge features. In a first iteration of the bit depth enhancement operation, the plurality of areas includes a first number of areas, and the number of areas included in the plurality of areas decreases with each subsequent iteration of the bit depth enhancement operation.
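A simplified sketch of this iterative loop is shown below: the image is promoted to a higher bit depth, split into a grid of areas, and areas with no detected edges are blurred, with each iteration using a coarser grid. The gradient-based edge test, the area-mean "blur", and the grid sizes are placeholder assumptions.

```python
# Iterative bit depth enhancement: blur edge-free areas over coarser grids.
import numpy as np

def has_edges(area: np.ndarray, threshold: float) -> bool:
    """Crude edge test: any large horizontal or vertical gradient in the area."""
    gy = np.abs(np.diff(area.astype(np.float64), axis=0))
    gx = np.abs(np.diff(area.astype(np.float64), axis=1))
    return max(gy.max(initial=0.0), gx.max(initial=0.0)) > threshold

def enhance_bit_depth(image8: np.ndarray, grids=(16, 8, 4), threshold=8.0) -> np.ndarray:
    """Promote an 8-bit image to 16 bits (first -> second bit depth), then blur
    edge-free areas; the number of areas decreases with each iteration."""
    image = image8.astype(np.uint16) << 8
    h, w = image.shape[:2]
    for grid in grids:                        # 16x16, then 8x8, then 4x4 areas
        ys = np.linspace(0, h, grid + 1, dtype=int)
        xs = np.linspace(0, w, grid + 1, dtype=int)
        for y0, y1 in zip(ys[:-1], ys[1:]):
            for x0, x1 in zip(xs[:-1], xs[1:]):
                if not has_edges(image8[y0:y1, x0:x1], threshold):
                    # Degenerate box blur: replace the area with its mean value.
                    mean = image[y0:y1, x0:x1].mean(axis=(0, 1))
                    image[y0:y1, x0:x1] = np.asarray(mean).astype(np.uint16)
    return image
```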