Abstract:
This invention provides methods for spatially localized image editing. For example, an input image is divided into multiple tiles by binning each dimension. For each tile, a histogram is computed, along with local image statistics such as the mean, median, and cumulative histogram. Next, for each tile, a type of adjustment is determined and applied, including adjustments associated with Exposure, Brightness, Shadows, Highlights, Contrast, and Blackpoint. The adjustments are computed for all tiles in the input image to render a small adjustment image. The small image is then interpolated, for example using an edge-preserving interpolation, to produce a full-size adjustment image with an adjustment curve for each pixel. Subsequently, per-pixel image adjustments can be performed across the entire input image to render a final adjusted image.
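Below is a minimal sketch of the tiled pipeline described above, assuming a single-channel image with values in [0, 1]. The function names are hypothetical; only a simple exposure-style gain is computed per tile (rather than the full Exposure/Brightness/Shadows/Highlights/Contrast/Blackpoint set), and plain bilinear upsampling stands in for the edge-preserving interpolation.

    import numpy as np

    def tile_statistics(image, tiles_y=8, tiles_x=8, bins=256):
        # Per-tile histogram, mean, median, and cumulative histogram.
        h, w = image.shape
        th, tw = h // tiles_y, w // tiles_x
        stats = []
        for ty in range(tiles_y):
            row = []
            for tx in range(tiles_x):
                tile = image[ty * th:(ty + 1) * th, tx * tw:(tx + 1) * tw]
                hist, _ = np.histogram(tile, bins=bins, range=(0.0, 1.0))
                row.append({"mean": tile.mean(),
                            "median": np.median(tile),
                            "hist": hist,
                            "cum_hist": np.cumsum(hist) / tile.size})
            stats.append(row)
        return stats

    def small_adjustment_image(stats, target_mean=0.5):
        # One exposure-like gain per tile: the "small adjustment image".
        return np.array([[target_mean / max(s["mean"], 1e-6) for s in row] for row in stats])

    def apply_adjustments(image, small_adj):
        # Upsample the per-tile gains to full resolution and apply them per pixel.
        # Bilinear interpolation is used here in place of the edge-preserving
        # interpolation described in the abstract.
        h, w = image.shape
        ty = np.linspace(0, small_adj.shape[0] - 1, h)
        tx = np.linspace(0, small_adj.shape[1] - 1, w)
        y0, x0 = np.floor(ty).astype(int), np.floor(tx).astype(int)
        y1 = np.minimum(y0 + 1, small_adj.shape[0] - 1)
        x1 = np.minimum(x0 + 1, small_adj.shape[1] - 1)
        fy, fx = (ty - y0)[:, None], (tx - x0)[None, :]
        top = small_adj[np.ix_(y0, x0)] * (1 - fx) + small_adj[np.ix_(y0, x1)] * fx
        bot = small_adj[np.ix_(y1, x0)] * (1 - fx) + small_adj[np.ix_(y1, x1)] * fx
        return np.clip(image * (top * (1 - fy) + bot * fy), 0.0, 1.0)

    # Example on a synthetic grayscale image:
    img = np.random.rand(512, 512)
    out = apply_adjustments(img, small_adjustment_image(tile_statistics(img)))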
Abstract:
The techniques disclosed herein use a compass, MEMS accelerometer, GPS module, and MEMS gyrometer to infer a frame of reference for a hand-held device. This can provide a true Frenet frame for the device, i.e., X- and Y-vectors for the display, as well as a Z-vector that points perpendicularly to the display. In fact, with various inertial cues from the accelerometer, gyrometer, and other instruments that report their states in real time, it is possible to track the Frenet frame of the device in real time, providing a continuous 3D frame of reference. Once this continuous frame of reference is known, the position of the user's eyes may be either inferred or calculated directly using the device's front-facing camera. With the position of the user's eyes and a continuous 3D frame of reference for the display, more realistic virtual 3D depictions of the objects on the device's display may be created and interacted with by the user.
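The following sketch illustrates the two ingredients under simplified assumptions: a single static accelerometer and magnetometer reading stands in for the real-time sensor fusion described above, and a toy pinhole intersection stands in for a full rendering pipeline. All names are hypothetical.

    import numpy as np

    def device_frame(gravity, mag_north):
        # Build an orthonormal device frame from an accelerometer reading (gravity)
        # and a magnetometer reading (magnetic north). Z is perpendicular to the
        # display; X and Y lie in the display plane.
        z = -np.asarray(gravity, float)
        z /= np.linalg.norm(z)
        x = np.cross(np.asarray(mag_north, float), z)
        x /= np.linalg.norm(x)
        y = np.cross(z, x)
        return x, y, z

    def project_to_display(point, eye, frame, display_origin=np.zeros(3)):
        # Toy pinhole model: intersect the ray from the viewer's eye through a 3D
        # scene point with the display plane, returning display-plane coordinates.
        x, y, z = frame
        d = point - eye
        t = np.dot(display_origin - eye, z) / np.dot(d, z)
        hit = eye + t * d
        return np.dot(hit - display_origin, x), np.dot(hit - display_origin, y)

    # Device lying roughly face-up, viewer's eye half a meter above the screen:
    frame = device_frame(gravity=[0.1, 0.0, -9.8], mag_north=[0.3, 0.9, 0.1])
    print(project_to_display(np.array([0.0, 0.0, -0.2]), np.array([0.0, 0.0, 0.5]), frame))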
Abstract:
Techniques are provided for encoding an extended image such that it is backwards compatible with existing decoding devices. An extended image format is defined to be consistent with an existing image format over the full range of the existing image format. Because of this consistency, the additional image information included in an extended image can be extracted from it. A base version of the image (expressed using the existing image format) may be encoded in a payload portion of a widely supported image file format, and the extracted additional information may be stored in a metadata portion of that file format.
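The sketch below illustrates the idea with a toy container, assuming a float-valued image where the existing format's range is [0, 1]; a plain dict stands in for a real file format whose payload legacy decoders read and whose metadata segment they ignore. Names and encoding choices are hypothetical.

    import numpy as np

    def split_extended_image(extended, base_max=1.0):
        # Clip the extended-range image to the existing format's range to get the
        # base image legacy decoders will see; keep the per-pixel remainder as the
        # additional information an extended-aware decoder needs.
        base = np.clip(extended, 0.0, base_max)
        return base, extended - base

    def pack_file(base, extra):
        # Toy container: the base image goes in the payload, the extra information
        # in a metadata section that existing decoders simply ignore.
        return {
            "payload": base.astype(np.float32).tobytes(),
            "metadata": {
                "shape": list(base.shape),
                "extended_residual": extra.astype(np.float32).tobytes().hex(),
            },
        }

    def unpack_extended(container):
        # An extended-aware decoder recombines payload and metadata.
        shape = tuple(container["metadata"]["shape"])
        base = np.frombuffer(container["payload"], np.float32).reshape(shape)
        extra = np.frombuffer(bytes.fromhex(container["metadata"]["extended_residual"]),
                              np.float32).reshape(shape)
        return base + extra

    hdr = np.array([[0.2, 1.7], [0.9, 2.4]])   # values above 1.0 exceed the legacy range
    base, extra = split_extended_image(hdr)
    assert np.allclose(unpack_extended(pack_file(base, extra)), hdr)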
Abstract:
This disclosure pertains to apparatuses, methods, and computer readable media for mapping particular user interactions, e.g., gestures, to the input parameters of various image filters, while simultaneously setting auto exposure, auto focus, auto white balance, and/or other image processing input parameters based on the appropriate underlying image sensor data, in a way that provides a seamless, dynamic, and intuitive experience for both the user and the client application software developer. Such techniques may handle image filters that apply location-based distortions as well as those that do not apply location-based distortions to the captured image data. Additionally, techniques are provided for increasing the performance and efficiency of various image processing systems when they are employed in conjunction with image filters that do not require all of an image sensor's captured image data to produce their desired image filtering effects.
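As an illustration of the gesture-to-parameter mapping, the sketch below assumes a distortion filter that takes a center and a radius; the coordinate scaling, and the region-of-interest crop for filters that touch only part of the frame, are shown with hypothetical names and values.

    def gesture_to_filter_params(touch_xy, pinch_scale, view_size, image_size):
        # Map a touch location (view/screen coordinates) and a pinch scale from a
        # gesture recognizer to the inputs of a hypothetical distortion filter.
        # View coordinates are rescaled into sensor/image coordinates so the
        # distortion lands where the user actually touched.
        sx = image_size[0] / view_size[0]
        sy = image_size[1] / view_size[1]
        center = (touch_xy[0] * sx, touch_xy[1] * sy)
        radius = 0.25 * min(image_size) * pinch_scale
        return {"center": center, "radius": radius}

    def region_of_interest(params, image_size):
        # For filters that only affect part of the frame, crop the sensor data to
        # the affected region before filtering to save bandwidth and computation.
        cx, cy = params["center"]
        r = params["radius"]
        x0, y0 = max(int(cx - r), 0), max(int(cy - r), 0)
        x1, y1 = min(int(cx + r), image_size[0]), min(int(cy + r), image_size[1])
        return x0, y0, x1, y1

    # A pinch gesture on a 320x480 view driving a crop of a 1920x2880 sensor frame:
    params = gesture_to_filter_params((160, 240), 1.5, (320, 480), (1920, 2880))
    print(params, region_of_interest(params, (1920, 2880)))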
Abstract:
An automated RAW image processing method and system are disclosed. A RAW image and metadata related to the RAW image are obtained from a digital camera or other source. The RAW image and the related metadata are automatically processed using an Operating System service of a processing device to produce a resulting image in an absolute color space. During the automatic processing, a predetermined tone reproduction curve is applied to the interpolated RAW image to produce the resulting image. The predetermined tone reproduction curve is derived from a plurality of reference images and is selected based on the metadata associated with the RAW image. The resulting image is then made available to an application program executing on the processing device through an application program interface with the Operating System service.
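A schematic of the curve-selection step, assuming the RAW data has already been demosaiced (interpolated) to linear values; the camera model, ISO threshold, and gamma-style curves below are made up purely for illustration.

    import numpy as np

    # Hypothetical library of predetermined tone curves, each derived offline from
    # reference images and keyed by camera model and an ISO bucket.
    TONE_CURVES = {
        ("ExampleCam", "low_iso"):  lambda x: x ** (1 / 2.2),
        ("ExampleCam", "high_iso"): lambda x: np.minimum(1.1 * x ** (1 / 2.4), 1.0),
    }

    def select_tone_curve(metadata):
        # Pick a predetermined curve using the metadata that accompanies the RAW file.
        iso_key = "high_iso" if metadata.get("iso", 100) >= 800 else "low_iso"
        return TONE_CURVES[(metadata["camera_model"], iso_key)]

    def render_raw(demosaiced_linear, metadata):
        # Apply the selected curve to the already-demosaiced (interpolated) RAW data,
        # yielding values suitable for an absolute, display-referred color space.
        curve = select_tone_curve(metadata)
        return np.clip(curve(np.asarray(demosaiced_linear, float)), 0.0, 1.0)

    out = render_raw(np.random.rand(4, 4, 3), {"camera_model": "ExampleCam", "iso": 1600})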