Abstract:
Range-adaptive dynamic metadata generation for high dynamic range images includes generating, using computer hardware, histogram-based data for video including one or more frames. The histogram-based data is generated for each of a plurality of dynamic ranges. For each dynamic range, a predetermined amount of dynamic metadata for the video is generated from the histogram-based data for that dynamic range. The video and the dynamic metadata are output.
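A minimal sketch of the per-range histogram step, assuming NumPy, per-frame luminance arrays in nits, and percentile-style metadata; the function name, bin count, and chosen percentiles are illustrative assumptions, not taken from the abstract:

```python
import numpy as np

def generate_dynamic_metadata(frames, dynamic_ranges, num_bins=256):
    """Build histogram-based metadata for each target dynamic range.

    frames: list of (H, W) luminance arrays in nits.
    dynamic_ranges: list of peak luminance values, e.g. [500, 1000, 4000].
    """
    metadata = {}
    for peak in dynamic_ranges:
        hist = np.zeros(num_bins)
        pixels = []
        for frame in frames:
            clipped = np.clip(frame, 0.0, peak)  # restrict to this dynamic range
            counts, _ = np.histogram(clipped, bins=num_bins, range=(0.0, peak))
            hist += counts
            pixels.append(clipped.ravel())
        all_pixels = np.concatenate(pixels)
        metadata[peak] = {                       # fixed-size metadata per range
            "histogram": hist / hist.sum(),
            "median_nits": float(np.percentile(all_pixels, 50)),
            "p99_nits": float(np.percentile(all_pixels, 99)),
        }
    return metadata
```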
Abstract:
One embodiment provides a computer-implemented method that includes receiving region information from a stationary region detection process for a video. A processor performs a flat region ghosting artifact removal process that updates the region information with a flat region indicator, utilizing the region information and the video. The processor further performs a region-based luminance reduction process utilizing the updated region information with the flat region indicator for display ghosting artifact removal and burn-in protection.
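A minimal sketch of the two processes, assuming NumPy, an integer region-id mask, and a variance test for "flat"; excluding flat regions from reduction (to avoid visible ghosting) is an assumption about a detail the abstract leaves open, and the thresholds are illustrative:

```python
import numpy as np

FLAT_VAR_THRESHOLD = 4.0  # assumed luminance-variance threshold for "flat"
REDUCTION_RATIO = 0.9     # assumed luminance scale for stationary regions

def update_flat_region_flags(region_mask, frame_luma):
    """Update region information with a flat region indicator.

    region_mask: (H, W) int array, 0 = background, k > 0 = stationary region id.
    frame_luma:  (H, W) float luminance array.
    """
    return {int(rid): float(np.var(frame_luma[region_mask == rid])) < FLAT_VAR_THRESHOLD
            for rid in np.unique(region_mask) if rid != 0}

def reduce_region_luminance(frame_luma, region_mask, flat_flags):
    """Region-based luminance reduction for burn-in protection."""
    out = frame_luma.copy()
    for rid, is_flat in flat_flags.items():
        if not is_flat:  # skip flat regions, where reduction would read as ghosting
            out[region_mask == rid] *= REDUCTION_RATIO
    return out
```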
Abstract:
One embodiment provides a computer-implemented method that includes adaptively adjusting a detection time interval based on a stationary region type of one or more stationary regions and a scene length in a video. The method further includes tracking pixels of the one or more stationary regions from a number of previous frames to a current frame in the video in real time. A minimum and a maximum of max-Red-Green-Blue (MaxRGB) pixel values are extracted from each frame in a scene of the video as minimum and maximum temporal feature maps representing pixel variance over time. Segmentation and block matching are applied on the minimum and maximum temporal feature maps to detect the stationary region type.
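A minimal sketch of the temporal feature maps, assuming NumPy RGB frames; the tolerance used to call a pixel stationary is an illustrative assumption:

```python
import numpy as np

def temporal_feature_maps(scene_frames):
    """Extract per-pixel min/max of MaxRGB over one scene.

    scene_frames: list of (H, W, 3) RGB arrays for one scene.
    MaxRGB is the per-pixel maximum over the R, G, B channels; the min and
    max maps together represent pixel variance over time.
    """
    maxrgb = np.stack([f.max(axis=2) for f in scene_frames])  # (T, H, W)
    return maxrgb.min(axis=0), maxrgb.max(axis=0)

def stationary_candidates(min_map, max_map, tol=2.0):
    """Pixels whose MaxRGB barely changes over the scene; segmentation and
    block matching would then classify the stationary region type."""
    return (max_map - min_map) < tol
```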
Abstract:
One embodiment provides a method comprising receiving an input content, and receiving ambient contextual data indicative of one or more ambient lighting conditions of an environment including a display device. The input content has corresponding metadata that at least partially represents a creative intent indicative of how the input content is intended to be viewed. The method further comprises adaptively correcting the input content based on the ambient contextual data to preserve the creative intent, and providing the corrected input content to the display device for presentation. The adaptively correcting comprises applying automatic white balancing to the input content to correct color tone of the input content.
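The abstract does not name the white-balancing algorithm; a common choice is the gray-world assumption, sketched below with NumPy:

```python
import numpy as np

def gray_world_awb(rgb):
    """Gray-world automatic white balance: scale each channel so the channel
    means converge, correcting color tone shifts (e.g., from ambient light).

    rgb: (H, W, 3) float array in [0, 1].
    """
    means = rgb.reshape(-1, 3).mean(axis=0)
    gains = means.mean() / np.maximum(means, 1e-6)  # per-channel correction gains
    return np.clip(rgb * gains, 0.0, 1.0)
```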
Abstract:
A method, apparatus, and non-transitory computer readable medium for video tone mapping. The method includes receiving the video and determining parameters of a tone mapping function defined by a Bezier curve for processing the video. The method also includes generating, by at least one processor, a tone mapped video by applying the tone mapping function to the video using the determined parameters.
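A minimal sketch of evaluating a Bezier-defined tone curve, assuming an explicit Nth-order Bezier in Bernstein form with endpoints pinned at 0 and 1 (standards such as SMPTE ST 2094-40 use this form, typically with an additional knee point, omitted here); the anchor values are illustrative:

```python
import numpy as np
from math import comb

def bezier_tone_map(x, anchors):
    """Evaluate an explicit Nth-order Bezier tone curve at normalized input x.

    x: array of normalized luminance in [0, 1].
    anchors: intermediate control points P_1..P_{N-1}; P_0 = 0 and P_N = 1
    are fixed so the curve maps [0, 1] onto [0, 1].
    """
    p = np.concatenate(([0.0], np.asarray(anchors, dtype=float), [1.0]))
    n = len(p) - 1
    x = np.asarray(x, dtype=float)
    # Bernstein form: B(x) = sum_k C(n, k) * (1 - x)^(n - k) * x^k * P_k
    return sum(comb(n, k) * (1 - x) ** (n - k) * x ** k * p[k] for k in range(n + 1))

# Example: a 4th-order curve that lifts midtones and rolls off highlights.
y = bezier_tone_map(np.linspace(0.0, 1.0, 5), anchors=[0.4, 0.7, 0.9])
```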
Abstract:
A method includes modeling, by a computing device, color saturation variations of different hues in a working color space with one or more properties of responses of a human visual system (HVS) to a color stimulus. The computing device further generates one or more color saturation variation models based on the responses of the HVS to the color stimulus.
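One way to gather raw data such a model could be fit to is a per-hue saturation profile; the sketch below uses HSV as the working color space, which is an assumption (the abstract does not name the space):

```python
import colorsys
import numpy as np

def saturation_by_hue(rgb_pixels, num_hue_bins=36):
    """Mean saturation per hue bin, as input for a saturation-variation model.

    rgb_pixels: (N, 3) float array in [0, 1].
    Returns (hue bin centers in degrees, mean saturation per bin).
    """
    hsv = np.array([colorsys.rgb_to_hsv(*p) for p in rgb_pixels])
    bins = (hsv[:, 0] * num_hue_bins).astype(int) % num_hue_bins
    mean_sat = np.array([hsv[bins == b, 1].mean() if np.any(bins == b) else 0.0
                         for b in range(num_hue_bins)])
    centers = (np.arange(num_hue_bins) + 0.5) * 360.0 / num_hue_bins
    return centers, mean_sat
```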
Abstract:
One embodiment provides a method comprising determining multi-dimensional metadata corresponding to an input image, and determining ambient light information indicative of a level of ambient light in an ambient environment of a display device. The multi-dimensional metadata comprises a cumulative distribution function (CDF) of pixels in the input image. The method further comprises determining, based on the multi-dimensional metadata and the ambient light information, one or more gains that adaptively compensate for the level of ambient light in the ambient environment. The method further comprises generating a tone mapping function based on the one or more gains, and applying the tone mapping function to the input image to generate a tone-mapped image that adaptively compensates for the level of ambient light in the ambient environment. The tone-mapped image is provided to the display device for presentation.
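A minimal sketch of the CDF metadata and an ambient-light gain, assuming NumPy and normalized luminance; the gain heuristic and its constants are illustrative assumptions, not the patented function:

```python
import numpy as np

def image_cdf(luma, num_bins=256):
    """CDF of pixel luminance (the abstract's multi-dimensional metadata
    includes this CDF; any other components are omitted here).

    luma: array of normalized luminance in [0, 1].
    """
    hist, _ = np.histogram(luma, bins=num_bins, range=(0.0, 1.0))
    return np.cumsum(hist) / hist.sum()

def ambient_gain(cdf, ambient_lux, dark_bin=64):
    """Heuristic gain: lift a dark-heavy image more when the room is bright."""
    dark_fraction = cdf[dark_bin]                       # share of pixels in low bins
    boost = np.log1p(ambient_lux) / np.log1p(10_000.0)  # 0 (dark room) .. ~1
    return 1.0 + dark_fraction * boost

def apply_tone_map(luma, gain, gamma=0.9):
    """Simple gain-plus-gamma curve standing in for the generated tone
    mapping function."""
    return np.clip((np.asarray(luma) * gain) ** gamma, 0.0, 1.0)
```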
Abstract:
A method includes receiving a quantized video having a first maximum luminance level or brightness associated with a mastering display with which the quantized video was mastered. The method also includes de-quantizing the quantized video in order to generate a de-quantized video. The method further includes applying a scene-adaptive tone mapping function to the de-quantized video in order to generate a tone mapped video. In addition, the method includes displaying the tone mapped video on a target display. The tone mapped video has a second maximum luminance level or brightness associated with the target display, and the second maximum luminance level or brightness is less than the first maximum luminance level or brightness.
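A minimal sketch assuming PQ (SMPTE ST 2084) quantization, whose published EOTF constants appear below; the tone curve is a simple rational stand-in, not the patented scene-adaptive function:

```python
import numpy as np

# Published SMPTE ST 2084 (PQ) EOTF constants
M1, M2 = 2610 / 16384, 2523 / 4096 * 128
C1, C2, C3 = 3424 / 4096, 2413 / 4096 * 32, 2392 / 4096 * 32

def pq_eotf(code):
    """De-quantize PQ-coded values in [0, 1] to linear luminance in nits."""
    p = np.power(np.asarray(code, dtype=float), 1.0 / M2)
    return 10000.0 * np.power(np.maximum(p - C1, 0.0) / (C2 - C3 * p), 1.0 / M1)

def tone_map_to_target(nits, source_peak, target_peak, a=0.25):
    """Compress linear luminance from the mastering peak to the smaller target
    peak; 'a' controls highlight roll-off (illustrative value)."""
    x = np.clip(nits / source_peak, 0.0, 1.0)
    return target_peak * x * (1.0 + a) / (x + a)  # maps 0 -> 0, peak -> target peak
```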
Abstract:
One embodiment provides a computer-implemented method that includes providing a dynamic list structure that stores one or more detected object bounding boxes. Temporal analysis is applied that updates the dynamic list structure with object validation to reduce temporal artifacts. A two-dimensional (2D) buffer is utilized to store a luminance reduction ratio of a whole video frame. The luminance reduction ratio is applied to each pixel in the whole video frame based on the 2D buffer. One or more spatial smoothing filters are applied to the 2D buffer to reduce the likelihood of one or more spatial artifacts occurring in a luminance-reduced region.
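A minimal sketch of the 2D reduction-ratio buffer and its spatial smoothing, assuming NumPy and SciPy; the bounding boxes are taken as already validated by the temporal analysis, and the ratio/sigma values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def build_reduction_buffer(shape, boxes, ratio=0.85, sigma=8.0):
    """Fill a per-pixel luminance-reduction-ratio buffer from validated
    object bounding boxes, then smooth it spatially.

    shape: (H, W); boxes: list of (x0, y0, x1, y1) pixel coordinates.
    """
    buf = np.ones(shape, dtype=float)
    for x0, y0, x1, y1 in boxes:
        buf[y0:y1, x0:x1] = ratio              # reduce luminance inside each box
    return gaussian_filter(buf, sigma=sigma)   # smooth to avoid spatial artifacts

def apply_reduction(frame_luma, buf):
    """Apply the per-pixel reduction ratio to the whole frame."""
    return frame_luma * buf
```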