Abstract:
Methods and systems to improve the visual perception of dark scenes in video. An example device includes one or more processors to receive a frame of video segmented into a plurality of sub-regions. A local luminance histogram is generated for each sub-region. A global luminance histogram is generated for the entire frame of video, and a global tone mapping curve is generated based on the global luminance histogram. A tone mapping LUT is generated for each sub-region based on the global tone mapping curve and the corresponding local luminance histogram for the sub-region. The frame of video is then modified using the tone mapping LUTs generated for each sub-region and sent to an output device.
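The per-sub-region LUT construction described above can be sketched as follows. This is a minimal illustration, not the patented implementation: the tile grid, the histogram-equalization tone curve, and the `blend` weight that mixes the global curve with each tile's local histogram are all illustrative assumptions.

```python
def tone_map_frame(luma, tiles=2, blend=0.5):
    """Sketch: build a tone-mapping LUT per tile by blending a global
    equalization curve with each tile's local luminance histogram.
    `luma` is a 2D list of 8-bit values; `tiles` and `blend` are
    illustrative assumptions, not parameters from the source."""
    h, w = len(luma), len(luma[0])

    def curve(pixels):
        # Histogram -> cumulative distribution, scaled to 0..255.
        hist = [0] * 256
        for v in pixels:
            hist[v] += 1
        cdf, acc, total = [], 0, len(pixels)
        for count in hist:
            acc += count
            cdf.append(acc * 255 // total)
        return cdf

    g_curve = curve([v for row in luma for v in row])  # global tone curve
    out = [[0] * w for _ in range(h)]
    th, tw = h // tiles, w // tiles
    for ty in range(tiles):
        for tx in range(tiles):
            region = [luma[y][x]
                      for y in range(ty * th, (ty + 1) * th)
                      for x in range(tx * tw, (tx + 1) * tw)]
            l_curve = curve(region)
            # Per-tile LUT: global curve biased by the local histogram.
            lut = [int((1 - blend) * g + blend * l)
                   for g, l in zip(g_curve, l_curve)]
            for y in range(ty * th, (ty + 1) * th):
                for x in range(tx * tw, (tx + 1) * tw):
                    out[y][x] = lut[luma[y][x]]
    return out
```

A uniformly dark frame is pulled up toward full range, since both the global and local cumulative curves concentrate at the low bins.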
Abstract:
Techniques related to accelerated video enhancement using deep learning selectively applied based on video codec information are discussed. Such techniques include applying a deep learning video enhancement network selectively to decoded non-skip blocks that are in low quantization parameter frames, bypassing the deep learning network for decoded skip blocks in low quantization parameter frames, and applying non-deep learning video enhancement to high quantization parameter frames.
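The selective application described above amounts to a two-level dispatch on codec metadata. The sketch below illustrates that control flow only; the QP threshold and the string labels standing in for the deep-learning network, the bypass path, and the non-deep-learning filter are illustrative assumptions.

```python
def route_enhancement(frame_qp, blocks, qp_threshold=30):
    """Sketch of codec-guided routing: high-QP frames get a cheaper
    non-deep-learning filter; in low-QP frames, skip blocks bypass the
    deep network and only non-skip blocks are run through it.
    `qp_threshold` and the route labels are illustrative assumptions.
    `blocks` is a list of dicts with "id" and "skip" keys."""
    if frame_qp > qp_threshold:
        # High-QP frame: apply non-deep-learning enhancement throughout.
        return [("filter", b["id"]) for b in blocks]
    routed = []
    for b in blocks:
        if b["skip"]:
            routed.append(("bypass", b["id"]))  # skip block: no network
        else:
            routed.append(("dl", b["id"]))      # non-skip: deep network
    return routed
```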
Abstract:
Systems, apparatuses, and methods may provide for technology to improve the user experience when viewing simulated 3D objects on a display. Head and upper-body movements may be tracked and recognized as gestures to alter the displayed viewing angle. The technology provides a natural way to look around, under, or over objects.
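Mapping a tracked head offset to a displayed viewing angle can be sketched with simple parallax geometry. This is a minimal illustration under stated assumptions: the offset units, the viewing distance, and the `gain` factor are hypothetical, and the abstract's gesture recognition itself is not modeled here.

```python
import math

def view_angles(head_dx, head_dy, distance, gain=1.0):
    """Sketch: convert a tracked head offset (relative to the screen
    center, same units as `distance`) into yaw/pitch in degrees for a
    simulated 3D view. `gain` exaggerates the parallax and is an
    illustrative assumption."""
    yaw = math.degrees(math.atan2(head_dx * gain, distance))
    pitch = math.degrees(math.atan2(head_dy * gain, distance))
    return yaw, pitch
```

Moving the head to the side yields a yaw that lets the renderer reveal the object's flank, which is the "look around" behavior the abstract describes.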
Abstract:
Methods, apparatus, systems, and articles of manufacture are disclosed to improve video encoding. An example apparatus includes at least one memory, instructions, and processor circuitry to generate a pool of clipping index set candidates by executing a machine learning model, select a clipping index set from the pool of clipping index set candidates based on a rate-distortion cost associated with the clipping index set, the clipping index set including clipping coefficients, and filter a video frame based on the clipping coefficients.
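The selection and filtering steps above can be sketched as follows. The `rd_cost` callable stands in for the encoder's rate-distortion evaluation, and the single-sample clipped filter is a generic nonlinear (ALF-style) illustration, not the patented filter; both are assumptions.

```python
def select_clipping_set(candidates, rd_cost):
    """Sketch: from an ML-generated pool of clipping index sets, keep
    the one with the lowest rate-distortion cost. `rd_cost` is an
    assumed stand-in for the encoder's RD evaluation."""
    return min(candidates, key=rd_cost)

def clip(diff, c):
    """Limit a neighbor difference to the range [-c, c]."""
    return max(-c, min(c, diff))

def filter_sample(center, neighbors, weights, clips):
    """Sketch of nonlinear filtering for one sample: each neighbor
    difference is clipped by its per-tap coefficient before being
    weighted and added back to the center value."""
    acc = sum(w * clip(n - center, c)
              for n, w, c in zip(neighbors, weights, clips))
    return center + acc
```

With symmetric neighbors and equal weights, the clipped differences cancel, so flat regions pass through unchanged while large outlier differences are bounded by the clipping coefficients.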
Abstract:
Systems, apparatuses, and methods may include technology to bundle on-demand video frames into clusters having similar encode times, based on predicted performance determined by weighted heuristics.
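The bundling idea can be sketched as a weighted-heuristic time predictor followed by greedy clustering. The feature names, weights, greedy strategy, and `tolerance` are all illustrative assumptions; the abstract does not specify the heuristics or the clustering method.

```python
def predict_encode_time(frame, weights):
    """Weighted-heuristic estimate of a frame's encode time.
    `frame` maps feature names to values; the features and weights
    are illustrative assumptions."""
    return sum(weights[k] * frame[k] for k in weights)

def bundle_frames(frames, weights, tolerance=0.2):
    """Sketch: sort frames by predicted encode time, then greedily
    group frames whose predictions fall within `tolerance` of the
    first frame in the current cluster."""
    scored = sorted(((predict_encode_time(f, weights), f) for f in frames),
                    key=lambda pair: pair[0])
    clusters, current, base = [], [], None
    for t, f in scored:
        if base is None or t - base > tolerance:
            if current:
                clusters.append(current)
            current, base = [], t  # start a new cluster at this time
        current.append(f)
    if current:
        clusters.append(current)
    return clusters
```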
Abstract:
An apparatus for edge aware upscaling is described herein. The apparatus comprises a potential edge detector, a thin-edge detector, a one-directional edge detector, a correlation detector, and a corrector. The potential edge detector identifies potential edge pixels in an input image, and the thin-edge detector detects thin edges in the potential edge pixels of the input image. The one-directional edge detector detects one-directional edges in the potential edge pixels of the input image, and the correlation detector detects strongly correlated edges in the potential edge pixels of the input image. The corrector derives a target output value based on an edge type and classification of a corresponding input pixel as identified by a source map point.
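The cascade of detectors above can be sketched as a per-pixel classifier whose result the corrector would use to pick an interpolation. The thresholds, the input features (horizontal/vertical gradients plus a correlation score), and the ordering of the tests are illustrative assumptions, not the apparatus's actual logic.

```python
def classify_edge(gx, gy, corr, t_edge=32, t_dir=4.0, t_corr=0.8):
    """Sketch of the detector cascade: a pixel is first screened as a
    potential edge from its gradient magnitude, then refined into
    strongly-correlated, one-directional, or thin-edge classes so a
    corrector can choose a matching target output value. All
    thresholds are illustrative assumptions."""
    magnitude = abs(gx) + abs(gy)
    if magnitude < t_edge:
        return "flat"             # not a potential edge pixel
    if corr >= t_corr:
        return "correlated"       # strongly correlated edge
    if abs(gx) > t_dir * abs(gy) or abs(gy) > t_dir * abs(gx):
        return "one-directional"  # one gradient direction dominates
    return "thin"                 # remaining potential edges
```

A source map built from these labels would then tell the corrector, per output pixel, which edge type and classification to honor when deriving the target value.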