Abstract:
A data processing apparatus at a transmitter end has an output interface and a display controller. The output interface packs compressed display data into an output bitstream, and outputs the output bitstream via a display interface. The display controller refers to a compression characteristic of the compressed display data to configure a transmission setting of the output interface over the display interface (e.g., the number of data lines, the operating frequency of each data line, and/or the behavior in the blanking period). A data processing apparatus at a receiver end has an input interface and a controller. The input interface receives an input bitstream via a display interface, and un-packs the input bitstream into the compressed display data transmitted over the display interface. The controller configures a reception setting of the input interface over the display interface in response to a compression characteristic of the compressed display data.
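The configuration step above can be sketched in Python. This is a minimal illustration, not the claimed implementation: the function name, the compression-ratio thresholds, and the base lane/frequency values are all assumptions chosen to show how a compression characteristic could drive a transmission setting.

```python
# Hypothetical sketch: derive a display-interface transmission setting
# (data-line count, per-line frequency) from a compression ratio.
# All thresholds and defaults below are illustrative assumptions.

def configure_transmission(compression_ratio, base_lanes=4, base_freq_mhz=500):
    """Scale down the active data lines or each line's operating
    frequency when the display data is compressed, since the
    compressed stream needs less link bandwidth."""
    if compression_ratio >= 2.0:
        # Half the bandwidth suffices: disable half of the data lines.
        return {"lanes": max(1, base_lanes // 2), "freq_mhz": base_freq_mhz}
    if compression_ratio > 1.0:
        # Keep all lines active but lower each line's frequency.
        return {"lanes": base_lanes,
                "freq_mhz": int(base_freq_mhz / compression_ratio)}
    # Uncompressed data: use the full transmission setting.
    return {"lanes": base_lanes, "freq_mhz": base_freq_mhz}
```

A receiver-side controller would apply the same mapping to configure its reception setting, so both ends agree on the link configuration.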
Abstract:
A method and apparatus for applying deblocking filter (DF) processing and sample adaptive offset (SAO) processing to reconstructed video data are disclosed. The DF processing is applied to a current access element of the reconstructed video data to generate DF output data, and the deblocking status is determined while the DF processing is applied. Status-dependent SAO processing is applied to one or more pixels of the DF output data according to the deblocking status. The status-dependent SAO processing comprises SAO processing, partial SAO processing, and no SAO processing. The SAO starting time for SAO processing is between the DF-output starting time and the DF-output ending time for the current block. The DF starting time of a next block can be earlier than the SAO ending time of the current block by a period of t, where t is smaller than the time difference between the DF-output starting time and the DF starting time of the next block.
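The status-dependent selection can be sketched as a simple dispatch on the deblocking status. The status names below are illustrative assumptions, not terms from the claims; the point is only that each status maps to one of the three SAO behaviors.

```python
# Illustrative sketch of status-dependent SAO selection. The status
# strings are hypothetical labels for the deblocking state of a pixel.

def select_sao(deblock_status):
    """Map a pixel's deblocking status to the SAO processing applied."""
    if deblock_status == "df_complete":
        return "sao"            # DF output final: apply full SAO processing
    if deblock_status == "df_partial":
        return "partial_sao"    # only SAO steps whose DF inputs are final
    return "no_sao"             # DF output not yet available for this pixel
```

This is what lets SAO for the current block start before DF of the next block finishes: pixels whose DF output is final are processed immediately, and the rest are deferred or partially processed.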
Abstract:
An exemplary decoding method of an input video bitstream including a first bitstream and a second bitstream includes: decoding a first picture in the first bitstream; after required decoded data derived from decoding the first picture are ready for a first decoding operation of a second picture in the first bitstream, performing the first decoding operation; and after required decoded data derived from decoding the first picture are ready for a second decoding operation of a picture in the second bitstream, performing the second decoding operation, wherein a time period of decoding the second picture in the first bitstream and a time period of decoding the picture in the second bitstream overlap in time.
Abstract:
An image resizing method includes at least the following steps: receiving at least one input image; performing an image content analysis upon at least one image selected from the at least one input image to obtain an image content analysis result; and creating a target image with a target image resolution by scaling the at least one input image according to the image content analysis result, wherein the target image resolution is different from an image resolution of the at least one input image.
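A toy version of the analyze-then-scale flow is sketched below. The analysis metric (edge density), the threshold, and the two scaling kernels are illustrative assumptions; the sketch only shows an image content analysis result steering how the target image is created at a different resolution.

```python
# Illustrative content-aware resize: a crude edge-density analysis decides
# between box-filtered scaling (detailed content) and nearest-neighbor
# scaling (flat content). Images are lists of rows of grayscale ints.

def edge_density(img):
    h, w = len(img), len(img[0])
    edges = sum(1 for y in range(h) for x in range(w - 1)
                if abs(img[y][x] - img[y][x + 1]) > 16)
    return edges / (h * (w - 1))

def resize_nearest(img, tw, th):
    h, w = len(img), len(img[0])
    return [[img[y * h // th][x * w // tw] for x in range(tw)]
            for y in range(th)]

def resize_box(img, tw, th):
    h, w = len(img), len(img[0])
    out = []
    for ty in range(th):
        y0 = ty * h // th
        y1 = max(y0 + 1, (ty + 1) * h // th)
        row = []
        for tx in range(tw):
            x0 = tx * w // tw
            x1 = max(x0 + 1, (tx + 1) * w // tw)
            vals = [img[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            row.append(sum(vals) // len(vals))  # average contributing pixels
        out.append(row)
    return out

def content_aware_resize(img, tw, th, edge_threshold=0.1):
    if edge_density(img) > edge_threshold:
        return resize_box(img, tw, th)      # high detail: average to limit aliasing
    return resize_nearest(img, tw, th)      # flat content: cheap nearest-neighbor
```

A real analyzer would detect faces or saliency rather than raw edge counts, but the control flow is the same: the analysis result selects how the scaling is performed.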
Abstract:
A method of operating an imaging apparatus includes generating combined sensor settings for a plurality of image sensors by an image signal processor (ISP), and transmitting the combined sensor settings from the ISP in a single operation. The ISP includes a single software flow control for the plurality of image sensors. By consolidating multiple software and data control flows into a single, unified process, the method reduces computational overhead, CPU usage, and power consumption in multi-sensor camera systems.
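The consolidation can be illustrated with a small sketch. The payload layout, class names, and the transaction counter are assumptions for illustration; the point is that settings for all sensors go out in one transmit operation instead of one per sensor.

```python
# Hypothetical sketch: batch per-sensor settings into one combined payload
# and send it in a single operation over a shared control bus.

def combine_sensor_settings(per_sensor):
    """per_sensor: {sensor_id: {setting: value}} -> one combined payload."""
    return {"batch": [dict(sensor=sid, **cfg)
                      for sid, cfg in sorted(per_sensor.items())]}

class ControlBus:
    """Stand-in for the ISP's control path; counts transmit operations."""
    def __init__(self):
        self.transactions = 0

    def transmit(self, payload):
        self.transactions += 1
        return payload
```

With N sensors, the per-sensor flow would cost N transactions; the combined flow always costs one, which is where the CPU and power savings come from.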
Abstract:
An image adjustment method, applied to an image sensing system comprising an image sensor, includes: (a) sensing a target image by the image sensor; (b) dividing the target image into a plurality of image regions; (c) acquiring location information of at least one first target feature in the image regions; (d) computing brightness information of each of the image regions; (e) generating adjustment curves according to the brightness information and the required brightness values of each of the image regions; and (f) adjusting brightness values of the image regions according to the adjustment curves. Step (d) adjusts the brightness information according to the location information, or step (e) adjusts the adjustment curves according to the location information.
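Steps (d) through (f) can be sketched for a single region. The gain-style curve and the function names are illustrative assumptions; a real adjustment curve would be a full tone curve, and the location information of a target feature would weight the measured brightness.

```python
# Toy sketch of per-region brightness adjustment: measure the region's
# brightness, build a simple gain curve toward the required brightness,
# and apply it. Regions are lists of rows of grayscale ints.

def region_brightness(region):
    """Step (d): mean brightness of one image region."""
    return sum(sum(row) for row in region) / (len(region) * len(region[0]))

def make_curve(measured, required):
    """Step (e): a minimal 'adjustment curve' as a clamped gain."""
    gain = required / measured if measured else 1.0
    return lambda v: min(255, int(v * gain))

def adjust_region(region, required):
    """Step (f): apply the region's adjustment curve to its pixels."""
    curve = make_curve(region_brightness(region), required)
    return [[curve(v) for v in row] for row in region]
```

Each region gets its own curve, so dark regions can be brightened toward their required value without overexposing bright ones.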
Abstract:
A method for tuning a plurality of image signal processor (ISP) parameters of a camera includes performing a first iteration. The first iteration includes extracting image features from an initial image, arranging a tuning order of the plurality of ISP parameters of the camera according to at least the plurality of ISP parameters and the image features, tuning a first set of the ISP parameters according to the tuning order to generate a first tuned set of the ISP parameters, and replacing the first set of the ISP parameters with the first tuned set of the ISP parameters in the plurality of ISP parameters to generate a plurality of updated ISP parameters.
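One iteration of the flow above can be sketched as follows. The ordering heuristic, the placeholder tuner, and the set size are hypothetical stand-ins; the sketch only shows the structure: extract features, arrange a tuning order, tune a first set, and write the tuned values back into the parameter set.

```python
# Illustrative sketch of one tuning iteration over ISP parameters.
# 'features' stands in for scores extracted from the initial image.

def arrange_order(params, features):
    # Hypothetical heuristic: parameters whose associated feature score
    # is worst (lowest) get tuned first.
    return sorted(params, key=lambda p: features.get(p, 0.0))

def tune(params, values):
    # Placeholder tuner: nudge each selected parameter. A real tuner
    # would optimize against an image-quality metric.
    return {p: values[p] + 1 for p in params}

def one_iteration(values, features, first_set_size=2):
    order = arrange_order(values.keys(), features)
    first_set = order[:first_set_size]
    tuned = tune(first_set, values)
    updated = dict(values)
    updated.update(tuned)  # replace the first set with its tuned values
    return updated, order
```

Subsequent iterations would re-extract features from an image rendered with the updated parameters and repeat, so the tuning order can change as the image improves.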
Abstract:
An image processing method is applied to an operation device and includes analyzing an unprocessed image to split the unprocessed image into a first region and a second region, applying a first image processing algorithm to the first region for acquiring a first processed result, applying a second image processing algorithm different from the first image processing algorithm to the second region for acquiring a second processed result, and generating a processed image from the first processed result and the second processed result.
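A minimal sketch of the split-process-merge flow: here a brightness threshold stands in for the analysis, and two trivial per-pixel operations stand in for the first and second algorithms. All three choices are illustrative assumptions.

```python
# Illustrative region-split processing: the analysis splits pixels into
# two regions by brightness, each region gets a different algorithm, and
# the two processed results are merged into one processed image.

def split_mask(img, threshold=128):
    """Toy analysis: True marks the first (bright) region."""
    return [[v >= threshold for v in row] for row in img]

def smooth(v):
    return v // 2                 # stand-in "first image processing algorithm"

def sharpen(v):
    return min(255, v + 50)       # stand-in "second image processing algorithm"

def process(img, threshold=128):
    mask = split_mask(img, threshold)
    return [[smooth(v) if m else sharpen(v)
             for v, m in zip(row, mrow)]
            for row, mrow in zip(img, mask)]
```

In practice the analysis might separate, say, sky from foreground, with denoising applied to one region and sharpening to the other, but the merge step is the same.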
Abstract:
An image enhancement method is applied to an image enhancement apparatus and includes acquiring a first edge feature from a first spectral image and a second edge feature from a second spectral image, analyzing similarity between the first edge feature and the second edge feature to align the first spectral image with the second spectral image, acquiring at least one first detail feature from the first spectral image and at least one second detail feature from the second spectral image, comparing the first edge feature and the second edge feature to generate a first weight and a second weight, and fusing the first detail feature weighted by the first weight with the second detail feature weighted by the second weight to generate a fused image. The first spectral image and the second spectral image are captured at the same point in time.
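The weighting and fusion steps (alignment omitted) can be sketched as below. The gradient-sum edge feature and the normalized weighting are illustrative assumptions; the sketch only shows edge-feature comparison producing two weights that blend the detail features.

```python
# Sketch of edge-weighted detail fusion between two spectral images.
# Images/features are lists of rows of numbers.

def edge_strength(img):
    """Crude edge feature: sum of horizontal gradient magnitudes."""
    return sum(abs(row[x + 1] - row[x])
               for row in img for x in range(len(row) - 1))

def fuse(detail1, detail2, e1, e2):
    """Blend detail features with weights from the edge comparison."""
    w1 = e1 / (e1 + e2) if (e1 + e2) else 0.5   # first weight
    w2 = 1.0 - w1                               # second weight
    return [[w1 * a + w2 * b for a, b in zip(r1, r2)]
            for r1, r2 in zip(detail1, detail2)]
```

The image with the stronger edge response contributes more to the fused detail, which is the usual motivation for pairing, e.g., an infrared image with a visible-light image captured at the same moment.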
Abstract:
A video encoding method includes: setting a 360-degree Virtual Reality (360 VR) projection layout of projection faces, wherein the projection faces have a plurality of triangular projection faces located at a plurality of positions in the 360 VR projection layout, respectively; encoding a frame having a 360-degree image content represented by the projection faces arranged in the 360 VR projection layout to generate a bitstream; and for each position included in at least a portion of the positions, signaling at least one syntax element via the bitstream, wherein the at least one syntax element is set to indicate at least one of an index of a triangular projection view filled into a corresponding triangular projection face located at the position and a rotation angle of content rotation applied to the triangular projection view filled into the corresponding triangular projection face located at the position.
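The per-position signaling can be illustrated with a small pack/parse pair. The one-byte layout (3 bits of view index, rotation in 90-degree steps) is purely an assumption for illustration; the actual syntax elements and their coding are defined by the bitstream format, not by this sketch.

```python
# Illustrative signaling of (view index, rotation) per triangular face
# position in the 360 VR projection layout. The byte layout below is a
# hypothetical choice, not the claimed bitstream syntax.

def signal_faces(faces):
    """faces: list of (view_index, rotation_deg), one per layout position.
    Packs each pair into one byte: index in the high bits, rotation as a
    2-bit count of 90-degree steps."""
    payload = bytearray()
    for idx, rot in faces:
        assert 0 <= idx < 8 and rot % 90 == 0
        payload.append((idx << 2) | ((rot // 90) & 0b11))
    return bytes(payload)

def parse_faces(payload):
    """Decoder-side counterpart: recover (view_index, rotation_deg)."""
    return [((b >> 2) & 0b111, (b & 0b11) * 90) for b in payload]
```

A decoder reading these syntax elements knows, for each position, which triangular projection view was filled in and how its content was rotated, so it can reconstruct the 360-degree image content.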