Abstract:
Video processing techniques and pipelines that support capture, distribution, and display of high dynamic range (HDR) image data to both HDR-enabled display devices and display devices that do not support HDR imaging. A sensor pipeline (210) may generate standard dynamic range (SDR) data (230A) from HDR data captured by a sensor, using a tone mapping technique (226), for example local tone mapping. Information used in generating the SDR data may be provided to a display pipeline (260) as metadata (230B) along with the generated SDR data (230A). If a target display does not support HDR imaging, the SDR image data may be rendered directly by the display pipeline. If the target display does support HDR imaging, an inverse mapping technique (276) may be applied to the SDR image data according to the metadata to render HDR image data for display. Information used in performing color gamut mapping may also be provided in the metadata and used to recover clipped colors for display.
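As an illustrative, non-authoritative sketch of the forward/inverse mapping idea above: a toy global tone map converts HDR values to SDR, and the display side inverts it using metadata describing the mapping. The exposure-curve form, the `exposure` and `gamma` metadata fields, and the function names are assumptions for illustration, not the patented method.

```python
import numpy as np

def forward_tone_map(hdr, exposure=1.0, gamma=2.2):
    """Toy global tone map: linear HDR values (>= 0) -> SDR values in [0, 1]."""
    sdr = 1.0 - np.exp(-exposure * hdr)           # simple exposure curve
    return np.clip(sdr, 0.0, 1.0) ** (1.0 / gamma)

def inverse_tone_map(sdr, metadata):
    """Recover approximate HDR values from SDR using the forward-map metadata."""
    linear = np.clip(sdr, 0.0, 1.0 - 1e-6) ** metadata["gamma"]
    return -np.log(1.0 - linear) / metadata["exposure"]   # invert the exposure curve

hdr = np.array([0.05, 0.5, 2.0, 8.0])
meta = {"exposure": 0.8, "gamma": 2.2}
sdr = forward_tone_map(hdr, **meta)
print(inverse_tone_map(sdr, meta))                # ~ original HDR values
```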
Abstract:
A block processing pipeline that includes a software pipeline and a hardware pipeline that run in parallel. The software pipeline runs at least one block ahead of the hardware pipeline. The stages of the pipeline may each include a hardware pipeline component that performs one or more operations on a current block at the stage. At least one stage of the pipeline may also include a software pipeline component that determines a configuration for the hardware component at that stage of the pipeline for processing a next block while the hardware component is processing the current block. The software pipeline component may determine the configuration according to information regarding the next block obtained from an upstream stage of the pipeline. The software pipeline component may also obtain information regarding a block previously processed at the stage.
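A minimal sketch of this "software runs one block ahead" pattern, modeled at a single stage; the block contents, configuration fields, and function names are illustrative assumptions rather than the actual pipeline interface.

```python
# Toy model of one pipeline stage: the software step prepares the configuration
# for the next block while the hardware step processes the current block.

def software_prepare(next_block, prev_result):
    # Derive a configuration from upstream info about the next block and,
    # optionally, from the result of a block previously processed at this stage.
    return {"block_id": next_block["id"],
            "mode": "fast" if next_block["size"] <= 256 else "thorough",
            "hint": prev_result}

def hardware_process(block, config):
    # Stand-in for the hardware operation on the current block.
    return f"block {block['id']} processed in {config['mode']} mode"

blocks = [{"id": i, "size": 128 * (i + 1)} for i in range(4)]
pending_config = software_prepare(blocks[0], prev_result=None)
prev_result = None

for i, block in enumerate(blocks):
    config = pending_config
    # Software runs one block ahead: prepare the config for block i + 1 ...
    if i + 1 < len(blocks):
        pending_config = software_prepare(blocks[i + 1], prev_result)
    # ... while the hardware component works on block i.
    prev_result = hardware_process(block, config)
    print(prev_result)
```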
Abstract:
A method of signaling additional chroma QP offset values that are specific to quantization groups is provided, in which each quantization group explicitly specifies its own set of chroma QP offset values. Alternatively, a table of possible sets of chroma QP offset values is specified in the header area of the picture, and each quantization group uses an index to select an entry from the table for determining its own set of chroma QP offset values. The quantization group specific chroma QP offset values are then used to determine the chroma QP values for blocks within the quantization group in addition to chroma QP offset values already specified for higher levels of the video coding hierarchy.
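As a hedged sketch of how such offsets might combine, the example below adds a quantization-group-specific offset, chosen by index from a header-signaled table, to offsets from higher levels of the hierarchy. The function name, parameter names, and clip range are illustrative assumptions, not the codec's exact syntax.

```python
def chroma_qp(luma_qp, pps_offset, slice_offset, qg_offset_table, qg_index):
    qg_offset = qg_offset_table[qg_index]          # per-quantization-group offset
    qp = luma_qp + pps_offset + slice_offset + qg_offset
    return max(0, min(51, qp))                     # clip to a typical QP range

# Table of candidate chroma QP offsets signaled once in the picture header.
cb_offset_table = [0, -2, 3, 5]

# Each quantization group signals only an index into the table.
print(chroma_qp(luma_qp=30, pps_offset=1, slice_offset=0,
                qg_offset_table=cb_offset_table, qg_index=2))   # -> 34
```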
Abstract:
Techniques are provided for determining an optimal focal position using auto-focus statistics. In one embodiment, such techniques may include generating coarse and fine auto-focus scores for determining an optimal focal length at which to position a lens 88 associated with the image sensor 90. For instance, the statistics logic 680 may determine a coarse position indicating an optimal focus area which, in one embodiment, may be determined by searching for the first coarse position at which the coarse auto-focus score decreases relative to the score at the previous position. Using this position as a starting point for fine score searching, the optimal focal position may be determined by searching for a peak in the fine auto-focus scores. In another embodiment, auto-focus statistics may also be determined for each color of the Bayer RGB data, such that, even in the presence of chromatic aberrations, relative auto-focus scores for each color may be used to determine the direction of focus.
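A minimal sketch of the coarse-to-fine search just described: step through coarse lens positions until the coarse score drops, then search fine positions around that point for the peak. The score function is a synthetic stand-in for the sensor's sharpness statistics, and the step sizes are assumptions.

```python
def focus_score(position, best=37.0, width=10.0):
    # Synthetic sharpness score peaking at the (unknown) best focal position.
    return 1.0 / (1.0 + ((position - best) / width) ** 2)

def find_optimal_focus(coarse_step=8, fine_step=1, max_pos=100):
    prev_score, start = None, 0
    for pos in range(0, max_pos + 1, coarse_step):
        score = focus_score(pos)
        if prev_score is not None and score < prev_score:
            start = pos - 2 * coarse_step        # first drop: the peak lies behind us
            break
        prev_score = score
    # Fine search around the coarse starting point for the peak fine score.
    candidates = range(max(0, start), min(max_pos, start + 3 * coarse_step) + 1, fine_step)
    return max(candidates, key=focus_score)

print(find_optimal_focus())   # ~37 with the synthetic score above
```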
Abstract:
Various techniques are provided for the detection and correction of defective pixels in an image sensor 90. In accordance with one embodiment, a static defect table storing the locations of known static defects is provided, and the location of a current pixel is compared to the static defect table. If the location of the current pixel is found in the static defect table, the current pixel is identified as a static defect and is corrected using the value of the previous pixel of the same color. If the current pixel is not identified as a static defect, a dynamic defect detection process 444 includes comparing pixel-to-pixel gradients between the current pixel and a set of neighboring pixels against a dynamic defect threshold. If a dynamic defect is detected, a replacement value for correcting the dynamic defect may be determined by interpolating the value of two neighboring pixels on opposite sides of the current pixel in a direction exhibiting the smallest gradient.
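The sketch below illustrates the two-step handling on a single row of same-color pixels: known static defects are replaced with the previous same-color value, and otherwise pixel-to-neighbor gradients are checked against a dynamic threshold and corrected by interpolating the opposite neighbors. The 1-D neighborhood (rather than the directional 2-D search described above) and the threshold value are simplifying assumptions.

```python
def correct_pixel(row, i, static_defects, dyn_threshold=50):
    if i in static_defects:                     # static defect: use the previous pixel
        return row[i - 1] if i > 0 else row[i + 1]
    left, right = row[i - 1], row[i + 1]
    # Dynamic defect: both pixel-to-neighbor gradients exceed the threshold.
    if abs(row[i] - left) > dyn_threshold and abs(row[i] - right) > dyn_threshold:
        return (left + right) // 2              # interpolate the opposite neighbors
    return row[i]

row = [100, 104, 255, 101, 99, 20, 102]
static_defects = {5}
corrected = ([row[0]]
             + [correct_pixel(row, i, static_defects) for i in range(1, len(row) - 1)]
             + [row[-1]])
print(corrected)   # the spike at index 2 and the static defect at index 5 are corrected
```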
Abstract:
Certain aspects of this disclosure relate to an image signal processing system 32 that includes a flash controller 550 that is configured to activate a flash device prior to the start of a target image frame by using a sensor timing signal. In one embodiment, the flash controller 550 receives a delayed sensor timing signal and determines a flash activation start time by using the delayed sensor timing signal to identify a time corresponding to the end of the previous frame, increasing that time by a vertical blanking time, and then subtracting a first offset to compensate for delay between the sensor timing signal and the delayed sensor timing signal. Then, the flash controller 550 subtracts a second offset to determine the flash activation time, thus ensuring that the flash is activated prior to receiving the first pixel of the target frame.
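A small worked example of the timing arithmetic described above: start from the end of the previous frame as seen on the delayed timing signal, add the vertical blanking interval, then subtract the two offsets. The function name and the time values (in microseconds) are illustrative assumptions.

```python
def flash_activation_time(prev_frame_end_delayed, vertical_blanking,
                          sensor_delay_offset, early_activation_offset):
    target_frame_start = prev_frame_end_delayed + vertical_blanking
    # The first offset compensates for the delay between the raw and delayed timing signals.
    t = target_frame_start - sensor_delay_offset
    # The second offset ensures the flash is active before the first pixel of the target frame.
    return t - early_activation_offset

print(flash_activation_time(prev_frame_end_delayed=120_000,
                            vertical_blanking=3_000,
                            sensor_delay_offset=500,
                            early_activation_offset=1_000))   # -> 121500
```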
Abstract:
Various techniques are disclosed for processing statistics data in an image signal processor (ISP). In one embodiment, a statistics collection engine may be configured to acquire statistics relating to auto white-balance. The statistics collection engine may receive raw data acquired by an image sensor and may be configured to perform one or more color space conversions to obtain pixel data in other color spaces. A set of pixel filters may be configured to accumulate sums of the pixel data conditionally based upon YC1C2 characteristics, as defined by a pixel condition per pixel filter. Depending on a selected color space, the pixel filters may generate color sums, which may be used to match a current illuminant against a set of reference illuminants with which the image sensor has been previously calibrated.
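A sketch of such a pixel filter: a pixel contributes to the color sums only if its YC1C2 values fall inside the filter's condition window. The window bounds, the RGB-to-YC1C2 conversion coefficients, and the function names are illustrative assumptions, not the ISP's actual arithmetic.

```python
def to_yc1c2(r, g, b):
    y = 0.299 * r + 0.587 * g + 0.114 * b
    return y, b - y, r - y            # (Y, C1, C2)

def accumulate_awb_sums(pixels, cond):
    sums = {"r": 0.0, "g": 0.0, "b": 0.0, "count": 0}
    for r, g, b in pixels:
        y, c1, c2 = to_yc1c2(r, g, b)
        # Accumulate only pixels satisfying the per-filter YC1C2 condition.
        if (cond["y_min"] <= y <= cond["y_max"]
                and cond["c1_min"] <= c1 <= cond["c1_max"]
                and cond["c2_min"] <= c2 <= cond["c2_max"]):
            sums["r"] += r
            sums["g"] += g
            sums["b"] += b
            sums["count"] += 1
    return sums

pixels = [(120, 130, 125), (250, 20, 10), (90, 100, 110)]
condition = {"y_min": 50, "y_max": 200, "c1_min": -40, "c1_max": 40,
             "c2_min": -40, "c2_max": 40}
print(accumulate_awb_sums(pixels, condition))   # only the near-neutral pixels are counted
```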
Abstract:
Disclosed embodiments provide for an image signal processing system 32 that includes a back-end pixel processing unit 120 that receives pixel data after it has been processed by at least one of a front-end pixel processing unit 80 and a pixel processing pipeline 82. In certain embodiments, the back-end processing unit 120 receives luma/chroma image data and may be configured to apply face detection operations, local tone mapping, brightness, contrast, and color adjustments, as well as scaling. Further, the back-end processing unit 120 may also include a back-end statistics unit 2208 that may collect frequency statistics. The frequency statistics may be provided to an encoder 118 and may be used to determine quantization parameters that are to be applied to an image frame.
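A hedged sketch of one way frequency statistics could inform per-block quantization parameters: blocks with more high-frequency energy (busy texture) tolerate coarser quantization than flat blocks. The mapping, parameter names, and clip range are illustrative assumptions and not the encoder's actual behavior.

```python
def block_qp(base_qp, high_freq_energy, total_energy, max_adjust=6):
    if total_energy <= 0:
        return base_qp
    busyness = high_freq_energy / total_energy          # 0 = flat, 1 = very busy
    adjust = round((busyness - 0.5) * 2 * max_adjust)   # +/- max_adjust around the base QP
    return max(0, min(51, base_qp + adjust))

print(block_qp(base_qp=30, high_freq_energy=80.0, total_energy=100.0))   # busier block -> higher QP
print(block_qp(base_qp=30, high_freq_energy=10.0, total_energy=100.0))   # flatter block -> lower QP
```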