Abstract:
Various techniques are provided for the detection and correction of defective pixels in an image sensor 90. In accordance with one embodiment, a static defect table storing the locations of known static defects is provided, and the location of a current pixel is compared to the static defect table. If the location of the current pixel is found in the static defect table, the current pixel is identified as a static defect and is corrected using the value of the previous pixel of the same color. If the current pixel is not identified as a static defect, a dynamic defect detection process 444 includes comparing pixel-to-pixel gradients between the current pixel and a set of neighboring pixels against a dynamic defect threshold. If a dynamic defect is detected, a replacement value for correcting the dynamic defect may be determined by interpolating the value of two neighboring pixels on opposite sides of the current pixel in a direction exhibiting the smallest gradient.
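A minimal sketch of this flow is given below. It is illustrative only and not the disclosed hardware: the function name, the use of same-color neighbors two samples away on a Bayer grid, and the exact gradient definitions are assumptions.

```python
def correct_pixel(frame, x, y, prev_same_color, static_defect_table, dyn_threshold):
    """Hypothetical sketch of static/dynamic defect detection and correction."""
    # Static defect: the location is already known, so replace the pixel with
    # the value of the previous pixel of the same color.
    if (x, y) in static_defect_table:
        return prev_same_color

    p = frame[y][x]
    # Same-color neighbors on a Bayer grid sit two samples away in each
    # direction (an assumption of this sketch; border handling is omitted).
    neighbors = {
        "horizontal": (frame[y][x - 2], frame[y][x + 2]),
        "vertical": (frame[y - 2][x], frame[y + 2][x]),
    }

    # Dynamic defect: every pixel-to-pixel gradient against the neighbors
    # exceeds the dynamic defect threshold.
    if all(abs(p - n) > dyn_threshold
           for pair in neighbors.values() for n in pair):
        # Correct by interpolating the two neighbors on opposite sides of the
        # pixel along the direction with the smallest gradient.
        grads = {d: abs(p - a) + abs(p - b) for d, (a, b) in neighbors.items()}
        direction = min(grads, key=grads.get)
        a, b = neighbors[direction]
        return (a + b) // 2
    return p
```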
Abstract:
Techniques are provided for determining an optimal focal position using auto-focus statistics. In one embodiment, such techniques may include generating coarse and fine auto-focus scores for determining an optimal focal length at which to position a lens 88 associated with the image sensor 90. For instance, the statistics logic 680 may determine a coarse position that indicates an optimal focus area which, in one embodiment, may be determined by searching for the first coarse position in which a coarse auto-focus score decreases with respect to a coarse auto-focus score at a previous position. Using this position as a starting point for fine score searching, the optimal focal position may be determined by searching for a peak in fine auto-focus scores. In another embodiment, auto-focus statistics may also be determined based on each color of the Bayer RGB, such that, even in the presence of chromatic aberrations, relative auto-focus scores for each color may be used to determine the direction of focus.
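The coarse-to-fine search can be sketched as below. The score functions, the fine-search window, and integer lens positions are assumptions standing in for the statistics logic's auto-focus scores.

```python
def find_optimal_focus(coarse_positions, coarse_score, fine_score,
                       fine_step=1, fine_span=8):
    """Hypothetical coarse-to-fine auto-focus position search."""
    # Coarse pass: find the first position whose coarse auto-focus score
    # decreases with respect to the score at the previous position.
    start = coarse_positions[-1]
    prev = coarse_score(coarse_positions[0])
    for pos in coarse_positions[1:]:
        s = coarse_score(pos)
        if s < prev:
            start = pos
            break
        prev = s

    # Fine pass: use that position as a starting point and search a window
    # around it for the peak fine auto-focus score.
    candidates = range(start - fine_span, start + fine_span + 1, fine_step)
    return max(candidates, key=fine_score)
```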
Abstract:
Various techniques are provided herein for the demosaicing of images acquired and processed by an imaging system. The imaging system includes an image signal processor 32 and image sensors 30 utilizing color filter arrays (CFA) for acquiring red, green, and blue color data using one pixel array. In one embodiment, the CFA may include a Bayer pattern. During image signal processing, demosaicing may be applied to interpolate missing color samples from the raw image pattern. In one embodiment, interpolation for the green color channel may include employing edge-adaptive filters with weighted gradients of horizontal and vertical filtered values. The red and blue color channels may be interpolated using color difference samples with co-located interpolated values of the green color channel. In another embodiment, interpolation of the red and blue color channels may be performed using color ratios (e.g., versus color difference data).
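The following sketch illustrates the two interpolation ideas under stated assumptions (a red/blue Bayer site with green neighbors one sample away, and red neighbors to the left/right of a green site); it is not the disclosed filter design.

```python
def interpolate_green(raw, x, y):
    """Edge-adaptive green interpolation at a red/blue site (sketch)."""
    # Horizontal and vertical filtered estimates from the neighboring greens.
    gh = (raw[y][x - 1] + raw[y][x + 1]) / 2.0
    gv = (raw[y - 1][x] + raw[y + 1][x]) / 2.0

    # Edge energies: a larger gradient in a direction gives that direction a
    # smaller weight.
    eh = abs(raw[y][x - 1] - raw[y][x + 1])
    ev = abs(raw[y - 1][x] - raw[y + 1][x])
    wh = ev / (eh + ev) if (eh + ev) else 0.5
    wv = 1.0 - wh
    return wh * gh + wv * gv

def interpolate_red_at_green(raw, green, x, y):
    """Red at a green site via color differences with co-located greens."""
    diff = ((raw[y][x - 1] - green[y][x - 1]) +
            (raw[y][x + 1] - green[y][x + 1])) / 2.0
    return green[y][x] + diff
```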
Abstract:
Lossless image compression using differential transfers may involve an image compression unit receiving image data for an image in a sequence of images and transmitting the image data such that image data for at least some image tiles is transmitted using lossy compression due to resource limitations. The image compression unit may then receive image data for a subsequent image in the sequence and determine that the image data for at least some tiles does not change relative to the image data for corresponding tiles of the previous image. The image compression unit may then transmit image data in a manner sufficient to create lossless versions of tiles for which lossily compressed image data was sent previously.
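A conceptual sketch of the differential transfer is shown below. The byte budget, the rough lossy size model, and the message tuples are assumptions used only to make the control flow concrete.

```python
def transmit_frame(tiles, byte_budget, prev_state):
    """Return (messages, new_state); prev_state maps tile index to
    {"data": tile bytes, "lossy": bool} from the previous transmission."""
    messages, new_state, spent = [], {}, 0
    for idx, tile in enumerate(tiles):
        prev = prev_state.get(idx)
        if prev and prev["data"] == tile:
            if prev["lossy"]:
                # Unchanged tile that was previously sent lossily: send the
                # data needed to reconstruct a lossless version of the tile.
                messages.append(("residual", idx, tile))
                new_state[idx] = {"data": tile, "lossy": False}
            else:
                new_state[idx] = prev  # already lossless, nothing to send
            continue
        # Changed tile: send losslessly if the budget allows, else lossily.
        lossy = spent + len(tile) > byte_budget
        messages.append(("lossy" if lossy else "lossless", idx, tile))
        spent += len(tile) // 4 if lossy else len(tile)  # rough size model
        new_state[idx] = {"data": tile, "lossy": lossy}
    return messages, new_state
```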
Abstract:
Certain aspects of this disclosure relate to an image signal processing system 32 that includes a flash controller 550 that is configured to activate a flash device prior to the start of a target image frame by using a sensor timing signal. In one embodiment, the flash controller 550 receives a delayed sensor timing signal and determines a flash activation start time by using the delayed sensor timing signal to identify a time corresponding to the end of the previous frame, increasing that time by a vertical blanking time, and then subtracting a first offset to compensate for delay between the sensor timing signal and the delayed sensor timing signal. Then, the flash controller 550 subtracts a second offset to determine the flash activation time, thus ensuring that the flash is activated prior to receiving the first pixel of the target frame.
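The timing arithmetic reduces to a short worked example; the numeric values below are hypothetical and expressed in microseconds.

```python
def flash_activation_time(prev_frame_end, vertical_blank, offset1, offset2):
    # Start of the target frame as seen in the delayed sensor timing signal.
    target_frame_start = prev_frame_end + vertical_blank
    # Subtract the first offset to compensate for the delay between the sensor
    # timing signal and its delayed copy, then the second offset so the flash
    # is active before the first pixel of the target frame is received.
    return target_frame_start - offset1 - offset2

# Example: previous frame ends at t=33000, 500 us vertical blanking, 100 us
# signal delay, 200 us margin -> flash is activated at t=33200.
print(flash_activation_time(33000, 500, 100, 200))
```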
Abstract:
The present disclosure provides techniques for performing audio-video synchronization using an image signal processing system. In one embodiment, a time code register provides a current time stamp when sampled. The value of the time code register may be incremented at regular intervals based on a clock of the image signal processing system. At the start of a current frame acquired by an image sensor, the time code register is sampled, and a timestamp is stored into a timestamp register associated with the image sensor. The timestamp is then read from the timestamp register and written to a set of metadata associated with the current frame. The timestamp stored in the frame metadata may then be used to synchronize the current frame with a corresponding set of audio data.
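A small sketch of the timestamping flow follows. The register names, the metadata layout, and the use of a monotonic clock as a stand-in for the image signal processing system's clock are assumptions.

```python
import time

class TimestampLogic:
    """Hypothetical sketch of frame timestamping for audio-video sync."""

    def __init__(self, tick_us=1):
        self.tick_us = tick_us
        self.t0 = time.monotonic_ns() // 1000  # stand-in for the ISP clock

    def sample_time_code(self):
        # Time code register incremented at regular intervals of the clock.
        return (time.monotonic_ns() // 1000 - self.t0) // self.tick_us

    def on_frame_start(self, sensor):
        # At the start of a frame, latch the sampled time code into the
        # timestamp register associated with the image sensor.
        sensor["timestamp_register"] = self.sample_time_code()

    def tag_frame(self, sensor, frame_metadata):
        # Read the timestamp register and write it to the frame's metadata so
        # the frame can later be synchronized with the corresponding audio.
        frame_metadata["timestamp"] = sensor["timestamp_register"]
```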
Abstract:
Various techniques for temporally filtering raw image data acquired by an image sensor are provided. In one embodiment, a temporal filter determines a spatial location of a current pixel and identifies at least one collocated reference pixel from a previous frame. A motion delta value is determined based at least partially upon the current pixel and its collocated reference pixel. Next, an index is determined based upon the motion delta value and a motion history value corresponding to the spatial location of the current pixel, but from the previous frame. Using the index, a first filtering coefficient may be selected from a motion table. After selecting the first filtering coefficient, an attenuation factor may be selected from a luma table based upon the value of the current pixel, and a second filtering coefficient may subsequently be determined based upon the selected attenuation factor and the first filtering coefficient. The temporally filtered output value corresponding to the current pixel may then be calculated based upon the second filtering coefficient, the current pixel, and the collocated reference pixel.
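The per-pixel steps can be summarized in the sketch below. The table sizes, the index scaling, the 10-bit pixel assumption, and the output blend are illustrative assumptions rather than the disclosed hardware behavior.

```python
def temporal_filter_pixel(cur, ref, motion_history, motion_table, luma_table):
    """Hypothetical sketch of the temporal filter for one pixel."""
    # Motion delta from the current pixel and its collocated reference pixel.
    delta = abs(cur - ref)

    # Index from the motion delta and the previous frame's motion history at
    # this spatial location, clamped to the motion table range.
    index = min(delta + motion_history, len(motion_table) - 1)
    k = motion_table[index]            # first filtering coefficient

    # Attenuation factor selected from the luma table by the current pixel's
    # value (assuming 10-bit data and one entry per brightness bucket).
    bucket = min(cur * len(luma_table) // 1024, len(luma_table) - 1)
    k2 = k * luma_table[bucket]        # second filtering coefficient

    # Filtered output: equals the reference pixel when k2 is 0 and the
    # current pixel when k2 is 1.
    out = ref + k2 * (cur - ref)
    return out, index  # index may also be used to update the motion history
```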
Abstract:
Disclosed embodiments provide for an image signal processing system 32 that includes a back-end pixel processing unit 120 that receives pixel data after it has been processed by at least one of a front-end pixel processing unit 80 and a pixel processing pipeline 82. In certain embodiments, the back-end processing unit 120 receives luma/chroma image data and may be configured to apply face detection operations, local tone mapping, brightness, contrast, and color adjustments, as well as scaling. Further, the back-end processing unit 120 may also include a back-end statistics unit 2208 that may collect frequency statistics. The frequency statistics may be provided to an encoder 118 and may be used to determine quantization parameters that are to be applied to an image frame.
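As a rough illustration of how frequency statistics could inform a quantization parameter, the sketch below uses a 2-D DCT over a luma tile and a simple energy-to-QP mapping; both choices are assumptions and not the disclosed back-end statistics unit.

```python
import numpy as np
from scipy.fft import dctn

def high_freq_energy(luma_tile):
    # 2-D DCT of a luma tile; low frequencies concentrate in the top-left
    # corner, so everything outside that corner counts as high-frequency
    # energy.
    coeffs = dctn(luma_tile.astype(float), norm="ortho")
    energy = coeffs ** 2
    energy[:2, :2] = 0.0
    return float(energy.sum())

def choose_qp(energy, qp_min=18, qp_max=40, scale=1e7):
    # Busier (high-frequency) tiles tolerate coarser quantization, so map
    # larger energy to a larger quantization parameter.
    t = min(energy / scale, 1.0)
    return int(round(qp_min + t * (qp_max - qp_min)))
```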