Abstract:
An electronic device and method capture multiple images of a real-world scene at several zoom levels, the scene containing text of one or more sizes. The electronic device and method then extract one or more text regions from each of the multiple images, followed by analyzing an attribute relevant to optical character recognition (OCR) in one or more versions of a first text region as extracted from one or more of the multiple images. When the attribute has a value that meets a limit of OCR in a version of the first text region, that version of the first text region is provided as input to OCR.
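A minimal sketch of the selection step described above, assuming the OCR-relevant attribute is text height in pixels and assuming a hypothetical minimum-height limit; the function and data layout are illustrative, not the disclosed implementation.

```python
# Minimal sketch: pick, among versions of a text region captured at
# different zoom levels, one whose OCR-relevant attribute meets a limit.
# The attribute (text height in pixels) and the limit are illustrative
# assumptions, not taken from the disclosure.

MIN_TEXT_HEIGHT_PX = 40  # hypothetical OCR limit on glyph height

def select_version_for_ocr(region_versions):
    """region_versions: list of dicts like
    {"zoom": 2.0, "pixels": <2-D array>, "height_px": 52}."""
    for version in sorted(region_versions, key=lambda v: v["zoom"]):
        if version["height_px"] >= MIN_TEXT_HEIGHT_PX:
            return version           # first version that satisfies the OCR limit
    return None                      # no version meets the limit; skip OCR

# usage (illustrative):
# chosen = select_version_for_ocr(versions_of_first_text_region)
# if chosen is not None:
#     run_ocr(chosen["pixels"])      # run_ocr is a placeholder for an OCR engine
```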
Abstract:
Certain aspects of the present disclosure relate to a method for compressed sensing (CS). CS is a signal processing concept wherein significantly fewer sensor measurements than suggested by the Shannon/Nyquist sampling theorem can be used to recover signals with arbitrarily fine resolution. In this disclosure, the CS framework is applied to sensor signal processing in order to support low-power, robust sensors and reliable communication in Body Area Networks (BANs) for healthcare and fitness applications.
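A minimal sketch of the general CS concept (not the disclosed BAN method): a sparse signal is measured with a random matrix using far fewer measurements than its length, then recovered with orthogonal matching pursuit. All dimensions and the recovery algorithm are illustrative assumptions.

```python
# Minimal compressed-sensing sketch (general concept, not the disclosed
# BAN method): measure a sparse signal with a random matrix and recover
# it with orthogonal matching pursuit. Sizes are illustrative.
import numpy as np

rng = np.random.default_rng(0)
n, m, k = 256, 64, 5              # signal length, measurements (m << n), sparsity

x = np.zeros(n)                   # k-sparse ground-truth signal
x[rng.choice(n, k, replace=False)] = rng.standard_normal(k)

Phi = rng.standard_normal((m, n)) / np.sqrt(m)   # random measurement matrix
y = Phi @ x                       # far fewer measurements than Nyquist sampling

# Orthogonal matching pursuit: greedily pick the column most correlated
# with the residual, then re-fit on the selected support.
support, residual = [], y.copy()
for _ in range(k):
    support.append(int(np.argmax(np.abs(Phi.T @ residual))))
    coeffs, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
    residual = y - Phi[:, support] @ coeffs

x_hat = np.zeros(n)
x_hat[support] = coeffs
print("recovery error:", np.linalg.norm(x - x_hat))
```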
Abstract:
Generally described, aspects of the present disclosure relate to generation of an image representing a panned shot of an object by an image capture device. In one embodiment, a panned shot may be performed on a series of images of a scene. The series of images may include at least one subject object moving within the scene. Motion data of the subject object may be captured by comparing the subject object in a second image of the series of images to the subject object in a first image of the series of images. A background image is generated by implementing a blur process using the first image and the second image based on the motion data. A final image is generated by including the image of the subject object in the background image.
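A minimal sketch of the panned-shot idea: estimate the subject's displacement between two frames, blur the background along that motion direction, and composite the sharp subject back in. The bounding-box motion estimate, the directional blur, and the subject mask are illustrative assumptions rather than the disclosed blur process.

```python
# Minimal panned-shot sketch: motion data from two frames drives a
# directional blur of the background; the sharp subject is composited
# on top. Inputs (mask, boxes) are assumed to be available.
import numpy as np

def estimate_motion(first_box, second_box):
    """Motion data from subject bounding-box centers (x, y) in two frames."""
    return (second_box[0] - first_box[0], second_box[1] - first_box[1])

def directional_blur(image, shift, steps=8):
    """Blur by averaging copies of the image shifted along the motion vector."""
    acc = np.zeros_like(image, dtype=np.float64)
    for i in range(steps):
        dx = int(round(shift[0] * i / steps))
        dy = int(round(shift[1] * i / steps))
        acc += np.roll(np.roll(image, dy, axis=0), dx, axis=1)
    return (acc / steps).astype(image.dtype)

def panned_shot(first_img, second_img, subject_mask, first_box, second_box):
    motion = estimate_motion(first_box, second_box)
    background = directional_blur(second_img, motion)        # blurred background
    return np.where(subject_mask, second_img, background)    # sharp subject on top
```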
Abstract:
Aspects of the present disclosure relate to systems and methods for tuning an image signal processor (ISP). An example device may include one or more processors configured to receive a reference image, determine a plurality of image quality (IQ) metrics based on the reference image, determine a value for each of the plurality of IQ metrics for the reference image, identify one or more existing parameter sets in a parameter database based on the values of the plurality of IQ metrics, and determine whether the parameter database is to be adjusted based on the one or more existing parameter sets.
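A minimal sketch of the database-lookup step: compute IQ-metric values for a reference image, find the closest existing parameter set by metric distance, and flag the database for adjustment when no set is close enough. The metric names, the distance measure, and the threshold are illustrative assumptions.

```python
# Minimal sketch: identify the nearest existing parameter set in a
# parameter database by IQ-metric distance and decide whether the
# database needs to be adjusted. Names and threshold are illustrative.
import numpy as np

IQ_METRICS = ("sharpness", "noise", "contrast")   # hypothetical metric set
ADJUST_THRESHOLD = 0.25                           # hypothetical distance limit

def nearest_parameter_set(reference_metrics, parameter_db):
    """parameter_db: list of {"iq": {metric: value}, "params": {...}}."""
    ref = np.array([reference_metrics[m] for m in IQ_METRICS])
    best, best_dist = None, float("inf")
    for entry in parameter_db:
        vec = np.array([entry["iq"][m] for m in IQ_METRICS])
        dist = float(np.linalg.norm(ref - vec))
        if dist < best_dist:
            best, best_dist = entry, dist
    needs_adjustment = best_dist > ADJUST_THRESHOLD   # no sufficiently close set
    return best, needs_adjustment
```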
Abstract:
Aspects of the present disclosure relate to systems and methods for tuning an image signal processor. An example device may include one or more processors. The one or more processors may be configured to receive an input image to be processed, receive a reference image that is the input image as processed by a second image signal processor, and determine one or more parameter values to be used by the image signal processor in processing the input image, based on one or more differences between the input image and the reference image.
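A minimal sketch of the tuning idea: choose a parameter value so that the tuned processor's output best matches the reference produced by the second image signal processor. The one-parameter "ISP" model (a simple gain) and the candidate grid are illustrative assumptions, not the disclosed tuning procedure.

```python
# Minimal sketch: grid-search a single ISP parameter so the processed
# input best matches the reference from a second ISP. The toy ISP model
# and candidate values are illustrative assumptions.
import numpy as np

def toy_isp(input_image, gain):
    """Stand-in for an ISP pipeline: a single gain parameter."""
    return np.clip(input_image * gain, 0.0, 1.0)

def tune_parameter(input_image, reference_image,
                   candidates=np.linspace(0.5, 2.0, 31)):
    errors = [np.mean((toy_isp(input_image, g) - reference_image) ** 2)
              for g in candidates]
    return float(candidates[int(np.argmin(errors))])   # parameter value to use

# usage (illustrative):
# gain = tune_parameter(raw_frame, reference_frame)
# processed = toy_isp(raw_frame, gain)
```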
Abstract:
Techniques are described for addressing rolling shutter delay and, in some cases, both rolling shutter delay and stabilization. Processing circuits may receive image content in overlapping portions of images, and may adjust the image content until there is overlap in the overlapping portions. Processing circuits may also receive information indicating deviation of the device from a common reference. Based on the overlapping image content, the deviation of the device from the common reference, and image content in non-overlapping portions, the processing circuits may determine a mapping of coordinates to a rectangular mesh for generating an equirectangular image.
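A minimal sketch of the overlap-adjustment step: slide one image's overlap strip against the other's, keep the shift with the smallest mismatch, and apply the same offset to that image's mesh coordinates. The strip comparison, search range, and mesh offset are illustrative assumptions; the full equirectangular mapping is not shown.

```python
# Minimal sketch: find the shift that best aligns overlapping portions
# of two images, then offset one image's mesh row coordinates by the
# same amount. Search range and strips are illustrative assumptions.
import numpy as np

def best_overlap_shift(strip_a, strip_b, max_shift=16):
    """Return the vertical shift of strip_b that best matches strip_a."""
    best_shift, best_err = 0, float("inf")
    for s in range(-max_shift, max_shift + 1):
        err = np.mean((strip_a - np.roll(strip_b, s, axis=0)) ** 2)
        if err < best_err:
            best_shift, best_err = s, err
    return best_shift

def adjust_mesh_rows(mesh_rows, shift):
    """Apply the overlap-derived shift to one image's mesh row coordinates."""
    return mesh_rows + shift
```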
Abstract:
A difference in intensities of a pair of pixels in an image is repeatedly compared to a threshold, with the pair of pixels being separated by at least one pixel (skipped pixel). When the threshold is found to be exceeded, a selected position of a pixel in the pair and at least one additional position adjacent to the selected position are added to a set of positions. The comparing and adding are performed multiple times to generate multiple such sets, each set identifying a region in the image, e.g. a maximally stable extremal region (MSER). Sets of positions identifying regions whose attributes satisfy a test are merged to obtain a merged set. Intensities of pixels identified in the merged set are used to generate binary values for the region, followed by classification of the region as text/non-text. Regions classified as text are supplied to an optical character recognition (OCR) system.
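A minimal sketch of the pair-comparison step only: scan each row, compare pixels separated by one skipped pixel against an intensity-difference threshold, and record the selected position plus an adjacent position when the threshold is exceeded. The threshold value and the choice of adjacent position are illustrative assumptions; region merging, binarization, and text classification are not shown.

```python
# Minimal sketch: compare pixel pairs separated by one skipped pixel to
# an intensity-difference threshold and collect candidate positions.
# Threshold and "adjacent" position choice are illustrative assumptions.
import numpy as np

THRESHOLD = 30          # hypothetical intensity-difference limit
SKIP = 1                # one skipped pixel between the pair

def candidate_positions(image):
    """image: 2-D array of intensities. Returns a set of (row, col) positions."""
    positions = set()
    rows, cols = image.shape
    for r in range(rows):
        for c in range(cols - SKIP - 1):
            a, b = int(image[r, c]), int(image[r, c + SKIP + 1])
            if abs(a - b) > THRESHOLD:
                positions.add((r, c))       # selected position of a pixel in the pair
                positions.add((r, c + 1))   # additional position adjacent to it
    return positions
```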