Abstract:
A stream of images including an area occupied by at least one object is processed to extract wavelet coefficients, and the extracted coefficients are represented as wavelet signatures that are less susceptible to misclassification due to noise and extraneous object features. Representing the wavelet coefficients as wavelet signatures involves sorting the coefficients by magnitude, setting a coefficient threshold based on the distribution of coefficient magnitudes, truncating coefficients whose magnitude is less than the threshold, and quantizing the remaining coefficients.
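The sort/threshold/truncate/quantize sequence described above can be sketched as follows; the function name and the `keep_fraction` and `levels` parameters are illustrative assumptions, not part of the disclosed method.

```python
import numpy as np

def wavelet_signature(coeffs, keep_fraction=0.25, levels=4):
    """Reduce wavelet coefficients to a coarse signature.

    Steps mirror the abstract: sort coefficients by magnitude, derive a
    threshold from the magnitude distribution, truncate coefficients
    below the threshold, and quantize the survivors to a few levels.
    """
    coeffs = np.asarray(coeffs, dtype=float)
    mags = np.sort(np.abs(coeffs))[::-1]           # magnitudes, descending
    cutoff = max(1, int(len(mags) * keep_fraction))
    threshold = mags[cutoff - 1]                   # threshold from the distribution
    kept = np.where(np.abs(coeffs) >= threshold, coeffs, 0.0)  # truncate
    peak = np.abs(kept).max()
    if peak == 0:
        return kept
    step = peak / levels
    return np.round(kept / step) * step            # uniform quantization
```

Small, noisy coefficients are zeroed and the rest snap to a coarse grid, which is what makes the signature robust to noise and extraneous features.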
Abstract:
The empty vs. non-empty status of a cargo container (10) is detected based on boundary analysis of a wide-angle image obtained by a monocular vision system (14). The wide-angle image is warped (56) to remove distortion created by the vision system optics (18a), and the resulting image is edge-processed (58) to identify the boundaries of the container floor (10e). If package boundaries are detected within the floor space (82), or a large foreground package is blocking the floor boundaries (86), the cargo status is set to non-empty (84). If floor boundaries (24a, 24b) are detected and no package boundaries are detected within the floor space (82, 86), the cargo status is set to empty (88).
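The decision logic in this abstract reduces to a small predicate. The sketch below assumes boolean inputs already derived from the warp/edge-processing stages; the function name and the "indeterminate" fallback are illustrative assumptions.

```python
def cargo_status(floor_boundaries_found, package_boundaries_in_floor,
                 large_foreground_blockage):
    """Decision logic paraphrasing the abstract.

    Non-empty if package boundaries appear inside the floor space or a
    large foreground package blocks the floor boundaries; empty only
    when floor boundaries are detected and no package boundaries intrude.
    """
    if package_boundaries_in_floor or large_foreground_blockage:
        return "non-empty"
    if floor_boundaries_found:
        return "empty"
    return "indeterminate"  # assumption: case not covered by the abstract
```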
Abstract:
A system (20) and method for enhancing the contrast within an image (22). An enhanced image (24) can be generated in a real-time or substantially real-time manner from an initial image (22). The pixel values (38) of the initial image (22) can be used to populate a histogram (40) or otherwise serve as the basis for subsequent processing. A valley (44) can be identified within the range of pixel values (38) for use as a stretch metric (48) by a stretch heuristic (46) to expand the contrast of the pixel values (38) in the initial image (22) by expanding the range of pixel values (38) associated with the pixels (36) in the histogram (40). In some embodiments, the initial image (22) is first divided into image regions (52) that are each associated with individualized processing. A bilinear interpolation step (56) can then be performed to smooth the integrated image after the individualized processing is used to stretch the pixels (36) within the individual image regions (52).
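One way to realize the valley-based stretch heuristic is sketched below, assuming 8-bit pixel values and a bimodal histogram; the peak-finding and piecewise-linear mapping are illustrative choices, not the claimed heuristic itself.

```python
import numpy as np

def valley_stretch(pixels):
    """Sketch of valley-based contrast stretching (names are illustrative).

    Builds a histogram, locates the least-populated bin (valley) between
    the two most-populated bins (peaks), then stretches the populations
    on either side of the valley toward opposite ends of the 0..255 range.
    """
    pixels = np.asarray(pixels, dtype=float)
    hist, edges = np.histogram(pixels, bins=256, range=(0, 256))
    p1, p2 = sorted(np.argsort(hist)[-2:])          # two dominant peaks
    valley = p1 + int(np.argmin(hist[p1:p2 + 1]))   # valley between them
    v = edges[valley]
    out = np.empty_like(pixels)
    dark = pixels < v
    if dark.any():
        out[dark] = np.interp(pixels[dark], [pixels[dark].min(), v], [0.0, 127.0])
    if (~dark).any():
        top = max(pixels[~dark].max(), v + 1)       # guard degenerate range
        out[~dark] = np.interp(pixels[~dark], [v, top], [128.0, 255.0])
    return out
```

In the region-based embodiment, this mapping would run per image region (52), with bilinear interpolation (56) blending the per-region mappings to avoid block seams.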
Abstract:
A method (100) for identifying objects in an electronic image is provided. The method (100) includes the steps of providing an electronic source image (10) and processing the electronic source image to identify edge pixels. The method (100) further includes the steps of providing an electronic representation of the edge pixels (10') and processing the electronic representation of the edge pixels (10') to identify valid edge center pixels. The method (100) still further includes the step of providing an electronic representation of the valid edge center pixels. Each valid edge center pixel represents the approximate center of a horizontal edge segment of a target width. The horizontal edge segment is made up of essentially contiguous edge pixels. The method (100) also includes the steps of determining symmetry values of test regions (46,48,50,95) associated with valid edge center pixels, and classifying the test regions (46,48,50,95) based on factors including symmetry.
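The valid-edge-center step can be sketched as a row scan over a binary edge map; the function name and `tolerance` parameter are illustrative assumptions, and the "essentially contiguous" language is simplified here to strictly contiguous runs.

```python
def valid_edge_centers(edge_rows, target_width, tolerance=1):
    """Sketch: centers of horizontal edge segments near a target width.

    edge_rows is a list of rows, each a list of 0/1 edge-pixel flags.
    A run of contiguous edge pixels whose length is within `tolerance`
    of `target_width` yields one valid edge center pixel (row, col).
    """
    centers = []
    for r, row in enumerate(edge_rows):
        c = 0
        while c < len(row):
            if row[c]:
                start = c
                while c < len(row) and row[c]:  # walk the contiguous run
                    c += 1
                run = c - start
                if abs(run - target_width) <= tolerance:
                    centers.append((r, start + run // 2))
            else:
                c += 1
    return centers
```

Each returned center would then anchor a test region (46,48,50,95) whose symmetry is evaluated in the classification step.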
Abstract:
An object classification method (100a) for a collision warning system is disclosed. The method includes the steps of capturing (10) a video frame (25) with an imaging device and examining a radar-cued potential object location (50) within the video frame (25), extracting (12) orthogonal moment features from the potential object location (50), extracting (14) Gabor filtered features from the potential object location (50), and classifying (16) the potential object location (50) into one of a first type of image (18a, 18b) or a second type of image (18c, 18d) in view of the extracted orthogonal moment features and the Gabor filtered features.
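The final classification step fuses the two feature sets. The sketch below assumes the features arrive as numeric vectors and stands in a nearest-class-mean rule for whatever classifier the method actually uses; all names are illustrative.

```python
import numpy as np

def classify_patch(moment_feats, gabor_feats, class_means):
    """Sketch: fuse two feature sets and assign the nearest class.

    Orthogonal-moment and Gabor-filter features are concatenated into
    one vector and compared against per-class mean vectors; the label
    with the smallest Euclidean distance wins.
    """
    x = np.concatenate([np.asarray(moment_feats, dtype=float),
                        np.asarray(gabor_feats, dtype=float)])
    labels = list(class_means)
    dists = [np.linalg.norm(x - np.asarray(class_means[k], dtype=float))
             for k in labels]
    return labels[int(np.argmin(dists))]
```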
Abstract:
A method of object classification (10) including the steps of providing an imaging device (28), selecting predetermined imaging features to be extracted from an image produced by the imaging device (28) based upon a desired classification of an object (14), and obtaining an image (16) by the imaging device (28) of at least one object in a field of view of the imaging device (28). The method (10) further includes the steps of extracting at least one feature from the image (18), wherein the at least one feature corresponds to the predetermined imaging features, determining a value for each of the extracted at least one feature (21), and classifying the object based upon the at least one feature that could be extracted from the image (22).
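The extract/value/classify pipeline can be sketched generically; the extractor and rule structures below are assumptions introduced for illustration, as is the "unknown" fallback. Note that classification proceeds from whichever features could actually be extracted, so extractors are allowed to fail.

```python
def classify_object(image, feature_extractors, rules):
    """Sketch of the generic pipeline in the abstract.

    Extracts each preselected feature, records a value for those that
    could be extracted (extractors may return None), then applies
    classification rules to the available feature values.
    """
    values = {}
    for name, extract in feature_extractors.items():
        v = extract(image)          # a value, or None if extraction fails
        if v is not None:
            values[name] = v
    for label, predicate in rules:  # first matching rule wins
        if predicate(values):
            return label
    return "unknown"
```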