Abstract:
A configuration system generates a calibration target to be printed, the target including a set of machine-readable and visually identifiable landmarks and associated location-encoding marks which encode known locations of the landmarks. A plurality of test images of the printed calibration target is acquired by the system from an image capture assembly. Positions of the landmarks and of the location-encoding marks in the acquired test images are detected by the system. The system decodes the locations of the landmarks from the detected location-encoding marks and spatially characterizes the image capture assembly based on the detected positions of the landmarks in the acquired test images and their decoded known locations.
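A minimal Python sketch of the final characterization step, assuming landmark detection and mark decoding have already produced matched point sets; the planar homography via OpenCV's findHomography is an illustrative choice of spatial characterization, not the specific method of the abstract, and the point values are hypothetical:

```python
# Minimal sketch: spatially characterize a camera from detected landmark
# positions and their decoded known locations (hypothetical data).
import numpy as np
import cv2

# Detected landmark centers in a test image (pixels) -- assumed inputs.
image_pts = np.array([[102.3, 88.1], [640.7, 91.4],
                      [645.2, 420.9], [99.8, 415.6]], dtype=np.float32)
# Known landmark locations decoded from the location-encoding marks (mm).
target_pts = np.array([[0, 0], [200, 0],
                       [200, 120], [0, 120]], dtype=np.float32)

# A planar homography is one simple spatial characterization of the
# image capture assembly relative to the printed target.
H, mask = cv2.findHomography(image_pts, target_pts, cv2.RANSAC)
print("image-to-target homography:\n", H)
```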
Abstract:
A store profile generation system includes a mobile base and an image capture assembly mounted on the base. The assembly includes at least one image capture device for acquiring images of product display units in a product facility, product labels, which include product-related data, being associated with the product display units. A control unit acquires the images captured by the at least one image capture device at a sequence of locations of the mobile base in the product facility. The control unit extracts the product-related data from the acquired images and constructs a store profile indicating locations of the product labels throughout the product facility, based on the extracted product-related data. The store profile can be used for generating new product labels for a sale in an order that allows a person to match them to the appropriate locations in a single pass through the store.
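A toy sketch of the store profile as a data structure, with a serpentine ordering for the single-pass walk; the record fields (sku, aisle, position) and the route heuristic are assumptions for illustration:

```python
# Illustrative sketch of a store profile as a simple data structure:
# product-label data extracted at known mobile-base locations.
from dataclasses import dataclass

@dataclass
class LabelRecord:
    sku: str           # product identifier read from the label
    aisle: int         # aisle inferred from the base's location
    position_m: float  # distance along the aisle (meters)

def build_store_profile(detections):
    """detections: iterable of (sku, aisle, position_m) tuples,
    as extracted from images at each capture location."""
    return [LabelRecord(*d) for d in detections]

def single_pass_order(profile):
    # Order labels so a person can place them in one pass through
    # the store: walk aisles in sequence, alternating direction.
    return sorted(profile,
                  key=lambda r: (r.aisle,
                                 r.position_m if r.aisle % 2 == 0
                                 else -r.position_m))

profile = build_store_profile([("SKU42", 1, 3.5), ("SKU7", 2, 1.0),
                               ("SKU13", 1, 0.5), ("SKU99", 2, 6.2)])
for rec in single_pass_order(profile):
    print(rec)
```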
Abstract:
Methods and systems are disclosed for updating camera geometric calibration utilizing scene analysis. Geometric calibration parameters can be derived with respect to one or more cameras and selected reference points of interest identified from a scene acquired by one or more of such cameras. The camera geometric calibration parameters can be applied to image coordinates of the selected reference points of interest to provide real-world coordinates at a time of initial calibration of the camera(s). A subset of a video stream from the camera(s) can then be analyzed to identify features of a current scene captured by the camera(s) that match the selected reference points of interest and provide a current update of the camera geometric calibration parameters with respect to the current scene.
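An illustrative sketch of the update step, using ORB feature matching and a homography as stand-ins for the scene-analysis and calibration machinery; the function name, feature count, and RANSAC threshold are assumptions:

```python
# Sketch: re-derive a camera-to-world mapping by matching current-frame
# features against reference points saved at initial calibration time.
import numpy as np
import cv2

def update_calibration(ref_img, ref_world_H, current_img):
    """ref_img: grayscale scene at initial calibration.
    ref_world_H: image-to-world homography derived at that time.
    current_img: grayscale frame from the current video stream."""
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(ref_img, None)
    kp2, des2 = orb.detectAndCompute(current_img, None)
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(des1, des2)
    matches = sorted(matches, key=lambda m: m.distance)[:100]

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    # Drift of the scene between calibration time and now.
    H_ref_to_cur, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    # Updated image-to-world mapping: undo the drift, then apply
    # the original calibration homography.
    return ref_world_H @ np.linalg.inv(H_ref_to_cur)
```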
Abstract:
Methods and systems present, to a user, different versions of sample images. Each of the sample images is classified into at least one image-element category of multiple image-element categories. Such methods and systems request the user to select preferred versions of the sample images from the different versions of the sample images, and receive in response a user selection of preferred images. The methods and systems determine user-specific preferences for each of the image-element categories based on the user selection of the preferred images. The methods and systems receive an image-processing request relating to user images from the user, and classify the user images into the image-element categories. When processing the image-processing request, the methods and systems alter renditions of the user images according to the user-specific preferences for each image-element category.
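A toy sketch of deriving per-category preferences from the user's selections; the category names and the "version parameter" representation are hypothetical:

```python
# Toy sketch of per-category preference learning from user selections.
from collections import defaultdict

def learn_preferences(selections):
    """selections: iterable of (category, chosen_version_params), where
    chosen_version_params is a dict such as {"saturation": 1.2}."""
    totals = defaultdict(lambda: defaultdict(float))
    counts = defaultdict(int)
    for category, params in selections:
        counts[category] += 1
        for name, value in params.items():
            totals[category][name] += value
    # Average the parameters of the versions the user preferred,
    # yielding one rendering profile per image-element category.
    return {cat: {n: v / counts[cat] for n, v in p.items()}
            for cat, p in totals.items()}

prefs = learn_preferences([("sky", {"saturation": 1.3}),
                           ("sky", {"saturation": 1.1}),
                           ("skin", {"saturation": 0.9})])
print(prefs)  # {'sky': {'saturation': 1.2}, 'skin': {'saturation': 0.9}}
```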
Abstract:
Provided is a method and system for efficient localization in still images. According to one exemplary method, a sliding-window-based two-dimensional (2-D) space search is performed to detect a parked vehicle in a video frame acquired from a fixed parking-occupancy video camera with a field of view associated with a parking region.
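A minimal sliding-window 2-D search sketch; the scoring function is a stub standing in for whatever vehicle classifier the system uses, and the window size, stride, and threshold are assumptions:

```python
# Minimal sliding-window 2-D search over a frame.
import numpy as np

def score_window(patch):
    # Placeholder classifier: a real detector would use HOG+SVM, a CNN, etc.
    return float(patch.mean()) / 255.0

def sliding_window_detect(frame, win=(64, 128), stride=16, thresh=0.5):
    h, w = frame.shape[:2]
    detections = []
    for y in range(0, h - win[0] + 1, stride):
        for x in range(0, w - win[1] + 1, stride):
            s = score_window(frame[y:y + win[0], x:x + win[1]])
            if s > thresh:
                detections.append((x, y, win[1], win[0], s))
    return detections

frame = np.random.randint(0, 256, (240, 320), dtype=np.uint8)
print(len(sliding_window_detect(frame)), "candidate windows")
```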
Abstract:
This disclosure provides vehicle detection methods and systems including irrelevant search window elimination and/or window score degradation. According to one exemplary embodiment, provided is a method of detecting one or more parked vehicles in a video frame, wherein candidate search windows are limited to one or more predefined window shapes. According to another exemplary embodiment, the method includes degrading a classification score of a candidate search window based on aspect ratio, window overlap area and/or a global maximal classification.
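A sketch of the two ideas named above: eliminating candidate windows whose shape is not among predefined shapes, and degrading the scores of windows that overlap higher-scoring ones; the allowed aspect ratios and the degradation rule are assumptions:

```python
def iou(a, b):
    # Boxes as (x, y, w, h); intersection over union.
    ix = max(0, min(a[0] + a[2], b[0] + b[2]) - max(a[0], b[0]))
    iy = max(0, min(a[1] + a[3], b[1] + b[3]) - max(a[1], b[1]))
    inter = ix * iy
    union = a[2] * a[3] + b[2] * b[3] - inter
    return inter / union if union else 0.0

ALLOWED_ASPECTS = {2.0, 2.5}  # hypothetical predefined window shapes (w/h)

def filter_and_degrade(windows):
    """windows: list of (box, score) with box = (x, y, w, h)."""
    # Eliminate irrelevant windows: shape not among the predefined ones.
    kept = [(b, s) for b, s in windows
            if round(b[2] / b[3], 1) in ALLOWED_ASPECTS]
    out = []
    for b, s0 in kept:
        s = s0
        # Degrade the score of a window that overlaps a higher-scoring
        # window (a soft non-maximum suppression).
        for b2, s2 in kept:
            if s2 > s0:
                s *= 1.0 - iou(b, b2)
        out.append((b, s))
    return out

wins = [((0, 0, 80, 40), 0.9), ((8, 0, 80, 40), 0.7), ((0, 0, 90, 40), 0.8)]
print(filter_and_degrade(wins))  # third window eliminated, second degraded
```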
Abstract:
A system and method of localizing vascular patterns by receiving frames from a video camera, identifying and tracking an object within the frames, determining temporal features associated with the object, and localizing vascular patterns from the frames based on the temporal features associated with the object.
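An illustrative sketch of one way temporal features can localize pulsatile regions: per-pixel spectral energy in a cardiac frequency band over the tracked object's frame stack. The band edges, frame rate, and percentile threshold are assumptions:

```python
# Illustrative sketch: flag pixels of a tracked object whose temporal
# signal carries strong energy in a cardiac band (0.8-3.0 Hz).
import numpy as np

def vascular_map(frames, fps=30.0, band=(0.8, 3.0)):
    """frames: (T, H, W) grayscale stack of the tracked object region."""
    stack = frames.astype(np.float32)
    stack -= stack.mean(axis=0)                # remove per-pixel DC
    spectrum = np.abs(np.fft.rfft(stack, axis=0)) ** 2
    freqs = np.fft.rfftfreq(stack.shape[0], d=1.0 / fps)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    energy = spectrum[sel].sum(axis=0)         # temporal energy per pixel
    return energy > np.percentile(energy, 95)  # binary vascular mask

frames = np.random.rand(128, 64, 64)
print(vascular_map(frames).sum(), "pixels flagged")
```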
Abstract:
A method for evaluation of one or more remote workers is disclosed. The method includes publishing a set of tasks. The set of tasks includes a first subset of tasks and a second subset of tasks. The first subset of tasks is generated with a set of defined responses. The method further includes receiving a first subset of responses corresponding to the first subset of tasks from the one or more remote workers. The first subset of responses is then compared with the set of defined responses, and the one or more remote workers are analyzed based on the comparison.
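A minimal sketch of the comparison step, grading each worker's responses to the first subset of tasks against the defined responses; the data layout is hypothetical:

```python
# Sketch: evaluate remote workers on tasks whose correct ("defined")
# responses are known in advance.
def evaluate_workers(defined, responses):
    """defined: {task_id: correct_answer}
    responses: {worker_id: {task_id: answer}} covering the first subset."""
    scores = {}
    for worker, answers in responses.items():
        graded = [answers.get(t) == a for t, a in defined.items()]
        scores[worker] = sum(graded) / len(graded)
    return scores

defined = {"t1": "cat", "t2": "dog"}
responses = {"w1": {"t1": "cat", "t2": "dog"},
             "w2": {"t1": "cat", "t2": "fox"}}
print(evaluate_workers(defined, responses))  # {'w1': 1.0, 'w2': 0.5}
```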
Abstract:
This disclosure provides an augmented Virtual Trainer (VT) method and system. According to an exemplary system, a video based physiological metric system is integrated with a VT system to provide health and/or safety related data associated with a user of the VT system. According to an exemplary embodiment, the disclosed augmented VT system modifies an exercise routine based on the physiological metrics and/or provides audio signals to the user.
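A toy sketch of the feedback loop, adjusting exercise intensity and issuing a cue from a video-derived heart-rate estimate; the thresholds, step sizes, and cue text are assumptions:

```python
# Toy sketch: modify the routine when the physiological metric
# (here, an estimated heart rate) leaves a safe zone.
def adjust_routine(intensity, heart_rate_bpm, hr_max=180, hr_low=100):
    if heart_rate_bpm > 0.9 * hr_max:
        return max(0.1, intensity - 0.2), "Slow down and breathe."
    if heart_rate_bpm < hr_low:
        return min(1.0, intensity + 0.1), "Pick up the pace!"
    return intensity, None

intensity = 0.6
for hr in (95, 130, 170):
    intensity, cue = adjust_routine(intensity, hr)
    print(hr, round(intensity, 2), cue)
```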
Abstract:
A method, system, and apparatus for video frame alignment comprises collecting video data comprising at least two video frames; extracting a line profile along at least one line in each of the at least two video frames; selecting one of the at least two video frames as a reference video frame; segmenting each of the line profiles into a plurality of line profile segments; aligning the plurality of line profile segments with the corresponding line profile segments in the reference video frame; translating each of the at least two video frames according to the corresponding line profile segment alignments; and removing a camera shift from the at least two video frames according to the translation and alignment of the line profile segments with the line profile segments in the reference video frame.
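A sketch of the core alignment idea, assuming a horizontal camera shift estimated from a single row profile; the segment count and the consensus-by-median step are assumptions:

```python
# Sketch: segment a 1-D line profile, estimate each segment's shift
# against the reference profile by cross-correlation, and remove the
# consensus shift from the frame.
import numpy as np

def segment_shifts(profile, ref_profile, n_segments=4):
    shifts = []
    seg_len = len(profile) // n_segments
    for i in range(n_segments):
        seg = profile[i * seg_len:(i + 1) * seg_len]
        ref = ref_profile[i * seg_len:(i + 1) * seg_len]
        corr = np.correlate(seg - seg.mean(), ref - ref.mean(), mode="full")
        shifts.append(np.argmax(corr) - (seg_len - 1))
    return shifts

def align_frame(frame, shift):
    # Remove the estimated horizontal camera shift.
    return np.roll(frame, -shift, axis=1)

ref = np.random.rand(240, 320)
frame = np.roll(ref, 3, axis=1)   # simulated 3-pixel camera shift
row = 120                         # line along which profiles are extracted
shift = int(np.median(segment_shifts(frame[row], ref[row])))
print("estimated shift:", shift)  # expected: 3
aligned = align_frame(frame, shift)
```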