Abstract:
There is provided a method for use in human authentication, the method comprising the steps of: providing captured image sequences; deriving posture gaits from said image sequences; storing said posture gaits in a database; subtracting the background from the foreground of each image sequence, wherein said subtracting comprises obtaining the foreground silhouette in each image sequence; constructing skeletons based on said foreground silhouette image sequences, wherein each of said skeletons comprises at least one centroid and at least three (3) extremities; determining the vertical distances between said three (3) extremities and the centroid; determining the horizontal distances between said three (3) extremities and the centroid; and tracking and identifying the human of interest by matching the image of the human of interest stored in the database against the constructed skeletons. Most illustrative figure(s): FIG 1
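A minimal sketch of the distance step described above, assuming the foreground silhouette is already available as a binary mask and that the three extremities are located with a simple topmost/bottom-corner heuristic; the function name skeleton_features and that heuristic are illustrative, not the claimed construction:

```python
import numpy as np

def skeleton_features(silhouette):
    """Sketch: derive a centroid, three extremity points (head, left foot,
    right foot) and their vertical/horizontal offsets from a binary
    foreground silhouette. The extremity heuristic is an assumption."""
    ys, xs = np.nonzero(silhouette)                      # foreground pixels
    centroid = np.array([xs.mean(), ys.mean()])

    head = np.array([xs[ys.argmin()], ys.min()])         # topmost pixel
    bottom = ys > np.percentile(ys, 90)                  # lowest 10% of rows
    bx, by = xs[bottom], ys[bottom]
    left_foot  = np.array([bx.min(), by[bx.argmin()]])   # leftmost low pixel
    right_foot = np.array([bx.max(), by[bx.argmax()]])   # rightmost low pixel

    extremities = np.stack([head, left_foot, right_foot])
    horizontal = extremities[:, 0] - centroid[0]         # x offsets
    vertical   = extremities[:, 1] - centroid[1]         # y offsets
    return centroid, extremities, horizontal, vertical
```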
Abstract:
The present invention provides a method of tracking a target object (1) in a scene, where the method is based on the use of adaptive attention regions of the target object. The method does not use the complete profile of the target object, saving computing power. More importantly, the method allows tracking of the target object (10) even when the target object has high similarity with the scene background and with other objects in the scene. The present invention also provides a system for automatic tracking of a target object in a scene.
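As a rough illustration of tracking by a small attention region rather than the complete object profile, the sketch below relocates one candidate attention patch by normalised cross-correlation inside a local search window; the patch selection, window size and score gating are assumptions made for the example:

```python
import cv2

def track_attention_region(frame_gray, attention_patch, prev_xy, search=40):
    """Sketch: relocate a small 'attention' patch of the target inside a
    local search window instead of matching the full object profile."""
    h, w = attention_patch.shape
    x0 = max(prev_xy[0] - search, 0)
    y0 = max(prev_xy[1] - search, 0)
    window = frame_gray[y0:y0 + h + 2 * search, x0:x0 + w + 2 * search]

    scores = cv2.matchTemplate(window, attention_patch, cv2.TM_CCOEFF_NORMED)
    _, best, _, best_loc = cv2.minMaxLoc(scores)
    new_xy = (x0 + best_loc[0], y0 + best_loc[1])
    return new_xy, best     # low score can trigger re-selection of the patch
```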
Abstract:
A method (200) for extracting foreground objects of a currently observed image is provided herewith. The method comprises segmenting (202) background objects of a previously observed image into regions of homogeneous brightness and setting initial threshold values for each segmented region to initialize background image information that includes the background image and the initial threshold values for the currently observed image, subtracting (401) the currently observed image from the background image information and thresholding (402) the image difference using the initial threshold values to extract the foreground of the currently observed image, and comparing (405A) the foreground of the currently observed image against the foreground of the previously observed image to update the initial threshold values of the background image information. (FIG. 1)
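A minimal sketch of the subtraction and thresholding steps, assuming the segmented regions are given as a label map and each region carries its own threshold; the threshold update rule shown is a simple illustrative heuristic, not the claimed update (405A):

```python
import numpy as np

def extract_foreground(current, background, region_labels, thresholds):
    """Sketch of per-region background subtraction: each labelled region of
    homogeneous brightness is thresholded with its own value."""
    diff = np.abs(current.astype(np.int16) - background.astype(np.int16))
    foreground = np.zeros(current.shape, dtype=bool)
    for label, thr in thresholds.items():
        mask = region_labels == label
        foreground[mask] = diff[mask] > thr
    return foreground

def update_thresholds(thresholds, fg_now, fg_prev, region_labels, step=1):
    """Illustrative rule: raise a region's threshold when its foreground
    grows much larger than in the previous frame, lower it otherwise."""
    for label in thresholds:
        mask = region_labels == label
        grew = fg_now[mask].sum() > 1.5 * max(fg_prev[mask].sum(), 1)
        thresholds[label] += step if grew else -step
    return thresholds
```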
Abstract:
The present invention provides a method for identifying a loitering event involving an object within an area of interest from a video stream. The method comprises detecting one or more objects entering the area of interest; extracting (104) an entering time and properties of each of the objects; storing (104) the entering times and properties of the objects; computing (110) a time-stamp for each object based on the difference between the current time and the entering time of the corresponding object; and identifying (112) a loitering event when the time-stamp is longer than a predetermined period.
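A minimal sketch of the time-stamp bookkeeping, assuming objects are already detected and tracked under stable identifiers; the dictionary layout and the 60-second period are illustrative assumptions:

```python
import time

def update_loitering(tracked, detections, max_dwell=60.0):
    """Sketch: 'tracked' maps an object id to its entering time and stored
    properties; a loitering event fires when the dwell time (now minus
    entering time) exceeds a predetermined period."""
    now = time.time()
    events = []
    for obj_id, props in detections.items():
        if obj_id not in tracked:                        # object just entered
            tracked[obj_id] = {"entered": now, "props": props}
        dwell = now - tracked[obj_id]["entered"]         # time-stamp
        if dwell > max_dwell:
            events.append((obj_id, dwell))               # loitering detected
    return events
```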
Abstract:
With the growing market for video surveillance in the security area, there is a need for an automated system which provides a way to track and detect human intention based on a particular human motion. The present invention relates to a system and a method for identifying human behavioral intention based on effective motion analysis, wherein the system obtains a sequence of raw images taken from a live scene and processes the raw images in an activity analysis component. The activity analysis component is further provided with an activity enrollment component and an activity detection component.
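As a rough illustration of the enrollment/detection split, the sketch below stores a simple motion signature per enrolled activity and matches live footage against it; the signature (a mean frame-difference profile) and the distance test are assumptions for the example, not the actual analysis component:

```python
import numpy as np

class ActivityAnalysis:
    """Sketch: enroll a motion signature under an activity name, then detect
    that activity in live frames by comparing signatures."""

    def __init__(self):
        self.enrolled = {}                         # activity name -> signature

    @staticmethod
    def motion_signature(frames):
        # Mean absolute difference between consecutive frames (assumption).
        diffs = [np.abs(b.astype(float) - a.astype(float)).mean()
                 for a, b in zip(frames, frames[1:])]
        return np.array(diffs)

    def enroll(self, name, frames):
        self.enrolled[name] = self.motion_signature(frames)

    def detect(self, frames, tolerance=5.0):
        sig = self.motion_signature(frames)
        for name, ref in self.enrolled.items():
            n = min(len(sig), len(ref))
            if np.abs(sig[:n] - ref[:n]).mean() < tolerance:
                return name                        # matched enrolled activity
        return None
```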
Abstract:
The present invention relates to a surveillance system having a method for tampering detection and correction. The surveillance system is able to detect tampering of the camera view and, thereupon, adjust its video analytics configuration parameters so that video analytics can continue even though the orientation of the camera (10) has been changed, as long as a portion of the ROI remains within the tampered camera view. The surveillance system comprises at least one camera (10), a video acquisition module (20), a storage device (30), an image processing module (40), a display module (50) and a post detection module (60).
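A minimal sketch of one plausible tamper check and ROI recovery, assuming tampering is flagged by low histogram correlation with a reference view and the ROI is re-located by template matching; both thresholds and the functions shown are illustrative assumptions, not the patented detection:

```python
import cv2

def is_tampered(frame_gray, reference_gray, threshold=0.5):
    """Sketch: flag tampering when the grey-level histogram of the live
    frame correlates poorly with the reference camera view."""
    h1 = cv2.calcHist([frame_gray], [0], None, [64], [0, 256])
    h2 = cv2.calcHist([reference_gray], [0], None, [64], [0, 256])
    score = cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL)
    return score < threshold

def relocate_roi(frame_gray, roi_patch):
    """After tampering, search for the original ROI content in the new view;
    analytics can continue if enough of it is still visible."""
    scores = cv2.matchTemplate(frame_gray, roi_patch, cv2.TM_CCOEFF_NORMED)
    _, best, _, top_left = cv2.minMaxLoc(scores)
    return top_left, best        # new ROI origin and its match confidence
```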
Abstract:
The present invention relates to a method for estimating a possible route from incomplete tag access information. The method comprises the steps of receiving all tag access information, including tag identification numbers, connected region identification and need tag values; creating a region ontology for each tag access information received; setting the rows and columns of the region ontology according to the connected region identification; filling up the region ontology with tag identification numbers and need tag values; generating intensity profile data based on historical data; searching for probable routes based on the region ontology; estimating the best route based on the probable routes found and the intensity profile data; and displaying the best route.
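As a rough illustration of the route search, the sketch below enumerates candidate paths through a region ontology held as an adjacency map and scores them with a historical intensity profile; the hop limit and scoring rule are assumptions made for the example:

```python
def probable_routes(adjacency, start, end, max_hops=4):
    """Sketch: enumerate candidate routes between the last and next confirmed
    tag reads, walking the region ontology (adjacency: region -> neighbours)."""
    routes, frontier = [], [[start]]
    for _ in range(max_hops):
        frontier = [path + [nxt]
                    for path in frontier
                    for nxt in adjacency.get(path[-1], [])
                    if nxt not in path]
        routes += [p for p in frontier if p[-1] == end]
    return routes

def best_route(routes, intensity):
    """Score each candidate by the historical traffic intensity of its
    regions and return the most likely one."""
    if not routes:
        return None
    return max(routes, key=lambda p: sum(intensity.get(r, 0) for r in p))
```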
Abstract:
A method of identity recognition via human (subject) lip images is proposed. The method includes registration (140) of templates (135) of the lip images for known subjects, for later matching with lip images from subjects presented for identification by digitally matching (740) with the registered templates (135). The lip portions are divided into four quadrants (410-440) for feature extraction, which permits template matching (740) even when only partial lip images are available for identification. The method includes classifying (220) the lips into categories, for defined characteristic features to be extracted, where the different categories of lips (310-350) contain different prominent features that are unique for representation. The feature extraction from the quadrants may be done in different orientations (610-640). The acquisition of the lip images (110, 710) is by available image sensor technologies such as optical imaging, thermal imaging, ultrasonic imaging, passive capacitance and active capacitance imaging.
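A minimal sketch of the quadrant split and partial matching, assuming the probe and the registered template are cropped and scaled to the same size; the mean-absolute-difference score and the two-quadrant minimum are illustrative assumptions rather than the proposed feature extraction:

```python
import numpy as np

def lip_quadrants(lip_image):
    """Sketch: split a cropped lip image into four quadrants so that matching
    can still proceed when only part of the lip is visible."""
    h, w = lip_image.shape[:2]
    return {
        "top_left":     lip_image[:h // 2, :w // 2],
        "top_right":    lip_image[:h // 2, w // 2:],
        "bottom_left":  lip_image[h // 2:, :w // 2],
        "bottom_right": lip_image[h // 2:, w // 2:],
    }

def match_partial(probe_quadrants, template, min_quadrants=2, tol=12.0):
    """Compare whatever quadrants are visible in the probe (a dict holding
    only the available quadrants, same size as the template's) against the
    registered template; score and quadrant minimum are assumptions."""
    t = lip_quadrants(template)
    scores = [np.abs(q.astype(float) - t[name].astype(float)).mean()
              for name, q in probe_quadrants.items()]
    return len(scores) >= min_quadrants and float(np.mean(scores)) < tol
```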
Abstract:
A method and an automated system for tracking and tagging objects, wherein each object is tracked and tagged as a motion block. The method (100) includes detecting a plan view and a lateral view of the motion blocks in a current frame (102) to identify occlusion of the motion blocks in the current frame (104), extracting color information from the motion blocks in the current frame (108) to identify matching color information between the motion blocks in the current frame and all motion blocks in previous frames (110), and assigning a tag to the motion blocks in the current frame (112). The automated system includes a first video camera to detect the plan view (200) of the motion blocks in the current frame and a second video camera to detect the lateral view (208) of the motion blocks in the current frame, a processor comprising means for identifying occlusion of the motion blocks in the current frame, means for extracting color information from the motion blocks in the current frame to identify matching color information between the motion blocks in the current frame and all motion blocks in previous frames, and means for assigning a tag to the motion blocks in the current frame, and a data storage system.
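A minimal sketch of the colour-matching step between current and previous motion blocks, assuming each block is available as a BGR image crop; the hue-histogram summary, the correlation threshold and the tag bookkeeping are assumptions made for the example:

```python
import cv2

def match_motion_blocks(current_blocks, previous_blocks, min_score=0.6):
    """Sketch: summarise each motion block by a hue histogram and match it
    against blocks from previous frames; matched blocks keep the earlier tag,
    unmatched ones receive a new tag."""
    def hue_hist(block_bgr):
        hsv = cv2.cvtColor(block_bgr, cv2.COLOR_BGR2HSV)
        hist = cv2.calcHist([hsv], [0], None, [32], [0, 180])
        return cv2.normalize(hist, hist)

    tags, next_tag = {}, max(previous_blocks.keys(), default=0) + 1
    for i, block in enumerate(current_blocks):
        h = hue_hist(block)
        best_tag, best_score = None, min_score
        for tag, prev in previous_blocks.items():
            score = cv2.compareHist(h, hue_hist(prev), cv2.HISTCMP_CORREL)
            if score > best_score:
                best_tag, best_score = tag, score
        if best_tag is None:
            best_tag, next_tag = next_tag, next_tag + 1   # new object
        tags[i] = best_tag
    return tags
```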