Abstract:
Determining occupants' interactions in a space by applying a computer vision algorithm to track an occupant in a set of images of the space and obtain the occupant's locations in the space over time. A history log of the occupant, which includes the occupant's locations in the space over time, is created, and the history logs of a plurality of occupants are compared to extract interaction points between the occupants.
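As a rough illustration of the history-log comparison described above, the sketch below assumes each occupant's log is a list of timestamped (x, y) locations and uses hypothetical distance and time thresholds; the function and parameter names are illustrative, not taken from the source.

```python
# Minimal sketch: comparing occupants' history logs to extract interaction points.
# Each log is assumed to be a list of (timestamp, x, y) samples; the thresholds
# and names are illustrative only.
from itertools import combinations
from math import hypot

def extract_interactions(logs, max_dist=1.0, max_dt=2.0):
    """Return (occupant_a, occupant_b, time, x, y) tuples where two occupants
    were within max_dist of each other within max_dt seconds."""
    interactions = []
    for (id_a, log_a), (id_b, log_b) in combinations(logs.items(), 2):
        for t_a, x_a, y_a in log_a:
            for t_b, x_b, y_b in log_b:
                close_in_time = abs(t_a - t_b) <= max_dt
                close_in_space = hypot(x_a - x_b, y_a - y_b) <= max_dist
                if close_in_time and close_in_space:
                    interactions.append((id_a, id_b, t_a, (x_a + x_b) / 2, (y_a + y_b) / 2))
    return interactions

if __name__ == "__main__":
    logs = {
        "occupant_1": [(0.0, 1.0, 1.0), (5.0, 3.0, 4.0)],
        "occupant_2": [(0.5, 1.5, 1.2), (5.0, 9.0, 9.0)],
    }
    print(extract_interactions(logs))
```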
Abstract:
Automatically managing space-related resources by using a processor to detect an occupied work station in at least one image of a sequence of images of a space and to output a signal based on the detection of the occupied work station.
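A minimal sketch of the detection step, assuming the work station is a fixed region of interest in the frame and that foreground activity inside it indicates occupancy; the ROI coordinates, threshold, and use of OpenCV's MOG2 background subtractor are assumptions for illustration only.

```python
# Minimal sketch: flagging an occupied work station by measuring foreground
# activity inside a predefined work-station region of interest (ROI).
import cv2
import numpy as np

WORKSTATION_ROI = (100, 80, 160, 120)  # x, y, width, height (assumed layout)
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def workstation_occupied(frame, occupancy_ratio=0.05):
    """Return True if enough foreground pixels fall inside the work-station ROI."""
    x, y, w, h = WORKSTATION_ROI
    fg_mask = subtractor.apply(frame)
    roi = fg_mask[y:y + h, x:x + w]
    return np.count_nonzero(roi) / roi.size > occupancy_ratio

if __name__ == "__main__":
    cap = cv2.VideoCapture(0)  # any source of a sequence of images would do
    ok, frame = cap.read()
    if ok and workstation_occupied(frame):
        print("signal: work station occupied")  # stand-in for the output signal
    cap.release()
```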
Abstract:
Calculating ambient light in a space by obtaining a top-view image of the space from an array of pixels, identifying an object in the image, and assigning weights to pixels from the array of pixels based on the locations of the pixels relative to the identified object in the image, where the object may include a reflective surface. Ambient light in the space is calculated based on the weighted pixels.
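One way to picture the pixel weighting is sketched below: pixels close to the identified object (e.g. a reflective surface) get low weights so they contribute less to the ambient-light estimate. The Gaussian falloff and the bounding-box input are assumptions, not the source's method.

```python
# Minimal sketch: weighting pixels of a top-view image by their distance from an
# identified object before averaging intensity as an ambient-light estimate.
import numpy as np

def ambient_light(gray_image, object_box, falloff=30.0):
    """gray_image: 2-D array of intensities; object_box: (x, y, w, h) of the object."""
    h, w = gray_image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    x0, y0, bw, bh = object_box
    cx, cy = x0 + bw / 2, y0 + bh / 2
    dist = np.hypot(xs - cx, ys - cy)
    weights = 1.0 - np.exp(-(dist ** 2) / (2 * falloff ** 2))  # low weight near the object
    return float(np.average(gray_image, weights=weights))

if __name__ == "__main__":
    img = np.full((240, 320), 120.0)
    img[100:140, 150:200] = 250.0  # a bright reflective surface would skew a plain mean
    print(ambient_light(img, object_box=(150, 100, 50, 40)))
```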
Abstract:
A method and system for determining occupancy in a space include determining the presence of an occupant in the space based on a signal from a PIR sensor monitoring the space and on analysis of an image of the space. Assigning different weights to the PIR signal and to the image analysis enables controlling a device in the space differently.
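The weighted combination could look like the sketch below, which fuses a boolean PIR reading with an image-analysis confidence and maps the result to different control actions; the weight values, thresholds, and action names are assumptions for illustration.

```python
# Minimal sketch: assigning different weights to the PIR signal and the image
# analysis, then controlling a device differently by confidence level.
def occupancy_confidence(pir_triggered, image_score, pir_weight=0.4, image_weight=0.6):
    """pir_triggered: bool from the PIR sensor; image_score: 0..1 from image analysis."""
    return pir_weight * float(pir_triggered) + image_weight * image_score

def control_device(confidence, on_threshold=0.7, dim_threshold=0.3):
    if confidence >= on_threshold:
        return "lights_on"
    if confidence >= dim_threshold:
        return "lights_dim"  # lower confidence leads to a gentler action
    return "lights_off"

if __name__ == "__main__":
    print(control_device(occupancy_confidence(pir_triggered=True, image_score=0.8)))
    print(control_device(occupancy_confidence(pir_triggered=False, image_score=0.4)))
```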
Abstract:
Determining occupancy in a space by detecting a suspected object in a first image of a space, creating a bounding shape around the suspected object in the image, the bounding shape being aligned towards the center of the image, tracking a selected feature from within the bounding shape, determining occupancy in the space based on the tracking, and controlling a device based on the occupancy determination.
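A rough sketch of the bounding-shape and feature-tracking steps, assuming the suspected object is the largest foreground contour and using OpenCV's Shi-Tomasi corner selection and Lucas-Kanade optical flow; the center-alignment fraction is an illustrative assumption.

```python
# Minimal sketch: bounding box around a suspected object, nudged toward the image
# center, with one feature selected inside it and tracked by optical flow.
import cv2
import numpy as np

def bounding_box_toward_center(fg_mask, image_shape, shift_fraction=0.1):
    """Bound the largest foreground contour and nudge the box toward the image center."""
    contours, _ = cv2.findContours(fg_mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    x, y, w, h = cv2.boundingRect(max(contours, key=cv2.contourArea))
    img_h, img_w = image_shape[:2]
    cx, cy = x + w / 2, y + h / 2
    x = max(int(x + (img_w / 2 - cx) * shift_fraction), 0)
    y = max(int(y + (img_h / 2 - cy) * shift_fraction), 0)
    return x, y, w, h

def track_feature(prev_gray, next_gray, box):
    """Select one feature inside the box and track it into the next frame."""
    x, y, w, h = box
    roi_mask = np.zeros_like(prev_gray)
    roi_mask[y:y + h, x:x + w] = 255
    points = cv2.goodFeaturesToTrack(prev_gray, maxCorners=1, qualityLevel=0.3,
                                     minDistance=7, mask=roi_mask)
    if points is None:
        return None
    new_points, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, points, None)
    return new_points[0] if status[0] else None  # occupancy is decided from this track
```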
Abstract:
A system and method for computer vision based tracking of a human form may include detecting a shape of an object in an image of a space and determining the probability of the object having a human form shape based on movement of the object. If the probability of the object being a human form is above a predetermined threshold, the object is tracked; if the probability is below the threshold, the tracking is terminated. Occupancy in the space may be determined based on the tracking of the object.
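The probability-gated tracking could be sketched as below, where a simple aspect-ratio shape score and a motion score are blended into a smoothed human-form probability; the scoring rules, weights, and threshold are assumptions, not the source's classifier.

```python
# Minimal sketch: keep or terminate a track according to the probability that the
# object has a human-form shape, updated from its shape and movement.
def human_form_probability(prev_prob, bbox, displacement, alpha=0.2):
    """bbox: (width, height) of the detected shape; displacement: pixels moved since last frame."""
    w, h = bbox
    shape_score = 1.0 if h > 1.5 * w else 0.3        # upright shapes look more human
    motion_score = min(displacement / 10.0, 1.0)     # plausible, non-zero movement
    observation = 0.6 * shape_score + 0.4 * motion_score
    return (1 - alpha) * prev_prob + alpha * observation  # smoothed update

def update_track(track, bbox, displacement, threshold=0.5):
    track["prob"] = human_form_probability(track["prob"], bbox, displacement)
    track["active"] = track["prob"] >= threshold     # tracking terminates below threshold
    return track

if __name__ == "__main__":
    track = {"prob": 0.5, "active": True}
    for frame_bbox, moved in [((40, 90), 6), ((42, 95), 8), ((80, 40), 0)]:
        print(update_track(track, frame_bbox, moved))
```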
Abstract:
A computer vision based interactive system includes a device to be controlled by a user based on image analysis, an imager in communication with the device, and a feedback indicator configured to create an indicator field of view (FOV) that correlates with the FOV of the imager, providing an indication to the user that the user is within the imager FOV.
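As a loose geometric illustration of correlating the indicator FOV with the imager FOV, the sketch below turns the indicator on only when a detected user position falls inside the imager's horizontal FOV; the angular geometry, coordinates, and print-based indicator are assumptions only.

```python
# Minimal sketch: drive a feedback indicator only while the user is inside the
# imager's field of view.
from math import atan2, degrees

def within_fov(user_xy, imager_xy, imager_heading_deg, fov_deg=60.0):
    """Return True if the user position lies inside the imager's horizontal FOV."""
    dx, dy = user_xy[0] - imager_xy[0], user_xy[1] - imager_xy[1]
    bearing = degrees(atan2(dy, dx))
    offset = (bearing - imager_heading_deg + 180) % 360 - 180
    return abs(offset) <= fov_deg / 2

def set_indicator(user_xy, imager_xy=(0.0, 0.0), heading_deg=0.0):
    on = within_fov(user_xy, imager_xy, heading_deg)
    print("indicator", "ON" if on else "OFF")  # stand-in for driving an LED
    return on

if __name__ == "__main__":
    set_indicator((2.0, 0.5))   # roughly ahead of the imager -> ON
    set_indicator((-1.0, 2.0))  # well outside the imager FOV -> OFF
```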
Abstract:
A system and method are provided for controlling a device based on computer vision. Embodiments of the system and method of the invention are based on receiving a sequence of images of a field of view; detecting movement of at least one object in the images; applying a shape recognition algorithm to the at least one moving object; confirming that the object is a user's hand by combining information from at least two images of the object; and tracking the object to control the device.
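A coarse sketch of the detect-recognize-confirm pipeline, assuming background subtraction for movement, a contour solidity test as the shape check, and confirmation when at least two images agree; none of these stand in for the claimed algorithm.

```python
# Minimal sketch: detect a moving object, apply a simple shape test, and confirm a
# hand only when the evidence from at least two images agrees.
import cv2

subtractor = cv2.createBackgroundSubtractorMOG2(detectShadows=False)

def hand_like_box(frame):
    """Return a bounding box if the largest moving contour has a hand-like shape."""
    mask = subtractor.apply(frame)
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return None
    contour = max(contours, key=cv2.contourArea)
    area = cv2.contourArea(contour)
    if area < 500:
        return None
    hull_area = cv2.contourArea(cv2.convexHull(contour))
    solidity = area / max(hull_area, 1.0)
    return cv2.boundingRect(contour) if solidity < 0.9 else None  # fingers lower solidity

def confirm_hand(frames, needed=2):
    """Confirm a user's hand by combining detections from at least `needed` images."""
    boxes = [hand_like_box(f) for f in frames]
    boxes = [b for b in boxes if b is not None]
    return boxes[-1] if len(boxes) >= needed else None  # the last box seeds the tracker
```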