Abstract:
A three-dimensional coordinate computing apparatus includes an image selecting unit and a coordinate computing unit. The image selecting unit selects a first selected image from multiple images captured by a camera, and selects a second selected image from multiple subsequent images captured by the camera after the first selected image has been captured. The second selected image is selected based on the distance between the position of capture of the first selected image and the position of capture of each of the multiple subsequent images, and on the number of corresponding feature points, each of which corresponds to one of the feature points extracted from the first selected image and one of the feature points extracted from each of the multiple subsequent images. The coordinate computing unit computes three-dimensional coordinates of the corresponding feature points based on the two-dimensional coordinates of each corresponding feature point in the first and second selected images.
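The pair selection and the triangulation step can be sketched as follows. This is a minimal illustration assuming calibrated pinhole cameras with known 3x4 projection matrices; the function names (select_second_image, triangulate), the thresholds, and the linear DLT triangulation are assumptions for illustration, not the apparatus's actual implementation.

```python
import numpy as np

def select_second_image(first_pos, candidates, min_baseline, min_matches):
    """Pick the first later capture whose position is at least min_baseline away
    from the first capture position and which still shares at least min_matches
    corresponding feature points with the first selected image."""
    for index, (pos, n_matches) in enumerate(candidates):
        if (np.linalg.norm(np.asarray(pos) - np.asarray(first_pos)) >= min_baseline
                and n_matches >= min_matches):
            return index
    return None

def triangulate(P1, P2, x1, x2):
    """Linear (DLT) triangulation of one corresponding feature point.
    P1, P2: 3x4 projection matrices; x1, x2: 2D image coordinates."""
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, vt = np.linalg.svd(A)
    X = vt[-1]
    return X[:3] / X[3]                      # homogeneous -> Euclidean

# Toy check: two cameras one unit apart along x, both looking down +z.
P1 = np.hstack([np.eye(3), np.zeros((3, 1))])
P2 = np.hstack([np.eye(3), np.array([[-1.0], [0.0], [0.0]])])
X_true = np.array([0.2, 0.1, 5.0, 1.0])
x1 = (P1 @ X_true)[:2] / (P1 @ X_true)[2]
x2 = (P2 @ X_true)[:2] / (P2 @ X_true)[2]
print(triangulate(P1, P2, x1, x2))           # approx [0.2, 0.1, 5.0]
```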
Abstract:
An information processing device includes a processor; and a memory which stores a plurality of instructions, which when executed by the processor, cause the processor to execute: acquiring images which include a target object, the images being captured by a plurality of cameras on a time-series basis; calculating a plurality of distances from each of the plurality of cameras to the target object by using the images; and correcting, in a case where the target object has reached a predetermined x-y plane and a difference in an area of the target object between the images is equal to or less than a predetermined first threshold, the distance that has been calculated to a distance from the cameras to the x-y plane.
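A minimal sketch of the correction step, assuming the per-camera distances and object areas have already been computed as plain numbers; the function name correct_distances and the max-minus-min area comparison are illustrative assumptions, not taken from the abstract.

```python
def correct_distances(distances, areas, plane_distances, reached_plane, area_threshold):
    """Replace the per-camera object distances with the known camera-to-plane
    distances when the object is judged to have reached the x-y plane and the
    object areas measured in the different cameras' images agree to within
    area_threshold; otherwise keep the calculated distances."""
    area_difference = max(areas) - min(areas)
    if reached_plane and area_difference <= area_threshold:
        return list(plane_distances)
    return list(distances)

# Two cameras: plane reached and areas nearly equal, so distances snap to 50.0.
print(correct_distances(distances=[49.2, 50.7], areas=[1210, 1198],
                        plane_distances=[50.0, 50.0], reached_plane=True,
                        area_threshold=30))
```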
Abstract:
An input method that is executed by a computer includes obtaining a first image of an object using an imaging device, detecting a first feature point of the object based on a shape of the object in the first image, calculating, based on the first feature point and information on a first area of the object in the first image, a first angle of the object with respect to a plane on which a pressing operation is performed, and selecting a first input item from a plurality of input items based on the first angle.
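One way the angle-based selection could look in practice is sketched below; since the abstract does not state how the angle is derived from the area, the cosine area model and the equal-width angle bins are assumptions introduced purely for illustration.

```python
import math

def estimate_angle_deg(area, reference_area):
    """Illustrative tilt model: assume the apparent area of the finger region
    shrinks with the cosine of its angle to the pressing plane, relative to a
    reference area measured when the finger lies flat on the plane."""
    ratio = max(0.0, min(1.0, area / reference_area))
    return math.degrees(math.acos(ratio))

def select_input_item(angle_deg, items):
    """Map the angle into equal-width bins over [0, 90) degrees, one per item."""
    bin_width = 90.0 / len(items)
    return items[min(int(angle_deg // bin_width), len(items) - 1)]

angle = estimate_angle_deg(area=820, reference_area=1000)
print(round(angle, 1), select_input_item(angle, ["lowercase", "uppercase", "symbol"]))
```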
Abstract:
An image processing device includes a processor; and a memory which stores a plurality of instructions, which when executed by the processor, cause the processor to execute: acquiring a captured image; extracting first feature data and second feature data of a first user included in the image; calculating a first authentication decision value, which indicates a probability that the first feature data of the first user resembles first feature data of a second user; and authenticating the first user by using the first feature data, or both the first feature data and the second feature data, according to the first authentication decision value.
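A hedged sketch of how the decision value could switch between single-feature and two-feature authentication; the score semantics, the thresholds, and the function name authenticate are illustrative assumptions rather than the device's actual criteria.

```python
def authenticate(first_score, confusion_probability, second_score=None,
                 accept_threshold=0.8, confusion_threshold=0.3):
    """Use the first feature alone when the decision value (the probability of
    confusing the first user with another enrolled user) is low; otherwise
    require the second feature as well.  Thresholds are illustrative."""
    if confusion_probability <= confusion_threshold:
        return first_score >= accept_threshold
    if second_score is None:
        return False                     # second feature needed but unavailable
    return first_score >= accept_threshold and second_score >= accept_threshold

print(authenticate(first_score=0.92, confusion_probability=0.10))                     # True
print(authenticate(first_score=0.92, confusion_probability=0.55, second_score=0.88))  # True
```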
Abstract:
A method for inputting a character, executed by a computer, includes: obtaining a first pressed position at which a pressing operation has been performed and a first key corresponding to the first pressed position; detecting deletion of a character input using the first key; obtaining, when the deletion is detected, a second pressed position at which a next pressing operation has been performed and a second key corresponding to the second pressed position; determining whether or not a distance between the first pressed position and the second key is smaller than or equal to a threshold; and correcting, when the distance is smaller than or equal to the threshold, the range within the key input region that is recognized as the second key, on the basis of the first pressed position.
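A minimal sketch of the key-range correction, assuming keys are axis-aligned rectangles on the pressing surface; the half-way shift of the recognized range toward the first pressed position is an illustrative rule, not the patented one.

```python
import math

def correct_key_range(second_key_rect, first_pressed_pos, threshold, shift_ratio=0.5):
    """second_key_rect = (x0, y0, x1, y1), the range currently recognized as the
    second key.  If the first (deleted) press was within `threshold` of the
    second key's centre, shift the recognized range toward the first pressed
    position so the user's next press near that spot maps to the intended key."""
    x0, y0, x1, y1 = second_key_rect
    cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
    if math.hypot(first_pressed_pos[0] - cx, first_pressed_pos[1] - cy) > threshold:
        return second_key_rect                           # far apart: no correction
    dx = (first_pressed_pos[0] - cx) * shift_ratio
    dy = (first_pressed_pos[1] - cy) * shift_ratio
    return (x0 + dx, y0 + dy, x1 + dx, y1 + dy)

print(correct_key_range((100, 200, 140, 240), first_pressed_pos=(150, 215), threshold=40))
```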
Abstract:
An input apparatus includes a processor which is configured to: detect a finger region containing a finger and a fingertip position in a first image captured by a first camera; set a template containing the finger region and generate a plurality of sub-templates by dividing the template along a longitudinal direction of the finger region; obtain a region that best matches the template on a second image generated by a second camera placed a prescribed distance away from the first camera and divide the region into search regions equal in number to the sub-templates along the longitudinal direction of the finger region; perform template matching between a sub-template containing the fingertip position and a corresponding search region to find a matching point corresponding to the fingertip position; and compute the fingertip position in real space based on the fingertip position in the first image and the matching point.
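The sub-template matching and depth computation could be sketched roughly as below, assuming a rectified camera pair, a finger oriented vertically in the image, and OpenCV's normalized cross-correlation matcher; searching the whole second image and using a fixed strip count are simplifications of the search-region handling described in the abstract, and the function name and parameters are assumptions.

```python
import cv2

def fingertip_depth(second_img, template, tip_x_first, tip_row_in_template,
                    focal_px, baseline_m, n_strips=4):
    """Divide the finger template into strips along the finger's length (the
    finger is assumed to run vertically), match only the strip containing the
    fingertip against the second image with normalized cross-correlation, and
    convert the resulting disparity into depth (Z = f * B / d)."""
    strip_h = template.shape[0] // n_strips
    idx = min(tip_row_in_template // strip_h, n_strips - 1)
    strip = template[idx * strip_h:(idx + 1) * strip_h]
    scores = cv2.matchTemplate(second_img, strip, cv2.TM_CCOEFF_NORMED)
    _, _, _, (match_x, _match_y) = cv2.minMaxLoc(scores)   # best matching point
    disparity = float(tip_x_first - match_x)                # horizontal shift in pixels
    return focal_px * baseline_m / disparity if disparity > 0 else float("inf")
```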
Abstract:
An image processing device includes a processor; and a memory which stores a plurality of instructions, which when executed by the processor, cause the processor to execute: acquiring an image including a first region of a user; extracting a color feature quantity or an intensity gradient feature quantity from the image; detecting the first region based on the color feature quantity or the intensity gradient feature quantity; and selecting whether the detecting detects the first region using the color feature quantity or the intensity gradient feature quantity, based on first information related to a speed of movement of the first region calculated from a comparison of the first regions in a plurality of images acquired at different times.
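A small sketch of the feature-selection step; the abstract does not state which feature is preferred at high speed, so the colour-at-high-speed rule below (motion blur tends to weaken intensity gradients) is an assumption, as are the function name and threshold.

```python
def choose_feature(prev_center, curr_center, frame_interval_s, speed_threshold):
    """Estimate the speed of the first region from its centres in two frames and
    pick the feature quantity used for detection accordingly."""
    speed = ((curr_center[0] - prev_center[0]) ** 2 +
             (curr_center[1] - prev_center[1]) ** 2) ** 0.5 / frame_interval_s
    return "color" if speed > speed_threshold else "intensity_gradient"

print(choose_feature((100, 120), (160, 130), frame_interval_s=1 / 30, speed_threshold=600))
```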
Abstract:
A fingertip position estimation apparatus includes a processor that executes a process. The process includes: identifying a first fingertip position of a finger included in an image, based on a color area model that defines a color of a finger; calculating dimensions of an area that is different from a background of the image, within an area of a prescribed size that is in contact with the first fingertip position and that is positioned in a direction in which a fingertip is pointing; and when the dimensions are larger than a prescribed threshold, estimating a second fingertip position that is positioned away from the first fingertip position in the direction in which the fingertip is pointing.
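A minimal sketch of the re-estimation step, assuming a boolean foreground mask (True where a pixel differs from the background) and a unit pointing-direction vector; the probe-box size, area threshold, and step length are illustrative values, not the apparatus's actual parameters.

```python
import numpy as np

def refine_fingertip(foreground_mask, tip, direction, box=15, area_threshold=60, step=10):
    """Place a box of side `box` just beyond the detected fingertip in the
    pointing direction; if it still contains many foreground pixels, assume the
    colour model under-shot (e.g. the nail is not skin-coloured) and move the
    estimate `step` pixels further along the pointing direction."""
    tx, ty = tip
    dx, dy = direction                                   # unit pointing-direction vector
    cx, cy = int(tx + dx * box), int(ty + dy * box)      # centre of the probe box
    h, w = foreground_mask.shape
    y0, y1 = max(cy - box // 2, 0), min(cy + box // 2, h)
    x0, x1 = max(cx - box // 2, 0), min(cx + box // 2, w)
    area = int(foreground_mask[y0:y1, x0:x1].sum())
    if area > area_threshold:
        return (tx + dx * step, ty + dy * step)          # second, farther estimate
    return tip                                           # keep the first estimate

mask = np.zeros((240, 320), dtype=bool)
mask[100:160, 150:170] = True                            # a finger-like foreground blob
print(refine_fingertip(mask, tip=(160, 140), direction=(0.0, 1.0)))
```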
Abstract:
A user detecting apparatus includes: a memory; and a processor that executes a procedure, the procedure including: obtaining a first image and a second image, extracting a user-associated area from the first image according to a given condition, dividing the user-associated area into a plurality of areas, storing a histogram of each of the plurality of areas in the memory, detecting from the second image, according to histogram similarity, a corresponding area that corresponds to one of the plurality of areas whose histogram serves as a first reference histogram, and changing the reference histogram used for a third image from the first reference histogram to a second reference histogram.
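The block histograms and the similarity-based matching could be sketched as below using OpenCV's histogram utilities; the hue feature, the 4x2 block grid, and the correlation metric are assumptions. Switching the reference histogram for the third image would amount to passing a different block's histogram as reference_hist on the next call.

```python
import numpy as np
import cv2

def block_histograms(image, bbox, rows=4, cols=2):
    """Split the user-associated area (bbox = x, y, w, h) into rows x cols
    blocks and compute a normalized hue histogram for each block."""
    x, y, w, h = bbox
    hsv = cv2.cvtColor(image[y:y + h, x:x + w], cv2.COLOR_BGR2HSV)
    hists = []
    for r in range(rows):
        for c in range(cols):
            block = hsv[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols]
            hist = cv2.calcHist([block], [0], None, [32], [0, 180])
            hists.append(cv2.normalize(hist, None).flatten())
    return hists

def most_similar_block(candidate_hists, reference_hist):
    """Return the index and score of the candidate block from the second image
    whose histogram correlates best with the current reference histogram."""
    scores = [cv2.compareHist(h, reference_hist, cv2.HISTCMP_CORREL)
              for h in candidate_hists]
    return int(np.argmax(scores)), float(max(scores))
```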
Abstract:
A gesture recognition device includes a processor; and a memory which stores a plurality of instructions, which when executed by the processor, cause the processor to execute: acquiring, on the basis of an image of an irradiation region irradiated with projector light, the image being picked up by an image pickup device, first color information representative of color information of a hand region when the projector light is not irradiated on the hand region and second color information representative of color information of the hand region when the projector light is irradiated on the hand region; and extracting, from the image picked up by the image pickup device, a portion of the hand region at which the hand region does not overlap with a touch region irradiated with the projector light on the basis of the first color information, and extracting a portion of the hand region at which the hand region overlaps with the touch region on the basis of the second color information.
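A minimal sketch of the two-colour-model extraction, assuming the first and second colour information have each been reduced to an HSV range; range-threshold segmentation via cv2.inRange is an illustrative stand-in for the abstract's colour-information comparison, and the function name and parameters are assumptions.

```python
import cv2

def split_hand_region(frame_bgr, unlit_lo, unlit_hi, lit_lo, lit_hi):
    """Segment the hand twice in HSV: once with the colour range learned while
    the projector light was off (first colour information) and once with the
    range learned while it was on (second colour information).  The first mask
    covers the part of the hand outside the projector-lit touch region, the
    second the part of the hand inside it."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    outside_touch_region = cv2.inRange(hsv, unlit_lo, unlit_hi)
    inside_touch_region = cv2.inRange(hsv, lit_lo, lit_hi)
    return outside_touch_region, inside_touch_region
```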