Abstract:
An image providing apparatus includes: a plurality of auxiliary imaging units that have shooting ranges partially overlapping one another, each of which images a subject to generate auxiliary image data; a provider communication unit; a provider communication control unit that receives, from a portable device via the provider communication unit, transmission request information requesting the image providing apparatus to transmit composite image data; and an image synthesizing unit that generates, based on the transmission request information, composite image data by combining the auxiliary image data respectively generated by two or more of the plurality of auxiliary imaging units. The provider communication control unit transmits the composite image data to the portable device via the provider communication unit.
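As a hedged illustration (not from the patent text), the synthesizing step for two auxiliary imaging units with partially overlapping shooting ranges might be sketched as follows; the function name, the 1-D "image rows", and the averaging of the overlap are all assumptions for illustration.

```python
# Illustrative sketch: combine image data from two imaging units whose
# shooting ranges partially overlap, producing one composite image.
# Names and the overlap-averaging rule are assumed, not from the patent.

def stitch_horizontal(left, right, overlap):
    """Combine two image rows whose last/first `overlap` pixels coincide.

    Overlapping pixels are averaged; the remaining pixels are concatenated.
    """
    if overlap <= 0:
        return left + right
    blended = [(a + b) / 2 for a, b in zip(left[-overlap:], right[:overlap])]
    return left[:-overlap] + blended + right[overlap:]

# Example: two 5-pixel rows sharing a 2-pixel overlap -> 8-pixel composite.
row_a = [10, 20, 30, 40, 50]
row_b = [42, 52, 60, 70, 80]
composite = stitch_horizontal(row_a, row_b, overlap=2)
```

A real implementation would operate on 2-D image arrays and register the overlap geometrically; the 1-D row keeps the combining idea visible.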
Abstract:
A display apparatus includes: a communication section performing signal transmission with each of a first image pickup section obtaining a first picked-up image and a second image pickup section performing image pickup of the same object from an angle different from that of the optical axis of the first image pickup section to obtain a second picked-up image; an instruction inputting section inputting an instruction to select, as a display target, one of the first picked-up image obtained by the first image pickup section and the second picked-up image obtained by the second image pickup section; and a display control section switching between display of the first picked-up image and display of the second picked-up image, and causing the two displays to cooperate with each other, based on the instruction from the instruction inputting section.
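A minimal sketch of the display-control behavior, under the assumption that the instruction is modeled as a string selecting the first or second picked-up image; the class and method names are illustrative, not from the patent.

```python
# Illustrative sketch: an instruction input selects which picked-up image
# (first or second) is the display target, and the controller switches
# the display accordingly. The string interface is an assumption.

class DisplayController:
    def __init__(self):
        self.target = "first"  # start by displaying the first picked-up image

    def handle_instruction(self, selection):
        """Switch the display target according to the input instruction."""
        if selection not in ("first", "second"):
            raise ValueError("unknown selection")
        self.target = selection
        return self.target

dc = DisplayController()
shown = dc.handle_instruction("second")
```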
Abstract:
An imaging device includes: an imaging unit configured to image an object and generate image data of the object; a contour detector configured to detect a contour of the object in an image corresponding to the image data generated by the imaging unit; and a special effect processor configured to generate processed image data that produces a visual effect by performing different image processing for each object area determined by a plurality of contour points constituting the contour of the object, in accordance with a perspective distribution, relative to the imaging unit, of the plurality of contour points constituting the contour of the object, the image processing being performed on an area surrounded by the contour in the image corresponding to the image data generated by the imaging unit.
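A hedged sketch of the depth-dependent processing: each contour-bounded area receives a processing strength derived from how far its contour points lie from the imaging unit. The strength mapping and the toy darkening effect are assumptions, not the patent's actual formulas.

```python
# Illustrative sketch: per-area image processing whose strength depends on
# the distance distribution of the area's contour points from the imaging
# unit. effect_strength and apply_effect are assumed toy operations.

def effect_strength(distances, max_distance):
    """Map a contour area's mean point distance to a 0..1 effect strength."""
    mean = sum(distances) / len(distances)
    return min(mean / max_distance, 1.0)

def apply_effect(pixel, strength):
    """Toy 'visual effect': darken a pixel in proportion to strength."""
    return pixel * (1.0 - 0.5 * strength)

near = effect_strength([1.0, 1.2, 0.8], max_distance=10.0)   # weak effect
far = effect_strength([9.0, 10.0, 11.0], max_distance=10.0)  # strong effect
```

Applying different strengths per area is what yields the differing visual effect across the image described in the abstract.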
Abstract:
An image processing device includes: a determination unit that determines, based on a plurality of pieces of image data which are generated by continuously taking images of an area of field of view and are input from an imaging unit provided outside the image processing device, whether the area of field of view of the imaging unit has been changed; an image composition unit that, when the area of field of view of the imaging unit has been changed, superimposes overlapping areas of imaging regions of a plurality of images corresponding to the plurality of pieces of image data along a direction in which the area of field of view has been changed, to generate composite image data; and a display control unit that causes a display unit provided outside the image processing device to display a composite image corresponding to the generated composite image data.
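The determination step can be sketched, under assumptions, by estimating the shift between consecutive frames: if the second frame must be offset to match the first, the field of view has changed. The matching cost, search range, and names are illustrative, not from the patent.

```python
# Illustrative sketch: decide from consecutive 1-D frames whether the
# imaging unit's field of view has changed, by finding the offset s for
# which curr[i - s] best matches prev[i]. All details are assumptions.

def best_shift(prev, curr, max_shift=3):
    """Return the offset s that minimizes the mean absolute frame difference."""
    def cost(s):
        pairs = [(prev[i], curr[i - s]) for i in range(len(prev))
                 if 0 <= i - s < len(curr)]
        return sum(abs(a - b) for a, b in pairs) / len(pairs)
    return min(range(-max_shift, max_shift + 1), key=cost)

def view_changed(prev, curr):
    return best_shift(prev, curr) != 0

frame1 = [0, 0, 5, 9, 5, 0, 0, 0]
frame2 = [0, 0, 0, 5, 9, 5, 0, 0]   # same scene content, shifted by one pixel
```

The sign of the detected shift then gives the direction along which the composition unit superimposes the overlapping areas.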
Abstract:
A photographing apparatus includes a photographing module, an image processor, a line-of-sight direction determination module, a main subject determination module, and an emphasis processor. The line-of-sight direction determination module determines a line-of-sight direction of the photographer by comparing the image data acquired by the image processor with a reference line-of-sight direction of the photographer. The main subject determination module determines a main subject to be photographed by the photographer, based on the line-of-sight direction determined by the line-of-sight direction determination module. The emphasis processor executes an emphasis process on the main subject determined by the main subject determination module.
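A hedged sketch of the main-subject step: given an estimated line-of-sight direction (here modeled as an angle in degrees) and candidate subject directions, pick the subject closest to the gaze and emphasize it. The angle representation and the brightness-boost emphasis are assumptions for illustration.

```python
# Illustrative sketch: choose the main subject nearest the photographer's
# line-of-sight direction, then apply an emphasis process to it.
# The degree-based model and the gain value are assumed, not from the patent.

def pick_main_subject(gaze_deg, subjects):
    """subjects: dict of name -> direction in degrees; return nearest name."""
    return min(subjects, key=lambda name: abs(subjects[name] - gaze_deg))

def emphasize(pixel, is_main, gain=1.5):
    """Toy emphasis: brighten pixels of the main subject, clamp to 255."""
    return min(int(pixel * gain), 255) if is_main else pixel

subjects = {"person": -10.0, "dog": 25.0, "tree": 60.0}
main = pick_main_subject(gaze_deg=20.0, subjects=subjects)
```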
Abstract:
The information device includes an imaging unit that images a subject and generates image data of the subject, a meta information generating unit that generates meta information related to the image data generated by the imaging unit, a possibility information generating unit that generates, with respect to the meta information, possibility information setting whether or not an external device is permitted to change the original information when the meta information is transmitted to the external device, and an image file generating unit that generates an image file associating the image data generated by the imaging unit, the meta information generated by the meta information generating unit, and the possibility information generated by the possibility information generating unit with one another.
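A minimal sketch of the image-file structure, assuming the possibility information is a per-field permission flag consulted when an external device attempts an edit; field names and the dict layout are illustrative, not the patent's file format.

```python
# Illustrative sketch: an image file bundling image data, meta information,
# and per-field "possibility information" that says whether an external
# device may change that field. All names are assumptions.

def make_image_file(image_data, meta, editable):
    return {"image": image_data, "meta": dict(meta), "editable": dict(editable)}

def external_edit(image_file, field, new_value):
    """An external device's edit succeeds only if the flag permits it."""
    if image_file["editable"].get(field, False):
        image_file["meta"][field] = new_value
        return True
    return False

f = make_image_file(b"\x00", meta={"title": "a", "gps": "x"},
                    editable={"title": True, "gps": False})
```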
Abstract:
A user guide method, comprising: determining a reference area according to user behavior and target events the user is interested in; acquiring a reference target event heat map representing distribution of the target events within the reference area at a specified time point; and estimating conditions of a target event at a time after the specified time point, by referencing the reference target event heat map and a database that shows chronological change of previous heat maps for the same or similar areas.
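The estimation step might be sketched, under assumptions, as scaling the reference heat map by the growth ratio observed in the database's previous heat maps for a similar area over the same elapsed time; the ratio model and data layout are illustrative only.

```python
# Illustrative sketch: estimate a later heat map from the reference heat
# map at the specified time point plus historical chronological change for
# a similar area. The proportional-growth model is an assumption.

def growth_ratio(history, t0, dt):
    """history: dict time -> total event count for a similar area."""
    return history[t0 + dt] / history[t0]

def estimate_heat_map(reference, ratio):
    """Scale every cell of the reference heat map by the historical ratio."""
    return [[cell * ratio for cell in row] for row in reference]

history = {9: 10.0, 10: 20.0}        # event counts at hour 9 and hour 10
ref = [[1.0, 2.0], [0.0, 3.0]]       # reference heat map at hour 9
est = estimate_heat_map(ref, growth_ratio(history, t0=9, dt=1))
```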
Abstract:
An annotation device, comprising: a display that performs sequential playback display of a plurality of images that may contain physical objects that are the subject of annotation; and a processor that acquires, as annotation information, specific portions that have been designated within the images displayed on the display, sets an operation time or a data amount for designating the specific portions, and, at a point in time when designation of the specific portions has been completed for the set operation time, a time based on the set data amount, or the set data amount, requests learning from an inference engine that creates an inference model by learning, using the annotation information that has been acquired up to the time of completion as training data representing a relationship between the physical objects and the specific portions.
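A hedged sketch of the completion trigger: annotations are collected until a preset operation time or data amount is reached, and everything gathered up to that point becomes the training data for a learning request. The class, thresholds, and tuple layout are assumptions, not the patent's implementation.

```python
# Illustrative sketch: collect designated portions as annotation
# information; signal completion once the set data amount or operation
# time is reached, at which point the collected items are handed over as
# training data. Names and thresholds are assumed.

class AnnotationSession:
    def __init__(self, max_items=3, max_seconds=60.0):
        self.max_items = max_items      # set data amount
        self.max_seconds = max_seconds  # set operation time
        self.items = []

    def add(self, image_id, region, elapsed):
        """Record one designated portion; return True once a threshold is hit."""
        self.items.append((image_id, region))
        return len(self.items) >= self.max_items or elapsed >= self.max_seconds

    def training_data(self):
        return list(self.items)

s = AnnotationSession(max_items=2)
done_first = s.add("img1", (10, 10, 20, 20), elapsed=5.0)   # not yet complete
done_second = s.add("img2", (0, 0, 5, 5), elapsed=9.0)      # data amount reached
```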
Abstract:
An image processing apparatus generates a left-eye viewpoint image and a right-eye viewpoint image, detects one or more pairs that are each a pair of a partial image of the left-eye viewpoint image and a partial image of the right-eye viewpoint image that are similar to each other, performs, for each of the one or more pairs, image adjustment processing for adjusting the three-dimensionality of one or both of the partial images of the left-eye and right-eye viewpoint images, displays the left-eye viewpoint image, or the left-eye viewpoint image after the image adjustment processing, in a manner such that the image is capable of being observed with a left eye, and displays the right-eye viewpoint image, or the right-eye viewpoint image after the image adjustment processing, in a manner such that the image is capable of being observed with a right eye.
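The pair-detection step can be sketched, under assumptions, as patch matching: for a partial image from the left-eye view, find the most similar partial image in the right-eye view by sum of absolute differences (SAD). SAD is an assumed stand-in for whatever similarity measure the patent actually uses.

```python
# Illustrative sketch: locate, for a left-eye patch, the most similar
# right-eye patch along a row; the resulting pair is what the image
# adjustment processing would then operate on. SAD matching is assumed.

def sad(a, b):
    """Sum of absolute differences between two equal-length pixel sequences."""
    return sum(abs(x - y) for x, y in zip(a, b))

def find_similar_patch(left_patch, right_row, width):
    """Return the offset in right_row of the patch most similar to left_patch."""
    offsets = range(len(right_row) - width + 1)
    return min(offsets, key=lambda o: sad(left_patch, right_row[o:o + width]))

left_patch = [7, 8, 9]
right_row = [0, 1, 7, 8, 9, 2]
offset = find_similar_patch(left_patch, right_row, width=3)
```

The offset between the paired patches corresponds to their disparity, which is what adjusting "three-dimensionality" would modify.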
Abstract:
An image pickup system includes an input/output modeling section 24. The input/output modeling section 24 creates, as a population, an image group (access images) obtained when a specific target is photographed, and generates an inference model by using, as teacher data, sequential images selected from the image group created as the population based on whether the specific target can be accessed. Each image of the image group is associated with date and time information and/or position information, and the input/output modeling section 24 generates an inference model for determining, based on the date and time information and/or the position information, whether a process performed on the specific target is good or bad.
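A hedged sketch of the teacher-data selection: from the population of access images, each carrying date/time or position information and an access outcome, keep the records where the specific target could be accessed. The record layout and field names are assumptions for illustration.

```python
# Illustrative sketch: filter the image-group population down to teacher
# data based on whether the specific target could be accessed, keeping the
# associated date/time and position information. Field names are assumed.

def select_teacher_data(image_group):
    """Keep (timestamp, position) records from successful-access images."""
    return [(img["time"], img["pos"]) for img in image_group if img["accessed"]]

population = [
    {"time": "09:00", "pos": (0, 0), "accessed": True},
    {"time": "12:00", "pos": (5, 5), "accessed": False},
    {"time": "09:30", "pos": (0, 1), "accessed": True},
]
teacher = select_teacher_data(population)
```

An inference model trained on such records could then judge, from date/time or position alone, whether a process on the target is likely to be good or bad.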