Abstract:
A 3D imaging system is proposed in which an object is successively illuminated in at least three directions and at least three images of the object are captured by one or more energy sensors. A set of images is produced computationally showing the object from multiple viewpoints, and illuminated in the at least three directions simultaneously. This set of images is used stereoscopically to form an initial 3D model of the object. Variations in the brightness of the object provide features useful in the stereoscopy. The initial model is refined using photometric data obtained from images in which the object is illuminated in the at least three directions successively.
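The computationally produced "simultaneously lit" viewpoint images can be approximated simply by summing the directionally lit frames captured from each viewpoint; the resulting brightness variation across the object then serves as texture for stereo matching. A minimal NumPy sketch (function and variable names are illustrative, not taken from the abstract):

```python
import numpy as np

def synthesize_all_lit(directional_frames):
    """Approximate a frame in which all light sources are on at once by
    summing the frames captured under each direction separately."""
    stack = np.stack([f.astype(np.float64) for f in directional_frames])
    return np.clip(stack.sum(axis=0), 0, 255).astype(np.uint8)

# One synthesized frame per viewpoint; brightness variation across the object
# then acts as texture for stereo matching between viewpoints, e.g.
#   left_all  = synthesize_all_lit([left_d1, left_d2, left_d3])
#   right_all = synthesize_all_lit([right_d1, right_d2, right_d3])
```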
Abstract:
A three-dimensional model of the skin of an animal is formed by capturing at least one first two-dimensional (2-D) image of a portion of the skin of an animal located in an imaging region of an imaging assembly and illuminated with certain lighting conditions; using the first 2-D image to determine whether the skin of the animal has been correctly scruffed; and, if so, forming a 3-D image of the skin of the animal using at least one second 2-D image of the skin of the animal captured under different lighting conditions. Preferably the second 2-D image is captured using the same energy sensor which captured the first 2-D image.
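As a rough illustration of the capture flow, the scruff check can gate the second capture on the same sensor; in this sketch `camera.capture`, `is_correctly_scruffed` and `build_3d_model` are hypothetical placeholders for the imaging assembly, the scruff classifier and the reconstruction step:

```python
def scan_skin(camera, is_correctly_scruffed, build_3d_model):
    """Gate 3-D reconstruction on the scruff check (all arguments are
    hypothetical callables, not names from the abstract)."""
    first_image = camera.capture(lighting="first_condition")
    if not is_correctly_scruffed(first_image):
        return None  # e.g. prompt the operator to re-scruff and retry
    # Same energy sensor, different lighting conditions for the second capture.
    second_image = camera.capture(lighting="second_condition")
    return build_3d_model(second_image)
```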
Abstract:
An imaging system captures one or more images at a time when a subject is comfortably wearing a pair of glasses with dummy lenses. The subject's face is illuminated by energy sources (e.g. visible light sources), and specular reflections ("glints") from the dummy lenses are used to measure the locations of the dummy lens(es). This provides information about the comfortable positions for glasses on the subject's face, which can be used to design and fabricate personalized glasses.
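Glint localization can be as simple as thresholding the brightest pixels in each image and taking cluster centroids; a minimal sketch using NumPy and SciPy, in which the threshold value and the assumption of near-saturated specular highlights are purely illustrative:

```python
import numpy as np
from scipy import ndimage

def find_glints(gray, threshold=250):
    """Return (row, col) centroids of the brightest pixel clusters, taken
    as candidate specular reflections from the dummy lenses."""
    mask = gray >= threshold
    labels, n = ndimage.label(mask)
    return ndimage.center_of_mass(mask, labels, range(1, n + 1))
```

With known energy-source positions, the centroids of these glints constrain where each dummy lens sits relative to the face.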
Abstract:
A 3D imaging system is proposed in which an object to be imaged is successively illuminated in at least three directions and at least three images of the object are captured by one or more energy sensors. Corresponding features in different ones of the images are identified, and the positions of the features in the images are used to estimate motion of the object relative to the energy sensors. The estimated motion is used to register the images in a common coordinate system, and thereby correct for the relative motion of the object and imaging system between different times at which the images were captured. The features may be selected to be ones which are likely to be landmarks on the object, rather than on a background behind the object.
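One common way to implement this registration is to match sparse features between frames and fit a motion model with RANSAC, which also tends to reject matches that fall on the background rather than on the object; a sketch using OpenCV's ORB features, in which the similarity motion model and parameter values are assumptions rather than details fixed by the abstract:

```python
import cv2
import numpy as np

def register_to_reference(ref_gray, moving_gray):
    """Estimate the inter-frame motion from matched features and warp the
    moving frame into the reference frame's coordinate system."""
    orb = cv2.ORB_create(1000)
    kp1, des1 = orb.detectAndCompute(ref_gray, None)
    kp2, des2 = orb.detectAndCompute(moving_gray, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des2, des1)
    src = np.float32([kp2[m.queryIdx].pt for m in matches])
    dst = np.float32([kp1[m.trainIdx].pt for m in matches])
    # RANSAC discards correspondences inconsistent with the dominant motion,
    # e.g. matches on a static background behind a moving object.
    M, inliers = cv2.estimateAffinePartial2D(src, dst, method=cv2.RANSAC)
    h, w = ref_gray.shape
    return cv2.warpAffine(moving_gray, M, (w, h))
```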
Abstract:
Creating a 3D model of an object, comprising: illuminating the object a plurality of times at different respective intensities from at least three directions using at least one directional energy source (e.g. LED or flash light source); capturing images of the object for each direction at each time; obtaining photometric data from the images; and generating the 3D model using the photometric data. An energy intensity ratio may be determined. An ambient light photograph may be subtracted from the collected pictures. An initial three-dimensional model may be created (e.g. by stereoscopy) and then refined (e.g. by determining the normal direction to the object's surface). Brightness values for points on the object may be determined and combined. Brightness and intensity of a range of colours may be acquired. The distance between the light source and points on the object may be compensated for by weighting. The dynamic range of the frames may be reduced by a tone mapping algorithm.
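The core photometric step, recovering per-pixel surface normals and albedo from images lit from known directions, reduces to a per-pixel least-squares solve under a Lambertian assumption; a minimal sketch with ambient subtraction included, while the intensity-ratio, distance-weighting and tone-mapping refinements mentioned above are omitted:

```python
import numpy as np

def photometric_normals(images, light_dirs, ambient=None):
    """Classic Lambertian photometric stereo (a sketch, not the claimed method).

    images:     list of K grayscale frames, each (H, W), one per direction
    light_dirs: (K, 3) unit vectors toward each light source
    ambient:    optional (H, W) ambient-only frame to subtract
    """
    I = np.stack([img.astype(np.float64) for img in images], axis=-1)   # (H, W, K)
    if ambient is not None:
        I = np.clip(I - ambient[..., None], 0, None)
    H, W, K = I.shape
    L = np.asarray(light_dirs, dtype=np.float64)                         # (K, 3)
    # Solve L @ g = I for g = albedo * normal at every pixel simultaneously.
    g = np.linalg.lstsq(L, I.reshape(-1, K).T, rcond=None)[0].T          # (H*W, 3)
    albedo = np.linalg.norm(g, axis=1, keepdims=True)
    normals = np.where(albedo > 1e-8, g / np.maximum(albedo, 1e-8), 0.0)
    return normals.reshape(H, W, 3), albedo.reshape(H, W)
```

The recovered normals can then be used to refine the initial stereoscopic model, e.g. by integrating them into a depth correction.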
Abstract:
A method for estimating a region associated with a deformity on a surface portion of the skin of a mammal, the method comprising capturing a three-dimensional image of the surface portion to produce first data, the first data comprising depth data, and generating second data from the first data. The second data comprises curvature data. The method further comprises combining the first data and the second data to produce third data, and performing region growing on the third data.
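A minimal version of this pipeline estimates curvature from the depth data with a Laplacian and grows a region from a seed pixel using both quantities as the inclusion test; in this sketch the tolerances and the Laplacian-as-curvature proxy are illustrative assumptions, not details from the abstract:

```python
import numpy as np
from collections import deque
from scipy import ndimage

def grow_region(depth, seed, depth_tol=2.0, curv_tol=0.5):
    """Grow a region from a seed pixel over a depth map (first data), using a
    Laplacian-based curvature estimate (second data) in the inclusion test."""
    curvature = ndimage.laplace(depth.astype(np.float64))  # crude curvature proxy
    H, W = depth.shape
    mask = np.zeros((H, W), dtype=bool)
    mask[seed] = True
    queue = deque([seed])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < H and 0 <= nc < W and not mask[nr, nc]:
                if (abs(depth[nr, nc] - depth[r, c]) < depth_tol
                        and abs(curvature[nr, nc]) < curv_tol):
                    mask[nr, nc] = True
                    queue.append((nr, nc))
    return mask
```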