Abstract:
Reconstructed surface meshes can be generated based on a plurality of received surface meshes. Each surface mesh can include vertices and faces representing an object. The received surface meshes can be assigned to one of a plurality of groups, and a region of interest of each surface mesh within each group can be aligned. The reconstructed surface meshes can be generated based on the aligned regions of interest for each group.
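The align-then-combine step described above might be sketched as follows. This is only an illustration, not the patent's method: alignment here is translation-only (centroid matching), where a full implementation would also solve for rotation (e.g., with the Kabsch algorithm), and the reconstruction is a simple per-vertex average.

```python
import numpy as np

def align_roi(roi, reference):
    """Align an ROI to a reference by matching centroids
    (translation-only; a full method would also solve for rotation)."""
    return roi - roi.mean(axis=0) + reference.mean(axis=0)

def reconstruct_group(rois):
    """Average the aligned ROIs of one group into a reconstructed surface."""
    reference = rois[0]
    aligned = [align_roi(r, reference) for r in rois]
    return np.mean(aligned, axis=0)

# Three copies of the same 4-vertex patch, shifted apart in space.
patch = np.array([[0., 0., 0.], [1., 0., 0.], [0., 1., 0.], [1., 1., 1.]])
group = [patch, patch + 5.0, patch - 2.0]
recon = reconstruct_group(group)
```

With noise-free, purely translated copies as above, the reconstruction recovers the original patch exactly; real scans would average out per-mesh noise instead.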
Abstract:
A non-parametric, computer-implemented system and method for creating a two-dimensional interpretation of a three-dimensional biometric representation. The method comprises: obtaining with a camera a three-dimensional (3D) representation of a biological feature; determining a region of interest in the 3D representation; selecting an invariant property for the 3D region of interest; identifying a plurality of minutiae in the 3D representation; mapping a nodal mesh to the plurality of minutiae; projecting the nodal mesh of the 3D representation onto a 2D plane; and mapping the plurality of minutiae onto the 2D representation of the nodal mesh. The 2D representation of the plurality of minutiae has a property corresponding to the invariant property in the 3D representation, and the value of that property in the 2D projection matches the invariant property in the 3D representation.
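As an illustration of an invariant-preserving 3D-to-2D projection (not the patent's specific mapping), one could project minutiae onto their best-fit plane via PCA and take pairwise distances between minutiae as the invariant; for coplanar points this projection is an isometry, so the distances survive exactly. All names here are illustrative.

```python
import numpy as np

def project_to_plane(points):
    """Project 3D points onto their best-fit plane (via SVD/PCA) and
    return 2D coordinates within that plane."""
    centered = points - points.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:2].T   # coordinates along the two in-plane axes

def pairwise_distances(pts):
    """All pairwise Euclidean distances (the chosen invariant)."""
    diff = pts[:, None, :] - pts[None, :, :]
    return np.linalg.norm(diff, axis=-1)

# Minutiae lying on a tilted plane in 3D (orthonormal plane basis).
uv = np.array([[0., 0.], [1., 0.], [0., 2.], [3., 1.]])
basis = np.array([[1., 0., 1.], [0., np.sqrt(2.), 0.]]) / np.sqrt(2.)
minutiae_3d = uv @ basis
minutiae_2d = project_to_plane(minutiae_3d)
```

For truly coplanar minutiae the 2D pairwise distances equal the 3D ones; for a curved biometric surface the projection would only approximately preserve them.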
Abstract:
Methods for recognizing or identifying tooth types using digital 3D models of teeth. The methods include receiving a segmented digital 3D model of teeth and selecting a digital 3D model of a tooth from the segmented digital 3D model. A plurality of distinct features of the tooth is computed and aggregated to generate a single feature describing the digital 3D model of the tooth. A type of the tooth is identified based upon the aggregation, which can include comparing the aggregation with features corresponding with known tooth types. The methods also include identifying a type of tooth, without segmenting it from an arch, based upon tooth widths and the location of the tooth within the arch.
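The aggregate-then-compare step could look like the following sketch. The pooling choice (mean) and the nearest-centroid comparison are assumptions for illustration; the patent's actual features and comparison are not specified here.

```python
import numpy as np

def aggregate(point_features):
    """Aggregate per-point features into one descriptor
    (mean pooling here; a histogram would also fit this step)."""
    return point_features.mean(axis=0)

def identify_tooth(descriptor, known_types):
    """Compare the aggregated descriptor against reference features
    for known tooth types and return the closest type."""
    return min(known_types,
               key=lambda t: np.linalg.norm(descriptor - known_types[t]))

# Hypothetical 2-D reference features per tooth type.
known = {"molar": np.array([2.0, 0.5]), "incisor": np.array([0.5, 2.0])}
tooth_points = np.array([[1.9, 0.4], [2.2, 0.6], [1.8, 0.5]])
label = identify_tooth(aggregate(tooth_points), known)
```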
Abstract:
Systems and methods for authenticating material samples are provided. Characteristic features are measured for a batch of material samples that comprise substantially the same composition and are produced by substantially the same process. The measured characteristic features have respective variability that is analyzed to extract statistical parameters. In some cases, reference ranges are determined based on the extracted statistical parameters for the batch of material samples. The corresponding statistical parameters of a test material sample are compared to the reference ranges to verify whether the test material sample is authentic.
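A minimal sketch of the reference-range idea: estimate per-feature statistical parameters (mean and standard deviation) from the authentic batch, form mean ± k·sigma ranges, and check a test sample against them. The specific features, the choice of k, and the range form are illustrative assumptions.

```python
import statistics

def reference_ranges(batch_features, k=3.0):
    """Per-feature reference range: mean +/- k standard deviations,
    estimated from a batch of authentic samples."""
    ranges = []
    for feature in zip(*batch_features):
        mu = statistics.mean(feature)
        sigma = statistics.stdev(feature)
        ranges.append((mu - k * sigma, mu + k * sigma))
    return ranges

def is_authentic(sample, ranges):
    """A test sample passes if every feature falls inside its range."""
    return all(lo <= x <= hi for x, (lo, hi) in zip(sample, ranges))

# Hypothetical two-feature measurements for four authentic samples.
batch = [[10.1, 0.52], [9.9, 0.49], [10.0, 0.51], [10.2, 0.48]]
ranges = reference_ranges(batch)
```

A sample near the batch mean is accepted, while one far outside the measured variability is rejected.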
Abstract:
Systems and methods for generating random bits by using physical variations present in material samples are provided. Initial random bit streams are derived from measured material properties for the material samples. In some cases, secondary random bit streams are generated by applying a randomness extraction algorithm to the derived initial random bit streams.
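One concrete reading of this pipeline, as a sketch: threshold each measured property at the batch median to get the initial bit stream, then debias it with the von Neumann extractor as the randomness extraction algorithm. Both choices (median thresholding, von Neumann) are illustrative stand-ins, not the patent's stated algorithms.

```python
import statistics

def initial_bits(measurements):
    """Derive an initial bit stream by thresholding each measured
    property value at the batch median."""
    med = statistics.median(measurements)
    return [1 if m > med else 0 for m in measurements]

def von_neumann_extract(bits):
    """Von Neumann extractor: map pairs 01 -> 0 and 10 -> 1,
    discarding 00 and 11, to remove bias from the initial stream."""
    out = []
    for a, b in zip(bits[::2], bits[1::2]):
        if a != b:
            out.append(a)
    return out

measurements = [3.2, 1.1, 4.8, 0.9, 2.7, 5.0]  # e.g. per-spot property readings
initial = initial_bits(measurements)
secondary = von_neumann_extract(initial)
```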
Abstract:
A method for detecting tooth wear using digital 3D models of teeth taken at different times. The digital 3D models of teeth are segmented to identify individual teeth within the digital 3D model. The segmentation includes performing a first segmentation method that over-segments at least some of the teeth within the model and a second segmentation method that classifies points within the model as being either on an interior of a tooth or on a boundary between teeth. The results of the first and second segmentation methods are combined to generate segmented digital 3D models. The segmented digital 3D models of teeth are then compared to detect tooth wear: differences between the segmented models relating to the same tooth indicate wear on that tooth over time.
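The combination step might work along these lines: regions produced by the over-segmentation are merged whenever the points on their shared border are classified as tooth interior, and kept separate when the border points are classified as a tooth boundary. This union-find sketch over a toy adjacency structure is an assumed illustration of that idea, not the patent's algorithm.

```python
def combine_segmentations(labels, is_boundary, adjacency):
    """Merge over-segmented regions: two adjacent regions whose shared
    border points are classified as tooth interior belong to one tooth.
    `adjacency` maps (region_a, region_b) -> indices of shared points."""
    parent = {r: r for r in set(labels)}

    def find(r):
        while parent[r] != r:
            r = parent[r]
        return r

    for (a, b), shared in adjacency.items():
        if not any(is_boundary[i] for i in shared):
            parent[find(a)] = find(b)   # union: same tooth

    return [find(r) for r in labels]

# Toy example: regions 0 and 1 are halves of one tooth (interior border),
# region 2 is a neighbouring tooth (true boundary between 1 and 2).
labels = [0, 0, 1, 1, 2, 2]
is_boundary = [False, False, False, False, True, True]
adjacency = {(0, 1): [1, 2], (1, 2): [3, 4]}
merged = combine_segmentations(labels, is_boundary, adjacency)
```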
Abstract:
Methods for estimating and predicting tooth wear based upon a single 3D digital model of teeth. The 3D digital model is segmented to identify individual teeth within the model. A digital model of a tooth is selected from the segmented model, and its original shape is predicted. The digital model is compared with the predicted original shape to estimate wear areas. A mapping function based upon values relating to tooth wear can also be applied to the selected digital model to predict wear of the tooth.
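The compare-to-predicted-shape step can be illustrated with a 1D height-profile toy: wherever the scanned surface sits measurably below the predicted original (unworn) surface, the vertex is marked as a wear area, and the summed deficit serves as a crude wear measure. The tolerance and the height-profile representation are assumptions for the sketch.

```python
import numpy as np

def estimate_wear(scanned_heights, predicted_heights, tol=0.05):
    """Mark vertices where the scanned surface lies more than `tol`
    below the predicted original surface; also return total deficit."""
    loss = predicted_heights - scanned_heights
    worn = loss > tol
    return worn, float(loss[worn].sum())

predicted = np.array([1.0, 1.2, 1.4, 1.2, 1.0])  # predicted original cusp profile
scanned = np.array([1.0, 1.1, 1.0, 1.1, 1.0])    # scanned, flattened cusp tip
worn, deficit = estimate_wear(scanned, predicted)
```

The mapping-function variant mentioned in the abstract would instead regress such a deficit forward in time; that is not shown here.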
Abstract:
Methods for aligning a digital 3D model of teeth represented by a 3D mesh to a desired orientation within a 3D coordinate system. The method includes receiving the 3D mesh in random alignment and changing an orientation of the 3D mesh to align the digital 3D model of teeth with a desired axis in the 3D coordinate system. The methods can also detect a gum line in the digital 3D model to remove the gingiva from the model.
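One common way to take a randomly oriented mesh to a canonical orientation, shown here purely as an assumed illustration, is PCA alignment: rotate the vertex cloud so its principal axes coincide with the coordinate axes, after which the desired axis corresponds to one principal direction.

```python
import numpy as np

def align_to_axes(vertices):
    """Rotate a randomly oriented vertex cloud so its principal axes
    coincide with the coordinate axes (PCA alignment via SVD)."""
    centered = vertices - vertices.mean(axis=0)
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    if np.linalg.det(vt) < 0:        # keep a proper rotation, no reflection
        vt[-1] *= -1
    return centered @ vt.T

rng = np.random.default_rng(0)
cloud = rng.normal(size=(200, 3)) * [5.0, 1.0, 0.2]  # elongated "arch" shape
q, _ = np.linalg.qr(rng.normal(size=(3, 3)))         # random orientation
aligned = align_to_axes(cloud @ q.T)
```

After alignment, the variance of the cloud decreases from the first axis to the third, i.e., the long axis of the arch lies along the first coordinate axis. Gum-line detection and gingiva removal are separate steps not sketched here.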
Abstract:
At least some aspects of the present disclosure feature a computing device configured to receive an input image of an environment having a plurality of physical notes. The computing device automatically processes the input image to identify at least some of the plurality of physical notes in the input image and displays the input image, together with indications of the identified physical notes, on a user interface. The computing device receives, via the user interface, a user input indicating a position within the input image and, responsive to the user input, recognizes proximate to that position a missed physical note that was not identified when the input image was initially processed.
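The click-to-recover interaction could be sketched as a local search: given a binary note mask and the user-clicked pixel, look in a small window around the click for note pixels, then flood-fill that component to obtain the missed note's bounding box. The mask representation, window size, and flood-fill recovery are all illustrative assumptions.

```python
from collections import deque

def recover_missed_note(mask, click, window=2):
    """Search a (2*window+1)-sized neighbourhood of the clicked (row, col)
    for note pixels; flood-fill the found component and return its
    bounding box (r_min, c_min, r_max, c_max), or None if nothing is there."""
    rows, cols = len(mask), len(mask[0])
    r0, c0 = click
    seed = None
    for r in range(max(0, r0 - window), min(rows, r0 + window + 1)):
        for c in range(max(0, c0 - window), min(cols, c0 + window + 1)):
            if mask[r][c]:
                seed = (r, c)
                break
        if seed:
            break
    if seed is None:
        return None
    seen, queue = {seed}, deque([seed])
    while queue:                        # 4-connected flood fill
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if 0 <= nr < rows and 0 <= nc < cols \
                    and mask[nr][nc] and (nr, nc) not in seen:
                seen.add((nr, nc))
                queue.append((nr, nc))
    rs = [r for r, _ in seen]
    cs = [c for _, c in seen]
    return (min(rs), min(cs), max(rs), max(cs))

# A 2x2 note at rows 1-2, cols 3-4 that the initial pass missed.
mask = [[0, 0, 0, 0, 0],
        [0, 0, 0, 1, 1],
        [0, 0, 0, 1, 1],
        [0, 0, 0, 0, 0],
        [0, 0, 0, 0, 0]]
box = recover_missed_note(mask, (2, 1))
```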
Abstract:
An apparatus, a system, and a program that can easily detect an image region where retroreflected light is recorded, without being influenced by neighboring objects. In one embodiment, a measuring apparatus (1) includes: an imaging unit (11); a converter (141) that converts first image data, captured by the imaging unit using light emission for photography, and second image data, captured by the imaging unit without the light emission, into luminance values; a differential processor (142) that calculates, for each pixel, a difference between a first luminance value based on the first image data and a second luminance value based on the second image data, and generates, from the resulting differential image, an output image visually representing the region where the difference is present; and a display unit (16) that displays the output image.
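The converter/differential-processor pipeline can be illustrated as a flash/no-flash luminance difference: retroreflective material returns far more of the flash than ordinary surfaces, so large positive per-pixel differences mark the region. The Rec. 709 luma weights and the threshold value are assumptions for the sketch.

```python
import numpy as np

def retroreflective_region(flash_rgb, ambient_rgb, threshold=50.0):
    """Convert both exposures to luminance, take the per-pixel difference,
    and mark pixels whose flash response exceeds the threshold."""
    weights = np.array([0.2126, 0.7152, 0.0722])  # Rec. 709 luma weights
    luma_flash = flash_rgb @ weights
    luma_ambient = ambient_rgb @ weights
    diff = luma_flash - luma_ambient
    return diff > threshold

ambient = np.full((4, 4, 3), 40.0)   # exposure without light emission
flash = ambient + 5.0                # ordinary surfaces brighten slightly
flash[1:3, 1:3] += 120.0             # retroreflector bounces the flash back
mask = retroreflective_region(flash, ambient)
```

The resulting boolean mask is what the output image would visually represent; a neighboring bright object appears in both exposures and therefore cancels in the difference.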