Abstract:
In some examples, an article includes a substrate having a physical surface; and a multi-dimensional machine-readable optical code embodied on the physical surface, wherein the multi-dimensional machine-readable optical code comprises a static data (SD) optical element set and a dynamic lookup data (DLD) optical element set, each set embodied on the physical surface, wherein the DLD optical element set encodes a look-up value that references dynamically changeable data, wherein the SD optical element set encodes static data that does not reference other data, and wherein the DLD optical element set is not decodable at a distance greater than a threshold distance.
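As a loose illustration of the look-up mechanism described above, and not the patented encoding itself, the following Python sketch assumes a decoder has already extracted an SD payload and a DLD key from the code; the key is resolved against a table that can change without reprinting the article. All names here are hypothetical.

    DYNAMIC_TABLE = {42: "lane closed ahead"}      # changeable without reprinting

    def decode_code(sd_payload: bytes, dld_key: int):
        static_data = sd_payload.decode("ascii")   # SD: self-contained payload
        dynamic_data = DYNAMIC_TABLE.get(dld_key)  # DLD: only a reference key
        return static_data, dynamic_data

    print(decode_code(b"EXIT 12", 42))  # ('EXIT 12', 'lane closed ahead')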
Abstract:
In some examples, an article includes a substrate and a plurality of optical element sets embodied on the substrate, wherein each optical element set includes a plurality of optical elements, wherein each respective optical element represents an encoded value in a set of encoded values, wherein the encoded values are differentiable based on the visual differentiability of the respective optical elements, wherein each respective optical element set represents at least a portion of a message or error correction data usable to decode the message if one or more of the plurality of optical element sets are visually occluded, and wherein the optical element sets for the message and error correction data are spatially configured on the substrate in a matrix such that the message is decodable even when the optical elements positioned within at least one complete edge of the matrix are visually occluded.
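The occlusion tolerance described above resembles erasure correction. As a minimal sketch, assuming a simple XOR parity row rather than whatever code the article actually uses, losing any one full row of the matrix (for example, an occluded edge) still leaves the message recoverable:

    rows = [0b1011, 0b0110, 0b1100]           # message rows of the matrix
    parity = rows[0] ^ rows[1] ^ rows[2]      # appended error-correction row

    # Top edge occluded: reconstruct rows[0] from the surviving rows.
    recovered = parity ^ rows[1] ^ rows[2]
    assert recovered == rows[0]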
Abstract:
In some examples, an article includes a substrate, the substrate including a physical surface; and a hierarchy of parent and child optical element sets embodied on the physical surface, wherein a first encoded value represented by the parent optical element set is based at least in part on a visual appearance of a particular optical element in the child optical element set, and a second encoded value represented by the particular optical element is based at least in part on the visual appearance, the first and second encoded values being different, the second encoded value not being decodable from a distance greater than a threshold distance, and the first encoded value being decodable from a distance greater than the threshold distance.
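One way to picture the parent/child relationship above, as a hypothetical sketch only: treat each element's appearance as an 8-bit gray level whose most significant bit survives viewing at a distance (the parent's coarse value) while the remaining bits require a close view (the child's fine value).

    def parent_value(appearance: int) -> int:
        return appearance >> 7        # 1 coarse bit, resolvable from far away

    def child_value(appearance: int) -> int:
        return appearance & 0x7F      # 7 fine bits, resolvable only up close

    a = 0b10110101                    # one element's visual appearance
    print(parent_value(a), child_value(a))   # two different encoded values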
Abstract:
Methods for recognizing or identifying tooth types using digital 3D models of teeth. The methods include receiving a segmented digital 3D model of teeth and selecting a digital 3D model of a tooth from the segmented digital 3D model. A plurality of distinct features of the tooth is computed, and an aggregation of those features is computed to generate a single feature describing the digital 3D model of the tooth. A type of the tooth is identified based upon the aggregation, which can include comparing the aggregation with features corresponding to known tooth types. The methods also include identifying a type of tooth, without segmenting it from an arch, based upon tooth widths and a location of the tooth within the arch.
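A minimal sketch of the aggregate-and-compare step, assuming per-vertex feature vectors (e.g., curvature values) are already available and using a mean aggregation with nearest-prototype matching; the feature choice and distance metric are assumptions, not the patented method:

    import numpy as np

    def identify_tooth(vertex_features, prototypes):
        descriptor = vertex_features.mean(axis=0)        # aggregate into one feature
        distances = {name: np.linalg.norm(descriptor - p)
                     for name, p in prototypes.items()}
        return min(distances, key=distances.get)         # closest known tooth type

    feats = np.random.rand(500, 16)                      # toy per-vertex features
    protos = {"molar": np.full(16, 0.5), "incisor": np.zeros(16)}
    print(identify_tooth(feats, protos))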
Abstract:
Aligner with an integrated data capture device located within or attached to the aligner. The data capture device includes a microcontroller, a temperature sensor, and a battery. The microcontroller is configured to receive a temperature reading from the temperature sensor both when the aligner is worn by the user and when it is not, and to wirelessly transmit the temperature reading to the user's mobile device. The data capture device can also include other sensors for sensing a condition in the oral environment or mechanical strain on the aligner when it is worn by the user. A software application on the user's mobile device can use the received data to monitor the user's compliance and treatment progress, as well as how the aligner is tracking the intended movement of the user's teeth according to the treatment plan.
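As a sketch of compliance logic the mobile application might apply, assuming readings near body temperature indicate a worn aligner (the threshold band and sampling interval are assumptions):

    WORN_BAND = (33.0, 38.0)   # deg C; assumed band for in-mouth readings

    def worn_minutes(readings, minutes_per_reading=10):
        worn = sum(1 for t in readings
                   if WORN_BAND[0] <= t <= WORN_BAND[1])
        return worn * minutes_per_reading

    print(worn_minutes([36.5, 36.8, 22.1, 36.2]))   # 30 minutes worn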
Abstract:
Methods for generating multiple orthodontic treatment options for a digital 3D model of teeth in malocclusion. The methods generate a plurality of different orthodontic treatment plans for the teeth and display in a user interface the digital 3D model of teeth in malocclusion with a visual indication of each of the plurality of different orthodontic treatment plans. The visual indication of the treatment plans can be overlaid on the digital 3D model of teeth in malocclusion and can include aligners, brackets, or a combination of aligners and brackets. A doctor, technician, or other user can then select one of the treatment plans for a particular patient.
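A data-model sketch of how the plan options might be represented for display and selection; the structure and names are hypothetical, not taken from the patent:

    from dataclasses import dataclass

    @dataclass
    class TreatmentPlan:
        name: str
        appliances: tuple   # e.g., ("aligners",), ("brackets",), or both

    plans = [TreatmentPlan("Plan A", ("aligners",)),
             TreatmentPlan("Plan B", ("brackets",)),
             TreatmentPlan("Plan C", ("aligners", "brackets"))]
    selected = plans[0]     # the doctor's choice for this patient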
Abstract:
A method for evaluating intermediate and final setups for orthodontic treatment. The method includes receiving intermediate and final setups, where each setup is a digital representation of a state of teeth at a particular stage of orthodontic treatment. Scores are computed based upon metrics related to at least some of the states represented by the corresponding setups, and the scores provide an indication of the validity of the corresponding states. The metrics, along with an indication of the corresponding scores, are displayed in a dashboard in order to provide a visual evaluation of the validity of the setups.
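A minimal sketch of the scoring step, assuming each metric has a numeric value and a validity limit, and that a score of 1.0 means the setup is within limits; the metric names, limits, and scoring function are all assumptions:

    def score_setup(metrics: dict, limits: dict) -> dict:
        # 1.0 when a metric is within its limit, scaled down when it exceeds it
        return {m: min(1.0, limits[m] / v) if v > 0 else 1.0
                for m, v in metrics.items()}

    setup_metrics = {"collision_mm": 0.4, "root_movement_mm": 2.5}
    limits = {"collision_mm": 0.2, "root_movement_mm": 3.0}
    print(score_setup(setup_metrics, limits))
    # {'collision_mm': 0.5, 'root_movement_mm': 1.0}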
Abstract:
A method for generating digital setups for an orthodontic treatment path. The method includes receiving a digital 3D model of teeth, performing interproximal reduction (IPR) on the model and, after performing the IPR, generating an initial treatment path with stages including an initial setup, a final setup, and a plurality of intermediate setups. The method also includes computing IPR accessibility for each tooth at each stage of the initial treatment path, applying IPR throughout the initial treatment path based upon the computed IPR accessibility, and dividing the initial treatment path into steps of feasible motion of the teeth, resulting in a final treatment path with setups corresponding to the steps. The setups can be used to make orthodontic appliances, such as clear tray aligners, for each stage of the treatment path.
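A sketch of the path-division step, assuming fixed per-stage motion limits (the 0.25 mm and 2 degree values are illustrative, not from the patent): the number of feasible steps is driven by whichever motion component needs the most stages.

    import math

    def n_stages(total_translation_mm, total_rotation_deg,
                 max_mm_per_stage=0.25, max_deg_per_stage=2.0):
        return max(math.ceil(total_translation_mm / max_mm_per_stage),
                   math.ceil(total_rotation_deg / max_deg_per_stage), 1)

    print(n_stages(2.0, 10.0))   # 8 stages: translation is the binding limit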
Abstract:
Systems and methods for displaying synchronized views and animations of digital 3D models of a person's intra-oral structure, such as teeth. Two digital 3D models obtained from scans taken at different times are displayed in side-by-side views and synchronized via registration of the two scans or their corresponding models. A user's control input to one displayed model causes the same manipulation of both models, since they are registered. The two digital 3D models can also be displayed in an animation mode in which the first model slowly morphs into the second model to illustrate changes in the intra-oral structure over time.
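A numpy sketch of the morphing view, assuming registration has already aligned the second scan to the first and established vertex correspondence (a simplifying assumption; real scans would need resampling):

    import numpy as np

    def morph(verts_a, verts_b_registered, t):
        # t = 0 shows the first scan, t = 1 the second; values between blend them
        return (1.0 - t) * verts_a + t * verts_b_registered

    a = np.zeros((4, 3))          # toy vertex arrays standing in for tooth meshes
    b = np.ones((4, 3))
    print(morph(a, b, 0.5))       # the halfway frame of the animation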
Abstract:
In one example, a method includes receiving a digital note of a plurality of digital notes generated based on image data comprising a visual representation of a scene that includes a plurality of physical notes, such that each of the plurality of digital notes respectively corresponds to a particular physical note of the plurality of physical notes, wherein each of the physical notes includes respective recognizable content. In this example, the method also includes receiving user input indicating a modification to one or more visual characteristics of the digital note. In this example, the method also includes editing, in response to the user input, the one or more visual characteristics of the digital note. In this example, the method also includes outputting, for display, a modified version of the digital note that includes the one or more edited visual characteristics.
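As a structural sketch of the editing step, with a hypothetical note type whose visual characteristics (color, shape) can be modified in response to user input and then redisplayed:

    from dataclasses import dataclass, replace

    @dataclass(frozen=True)
    class DigitalNote:
        content: str     # recognizable content from the physical note
        color: str
        shape: str

    note = DigitalNote("buy milk", color="yellow", shape="square")
    edited = replace(note, color="blue")   # apply the user's modification
    print(edited)                          # modified version for display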