Abstract:
A method of capturing an image of a wound on a subject for wound assessment is disclosed. The method includes obtaining an image of a portion of the subject with one or more cameras; displaying the image on a display panel of an imaging device; obtaining a stored condition from a memory; obtaining a present condition; comparing the stored condition and the present condition; displaying a crosshair over the image on the display panel when the comparison indicates that the present condition corresponds to the stored condition; receiving an instruction for capturing; and capturing an image of the wound in response to the received instruction.
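The condition comparison above can be sketched minimally as a tolerance check on each field of the capture condition. The fields (camera distance, camera angle) and the tolerance values below are illustrative assumptions, not taken from the disclosure:

```python
# Hypothetical sketch of the capture-condition check. The condition
# fields and tolerances are illustrative assumptions.

def conditions_match(stored, present, tolerances):
    """Return True if every field of the present condition falls within
    the tolerance of the corresponding stored field."""
    return all(abs(present[k] - stored[k]) <= tolerances[k] for k in stored)

stored = {"distance_cm": 30.0, "angle_deg": 0.0}
tolerances = {"distance_cm": 2.0, "angle_deg": 5.0}

# Crosshair would be displayed only when the conditions match:
print(conditions_match(stored, {"distance_cm": 31.0, "angle_deg": 3.0}, tolerances))  # True
print(conditions_match(stored, {"distance_cm": 40.0, "angle_deg": 3.0}, tolerances))  # False
```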
Abstract:
A method for compressing a bi-level document image containing text is disclosed. The document image is segmented into symbol images, each representing a letter, numeral, etc. in the document. The symbol images are classified into a plurality of classes, each class being associated with a template image and a class index. Classification is done by comparing each symbol to be classified with the templates of existing classes, using a number of image features including zoning profiles, side profiles, topology statistics, and low-order image moments. These image features are compared using a tolerance-based method to determine whether the symbol matches the template. After classification, certain classes that have few symbols classified into them may be merged with other classes. In addition, the template images of the classes are down-sampled, where the final sizes of the template images are dependent on the likelihood of confusion of the template with other templates.
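The tolerance-based classification step can be sketched as follows. This is a minimal illustration only: the real features (zoning profiles, side profiles, topology statistics, image moments) are summarized here as simple scalar stand-ins, and the feature names and tolerance values are assumptions:

```python
# Hypothetical sketch of tolerance-based symbol classification.
# Scalar "features" stand in for the real profile/moment features.

def features_match(symbol, template, tolerances):
    """A symbol matches a template only if every feature differs by no
    more than its tolerance."""
    return all(abs(symbol[f] - template[f]) <= tolerances[f] for f in tolerances)

def classify(symbol, classes, tolerances):
    """Assign the symbol to the first class whose template matches,
    otherwise create a new class from the symbol itself."""
    for index, template in enumerate(classes):
        if features_match(symbol, template, tolerances):
            return index
    classes.append(dict(symbol))
    return len(classes) - 1

classes = []
tol = {"zoning": 0.1, "moment": 0.1}
print(classify({"zoning": 0.50, "moment": 0.20}, classes, tol))  # 0 (new class)
print(classify({"zoning": 0.55, "moment": 0.25}, classes, tol))  # 0 (matches)
print(classify({"zoning": 0.90, "moment": 0.80}, classes, tol))  # 1 (new class)
```

In the disclosed method each class index then replaces the symbol image in the compressed stream, so fewer classes mean better compression.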
Abstract:
A word segmentation method uses a recursive technique to segment a text line image into word segments. Spacing segments of the line are obtained; an initial word segmentation is performed to classify the spacing segments based on their lengths into candidate character spacing segments and candidate word spacing segments. The initial segmentation result is evaluated to determine whether the candidate character spacing segments still have a bimodal or multi-modal distribution or a large spread in the distribution, or whether the line contains long words and too few words. If the conditions indicate that the initial segmentation is inadequate, another classification step is performed for the candidate character spacing segments to further classify them into new candidate character spacing segments and new candidate word spacing segments. The process is repeated until the word segmentation is deemed adequate based on the evaluation.
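The recursive re-classification can be sketched as below. The mean-based threshold and the simple spread test standing in for the bimodality evaluation are illustrative assumptions, not the disclosed criteria:

```python
# Hypothetical sketch of recursive gap classification. The threshold
# choice and the spread test are illustrative stand-ins for the
# evaluation criteria described in the abstract.

def segment_gaps(gaps, depth=0, max_depth=3):
    """Classify spacing segments into character gaps and word gaps,
    re-classifying the candidate character gaps while their spread
    suggests the split is still inadequate."""
    threshold = sum(gaps) / len(gaps)                # illustrative threshold
    char = [g for g in gaps if g <= threshold]
    word = [g for g in gaps if g > threshold]
    # stand-in for the bimodal/large-spread evaluation:
    spread_too_large = bool(char) and (max(char) - min(char)) > 0.5 * max(char)
    if spread_too_large and word and depth < max_depth:
        new_char, more_word = segment_gaps(char, depth + 1, max_depth)
        return new_char, sorted(word + more_word)
    return char, word

# Small intra-word gaps plus two large inter-word gaps:
char, word = segment_gaps([2, 3, 2, 12, 3, 2, 14, 2])
print(char)  # [2, 3, 2, 3, 2, 2]
print(word)  # [12, 14]
```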
Abstract:
A 3D scanning system includes a base stand, two circular arc shaped support tracks, a mounting assembly for mounting the support tracks to the base stand with one or more degrees of rotational freedom, two sensor holders mounted on the respective support track for holding two depth sensors, and a drive mechanism for driving the sensor holders to move along the respective support tracks. The mounting assembly supports relative rotation of the two support tracks and pitch and roll rotations of the support tracks. To perform a 3D scan, a stationary object is placed in front of the two depth sensors. The sensor holders are moved along the respective support tracks to different positions to obtain depth images of the objects from different angles, from which a 3D surface of the object is constructed. Prior to scanning, the two depth sensors are calibrated relative to each other.
Abstract:
To identify emphasized text, bounding boxes are generated from clusters resulting from horizontal compression and horizontal morphological dilation. The bounding boxes are processed to determine whether any contain words or characters in bold. A bounding box is eliminated based on a comparison of its density with the average density across all bounding boxes; if its density is greater than the average, the text elements within the bounding box are evaluated to determine whether they are bold.
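The density comparison can be sketched minimally as below, where each box is summarized by its ink-pixel count and area; this representation is an assumption for illustration:

```python
# Hypothetical sketch of the density-based filtering step.
# Each box is (ink_pixels, area); both values are assumed inputs.

def candidate_bold_boxes(boxes):
    """Return indices of boxes whose ink density exceeds the average
    density across all boxes; only these are evaluated for boldness."""
    densities = [ink / area for ink, area in boxes]
    average = sum(densities) / len(densities)
    return [i for i, d in enumerate(densities) if d > average]

# Three boxes of equal area; the middle one is noticeably denser.
print(candidate_bold_boxes([(100, 1000), (300, 1000), (120, 1000)]))  # [1]
```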
Abstract:
A wound image capture method that uses self color compensation to improve color consistency of the captured image and reliability of color-based wound detection. The method uses the skin tone of parts of the patient's own body for color calibration and compensation. In a data registration process, multiple parts of a new patient's body are imaged as baseline images and color data of the baseline images are registered in the system as reference color data. During a subsequent wound image capture and wound assessment process, the same parts of the patient's body are imaged again as baseline images, and the wound and its surrounding areas are also imaged. Color data of the newly captured baseline images are compared to the registered reference color data and used to perform color compensation for the wound image.
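One simple form of such compensation is a per-channel gain computed from the registered reference skin color and the newly captured baseline skin color. The gain model and the example RGB values below are illustrative assumptions, not the disclosed compensation formula:

```python
# Hypothetical sketch of per-channel color compensation from a
# baseline skin-tone reference. The gain model is an assumption.

def channel_gains(reference_rgb, current_rgb):
    """Per-channel gain that maps the newly captured baseline skin
    color back to the registered reference color."""
    return [r / c for r, c in zip(reference_rgb, current_rgb)]

def compensate(pixel, gains):
    """Apply the gains to one RGB pixel, clamping to 8-bit range."""
    return [min(255, int(round(v * g))) for v, g in zip(pixel, gains)]

# Registered reference skin color vs. the same skin under today's lighting:
gains = channel_gains([180, 140, 120], [200, 150, 110])
print(compensate([200, 150, 110], gains))  # [180, 140, 120]
```

Applying the same gains to the wound image pixels would then remove the lighting shift before color-based wound analysis.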
Abstract:
A method for recognizing a binary document image as a table, pure text, or flowchart includes calculating a side profile of the image for each of the four sides, calculating a boundary removal size N corresponding to each side based on widths of lines or strokes closest to that side, and for each side, removing a boundary of size N from the document image, and re-calculating the side profile for each side after the removal. Then, based on a comparison of the side profiles and the re-calculated side profiles, the input document image is recognized as a table if all side profiles change from smooth to non-smooth, as pure text if the side profile changes are small, and as a flowchart if the original side profiles contain multiple sharp changes and wide flat regions and if the side profile changes significantly in the previously wide flat regions.
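A side profile and a smoothness test can be sketched as below. The specific smoothness criterion (maximum step between adjacent entries) is an illustrative assumption standing in for the comparisons described in the abstract:

```python
# Hypothetical sketch of a left-side profile and a smoothness test
# on a binary image (1 = foreground). The smoothness criterion is
# an illustrative assumption.

def left_side_profile(img):
    """For each row, record the column of the first foreground pixel,
    or the image width if the row is empty."""
    width = len(img[0])
    return [next((x for x, v in enumerate(row) if v), width) for row in img]

def is_smooth(profile, max_step=1):
    """A profile is 'smooth' if adjacent entries never jump by more
    than max_step."""
    return all(abs(a - b) <= max_step for a, b in zip(profile, profile[1:]))

# A solid left border (as in a table) gives a flat, smooth profile:
table_like = [[1, 0, 0, 1],
              [1, 0, 0, 1],
              [1, 0, 0, 1]]
print(left_side_profile(table_like))              # [0, 0, 0]
print(is_smooth(left_side_profile(table_like)))   # True

# Ragged text edges give a jagged, non-smooth profile:
text_like = [[0, 1, 1, 0],
             [1, 1, 0, 0],
             [0, 0, 1, 1]]
print(is_smooth(left_side_profile(text_like)))    # False
```

Removing a boundary of size N and re-running the same profile computation would then reveal whether the smooth profile came from a removable border line, as in the table case above.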
Abstract:
A wound assessment method which can estimate a moisture level of the wound, and related image capture device. The wound area is imaged at least twice, where the wound is illuminated under different illumination light intensities. The first image, captured using a relatively low illumination light intensity, is analyzed to assess the wound, for example measuring its size, color and texture. The second image, captured using a relatively high illumination light intensity (e.g. using a flash), is analyzed to estimate the moisture level of the wound. The moisture level estimation method extracts white connected components from the second image, and estimates the moisture level based on the number, sizes, and centroid distribution of the white connected components. A 3D image of the wound may also be captured, e.g. using a structured-light 3D scanner of the image capture device.
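The white-connected-component extraction can be sketched as a flood fill over near-white pixels of a grayscale image. The whiteness threshold and 4-connectivity are illustrative assumptions; a production system would likely use a library routine such as OpenCV's connected-component analysis:

```python
# Hypothetical sketch: label 4-connected components of near-white
# pixels and return their sizes and centroids. The threshold is an
# illustrative assumption.

def white_components(img, thresh=200):
    h, w = len(img), len(img[0])
    seen = [[False] * w for _ in range(h)]
    components = []
    for sy in range(h):
        for sx in range(w):
            if img[sy][sx] >= thresh and not seen[sy][sx]:
                stack, pixels = [(sy, sx)], []
                seen[sy][sx] = True
                while stack:                      # iterative flood fill
                    y, x = stack.pop()
                    pixels.append((y, x))
                    for ny, nx in ((y-1, x), (y+1, x), (y, x-1), (y, x+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and not seen[ny][nx] and img[ny][nx] >= thresh:
                            seen[ny][nx] = True
                            stack.append((ny, nx))
                cy = sum(p[0] for p in pixels) / len(pixels)
                cx = sum(p[1] for p in pixels) / len(pixels)
                components.append((len(pixels), (cy, cx)))
    return components

# Two small specular highlights on a dark background:
img = [[0,   0, 0,   0],
       [0, 255, 0,   0],
       [0,   0, 0, 255],
       [0,   0, 0, 255]]
print(len(white_components(img)))  # 2
```

The number, sizes, and spread of the returned centroids would then feed the moisture estimate, since flash highlights on a moist surface tend to be numerous and widely distributed.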
Abstract:
A method, a computer program product, and a system for analyzing exam-taking behavior and improving exam-taking skills are disclosed. The method includes obtaining a student answering sequence and timing for an examination having a series of questions; comparing the student answering sequence and timing with results from a statistical analysis of the examination obtained from a plurality of students; and identifying an abnormality in the student answering sequence and timing according to the comparison.
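One simple form of the comparison is a per-question z-score test of the student's answering time against class statistics. The z-score criterion and the example numbers are illustrative assumptions, not the disclosed statistical analysis:

```python
# Hypothetical sketch: flag questions where the student's answering
# time deviates from the class statistics by more than z standard
# deviations. The criterion is an illustrative assumption.

def timing_anomalies(student_times, class_mean, class_std, z=2.0):
    flagged = []
    for i, (t, m, s) in enumerate(zip(student_times, class_mean, class_std)):
        if s > 0 and abs(t - m) > z * s:
            flagged.append(i)
    return flagged

# Times in seconds; the student spent far longer than the class on Q1:
print(timing_anomalies([30, 300, 45], [35, 40, 50], [10, 12, 15]))  # [1]
```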