Abstract:
Systems and methods for assessing configuration profiles for a user configurable device. The configuration profile may include sets of configuration parameters and associated configuration parameter values that may be analyzed to determine a set of current states for the user configurable device. The set of current states may be used to identify a candidate state that is related to a candidate configuration profile. The candidate configuration profile may include at least one set of a candidate configuration parameter and an associated candidate configuration parameter value. One or more prompts may be rendered via the customer device to set at least one of the configuration parameters and associated configuration parameter values based on the corresponding candidate configuration parameter values. A response to the prompt is received via the user interface, and an indication of such response may be transmitted to update the identification of subsequent candidate configuration profiles.
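As a rough illustration of the profile-matching flow described in this abstract, the minimal sketch below derives current states from parameter/value pairs, picks the candidate profile with the greatest overlap, and builds prompts for any differing values. The type names, the overlap heuristic, and the prompt wording are assumptions made for clarity, not the claimed implementation.

# Minimal sketch (hypothetical names): match current states to a candidate
# profile and build prompts for differing parameter values.
from __future__ import annotations
from dataclasses import dataclass, field


@dataclass
class ConfigurationProfile:
    parameters: dict[str, str] = field(default_factory=dict)  # parameter -> value


def current_states(profile: ConfigurationProfile) -> set[str]:
    # Treat each parameter/value pair as one "state" of the device.
    return {f"{name}={value}" for name, value in profile.parameters.items()}


def pick_candidate(states: set[str],
                   candidates: list[ConfigurationProfile]) -> ConfigurationProfile:
    # Candidate whose parameter/value pairs overlap the current states the most.
    return max(candidates, key=lambda c: len(states & current_states(c)))


def build_prompts(current: ConfigurationProfile,
                  candidate: ConfigurationProfile) -> list[str]:
    # One prompt per parameter whose candidate value differs from the current one.
    return [f"Set '{name}' to '{value}'? (currently '{current.parameters.get(name, 'unset')}')"
            for name, value in candidate.parameters.items()
            if current.parameters.get(name) != value]


device = ConfigurationProfile({"beeper": "on", "symbology": "code128"})
candidate = pick_candidate(current_states(device),
                           [ConfigurationProfile({"beeper": "on", "symbology": "qr"}),
                            ConfigurationProfile({"beeper": "off"})])
print(build_prompts(device, candidate))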
Abstract:
In a method and system for acquiring image data for detection of optical codes located on objects carried by a conveyor system, the conveyor system has an imager that includes a sensor and an optics arrangement disposed with respect to the sensor to direct light from a field of view to the sensor so that the sensor outputs image data. A processor system receives the image data and determines a position of an object in the field of view with respect to the conveyor's direction of travel. Based upon the determined position of the object in the field of view, and upon a predetermined distance between the object and the optics, the processor system defines a region that bounds the image data at least with respect to the direction of travel and that encompasses the position the object would have in the image data if the object is at the determined position in the field of view and at the predetermined distance from the optics.
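To make the region-bounding step concrete, the sketch below estimates the image columns that bound an object along the direction of travel from its position and its distance from the optics, using a simple pinhole-camera projection. The projection model and every parameter name are assumptions for illustration, not the patent's geometry.

# Illustrative sketch: bound a region of interest in the image along the
# conveyor's direction of travel with a pinhole-camera projection (assumed model).

def roi_bounds_px(object_pos_mm: float, object_len_mm: float,
                  distance_mm: float, focal_len_mm: float,
                  pixel_pitch_mm: float, image_width_px: int) -> tuple[int, int]:
    """Return (start_px, end_px) columns bounding the object along the direction of travel."""
    # Pinhole projection: image displacement = focal_length * world displacement / distance.
    def project(world_mm: float) -> int:
        image_mm = focal_len_mm * world_mm / distance_mm
        return int(round(image_width_px / 2 + image_mm / pixel_pitch_mm))

    start = project(object_pos_mm - object_len_mm / 2)
    end = project(object_pos_mm + object_len_mm / 2)
    return max(0, min(start, end)), min(image_width_px - 1, max(start, end))


# Example: a 200 mm object centered 50 mm downstream of the optical axis,
# 900 mm from the optics, 16 mm lens, 5 um pixels, 2048-pixel-wide sensor.
print(roi_bounds_px(50.0, 200.0, 900.0, 16.0, 0.005, 2048))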
Abstract:
The invention relates to a method and a system (100) for reading coded information (2) from an object (1). The system (100) comprises one or more three-dimensional cameras (20) configured to capture three-dimensional images (22) of the object (1) and a processor (30) configured to process the captured three-dimensional images (22). The processor (30) is designed to: identify planes (24) upon which faces (25) of the object (1) lie; extract two-dimensional images (27) that lie on the identified planes (24); and apply coded information recognition algorithms to at least part of the extracted two-dimensional images (27).
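The sketch below illustrates the pipeline in outline, assuming the faces have already been segmented into point sets: fit a plane to the 3-D points of one face via SVD, express the points in a 2-D basis on that plane, and hand the resulting patch to a code reader (left as a stub). The function names and the plane-fitting approach are assumptions, not the claimed method.

# Illustrative sketch: plane fit and 2-D projection for one segmented face.
from __future__ import annotations
import numpy as np


def fit_plane(points: np.ndarray) -> tuple[np.ndarray, np.ndarray]:
    """Least-squares plane through an (N, 3) point cloud: returns (centroid, unit normal)."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]           # singular vector of smallest singular value = normal


def project_to_plane(points: np.ndarray, centroid: np.ndarray, normal: np.ndarray) -> np.ndarray:
    """Express the points in a 2-D orthonormal basis lying on the fitted plane."""
    u = np.cross(normal, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:      # normal parallel to z: pick another helper axis
        u = np.cross(normal, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(normal, u)
    return (points - centroid) @ np.stack([u, v], axis=1)   # (N, 2) plane coordinates


def decode_patch(plane_coords: np.ndarray) -> str | None:
    """Placeholder for the coded-information recognition step (e.g. a barcode decoder)."""
    return None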
Abstract:
A system and method for measuring volume dimensions of objects may include flying a UAV to measuring points around an object within a defined area. Images of the object may be captured by the UAV at each of the measuring points and communicated by the UAV to a computing device remotely positioned from the UAV. Volume dimensions of the object may be computed based on the captured images, and the volume dimensions may be presented to a user via an electronic display.
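As a minimal sketch of the dimensioning step only: assuming the images captured at the measuring points have already been reconstructed into a 3-D point cloud (for example by photogrammetry, not shown here), the object's volume dimensions can be taken from an axis-aligned bounding box. The point-cloud input and the dimension names are assumptions.

# Illustrative sketch: volume dimensions from a reconstructed point cloud.
import numpy as np


def volume_dimensions(points: np.ndarray) -> dict[str, float]:
    """Length, width, height, and box volume of an (N, 3) point cloud in metres."""
    mins, maxs = points.min(axis=0), points.max(axis=0)
    length, width, height = (maxs - mins)
    return {
        "length_m": float(length),
        "width_m": float(width),
        "height_m": float(height),
        "volume_m3": float(length * width * height),
    }


# Example: a 1.2 m x 0.8 m x 1.0 m box sampled as random points.
rng = np.random.default_rng(0)
cloud = rng.uniform([0, 0, 0], [1.2, 0.8, 1.0], size=(5000, 3))
print(volume_dimensions(cloud))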
Abstract:
Systems, methods, and computer-readable storage media are provided for acquiring field device data (e.g., imaging data such as barcode readings), extracting patterns from the field device data, and formatting the extracted patterns, all directly from field devices (e.g., barcode readers) embedded with these capabilities. The information conveyed by the patterns extracted at the devices embedded with these capabilities may be synthesized and shown in a graphical way to end-users, for instance by exploiting IoT middleware platform services available at the end-user side. Accordingly, systems, methods, and computer-readable storage media in accordance with embodiments hereof further provide a customized visualization (e.g., a widget) aimed at making the formatted patterns available in an easy, intuitive, and effective way.
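The sketch below shows one plausible shape of this flow: the device summarizes raw readings into a small pattern and posts it as JSON to an IoT middleware endpoint that a widget could consume. The endpoint URL, payload schema, and the choice of HTTP as transport are all assumptions, not the patent's interfaces.

# Illustrative sketch: extract a simple pattern on-device and publish it as JSON.
import time
from collections import Counter

import requests


def extract_pattern(readings: list) -> dict:
    """Summarize raw readings into a small, widget-friendly pattern."""
    per_symbology = Counter(r["symbology"] for r in readings)
    return {
        "timestamp": int(time.time()),
        "total_reads": len(readings),
        "reads_by_symbology": dict(per_symbology),
    }


def publish_pattern(pattern: dict,
                    url: str = "http://middleware.example/api/patterns") -> None:
    # Hypothetical middleware endpoint; replace with the platform's actual service.
    requests.post(url, json=pattern, timeout=5)


# Example usage with mock readings captured by the device.
readings = [{"symbology": "CODE128", "data": "A1"}, {"symbology": "QR", "data": "B2"}]
publish_pattern(extract_pattern(readings))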
Abstract:
A method of operation for an inverse crawling device comprises: receiving, via a first transceiver, first object identification data associated with a first object and transmitted from a first automatic data collection (ADC) reader, the first object identification data encapsulated within a first markup language document, wherein the first markup language document includes at least one piece of first metadata related to the first object identification data; receiving, via the first transceiver or via a second transceiver, second object identification data encapsulated within a second markup language document, wherein the second markup language document includes at least one piece of second metadata related to the second object identification data; creating in the first markup language document a hyperlink to the second markup language document, wherein the hyperlink is created by a markup language document analyzer in response to identifying a relationship between the at least one piece of first metadata and the at least one piece of second metadata; receiving, from a user interface, a query related to the first object; and providing, via the user interface, results that include the first markup language document.
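The linking step can be illustrated with two small XML documents: when both carry the same metadata value, a hyperlink element pointing at the second document is inserted into the first. The element and attribute names below are assumptions for illustration, not the patent's markup schema.

# Illustrative sketch: hyperlink two markup documents that share a metadata value.
import xml.etree.ElementTree as ET

first = ET.fromstring(
    '<object id="PKG-001"><metadata lot="L42"/><ident>0123456789012</ident></object>')
second = ET.fromstring(
    '<object id="PKG-002"><metadata lot="L42"/><ident>9876543210987</ident></object>')


def link_if_related(first_doc: ET.Element, second_doc: ET.Element, second_uri: str) -> bool:
    """Create a hyperlink in the first document when both share a metadata value."""
    lot_a = first_doc.find("metadata").get("lot")
    lot_b = second_doc.find("metadata").get("lot")
    if lot_a is not None and lot_a == lot_b:
        link = ET.SubElement(first_doc, "link")
        link.set("href", second_uri)
        return True
    return False


link_if_related(first, second, "docs/PKG-002.xml")
print(ET.tostring(first, encoding="unicode"))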
Abstract:
Systems and methods for autofocus include a light source for generating a light wave and a varifocal lens arranged in front of the light source. The varifocal lens receives the light wave and generates therefrom a Fourier transform of a known semi-transparent pattern positioned on the rear focal plane (or input plane) of the varifocal lens. An image sensor receives the Fourier transform carried by the light wave after it is reflected from an object. A focus tunable lens is arranged in front of the image sensor, and the reflected light wave passes through it. A processor adjusts a focal length of the varifocal lens to cause the Fourier transform carried by the light wave to form a predefined (expected) pattern detected by the image sensor, and adjusts a control parameter of the focus tunable lens until one or more spatial frequencies of the predefined pattern detected at the image sensor match one or more predefined spatial frequencies.
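As a rough sketch of the closed loop only: measure the dominant spatial frequency in the pattern seen by the sensor via an FFT, compare it to the expected frequency of the known pattern, and nudge the tunable lens's control parameter until they match. The proportional update, the 1-D signal model, and the callable lens/sensor interfaces are stand-ins, not real device APIs.

# Illustrative sketch: frequency-matching feedback loop (assumed control model).
from typing import Callable
import numpy as np


def dominant_frequency(signal: np.ndarray) -> float:
    """Peak spatial frequency (cycles/sample) of a 1-D sensor line, DC excluded."""
    spectrum = np.abs(np.fft.rfft(signal - signal.mean()))
    freqs = np.fft.rfftfreq(signal.size)
    return float(freqs[np.argmax(spectrum)])


def autofocus(read_sensor_line: Callable[[], np.ndarray],
              set_lens_control: Callable[[float], None],
              target_freq: float, control: float = 0.0,
              gain: float = 0.5, tol: float = 1e-3, max_iter: int = 50) -> float:
    """Adjust the focus tunable lens until the measured frequency matches the target."""
    for _ in range(max_iter):
        set_lens_control(control)
        error = target_freq - dominant_frequency(read_sensor_line())
        if abs(error) < tol:
            break
        control += gain * error   # simple proportional step; assumes a monotonic response
    return control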
Abstract:
Systems, methods, and computer-readable storage media are provided for reconstructing barcode signals utilizing sequence alignment matrices. A barcode signal is received that is associated with a portion of a barcode symbol and includes a sequence of bar elements and space elements in alternating order. A sequence alignment matrix (SAM) is built such that each row represents an element of an already reconstructed portion of the barcode symbol, each column represents an element of the received barcode signal sequence, and the potential alignments lie on a plurality of diagonals thereof. A score is assigned to each matrix square that includes an element of the received barcode signal sequence, and a diagonal score is calculated for each of the plurality of diagonals by summing the scores of the matrix squares lying on that diagonal.
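The sketch below shows the scoring idea in miniature: rows hold the element widths of the already-reconstructed portion, columns hold the received signal's element widths, each square gets a similarity score, and each diagonal's score is the sum of its squares; the best diagonal gives the alignment offset. The similarity function and score values are assumptions, not the claimed scoring scheme.

# Illustrative sketch: sequence alignment matrix and diagonal scores.
import numpy as np


def best_alignment(reconstructed: list, received: list) -> tuple:
    """Return (diagonal offset, diagonal score) of the best-scoring alignment."""
    rows, cols = len(reconstructed), len(received)
    sam = np.zeros((rows, cols))
    for i, a in enumerate(reconstructed):
        for j, b in enumerate(received):
            # Same parity keeps bars aligned with bars and spaces with spaces;
            # equal widths score highest, near-equal widths score lower.
            if (i - j) % 2 == 0:
                sam[i, j] = 2.0 if a == b else (1.0 if abs(a - b) == 1 else -1.0)
            else:
                sam[i, j] = -2.0

    # Diagonal k aligns received element j with reconstructed element j + k.
    scores = {k: float(np.trace(sam, offset=-k)) for k in range(-(cols - 1), rows)}
    best_k = max(scores, key=scores.get)
    return best_k, scores[best_k]


# Example: the received fragment matches reconstructed elements 2..5, i.e. offset 2.
print(best_alignment([1, 2, 3, 1, 2, 1], [3, 1, 2, 1]))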
Abstract:
The invention relates to a method and a system for reading coded information from an object. The system comprises one or more three-dimensional cameras configured to capture three-dimensional images of the object and a processor configured to process the captured three-dimensional images. The processor is designed to: identify planes upon which faces of the object lie; extract two-dimensional images that lie on the identified planes; and apply coded information recognition algorithms to at least part of the extracted two-dimensional images.