-
Publication number: US20190213378A1
Publication date: 2019-07-11
Application number: US16098299
Filing date: 2016-09-28
Applicant: ZKTECO CO. LTD
Inventor: Zhinong LI , Chongliang ZHONG , Limu YANG
CPC classification number: G06K9/0004 , G06F3/167 , G06K9/00 , G06K9/00201
Abstract: Embodiments of the present invention disclose a non-contact 3D fingerprint capturing apparatus and method. The apparatus includes a housing, and a circuit board and a fingerprint reader disposed in the housing. The circuit board includes a first control module; the fingerprint reader includes a fingerprint capturing module and a positioning module. The positioning module casts light onto a first position point on a finger object; the fingerprint capturing module receives light reflected from the first position point, converts the optical signal into an electrical signal, and sends the electrical signal to the first control module. The first control module judges, according to the electrical signal, whether the first position point is a standard point, the standard point being an aperture with a diameter less than a first threshold and an illumination intensity greater than a second threshold. If the first position point is a standard point, the fingerprint capturing module captures fingerprint images from multiple directions and transmits them to the first control module, and the first control module creates a 3D fingerprint image from the fingerprint images. The embodiments of the present invention further provide a corresponding non-contact 3D fingerprint capturing method.
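A minimal sketch of the standard-point test described in the abstract: the reflected spot qualifies only when its diameter falls below a first threshold and its illumination intensity exceeds a second threshold, after which multi-directional capture is triggered. The field names, threshold values, and direction labels are illustrative assumptions, not values from the patent.

```python
# Hypothetical sketch of the standard-point check; thresholds are assumed values.
from dataclasses import dataclass

@dataclass
class SpotMeasurement:
    diameter_mm: float     # diameter of the light spot at the first position point
    intensity_lux: float   # illumination intensity of the reflected light

DIAMETER_THRESHOLD_MM = 0.5      # "first threshold" (assumed)
INTENSITY_THRESHOLD_LUX = 200.0  # "second threshold" (assumed)

def is_standard_point(spot: SpotMeasurement) -> bool:
    """Return True when the spot qualifies as a standard point."""
    return (spot.diameter_mm < DIAMETER_THRESHOLD_MM
            and spot.intensity_lux > INTENSITY_THRESHOLD_LUX)

def capture_if_ready(spot, capture_view):
    """Trigger multi-directional capture only after the standard-point check passes."""
    if is_standard_point(spot):
        # "multiple directions" per the abstract; direction names are placeholders
        return [capture_view(direction) for direction in ("left", "center", "right")]
    return None  # finger not positioned correctly; keep measuring
```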
-
Publication number: US20190206126A1
Publication date: 2019-07-04
Application number: US16232313
Filing date: 2018-12-26
Applicant: Tangible Play, Inc.
Inventor: Mark Solomon , Ariel Zekelman , Jon Dukerschein , Jerome Scholler , Vivek Vidyasagaran
CPC classification number: G06T17/10 , G06K9/00201 , G06K2209/40 , G06T7/521 , G06T7/586
Abstract: A tangible object virtualization station includes a base capable of stably resting on a surface and a head component unit connected to the base. The head component unit extends upwardly from the base. At the end of the head component unit opposite the base, the head component unit comprises a camera situated to capture a downward view of the surface proximate the base and a lighting array that directs light downward toward the surface proximate the base. The tangible object virtualization station further comprises a display interface included in the base. The display interface is configured to hold a display device in an upright position and to connect the display device to the camera and the lighting array.
-
Publication number: US20190184288A1
Publication date: 2019-06-20
Application number: US16258739
Filing date: 2019-01-28
Applicant: LEGO A/S
Inventor: Marko VELIC , Karsten Østergaard NOE , Jesper MOSEGAARD , Brian Bunch CHRISTENSEN , Jens RIMESTAD
CPC classification number: A63F13/65 , A63F13/213 , A63H33/08 , G06K9/00201 , G06K9/6202 , G06K9/6255 , G06K9/627 , G06N3/04 , G06N3/08 , G06N5/04
Abstract: A system and method are presented for automatic, computer-aided optical recognition of toys, for example construction toy elements, recognition of those elements in digital images, and association of the elements with existing information. The method and system may recognize toy elements of various sizes invariantly of the toy element's distance from the image acquiring device (for example, a camera), of the rotation of the toy element, of the angle of the camera, of the background, and of the illumination, and without the need for a predefined region where a toy element should be placed. The system and method may detect more than one toy element in the image and identify each of them.
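Since the CPC codes point to neural-network classes (G06N3/04, G06N3/08), here is a hedged sketch of one way such scale-, rotation-, background- and illumination-tolerant recognition could be set up: a small convolutional classifier trained with heavy augmentation. The class count, image size, and architecture are assumptions for illustration, not the patented method.

```python
# Illustrative sketch only; catalogue size and network shape are assumed.
import torch
import torch.nn as nn
from torchvision import transforms

# Augmentations stand in for the invariances described in the abstract.
augment = transforms.Compose([
    transforms.RandomRotation(degrees=180),                 # rotation invariance
    transforms.RandomResizedCrop(64, scale=(0.3, 1.0)),     # distance/scale invariance
    transforms.ColorJitter(brightness=0.5, contrast=0.5),   # illumination invariance
    transforms.ToTensor(),
])

class ToyElementClassifier(nn.Module):
    def __init__(self, num_toy_elements: int = 100):  # assumed catalogue size
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(64, 128, 3, padding=1), nn.ReLU(), nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, num_toy_elements)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.classifier(self.features(x).flatten(1))

# Usage: logits = ToyElementClassifier()(augment(pil_image).unsqueeze(0))
```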
-
Publication number: US20190122425A1
Publication date: 2019-04-25
Application number: US16216476
Filing date: 2018-12-11
Applicant: Lowe's Companies, Inc.
Inventor: Mason E. Sheffield
IPC: G06T17/00 , H04N5/235 , H04N13/254 , G01B11/245 , G06T7/33 , G06T7/521 , G06K9/00 , G06T15/04 , G06T19/20 , G06T7/586 , H04N5/232 , H04N5/225 , G06T7/529
CPC classification number: G06T17/00 , G01B11/245 , G01B11/25 , G01B2210/52 , G01B2210/54 , G06K9/00201 , G06K9/2027 , G06K9/4609 , G06K9/6202 , G06K9/6247 , G06T7/33 , G06T7/521 , G06T7/529 , G06T7/586 , G06T7/60 , G06T7/73 , G06T15/04 , G06T19/20 , G06T2200/08 , G06T2207/10152 , G06T2207/20212 , G06T2207/30204 , H04N5/2256 , H04N5/23206 , H04N5/23222 , H04N5/23296 , H04N5/2354 , H04N5/247 , H04N13/254
Abstract: Described herein are systems for generating 3D models using imaging data obtained with an array of light projectors, at least one object boundary detector, and a robotic member with an end effector. A first point cloud of data for an object may be generated based on boundary information obtained by the object boundary detector(s). Dimensions for the object may be determined based on the first point cloud of data. A second point cloud of data may be generated based on the dimensions of the object and a configuration of the light projectors, where the second point cloud corresponds to potential coordinates for locations at which the robotic member and end effector can be positioned along a path around the object to capture image data of the object. A path may be generated that avoids collision between the object and the robotic member or end effector while optimizing the number of capture location points within the second point cloud of data.
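A rough sketch of how the second point cloud described above might be produced: candidate camera poses placed on rings around the object, derived from its estimated dimensions, then filtered for reach. The clearance margin, ring counts, and reach limit are invented placeholders, not the patent's parameters.

```python
# Candidate capture poses around an object; all constants are assumptions.
import numpy as np

def candidate_capture_points(obj_dims, n_angles=36, n_heights=5, clearance=0.25):
    """obj_dims = (width, depth, height) estimated from the first point cloud."""
    width, depth, height = obj_dims
    radius = 0.5 * max(width, depth) + clearance   # stay outside the object footprint
    angles = np.linspace(0.0, 2.0 * np.pi, n_angles, endpoint=False)
    heights = np.linspace(0.1 * height, 1.2 * height, n_heights)
    return np.array([(radius * np.cos(a), radius * np.sin(a), h)
                     for a in angles for h in heights])

def filter_reachable(points, reach=1.5):
    """Keep only candidates the end effector can reach (assumed base at the origin)."""
    return points[np.linalg.norm(points, axis=1) <= reach]

# Usage: poses = filter_reachable(candidate_capture_points((0.4, 0.3, 0.6)))
```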
-
Publication number: US20190056751A1
Publication date: 2019-02-21
Application number: US16047598
Filing date: 2018-07-27
Applicant: Nuro, Inc.
Inventor: David Ferguson , Jiajun Zhu , Nan Ransohoff , Pichayut Jirapinyo
CPC classification number: G06Q10/0837 , A23L2/52 , A23L5/00 , A23L7/109 , A23V2002/00 , A47J37/0658 , B60H1/00364 , B60H1/00735 , B60P1/36 , B60P3/007 , B60P3/0257 , B60R19/18 , B60R19/483 , B60R21/34 , B60R21/36 , B60R25/25 , B60R25/252 , B60R2021/346 , B65G67/24 , G01C21/20 , G01C21/343 , G01C21/3438 , G01C21/3453 , G05D1/0027 , G05D1/0033 , G05D1/0038 , G05D1/0061 , G05D1/0088 , G05D1/0094 , G05D1/0212 , G05D1/0214 , G05D1/0223 , G05D1/0231 , G05D1/0276 , G05D1/0291 , G05D1/12 , G05D2201/0207 , G05D2201/0213 , G06F3/0484 , G06F16/955 , G06K7/10297 , G06K7/10722 , G06K7/1413 , G06K9/00201 , G06K9/00791 , G06K19/06028 , G06K19/0723 , G06N20/00 , G06Q10/0631 , G06Q10/06315 , G06Q10/0635 , G06Q10/083 , G06Q10/0832 , G06Q10/0833 , G06Q10/0834 , G06Q10/0835 , G06Q10/08355 , G06Q20/00 , G06Q20/127 , G06Q20/18 , G06Q30/0266 , G06Q30/0631 , G06Q30/0645 , G06Q50/12 , G06Q50/28 , G06Q50/30 , G07F17/0057 , G07F17/12 , G08G1/04 , G08G1/202 , G08G1/22 , H04L67/12 , H04N5/76 , H04W4/024 , H04W4/40 , H05B6/688
Abstract: An autonomous robot vehicle in accordance with aspects of the present disclosure includes a conveyance system, a navigation system, a communication system configured to communicate with a food delivery management system, one or more storage modules including a storage compartment or a storage sub-compartment configured to store food items, one or more preparation modules including a preparation compartment or a preparation sub-compartment configured to prepare the food items, one or more processors, and a memory storing instructions. The instructions, when executed by the processor(s), cause the autonomous robot vehicle to autonomously receive, via the communication system, a food order for a destination, determine a travel route that includes the destination, control the conveyance system to travel along the travel route to reach the destination, and prepare the food items while traveling along the travel route.
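A hedged sketch of the autonomous order flow the abstract describes: receive an order, plan a route to its destination, drive the route, and prepare the items en route. The module interfaces (CommunicationSystem, NavigationSystem, and so on) and their method names are invented placeholders.

```python
# Placeholder interfaces; only the sequencing mirrors the abstract.
from dataclasses import dataclass

@dataclass
class FoodOrder:
    items: list[str]
    destination: tuple[float, float]   # latitude, longitude

class AutonomousFoodVehicle:
    def __init__(self, comms, navigation, conveyance, preparation):
        self.comms = comms
        self.navigation = navigation
        self.conveyance = conveyance
        self.preparation = preparation

    def run_once(self) -> None:
        order: FoodOrder = self.comms.receive_order()           # 1. receive food order
        route = self.navigation.plan_route(order.destination)   # 2. travel route incl. destination
        self.conveyance.start_following(route)                  # 3. begin driving (non-blocking)
        for item in order.items:                                 # 4. prepare food while en route
            self.preparation.prepare(item)
        self.conveyance.wait_until_arrived()
```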
-
Publication number: US20190049239A1
Publication date: 2019-02-14
Application number: US15854866
Filing date: 2017-12-27
Applicant: Intel IP Corporation
Inventor: Koba NATROSHVILI , Kay-Ulrich SCHOLL
CPC classification number: G01B11/2513 , G01N15/10 , G01N2015/1075 , G06K9/00201 , G06K9/00805 , G06K9/00838
Abstract: An occupancy grid object determining device is provided, which may include a grid generator configured to generate an occupancy grid of a predetermined region, the occupancy grid including a plurality of grid cells, at least some of which have been assigned information about the occupancy of the region represented by the respective grid cell; a determiner configured to determine at least one object in the occupancy grid, wherein the at least one object includes a plurality of grid cells; and a remover configured to remove occupancy information from at least one grid cell of the plurality of grid cells of the determined object.
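A minimal sketch of the three roles named above (grid generator, determiner, remover) using a 2-D grid and connected-component labelling to group occupied cells into objects. The grid size, occupancy threshold, and use of connected components are assumptions for illustration.

```python
# Grid size and threshold are assumed; labelling is one possible "determiner".
import numpy as np
from scipy import ndimage

def generate_grid(shape=(100, 100)) -> np.ndarray:
    """Occupancy grid: each cell holds an occupancy probability in [0, 1]."""
    return np.zeros(shape, dtype=float)

def determine_objects(grid: np.ndarray, occupied_threshold: float = 0.5):
    """Group adjacent occupied cells into objects; returns a label map and object count."""
    occupied = grid > occupied_threshold
    labels, num_objects = ndimage.label(occupied)
    return labels, num_objects

def remove_object(grid: np.ndarray, labels: np.ndarray, object_id: int) -> None:
    """Clear the occupancy information of every cell belonging to one determined object."""
    grid[labels == object_id] = 0.0

# Usage:
# grid = generate_grid(); grid[40:45, 40:43] = 0.9
# labels, n = determine_objects(grid); remove_object(grid, labels, 1)
```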
-
Publication number: US20190011989A1
Publication date: 2019-01-10
Application number: US15286152
Filing date: 2016-10-05
Applicant: Google Inc.
Inventor: Carsten C. Schwesig , Ivan Poupyrev
CPC classification number: A63F13/21 , A63F13/24 , A63F2300/8082 , G01S7/41 , G01S7/415 , G01S13/56 , G01S13/66 , G01S13/86 , G01S13/865 , G01S13/867 , G01S13/90 , G01S13/931 , G01S19/42 , G06F1/163 , G06F3/011 , G06F3/017 , G06F3/0346 , G06F3/0484 , G06F3/165 , G06F16/245 , G06F21/6245 , G06F2203/0384 , G06K9/00201 , G06K9/6288 , G06K9/629 , G06T7/75 , G08C17/02 , G08C2201/93 , H04Q9/00 , H04Q2209/883
Abstract: A gesture component with a gesture library is described. The gesture component is configured to expose operations for execution by an application of a computing device based on detected gestures. In one example, an input is detected using a three-dimensional object detection system of a gesture component of the computing device. A gesture is recognized by the gesture component based on the detected input through comparison with a library of gestures maintained by the gesture component. The gesture component then identifies, using the library of gestures, an operation that corresponds to the gesture. The operation is exposed by the gesture component via an application programming interface to at least one application executed by the computing device, to control performance of the operation by the at least one application.
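A hedged sketch of the library lookup and operation dispatch described above; the sensing front end is out of scope here, and the gesture names, feature representation, and matching rule are invented placeholders rather than the patented design.

```python
# Placeholder gesture library: template matching stands in for recognition.
from typing import Callable, Dict, List, Optional

class GestureComponent:
    def __init__(self) -> None:
        # "Library of gestures": gesture name -> template feature vector (assumed form).
        self.library: Dict[str, List[float]] = {}
        # Operation registered per gesture, exposed to applications via an API.
        self.operations: Dict[str, Callable[[], None]] = {}

    def register(self, gesture: str, template: List[float],
                 operation: Callable[[], None]) -> None:
        self.library[gesture] = template
        self.operations[gesture] = operation

    def recognize(self, features: List[float]) -> Optional[str]:
        """Nearest-template match against the gesture library (Euclidean distance)."""
        def dist(template): return sum((a - b) ** 2 for a, b in zip(features, template))
        return min(self.library, key=lambda g: dist(self.library[g]), default=None)

    def dispatch(self, features: List[float]) -> None:
        gesture = self.recognize(features)
        if gesture is not None:
            self.operations[gesture]()   # the operation exposed to the application

# Usage: component.register("swipe_left", [0.1, 0.9], lambda: print("previous track"))
```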
-
Publication number: US20190007673A1
Publication date: 2019-01-03
Application number: US16009115
Filing date: 2018-06-14
Applicant: TRX SYSTEMS, INC.
Inventor: John George Karvounis
IPC: H04N13/239 , H04N5/222 , G06T7/246 , G06T7/277 , G06K9/00 , G06K9/62 , G06T7/80 , H04N13/122 , G06K9/46 , H04N13/246 , H04N13/00 , H04N13/282
CPC classification number: H04N13/239 , G06K9/00201 , G06K9/00664 , G06K9/4671 , G06K9/6248 , G06K2209/29 , G06T7/246 , G06T7/277 , G06T7/85 , G06T2200/28 , G06T2207/10021 , G06T2207/20021 , G06T2207/20088 , H04N5/2224 , H04N13/122 , H04N13/246 , H04N13/282 , H04N2013/0092
Abstract: LK-SURF, Robust Kalman Filter, HAR-SLAM, and Landmark Promotion SLAM methods are disclosed. LK-SURF is an image processing technique that combines Lucas-Kanade feature tracking with Speeded-Up Robust Features to perform spatial and temporal tracking using stereo images, producing 3D features that can be tracked and identified. The Robust Kalman Filter is an extension of the Kalman Filter algorithm that improves the ability to remove erroneous observations using Principal Component Analysis and the X84 outlier rejection rule. Hierarchical Active Ripple SLAM is a new SLAM architecture that breaks the traditional SLAM state space into a chain of smaller state spaces, allowing multiple tracked objects, multiple sensors, and multiple updates to occur in linear time and with linear storage with respect to the number of tracked objects, landmarks, and estimated object locations. In Landmark Promotion SLAM, only reliably mapped landmarks are promoted through the various layers of SLAM to generate larger maps.
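To make the X84 rule mentioned above concrete, here is a hedged sketch of applying it to observation residuals before a filter update: values farther than a fixed multiple of the median absolute deviation from the median are rejected. The commonly cited cutoff of 5.2 MAD (roughly 3.5 sigma for Gaussian noise) is used as an assumption; the patent's integration with the Robust Kalman Filter is not reproduced here.

```python
# X84 outlier rejection on a vector of residuals; the cutoff k is an assumption.
import numpy as np

def x84_inliers(residuals: np.ndarray, k: float = 5.2) -> np.ndarray:
    """Boolean mask of residuals kept by the X84 rule."""
    med = np.median(residuals)
    mad = np.median(np.abs(residuals - med))        # median absolute deviation
    if mad == 0.0:
        return np.abs(residuals - med) == 0.0       # degenerate case: keep exact matches
    return np.abs(residuals - med) <= k * mad

# Usage: keep = x84_inliers(innovations); filtered = observations[keep]
```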
-
Publication number: US20180373949A1
Publication date: 2018-12-27
Application number: US16015520
Filing date: 2018-06-22
Applicant: LENOVO (BEIJING) CO., LTD.
Inventor: Yuanyuan DENG
CPC classification number: G06K9/2081 , G06F3/04845 , G06F3/167 , G06K7/1417 , G06K9/00201 , G06K9/00671 , G06K9/325 , G06K2209/01 , H04N5/23218 , H04N5/23293 , H04N5/232939
Abstract: An identification method and an electronic device are provided. The identification method comprises: detecting at least one object using the electronic device; providing an identification box having a first appearance that corresponds to the at least one object as detected; and displaying, via the electronic device, the identification box having the first appearance.
-
Publication number: US20180348374A1
Publication date: 2018-12-06
Application number: US15609256
Filing date: 2017-05-31
Applicant: Uber Technologies, Inc.
Inventor: Ankit Laddha , James Andrew Bagnall , Varun Ramakrishna , Yimu Wang , Carlos Vallespi-Gonzalez
CPC classification number: G01S17/89 , G01S7/4815 , G01S7/4863 , G01S7/4865 , G01S17/42 , G01S17/50 , G01S17/936 , G06K9/00201 , G06K9/00805
Abstract: Systems and methods for detecting and classifying objects that are proximate to an autonomous vehicle can include receiving, by one or more computing devices, LIDAR data from one or more LIDAR sensors configured to transmit ranging signals relative to the autonomous vehicle; generating, by the one or more computing devices, a data matrix comprising a plurality of data channels based at least in part on the LIDAR data; and inputting the data matrix to a machine-learned model. A class prediction for each of one or more different portions of the data matrix and/or a properties estimation associated with each class prediction generated for the data matrix can be received as an output of the machine-learned model. One or more object segments can be generated based at least in part on the class predictions and properties estimations. The one or more object segments can be provided to an object classification and tracking application.
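A hedged sketch of one way the multi-channel data matrix described above could be assembled from LIDAR returns before being handed to a learned model: a bird's-eye grid around the vehicle with height, intensity, and occupancy channels. The channel choices, grid resolution, and extent are assumptions, not the claimed representation.

```python
# Channels, grid size and extent are assumed; only the multi-channel idea is from the abstract.
import numpy as np

def lidar_to_data_matrix(points: np.ndarray, grid=(200, 200), extent=50.0) -> np.ndarray:
    """points: (N, 4) array of x, y, z, intensity in the vehicle frame.

    Returns a (3, H, W) matrix with height, intensity, and occupancy channels."""
    h, w = grid
    matrix = np.zeros((3, h, w), dtype=np.float32)
    # Map x, y coordinates into grid indices centered on the vehicle.
    ix = ((points[:, 0] + extent) / (2 * extent) * (h - 1)).astype(int)
    iy = ((points[:, 1] + extent) / (2 * extent) * (w - 1)).astype(int)
    valid = (ix >= 0) & (ix < h) & (iy >= 0) & (iy < w)
    ix, iy, pts = ix[valid], iy[valid], points[valid]
    np.maximum.at(matrix[0], (ix, iy), pts[:, 2])   # channel 0: max height per cell
    matrix[1, ix, iy] = pts[:, 3]                   # channel 1: intensity of a return
    matrix[2, ix, iy] = 1.0                         # channel 2: occupancy
    return matrix

# The matrix would then be fed to a machine-learned model, e.g.:
# class_map, properties = model(torch.from_numpy(matrix).unsqueeze(0))
```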