-
1.
Publication No.: US20190171858A1
Publication Date: 2019-06-06
Application No.: US16206527
Filing Date: 2018-11-30
Applicant: InvenSense, Inc.
Inventor: Abbas ATAYA, Bruno FLAMENT
IPC: G06K9/00
Abstract: In a method for darkfield tracking at a sensor, it is determined whether an object is interacting with the sensor. Provided an object is not interacting with the sensor, a determination that a darkfield candidate image can be captured at the sensor is made. It is determined whether to capture a darkfield candidate image at the sensor based at least in part on the determination that a darkfield candidate image can be captured at the sensor. Responsive to making a determination to capture the darkfield candidate image, the darkfield candidate image is captured at the sensor, wherein the darkfield candidate image is an image absent an object interacting with the sensor. A darkfield estimate is updated with the darkfield candidate image.
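The darkfield-tracking flow in the abstract (capture a candidate only when nothing touches the sensor, then fold it into a running estimate) can be sketched as follows. This is a minimal illustration, not the patented implementation: the presence threshold and the exponential-moving-average update are assumptions for the sake of the example.

```python
import numpy as np

def object_interacting(frame, threshold=0.5):
    # Hypothetical presence check: an object pressed on the sensor
    # is assumed to raise mean pixel intensity above a threshold.
    return frame.mean() > threshold

def update_darkfield(estimate, candidate, alpha=0.1):
    # Fold the new darkfield candidate into the running estimate
    # with an exponential moving average (assumed update rule).
    if estimate is None:
        return candidate.copy()
    return (1 - alpha) * estimate + alpha * candidate

def track_darkfield(frames, estimate=None):
    # Capture a frame as a darkfield candidate only when no object
    # is interacting with the sensor, then update the estimate.
    for frame in frames:
        if not object_interacting(frame):
            estimate = update_darkfield(estimate, frame)
    return estimate
```

With this sketch, a bright frame (object present) is skipped and only background frames shape the estimate.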
-
2.
Publication No.: US20240346381A1
Publication Date: 2024-10-17
Application No.: US18634521
Filing Date: 2024-04-12
Applicant: InvenSense, Inc.
Inventor: Juan S. Mejia SANTAMARIA, Abbas ATAYA, Rémi Louis Clément PONÇOT
IPC: G06N20/00
CPC classification number: G06N20/00
Abstract: Disclosed embodiments provide data augmentation techniques in which collected sensor data (for example, data from a motion sensor or a microphone) related to a gesture or an activity is used to simulate a unified data representation by using one or more transfer functions. The collected sensor data is for a particular condition. The unified representation is agnostic to the condition in which the gesture or activity is made. The unified representation is used to train a machine learning model (MLM). The MLM is then deployed on an integrated circuit chip of an embedded device. Live sensor data received by the embedded device is then transformed and input to the MLM, and the MLM then performs a prediction by, for example, recognizing a gesture made by the user of the embedded device.
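The idea of mapping condition-specific recordings into a condition-agnostic unified representation via a transfer function can be sketched as below. The affine gain/offset transfer function and the helper names are illustrative assumptions; the patent does not specify this particular form.

```python
import numpy as np

def transfer_to_unified(samples, gain, offset):
    # Hypothetical transfer function: undo a condition-specific
    # affine distortion so recordings made under different
    # conditions land in one shared (unified) representation.
    return (samples - offset) / gain

def build_training_set(recordings):
    # recordings: list of (samples, gain, offset) tuples captured
    # under different conditions; all are mapped into the unified
    # space so a single model can be trained across conditions.
    return [transfer_to_unified(s, g, o) for s, g, o in recordings]
```

Two recordings of the same gesture under different gains and offsets then collapse onto the same unified signal, which is what makes the trained model agnostic to the capture condition.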
-
3.
Publication No.: US20190188442A1
Publication Date: 2019-06-20
Application No.: US16270516
Filing Date: 2019-02-07
Applicant: InvenSense, Inc.
Inventor: Bruno FLAMENT, Daniela HALL, Etienne DeForas, Harihar NARASIMHA-IYER, Romain FAYOLLE, Jonathan BAUDOT, Abbas ATAYA, Sina AKHBARI
CPC classification number: G06K9/0002, G06K9/00087, G06T5/002, G06T7/0002, G06T2207/20221, G06T2207/30168, G06T2207/30196
Abstract: In a method for correcting a fingerprint image, it is determined whether an object is interacting with the fingerprint sensor. Provided an object is not interacting with the fingerprint sensor, it is determined whether to capture a darkfield candidate image at the fingerprint sensor, wherein the darkfield candidate image is an image absent an object interacting with the fingerprint sensor. Responsive to making a determination to capture the darkfield candidate image, the darkfield candidate image is captured at the fingerprint sensor. Provided an object is interacting with the fingerprint sensor, it is determined whether to model a darkfield candidate image at the fingerprint sensor. Responsive to making a determination to model the darkfield candidate image, the darkfield candidate image is modeled at the fingerprint sensor. A darkfield estimate is updated with the darkfield candidate image. A fingerprint image is captured at the fingerprint sensor. The fingerprint image is corrected using the darkfield estimate.
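The capture-or-model branch and the final correction step in the abstract can be sketched as follows. This is a simplified stand-in: using the prior estimate as the "modeled" candidate and plain subtraction as the correction are assumptions, not the claimed method.

```python
import numpy as np

def darkfield_candidate(frame, interacting, prior):
    # When no object touches the sensor, the frame itself serves
    # as the darkfield candidate; when an object is present, fall
    # back to a modeled candidate (here simply the prior estimate,
    # standing in for a real darkfield model).
    return prior if interacting else frame

def correct_fingerprint(image, darkfield_estimate):
    # Remove the fixed-pattern sensor background (the darkfield
    # estimate) from the captured fingerprint image.
    return image - darkfield_estimate
```

Subtracting the darkfield estimate leaves only the ridge pattern, which is the corrected fingerprint image the abstract describes.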
-
4.
Publication No.: US20240346380A1
Publication Date: 2024-10-17
Application No.: US18634510
Filing Date: 2024-04-12
Applicant: InvenSense, Inc.
Inventor: Juan S. Mejia SANTAMARIA, Abbas ATAYA, Rémi Louis Clément PONÇOT
IPC: G06N20/00
CPC classification number: G06N20/00
Abstract: Disclosed embodiments provide data augmentation techniques in which collected sensor data and simulated sensor data created by transforming collected sensor data are used to train a machine learning model (MLM), the MLM is then deployed on an integrated circuit chip of an embedded device, live sensor data received by the embedded device is then either transformed and input to the MLM or input to the MLM without transformation, and the MLM then performs a prediction by, for example, recognizing a gesture made by the user of the embedded device.
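Training on collected sensor data plus simulated data created by transforming it can be sketched as below. The specific transforms (additive noise, amplitude scaling, time reversal) are illustrative assumptions; the patent only requires that simulated data be derived from collected data.

```python
import numpy as np

def augment(samples, rng):
    # Simulated variants of one collected recording; these stand in
    # for the transforms described in the abstract.
    noisy = samples + rng.normal(0.0, 0.01, samples.shape)  # sensor noise
    scaled = samples * 1.1                                  # amplitude change
    reversed_ = samples[::-1]                               # time reversal
    return [noisy, scaled, reversed_]

def build_dataset(collected):
    # Combine the collected sensor data with its simulated
    # counterparts into one training set for the ML model.
    rng = np.random.default_rng(0)
    data = []
    for samples in collected:
        data.append(samples)             # real sensor data
        data.extend(augment(samples, rng))  # simulated sensor data
    return data
```

Each collected recording thus contributes itself plus three simulated variants, enlarging the training set before the model is trained and deployed on the embedded device.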
-