-
11.
Publication No.: US20200340901A1
Publication Date: 2020-10-29
Application No.: US16858444
Filing Date: 2020-04-24
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yichen Wu
Abstract: A label-free bio-aerosol sensing platform and method uses a field-portable and cost-effective device based on holographic microscopy and deep learning, which screens bio-aerosols at high throughput. Two different deep neural networks are utilized to rapidly reconstruct the amplitude and phase images of the captured bio-aerosols and to output particle information for each bio-aerosol that is imaged, including a classification of the particle type or species, particle size, particle shape, particle thickness, or spatial feature(s) of the particle. The platform was validated by label-free sensing of common bio-aerosol types, e.g., Bermuda grass pollen, oak tree pollen, ragweed pollen, Aspergillus spores, and Alternaria spores, and achieved >94% classification accuracy. With its mobility and cost-effectiveness, the label-free bio-aerosol platform will find several applications in indoor and outdoor air quality monitoring.
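As a rough orientation, the sketch below shows how such a two-network pipeline could be wired up in PyTorch; the layer choices and tensor sizes are illustrative assumptions (only the five-class output reflects the five bio-aerosol types named above), not the patented architecture.

```python
# Hypothetical sketch of the two-network pipeline: one network reconstructs
# amplitude/phase from a raw hologram, a second classifies each particle.
import torch
import torch.nn as nn

class ReconstructionNet(nn.Module):
    """Maps a raw in-line hologram to amplitude and phase channels."""
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 2, 3, padding=1),  # ch 0: amplitude, ch 1: phase
        )

    def forward(self, hologram):
        return self.body(hologram)

class ParticleClassifier(nn.Module):
    """Classifies a reconstructed particle crop into one of five types."""
    def __init__(self, n_classes=5):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(2, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, n_classes)

    def forward(self, amp_phase):
        return self.head(self.features(amp_phase).flatten(1))

recon, clf = ReconstructionNet(), ParticleClassifier()
hologram = torch.rand(1, 1, 256, 256)  # stand-in for a captured hologram crop
logits = clf(recon(hologram))          # per-particle type prediction
```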
-
12.
Publication No.: US10795315B2
Publication Date: 2020-10-06
Application No.: US16300546
Filing Date: 2017-05-10
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yichen Wu , Yibo Zhang , Wei Luo
IPC: G06K9/76 , G03H1/04 , H04N9/04 , G03H1/00 , G03H1/22 , G06T3/40 , H04N5/232 , H04N9/68 , H04N9/73
Abstract: A method of generating a color image of a sample includes obtaining a plurality of low-resolution holographic images of the sample using a color image sensor, the sample being illuminated simultaneously by light of three or more distinct colors, wherein the illuminated sample casts sample holograms on the image sensor and wherein the plurality of low-resolution holographic images are obtained by relative x, y, and z directional shifts between the sample holograms and the image sensor. Pixel super-resolved holograms of the sample are generated at each of the three or more distinct colors. De-multiplexed holograms are generated from the pixel super-resolved holograms. Phase information is retrieved from the de-multiplexed holograms using a phase retrieval algorithm to obtain complex holograms. The complex holograms for the three or more distinct colors are digitally combined and back-propagated to a sample plane to generate the color image.
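The back-propagation step at the end of this pipeline is conventionally done with the angular spectrum method; a minimal NumPy sketch is below, where the pixel pitch, wavelengths, and propagation distance are illustrative stand-ins rather than values from the patent.

```python
# Angular-spectrum propagation of a complex hologram to the sample plane
# (a standard method; negative z back-propagates toward the sample).
import numpy as np

def angular_spectrum_propagate(field, wavelength, dx, z):
    """Propagate a 2D complex field by distance z through free space."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=dx)
    fy = np.fft.fftfreq(ny, d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = (2 * np.pi / wavelength) * np.sqrt(np.maximum(arg, 0))
    H = np.exp(1j * kz * z) * (arg > 0)   # drop evanescent components
    return np.fft.ifft2(np.fft.fft2(field) * H)

# Back-propagate each color channel's complex hologram, then combine:
channels = [np.ones((512, 512), complex) for _ in range(3)]  # stand-ins
wavelengths = (633e-9, 532e-9, 470e-9)   # assumed R/G/B wavelengths
images = [angular_spectrum_propagate(c, wl, dx=1.12e-6, z=-300e-6)
          for c, wl in zip(channels, wavelengths)]
```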
-
13.
Publication No.: US20190333199A1
Publication Date: 2019-10-31
Application No.: US16395674
Filing Date: 2019-04-26
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Hongda Wang , Harun Gunaydin , Kevin de Haan
Abstract: A microscopy method includes a trained deep neural network that is executed by software using one or more processors of a computing device. The network is trained with a training set of images comprising co-registered pairs of high-resolution microscopy images or image patches of a sample and their corresponding low-resolution microscopy images or image patches of the same sample. A microscopy input image of a sample to be imaged is input to the trained deep neural network, which rapidly outputs an output image of the sample having improved spatial resolution, depth of field, signal-to-noise ratio, and/or image contrast.
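A minimal sketch of one supervised training step on such a co-registered pair, assuming a placeholder convolutional network and an L1 loss (the patent does not commit to either):

```python
# One training step on a co-registered low/high-resolution image pair.
import torch
import torch.nn as nn

net = nn.Sequential(                       # placeholder enhancement network
    nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
    nn.Conv2d(64, 1, 3, padding=1),
)
optimizer = torch.optim.Adam(net.parameters(), lr=1e-4)
loss_fn = nn.L1Loss()

low_res = torch.rand(8, 1, 128, 128)   # stand-in input patches
high_res = torch.rand(8, 1, 128, 128)  # co-registered ground-truth patches

optimizer.zero_grad()
loss = loss_fn(net(low_res), high_res)
loss.backward()                        # learn the low-to-high mapping
optimizer.step()
```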
-
14.
Publication No.: US20130157351A1
Publication Date: 2013-06-20
Application No.: US13769043
Filing Date: 2013-02-15
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Hongying Zhu
IPC: G01N21/64
CPC classification number: G01N21/6486 , G01N21/6458 , G01N21/648 , G01N2201/0221 , G02B7/006 , G02B7/02 , G02B13/0025 , G02B23/243 , H04M1/0254 , H04M1/21 , H04M2250/52 , H04N5/2254
Abstract: Wide-field fluorescent imaging on a mobile device having a camera is accomplished with compact, lightweight, and inexpensive optical components that are mechanically secured to the mobile device in a removable housing. Battery-powered light-emitting diodes (LEDs) contained in the housing pump the sample of interest from the side using butt-coupling, where the pump light is guided within the sample holder to uniformly excite the specimen. The fluorescent emission from the sample is then imaged using an additional lens that is positioned adjacent to the existing lens of the mobile device. A color filter is sufficient to create the dark-field background required for fluorescent imaging, without the need for expensive thin-film interference filters.
-
15.
Publication No.: US20240354907A1
Publication Date: 2024-10-24
Application No.: US18637317
Filing Date: 2024-04-16
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Hanlong Chen , Luzhe Huang
CPC classification number: G06T5/60 , G03H1/0005 , G03H1/26 , G06T5/50 , G06T5/73 , G03H2001/005 , G03H2210/55 , G03H2226/02 , G03H2227/03 , G06T2207/10056 , G06T2207/20021 , G06T2207/20081 , G06T2207/20084 , G06T2207/30024
Abstract: A deep learning framework, termed Fourier Imager Network (FIN), is disclosed that can perform end-to-end phase recovery and image reconstruction from raw holograms of new types of samples, exhibiting success in external generalization. The FIN architecture is based on spatial Fourier transform modules within the deep neural network that process the spatial frequencies of its inputs using learnable filters and a global receptive field. FIN exhibits superior generalization to new types of samples while also being much faster in its image inference, completing the hologram reconstruction task in ~0.04 s per 1 mm² of sample area. Beyond holographic microscopy and quantitative phase imaging applications, FIN and the underlying neural network architecture may open up various new opportunities to design broadly generalizable deep learning models in computational imaging and machine vision.
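One plausible reading of a spatial Fourier transform module with learnable filters and a global receptive field is a per-channel learnable filter applied in the FFT domain; the PyTorch sketch below illustrates that interpretation and is not the patented FIN architecture itself.

```python
# A learnable frequency-domain filter block: FFT, pointwise multiply by a
# trainable complex filter (global receptive field), inverse FFT.
import torch
import torch.nn as nn

class FourierFilterBlock(nn.Module):
    def __init__(self, channels, height, width):
        super().__init__()
        # One learnable complex filter per channel over the rFFT spectrum.
        self.weight = nn.Parameter(
            torch.randn(channels, height, width // 2 + 1, dtype=torch.cfloat)
        )

    def forward(self, x):
        spec = torch.fft.rfft2(x)          # to the spatial-frequency domain
        spec = spec * self.weight          # learnable global filtering
        return torch.fft.irfft2(spec, s=x.shape[-2:])

block = FourierFilterBlock(channels=16, height=128, width=128)
out = block(torch.rand(1, 16, 128, 128))   # filtered, same spatial size
```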
-
16.
Publication No.: US12086717B2
Publication Date: 2024-09-10
Application No.: US18316474
Filing Date: 2023-05-12
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Xing Lin , Deniz Mengu , Yi Luo
IPC: G06N3/082 , G02B5/18 , G02B27/42 , G06F18/214 , G06F18/2431 , G06N3/04 , G06N3/08 , G06V10/94
CPC classification number: G06N3/082 , G02B5/1866 , G02B27/4205 , G02B27/4277 , G06F18/214 , G06F18/2431 , G06N3/04 , G06N3/08 , G06V10/95
Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and a lens function in the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection, and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
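As a rough illustration of the forward model behind a D2NN, the sketch below alternates phase masks with free-space angular-spectrum propagation; the wavelength, pixel pitch, layer count, and spacing are stand-in values, not the experimental terahertz parameters.

```python
# Toy D2NN forward pass: each diffractive layer applies a phase mask, then
# the field propagates through free space to the next layer.
import numpy as np

def propagate(field, wavelength, dx, z):
    fy = np.fft.fftfreq(field.shape[0], d=dx)
    fx = np.fft.fftfreq(field.shape[1], d=dx)
    FX, FY = np.meshgrid(fx, fy)
    arg = np.maximum(1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2, 0)
    H = np.exp(2j * np.pi * np.sqrt(arg) * z / wavelength)
    return np.fft.ifft2(np.fft.fft2(field) * H)

rng = np.random.default_rng(0)
# In a real design these masks are the learned, 3D-printed layers.
phase_masks = [rng.uniform(0, 2 * np.pi, (200, 200)) for _ in range(5)]

field = np.ones((200, 200), complex)       # stand-in input wavefront
for mask in phase_masks:
    field = propagate(field * np.exp(1j * mask), 0.75e-3, 0.4e-3, 3e-3)
intensity = np.abs(field) ** 2             # detector-plane readout
```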
-
17.
Publication No.: US11893779B2
Publication Date: 2024-02-06
Application No.: US17285898
Filing Date: 2019-10-18
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yibo Zhang , Hatice Ceylan Koydemir
CPC classification number: G06V10/82 , G01N15/14 , G01N33/49 , G03H1/0005 , G03H1/08 , G06T7/20 , G06V20/69 , G06V20/698 , G06V30/19173 , G01N2015/1486 , G03H2001/0033 , G03H2210/42 , G03H2210/46 , G06T2207/20081 , G06T2207/30024 , G06T2207/30242
Abstract: Systems and methods for detecting motile objects (e.g., parasites) in a fluid sample utilize the locomotion of the objects as a specific biomarker and endogenous contrast mechanism. The imaging platform includes one or more substantially optically transparent sample holders and a moveable scanning head containing light source(s) and corresponding image sensor(s). The light source(s) are directed at a respective sample holder containing a sample, and the respective image sensor(s) are positioned below the sample holder to capture time-varying holographic speckle patterns of the sample. A computing device receives the time-varying holographic speckle pattern image sequences obtained by the image sensor(s), generates a 3D contrast map of motile objects within the sample, and uses deep learning-based classifier software to identify the motile objects.
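A simplified, assumed version of the locomotion-as-contrast idea: differencing consecutive speckle frames cancels the static background so that only moving objects leave a signal, as sketched below.

```python
# Frame differencing over a time-varying holographic speckle sequence.
import numpy as np

def motility_contrast(frames):
    """frames: (T, H, W) speckle patterns; returns a 2D motion-contrast map."""
    diffs = np.abs(np.diff(frames.astype(np.float64), axis=0))
    return diffs.mean(axis=0)     # large where the speckle decorrelated

frames = np.random.rand(100, 512, 512)   # stand-in captured sequence
contrast_map = motility_contrast(frames)
# Back-propagating such maps to multiple depths would yield the 3D contrast
# map that the deep learning-based classifier then screens for parasites.
```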
-
18.
Publication No.: US11694082B2
Publication Date: 2023-07-04
Application No.: US17843720
Filing Date: 2022-06-17
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Xing Lin , Deniz Mengu , Yi Luo
IPC: G06N3/08 , G06N3/082 , G02B5/18 , G02B27/42 , G06N3/04 , G06V10/94 , G06F18/214 , G06F18/2431
CPC classification number: G06N3/082 , G02B5/1866 , G02B27/4205 , G02B27/4277 , G06F18/214 , G06F18/2431 , G06N3/04 , G06N3/08 , G06V10/95
Abstract: An all-optical Diffractive Deep Neural Network (D2NN) architecture learns to implement various functions or tasks after deep learning-based design of the passive diffractive or reflective substrate layers that work collectively to perform the desired function or task. This architecture was successfully confirmed experimentally by creating 3D-printed D2NNs that learned to implement handwritten digit classification and a lens function in the terahertz spectrum. This all-optical deep learning framework can perform, at the speed of light, various complex functions and tasks that computer-based neural networks can implement, and will find applications in all-optical image analysis, feature detection, and object classification, also enabling new camera designs and optical components that can learn to perform unique tasks using D2NNs. In alternative embodiments, the all-optical D2NN is used as a front-end in conjunction with a trained, digital neural network back-end.
-
19.
Publication No.: US20230162016A1
Publication Date: 2023-05-25
Application No.: US17920778
Filing Date: 2021-05-21
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Deniz Mengu , Yair Rivenson
IPC: G06N3/067
CPC classification number: G06N3/067
Abstract: A diffractive optical neural network includes one or more layers that are resilient to misalignments, fabrication-related errors, detector noise, and/or other sources of error. A diffractive optical neural network model is first trained with a computing device to perform a statistical inference task such as image classification (e.g., object classification). The model is trained using training images or training optical signals along with random misalignments of the plurality of layers, fabrication-related errors, input-plane or output-plane misalignments, and/or detector noise; the optical output of the model through transmission and/or reflection is then computed, and the complex-valued transmission and/or reflection coefficients of each layer are iteratively adjusted until optimized coefficients are obtained. Once the model is optimized, the physical embodiment of the diffractive optical neural network is manufactured.
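A minimal sketch of the misalignment-injection step, assuming random integer lateral shifts of a phase mask via torch.roll; the shift range and mask size are illustrative.

```python
# "Vaccinate" the model: perturb each layer randomly during training so the
# learned design tolerates physical misalignment.
import torch

def randomly_misalign(mask, max_shift=2):
    """Apply a random integer lateral shift to a 2D phase-mask tensor."""
    dy, dx = torch.randint(-max_shift, max_shift + 1, (2,))
    return torch.roll(mask, shifts=(int(dy), int(dx)), dims=(-2, -1))

phase_mask = torch.rand(200, 200, requires_grad=True)  # one trainable layer
perturbed = randomly_misalign(phase_mask)
# ...the forward optical model would then use `perturbed`, compute the task
# loss, and backpropagate to update `phase_mask` despite the perturbation.
```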
-
20.
Publication No.: US20230085827A1
Publication Date: 2023-03-23
Application No.: US17908864
Filing Date: 2021-03-18
Applicant: THE REGENTS OF THE UNIVERSITY OF CALIFORNIA
Inventor: Aydogan Ozcan , Yair Rivenson , Yilin Luo , Luzhe Huang
Abstract: A deep learning-based offline autofocusing method and system, termed Deep-R, is disclosed herein: a trained neural network rapidly and blindly autofocuses a single-shot microscopy image of a sample or specimen acquired at an arbitrary out-of-focus plane. The efficacy of Deep-R is illustrated using various tissue sections imaged with fluorescence and brightfield microscopy modalities, demonstrating single-snapshot autofocusing under different scenarios, such as a uniform axial defocus as well as a sample tilt within the field-of-view. Deep-R is significantly faster than standard online algorithmic autofocusing methods. This deep learning-based blind autofocusing framework opens up new opportunities for rapid microscopic imaging of large sample areas, also reducing the photon dose on the sample.
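At inference time, a Deep-R style system is a single forward pass from one defocused snapshot to a refocused image; the placeholder network below only sketches that usage, since the abstract does not specify the architecture.

```python
# Single-shot, blind autofocusing as one network forward pass.
import torch
import torch.nn as nn

class DeepRStyleNet(nn.Module):   # hypothetical stand-in for the trained model
    def __init__(self):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1), nn.ReLU(),
            nn.Conv2d(64, 1, 3, padding=1),
        )

    def forward(self, x):
        return self.body(x)

model = DeepRStyleNet().eval()
defocused = torch.rand(1, 1, 512, 512)  # snapshot at an arbitrary defocus
with torch.no_grad():
    refocused = model(defocused)        # no focus search, no z-stack needed
```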
-