Abstract:
A scanning system for fluorescent imaging includes a sample holder configured to hold a sample therein, the sample holder defining a sample holding region. A scanner head spans the sample holding region and is movable relative to the sample holder. An array of light sources is disposed on an opposing side of the sample holder and is angled relative thereto. Respective controllers are operably coupled to the scanner head and the array of light sources, wherein one controller selectively actuates one or more rows of the array of light sources and another controller controls movement of the scanner head to capture fluorescent light emitted from within the sample holder in response to illumination from the actuated light sources. A filter designed to filter out scattered light from the sample may be interposed between the sample holder and the scanner head.
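A minimal Python sketch of this dual-controller coordination is given below; all class and method names (LEDArrayController, ScannerHeadController, scan) are hypothetical stand-ins for illustration, not an API from the disclosure.

```python
import numpy as np

class LEDArrayController:
    """Selectively actuates rows of the angled light-source array."""
    def __init__(self, num_rows):
        self.num_rows = num_rows
        self.active_row = None

    def actuate_row(self, row):
        self.active_row = row  # switch on one row; all others off

class ScannerHeadController:
    """Moves the scanner head and captures filtered fluorescence."""
    def __init__(self):
        self.position = 0.0

    def move_to(self, position):
        self.position = position

    def capture(self):
        # Placeholder for a detector readout through the scatter-rejection filter.
        return np.zeros((64, 64))

def scan(leds, head, row_positions):
    frames = []
    for row, pos in enumerate(row_positions):
        leds.actuate_row(row)   # illumination controller
        head.move_to(pos)       # motion controller
        frames.append(head.capture())
    return frames

frames = scan(LEDArrayController(8), ScannerHeadController(),
              row_positions=np.linspace(0, 70, 8))
```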
Abstract:
A system for three-dimensional imaging of motile objects includes an image sensor and a sample holder disposed adjacent to the image sensor. A first illumination source having a first wavelength is positioned relative to the sample holder at a first location to illuminate the sample. A second illumination source having a second wavelength, different from the first wavelength, is positioned relative to the sample holder at a second location, different from the first location, to illuminate the sample. The first and second illumination sources are configured to illuminate the sample contained within the sample holder simultaneously or, alternatively, sequentially. Three-dimensional positions of the motile objects in each frame are obtained based on digitally reconstructed projection images of the motile objects obtained from the first and second illumination sources. This positional data is then linked from frame to frame to obtain 3D trajectories of the motile objects.
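The sketch below illustrates one way the two projections could yield 3D positions and frame-to-frame trajectories. It assumes a simplified parallel-projection model (lateral shift proportional to height, tilt in x only) and greedy nearest-neighbor linking; neither assumption is specified by the abstract.

```python
import numpy as np

def triangulate_z(xy1, xy2, theta1, theta2):
    """
    Estimate the axial position from the lateral shift between the two
    projections of the same object illuminated from two known angles
    (radians). Assumes x_i = x_true + z * tan(theta_i), a simplified model.
    """
    z = (xy1[0] - xy2[0]) / (np.tan(theta1) - np.tan(theta2))
    x_true = xy1[0] - z * np.tan(theta1)
    return np.array([x_true, xy1[1], z])

def link_frames(prev_pts, curr_pts, max_step=10.0):
    """Greedy nearest-neighbor linking of per-frame 3D positions into tracks."""
    curr_pts = np.asarray(curr_pts)
    links = []
    for i, p in enumerate(prev_pts):
        d = np.linalg.norm(curr_pts - p, axis=1)
        j = int(np.argmin(d))
        if d[j] <= max_step:
            links.append((i, j))  # object i in frame t matches object j in t+1
    return links
```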
Abstract:
A method of forming nanolenses for imaging includes providing an optically transparent substrate having a plurality of particles disposed on one side thereof. The optically transparent substrate is located within a chamber containing therein a reservoir holding a liquid solution. The liquid solution is heated to form a vapor within the chamber, wherein the vapor condenses on the substrate to form nanolenses around the plurality of particles. The particles are then imaged using an imaging device, which may be integrated into the same device that contains the reservoir or may be a separate imaging device.
Abstract:
The concentration of mercury in a sample is measured by a reader secured to a camera-containing mobile electronic device. The reader has holders for sample and control solutions, each containing gold nanoparticles, thymine-rich aptamers, and sodium chloride. First and second light sources emitting light at different colors illuminate the sample and control holders. An image is captured of the light transmitted through the sample and control holders, wherein the image comprises two control regions of interest and two sample regions of interest. The device calculates the intensity of the two control regions of interest and the two sample regions of interest and generates intensity ratios for the sample and control, respectively, at each color. The device calculates a normalized color ratio based on the intensity ratios and outputs a concentration of mercury based on the normalized color ratio.
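The ratio arithmetic can be sketched as follows; the ROI layout and the exact normalization formula (sample two-color ratio divided by control two-color ratio) are assumptions made for illustration, as the abstract does not specify them.

```python
import numpy as np

def mean_intensity(image, roi):
    """Average pixel intensity inside a region of interest (y0, y1, x0, x1)."""
    y0, y1, x0, x1 = roi
    return float(image[y0:y1, x0:x1].mean())

def normalized_color_ratio(image, sample_rois, control_rois):
    """
    sample_rois / control_rois: two ROIs each, one per illumination color.
    Returns the sample's two-color intensity ratio normalized by the
    control's ratio (one plausible form of the normalization).
    """
    s1, s2 = (mean_intensity(image, r) for r in sample_rois)
    c1, c2 = (mean_intensity(image, r) for r in control_rois)
    return (s1 / s2) / (c1 / c2)

# The mercury concentration would then be read off a calibration curve
# mapping the normalized ratio to concentration (calibration data not shown).
```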
Abstract:
Quantitative phase imaging (QPI) is a label-free computational imaging technique that provides optical path length information of objects. Here, a diffractive QPI network architecture is disclosed that can synthesize the quantitative phase image of an object by converting the input phase information of a scene or object(s) into intensity variations at the output plane. A diffractive QPI network is a specialized all-optical device designed to perform a quantitative phase-to-intensity transformation through passive diffractive/reflective surfaces that are spatially engineered using deep learning and image data. Forming a compact, all-optical network that axially extends only ~200-300λ (λ = illumination wavelength), this framework replaces traditional QPI systems and related digital computational burdens with a set of passive substrate layers. All-optical diffractive QPI networks can potentially enable power-efficient, high frame-rate and compact phase imaging systems that might be useful for various applications, including, e.g., on-chip microscopy and sensing.
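A numerical sketch of the forward model is given below, assuming angular-spectrum free-space propagation between phase-only layers. The layer phases here are random placeholders; in the disclosed framework they are spatially engineered with deep learning.

```python
import numpy as np

def angular_spectrum_propagate(field, dz, wavelength, dx):
    """Free-space propagation of a complex field by axial distance dz."""
    n = field.shape[0]
    fx = np.fft.fftfreq(n, d=dx)
    FX, FY = np.meshgrid(fx, fx)
    arg = 1 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    H = np.exp(1j * 2 * np.pi / wavelength * dz * np.sqrt(np.maximum(arg, 0)))
    H[arg < 0] = 0  # evanescent cutoff
    return np.fft.ifft2(np.fft.fft2(field) * H)

def diffractive_qpi_forward(input_phase, layer_phases, dz, wavelength, dx):
    """
    Phase-only input object propagated through a stack of passive phase
    layers; the output-plane intensity encodes the quantitative phase image.
    """
    field = np.exp(1j * input_phase)          # phase-only object
    for phase in layer_phases:
        field = angular_spectrum_propagate(field, dz, wavelength, dx)
        field = field * np.exp(1j * phase)    # passive diffractive layer
    field = angular_spectrum_propagate(field, dz, wavelength, dx)
    return np.abs(field) ** 2                 # intensity at output plane

# Placeholder usage: 3 layers spaced 40λ apart, roughly consistent with the
# ~200-300λ axial extent cited above (all values arbitrary units).
n, dx, wl = 128, 0.5, 1.0
phases = [2 * np.pi * np.random.rand(n, n) for _ in range(3)]
out = diffractive_qpi_forward(np.random.rand(n, n), phases,
                              dz=40 * wl, wavelength=wl, dx=dx)
```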
Abstract:
A system for the detection and classification of live microorganisms in a sample includes a light source, an image sensor, and an incubator holding one or more sample-containing growth plates. A translation stage moves the image sensor and/or the growth plate(s) along one or more dimensions to capture time-lapse holographic images of microorganisms or clusters of microorganisms on the one or more growth plates. Image processing software executed by a computing device detects candidate microorganism colonies in the reconstructed time-lapse holographic images based on differential image analysis. The image processing software includes one or more trained deep neural networks that process the time-lapse image(s) of candidate microorganism colonies to detect true microorganism colonies and/or output a species associated with each true microorganism colony.
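The differential-analysis step might be sketched as below; the threshold, minimum area, and helper names are illustrative assumptions, not values from the disclosure.

```python
import numpy as np
from scipy import ndimage

def detect_candidate_colonies(recon_t0, recon_t1, diff_thresh=0.1, min_area=9):
    """
    Differential analysis of two reconstructed holographic frames: growing
    colonies appear as localized changes between time points, while static
    background (dust, plate texture) cancels out.
    """
    diff = np.abs(recon_t1.astype(float) - recon_t0.astype(float))
    labels, n = ndimage.label(diff > diff_thresh)
    candidates = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if ys.size >= min_area:
            candidates.append((ys.mean(), xs.mean()))  # centroid of change
    return candidates

# Each candidate's time-lapse image stack would then be passed to the trained
# deep neural network(s) for true/false-colony classification and species output.
```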
Abstract:
A deep learning-based system and method is provided that uses a convolutional neural network to rapidly transform in vivo reflectance confocal microscopy (RCM) images of unstained skin into virtually-stained, hematoxylin and eosin-like images with microscopic resolution, enabling visualization of the epidermis, dermal-epidermal junction, and superficial dermis layers. The network is trained using ex vivo RCM images of excised unstained tissue and microscopic images of the same tissue labeled with acetic acid nuclear contrast staining as the ground truth. The trained neural network can then rapidly perform virtual histology of in vivo, label-free RCM images of normal skin structure, basal cell carcinoma, and melanocytic nevi with pigmented melanocytes, demonstrating histological features similar to those of traditional histology of the same excised tissue. The system and method enable more rapid diagnosis of malignant skin neoplasms and reduce the need for invasive skin biopsies.
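A deliberately minimal PyTorch sketch of the image-to-image transformation follows; the actual trained network architecture is not specified in the abstract, so this small convolutional stack is only a stand-in showing the input/output mapping.

```python
import torch
import torch.nn as nn

class VirtualStainNet(nn.Module):
    """Maps a single-channel RCM image to a 3-channel H&E-like RGB image."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 3, 3, padding=1), nn.Sigmoid(),  # RGB in [0, 1]
        )

    def forward(self, rcm_image):
        return self.net(rcm_image)

# Training pairs per the abstract: ex vivo RCM images (input) vs. acetic acid
# nuclear contrast images of the same tissue (ground truth).
model = VirtualStainNet()
fake_rcm = torch.rand(1, 1, 256, 256)   # placeholder RCM patch
he_like = model(fake_rcm)               # (1, 3, 256, 256) virtual stain
```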
Abstract:
A computer-free system and method is disclosed that uses an all-optical image reconstruction method to see through random diffusers at the speed of light. Using deep learning, a set of transmissive layers is trained to all-optically reconstruct images of arbitrary objects that are distorted by random phase diffusers. After the training stage, the resulting diffractive layers are fabricated to form a diffractive optical network that is physically positioned between the unknown object and the image plane to all-optically reconstruct the object pattern through an unknown, new phase diffuser. Unlike digital methods, all-optical diffractive reconstructions require no power beyond the illumination light. This diffractive solution to seeing through diffusive and/or scattering media can be extended to other wavelengths and can fuel various applications in biomedical imaging, astronomy, atmospheric sciences, oceanography, security, robotics, and autonomous vehicles, among many others.
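One plausible way to model the random phase diffusers drawn during training is sketched below; the correlation scale and phase statistics are assumptions, not taken from the disclosure.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def random_phase_diffuser(n=256, correlation_sigma=4.0, phase_range=2 * np.pi):
    """
    Model an unknown random phase diffuser as smoothed uniform noise mapped
    to a phase screen; multiply the result into the object field.
    """
    screen = gaussian_filter(np.random.rand(n, n), sigma=correlation_sigma)
    screen = (screen - screen.min()) / (screen.max() - screen.min())
    return np.exp(1j * phase_range * screen)

# During training, a new diffuser would be drawn per example so the diffractive
# layers learn a reconstruction that generalizes to unseen diffusers.
```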
Abstract:
A fluorescence microscopy method uses a trained deep neural network. At least one 2D fluorescence microscopy image of a sample is input to the trained deep neural network, wherein the input image(s) is appended with a digital propagation matrix (DPM) that represents, pixel-by-pixel, an axial distance of a user-defined or automatically generated surface within the sample from a plane of the input image. The trained deep neural network outputs fluorescence image(s) of the sample that are digitally propagated or refocused to the user-defined or automatically generated surface. The method and system cross-connect different imaging modalities, permitting 3D propagation of wide-field fluorescence image(s) to match confocal microscopy images at different sample planes. The method may be used to output a time sequence of images (e.g., time-lapse video) of a 2D or 3D surface within a sample.
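Constructing the network input described here is straightforward to sketch: the DPM is a per-pixel axial-distance map concatenated with the fluorescence image as a second channel. The function name and plane convention below are illustrative assumptions.

```python
import numpy as np

def append_dpm(fluorescence_image, target_surface_z, input_plane_z=0.0):
    """
    Build a two-channel network input: the 2D fluorescence image plus a
    digital propagation matrix (DPM) giving, pixel by pixel, the axial
    distance from the input plane to the target surface. For refocusing to
    a single plane the DPM is a constant map; for a tilted or curved
    surface, pass a per-pixel z map of the same shape as the image.
    """
    z = np.asarray(target_surface_z, dtype=float)
    if z.ndim == 0:  # scalar plane -> uniform DPM
        z = np.full_like(fluorescence_image, float(z), dtype=float)
    dpm = z - input_plane_z
    return np.stack([fluorescence_image.astype(float), dpm], axis=0)

img = np.random.rand(512, 512)       # placeholder wide-field image
net_input = append_dpm(img, 12.5)    # refocus request: plane at +12.5 (a.u.)
```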
Abstract:
A deep learning-based digital staining method and system are disclosed that enable the creation of digitally/virtually-stained microscopic images from label-free or stain-free samples based on autofluorescence images acquired using a fluorescence microscope. The system and method have particular applicability for the creation of digitally/virtually-stained whole slide images (WSIs) of unlabeled/unstained tissue samples that are analyzed by a histopathologist. The method bypasses the standard histochemical staining process, saving time and cost. It is based on deep learning and uses, in one embodiment, a convolutional neural network trained using a generative adversarial network model to transform fluorescence images of an unlabeled sample into an image that is equivalent to the brightfield image of the chemically stained version of the same sample. This label-free digital staining method eliminates cumbersome and costly histochemical staining procedures and significantly simplifies tissue preparation in the pathology and histology fields.
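A compact sketch of a GAN-style training step for this transformation is given below. The one-layer generator/discriminator stubs and loss weights are placeholder assumptions, since the abstract does not specify architectures or losses.

```python
import torch
import torch.nn as nn

# Generator: autofluorescence image -> brightfield-like stained image.
# Discriminator: per-pixel realism logits (PatchGAN-style stand-in).
generator = nn.Sequential(nn.Conv2d(1, 3, 3, padding=1), nn.Sigmoid())
discriminator = nn.Sequential(nn.Conv2d(3, 1, 3, padding=1))
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4)
d_opt = torch.optim.Adam(discriminator.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()
l1 = nn.L1Loss()

def train_step(autofluor, stained_gt, adv_weight=0.01):
    # Discriminator: real chemically stained images vs. generated ones.
    fake = generator(autofluor)
    d_real = discriminator(stained_gt)
    d_fake = discriminator(fake.detach())
    d_loss = bce(d_real, torch.ones_like(d_real)) + \
             bce(d_fake, torch.zeros_like(d_fake))
    d_opt.zero_grad(); d_loss.backward(); d_opt.step()
    # Generator: fool the discriminator while matching the ground truth.
    d_fake = discriminator(fake)
    g_loss = l1(fake, stained_gt) + adv_weight * bce(d_fake, torch.ones_like(d_fake))
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
    return float(g_loss)

loss = train_step(torch.rand(2, 1, 64, 64), torch.rand(2, 3, 64, 64))
```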