Abstract:
The present disclosure relates to methods and systems that may reduce pixel noise due to defective sensor elements in optical imaging systems. Namely, a camera may capture a burst of images with an image sensor while adjusting a focus distance setting of an optical element. For example, the image burst may be captured during an autofocus process. The plurality of images may be averaged or otherwise merged to provide a single, aggregate image frame. Such an aggregate image frame may appear blurry. In such a scenario, “hot” pixels, “dead” pixels, or otherwise defective pixels may be more easily recognized and/or corrected. As an example, a defective pixel may be removed from a target image or otherwise corrected by replacing a value of the defective pixel with an average value of neighboring pixels.
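As a concrete illustration of the steps above, the following Python sketch assumes a grayscale focus-sweep burst stored as a NumPy array. The function names, the 8-neighbor window, and the sigma-based threshold are illustrative assumptions, not the disclosed implementation.

```python
import numpy as np

def find_defective_pixels(burst, sigma=6.0):
    """Flag likely hot/dead pixels in a focus-sweep burst.

    burst: (N, H, W) array of frames captured while the focus distance
    was adjusted, so scene detail is blurred in the mean frame and
    fixed sensor defects stand out as sharp outliers.
    """
    aggregate = burst.mean(axis=0)                 # blurry aggregate frame
    padded = np.pad(aggregate, 1, mode="reflect")  # pad for 8-neighbor sums
    neighbor_mean = (
        padded[:-2, :-2] + padded[:-2, 1:-1] + padded[:-2, 2:]
        + padded[1:-1, :-2] + padded[1:-1, 2:]
        + padded[2:, :-2] + padded[2:, 1:-1] + padded[2:, 2:]
    ) / 8.0
    residual = aggregate - neighbor_mean
    return np.abs(residual) > sigma * residual.std()  # boolean defect mask

def correct_defective_pixels(image, mask):
    """Replace each flagged pixel with the mean of its unflagged neighbors."""
    corrected = image.astype(np.float64)
    for y, x in zip(*np.nonzero(mask)):
        y0, y1 = max(y - 1, 0), min(y + 2, image.shape[0])
        x0, x1 = max(x - 1, 0), min(x + 2, image.shape[1])
        valid = ~mask[y0:y1, x0:x1]    # excludes the defective pixel itself
        if valid.any():
            corrected[y, x] = corrected[y0:y1, x0:x1][valid].mean()
    return corrected.astype(image.dtype)
```

The mask derived from the blurry aggregate frame can then be applied to correct any sharp target frame from the same sensor.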
Abstract:
Imaging systems can often gather higher-quality information about a field of view than the unaided human eye. For example, telescopes may magnify very distant objects, microscopes may magnify very small objects, and high-frame-rate cameras may capture fast motion. The present disclosure includes devices and methods that provide real-time vision enhancement without the delay of replaying from storage media. The disclosed devices and methods may include a live-view user interface with two or more interactive features or effects that may be controllable in real time. Specifically, the disclosed devices and methods may include a live-view display and enhancements to images and other information, which utilize in-line computation and continuous control.
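One way to read "in-line computation and continuous control" is a per-frame loop whose parameters UI callbacks may change between frames. The sketch below is an assumption about that structure; the LiveViewPipeline class, control names, and unsharp-mask effect are all hypothetical, not the disclosed design.

```python
import numpy as np

class LiveViewPipeline:
    """Hypothetical live-view loop: each incoming frame is enhanced
    in-line with the current control settings, so a UI adjustment
    affects the very next displayed frame, with no replay from storage."""

    def __init__(self):
        self.controls = {"gain": 1.0, "edge_boost": 0.0}

    def set_control(self, name, value):
        # Called from UI callbacks while frames keep streaming.
        self.controls[name] = value

    def process(self, frame):
        out = frame.astype(np.float64) * self.controls["gain"]
        if self.controls["edge_boost"] > 0:
            # Unsharp mask via a 4-neighbor blur (edges wrap; fine for a demo).
            blur = (np.roll(out, 1, 0) + np.roll(out, -1, 0)
                    + np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
            out += self.controls["edge_boost"] * (out - blur)
        return np.clip(out, 0, 255).astype(np.uint8)
```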
Abstract:
A first plurality of images of a scene may be captured. Each image of the first plurality of images may be captured with a different total exposure time (TET). Based at least on the first plurality of images, a TET sequence may be determined for capturing images of the scene. A second plurality of images of the scene may be captured. Images in the second plurality of images may be captured using the TET sequence. Based at least on the second plurality of images, an output image of the scene may be constructed.
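A simplified sketch of the two-burst flow: a metering burst picks the TETs, and the payload burst alternates them. The 1% clipping heuristic, the alternating pattern, and the function name are illustrative assumptions; the disclosure does not fix a particular selection rule.

```python
import numpy as np

def choose_tet_sequence(frames, tets, length=4):
    """From a metering burst (frames[i] captured with TET tets[i]),
    pick a short TET that avoids clipped highlights and a long TET that
    avoids crushed shadows, then alternate them for the payload burst."""
    clipped = [np.mean(f >= 250) for f in frames]  # highlight-clip fraction
    crushed = [np.mean(f <= 5) for f in frames]    # shadow-crush fraction
    # Longest TET with under 1% clipping gathers the most light without
    # blowing out highlights.
    short = max((t for t, c in zip(tets, clipped) if c < 0.01),
                default=min(tets))
    # Shortest TET with under 1% crush keeps shadows usable with the
    # least motion blur.
    long_ = min((t for t, c in zip(tets, crushed) if c < 0.01),
                default=max(tets))
    return [short if i % 2 == 0 else long_ for i in range(length)]
```

The second (payload) burst captured with this sequence could then be merged, for example by exposure-normalized averaging, into the output image.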
Abstract:
The present disclosure provides example methods operable by a computing device. An example method can include receiving an image from a camera. The method can also include comparing one or more parameters of the image with one or more control parameters, where the one or more control parameters comprise information indicative of an image from a substantially unobstructed camera. Based on the comparison, the method can also include determining a score that relates the one or more parameters of the image to the one or more control parameters. The method can also include accumulating, by the computing device, a count of a number of times the determined score exceeds a first threshold. Based on the count exceeding a second threshold, the method can also include determining that the camera is at least partially obstructed.
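The two-threshold counting logic lends itself to a small stateful sketch. The parameter names ("sharpness", "mean_brightness"), the deviation-sum score, and the class name are hypothetical; the disclosure leaves the specific parameters and scoring open.

```python
class ObstructionDetector:
    """Count frames whose score versus the control parameters exceeds a
    first threshold; report obstruction once the count passes a second."""

    def __init__(self, controls, score_threshold, count_threshold):
        self.controls = controls          # values expected when unobstructed
        self.score_threshold = score_threshold
        self.count_threshold = count_threshold
        self.count = 0

    def update(self, params):
        # Score: summed relative deviation of each parameter from its control.
        score = sum(abs(params[k] - v) / (abs(v) + 1e-9)
                    for k, v in self.controls.items())
        if score > self.score_threshold:
            self.count += 1
        return self.count > self.count_threshold  # True => likely obstructed

# Hypothetical usage with two image parameters:
detector = ObstructionDetector(
    controls={"sharpness": 0.8, "mean_brightness": 120.0},
    score_threshold=0.5, count_threshold=10)
```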
Abstract:
An image-stack viewer may switch between images in an image stack based on detected interactions with the images that are displayed in the viewer. In particular, a region of interest (ROI) in an image may be determined based on an interaction, image characteristics of the ROI may be evaluated in two or more images in the image stack, and the viewer may switch to the image in which the ROI best represents the evaluated characteristics.
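Below is a sketch of one way the ROI evaluation could work, assuming the evaluated characteristic is local sharpness (the disclosure leaves the characteristic open) and that the stack is an (N, H, W) grayscale array; the function name and metric are assumptions.

```python
import numpy as np

def best_image_for_roi(stack, roi):
    """Return the index of the stack image whose region of interest
    roi = (y0, y1, x0, x1) scores highest on gradient energy, a simple
    stand-in sharpness metric; the viewer would switch to that image."""
    y0, y1, x0, x1 = roi
    scores = []
    for img in stack:
        patch = img[y0:y1, x0:x1].astype(np.float64)
        gy, gx = np.gradient(patch)            # per-axis finite differences
        scores.append(np.mean(gy**2 + gx**2))  # mean gradient energy
    return int(np.argmax(scores))
```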
Abstract:
Embodiments described herein may help a computing device, such as a head-mountable device (HMD), to capture and process images in response to a user placing their hands in, and then withdrawing their hands from, a frame formation. For example, an HMD may analyze image data from a point-of-view camera on the HMD, and detect when a wearer holds their hands in front of their face to frame a subject in the wearer's field of view. Further, the HMD may detect when the wearer withdraws their hands from such a frame formation and responsively capture an image. The HMD may also determine a selection area that is being framed, within the wearer's field of view, by the frame formation. The HMD may then process the captured image based on the frame formation, such as by cropping, white-balancing, and/or adjusting exposure.
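The detect-then-withdraw trigger can be summarized as a tiny state machine. Hand detection and the selection-area geometry are assumed to come from upstream steps not shown here; the class and callback names are hypothetical.

```python
class FrameGestureTrigger:
    """Remember the selection framed by the wearer's hands while the
    frame formation is held; when the hands withdraw, fire a capture
    callback with the last framed selection (x, y, w, h)."""

    def __init__(self, capture_fn):
        self.capture_fn = capture_fn
        self.selection = None

    def on_frame(self, hands_form_frame, selection=None):
        if hands_form_frame:
            self.selection = selection         # hands still in frame pose
        elif self.selection is not None:
            self.capture_fn(self.selection)    # hands withdrawn: capture now
            self.selection = None
```

Cropping the captured image to the selection rectangle, then white-balancing or adjusting exposure within it, would follow as post-processing.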