Abstract:
Traditionally, time-lapse videos are constructed from images captured at time intervals called "temporal points of interest," or "temporal POIs." Disclosed herein are systems and methods of constructing improved, motion-stabilized time-lapse videos using temporal points of interest and image similarity comparisons. According to some embodiments, a "burst" of images may be captured, centered around the aforementioned temporal points of interest. Each burst sequence may then be analyzed, e.g., by performing an image similarity comparison between each image in the burst sequence and the image selected at the previous temporal point of interest. Selecting the image from a given burst that is most similar to the previously selected image, while minimizing motion relative to that image, allows the system to improve the quality of the resultant time-lapse video by discarding "outlier" or otherwise undesirable images captured in the burst sequence and motion-stabilizing the selected image.
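The per-burst selection step above can be sketched as follows. This is a minimal illustration, not the patented method: it uses negative mean absolute pixel difference as a stand-in for whatever similarity metric an embodiment would use, and the function name `select_from_burst` is hypothetical.

```python
import numpy as np

def select_from_burst(prev_frame, burst):
    """Pick the burst frame most similar to the previously selected frame.

    Similarity here is the negative mean absolute pixel difference --
    a simple stand-in for the image similarity comparison described in
    the abstract. Outlier frames score poorly and are never selected.
    """
    scores = [
        -np.mean(np.abs(frame.astype(float) - prev_frame.astype(float)))
        for frame in burst
    ]
    best = int(np.argmax(scores))  # highest similarity wins
    return best, burst[best]
```

Running this over successive bursts, each time feeding the chosen frame back in as `prev_frame`, yields a frame sequence that tends to drift smoothly rather than jump, which is the stabilizing effect the abstract describes.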
Abstract:
Techniques to capture and fuse short- and long-exposure images of a scene from a stabilized image capture device are disclosed. More particularly, the disclosed techniques use not only individual pixel differences between co-captured short- and long-exposure images, but also the spatial structure of occluded regions in the long-exposure images (e.g., areas of the long-exposure image(s) exhibiting blur due to scene object motion). A novel construct used to represent this feature of the long-exposure image is a "spatial difference map." Spatial difference maps may be used to identify pixels in the short- and long-exposure images for fusion and, in one embodiment, may be used to identify pixels from the short-exposure image(s) to filter post-fusion so as to reduce visual discontinuities in the output image.
Abstract:
Systems, methods, and computer-readable media to improve image stabilization operations are described. Novel approaches for fusing non-reference images with a pre-selected reference frame in a set of commonly captured images are disclosed. The fusing approach may use a soft transition, applying a weighted average to ghost/non-ghost pixels to avoid sudden transitions between neighboring, nearly identical pixels. Additionally, the ghost/non-ghost decision can be made based on a set of neighboring pixels rather than independently for each pixel. An alternative approach may involve performing a multi-resolution decomposition of all the captured images, using temporal fusion, spatio-temporal fusion, or combinations thereof, at each level and combining the different levels to generate an output image.
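The soft ghost/non-ghost transition can be sketched as below. This is an illustrative interpretation, not the patented algorithm: the ghost decision is driven by the difference averaged over a 3x3 neighborhood (not each pixel independently), and the blend weight ramps linearly between two hypothetical thresholds `lo` and `hi` instead of switching hard.

```python
import numpy as np

def soft_fuse(reference, non_reference, lo=5.0, hi=20.0):
    """Fuse a non-reference frame into the reference with a soft
    ghost/non-ghost transition (illustrative thresholds)."""
    diff = np.abs(reference.astype(float) - non_reference.astype(float))
    # Average the difference over each pixel's 3x3 neighborhood.
    padded = np.pad(diff, 1, mode="edge")
    h, w = diff.shape
    neigh = sum(
        padded[1 + dy : 1 + dy + h, 1 + dx : 1 + dx + w]
        for dy in (-1, 0, 1) for dx in (-1, 0, 1)
    ) / 9.0
    # w ramps from 0 (clearly non-ghost) to 1 (clearly ghost).
    w = np.clip((neigh - lo) / (hi - lo), 0.0, 1.0)
    # Ghost pixels keep the reference; non-ghost pixels are averaged in.
    return w * reference + (1.0 - w) * 0.5 * (reference + non_reference)
```

Because `w` varies smoothly with the neighborhood difference, adjacent pixels with nearly identical values receive nearly identical weights, avoiding the abrupt seams a hard per-pixel ghost decision would create.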
Abstract:
An apparatus, method, and computer-readable medium for motion sensor-based video stabilization. A motion sensor may capture motion data of a video sequence. A controller may compute instantaneous motion of the camera for a current frame of the video sequence and accumulated motion of the camera corresponding to motion of a plurality of frames of the video sequence. The controller may compare the instantaneous motion to a first threshold value, compare the accumulated motion to a second threshold value, and set a video stabilization strength parameter for the current frame based on the results of these comparisons. A video stabilization unit may perform video stabilization on the current frame according to the frame's strength parameter.
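The controller's threshold logic can be sketched as follows. All threshold and strength values here are illustrative assumptions, as is the panning rationale spelled out in the comment; the abstract only states that strength is set from the two comparisons.

```python
def stabilization_strength(instant_motion, accumulated_motion,
                           instant_thresh=0.5, accum_thresh=2.0,
                           weak=0.2, strong=0.9):
    """Set a per-frame video stabilization strength parameter.

    Assumption for this sketch: large instantaneous or accumulated
    motion suggests deliberate camera movement (e.g., a pan), so
    stabilization is weakened rather than fighting the motion;
    otherwise full-strength stabilization smooths hand shake.
    """
    if instant_motion > instant_thresh or accumulated_motion > accum_thresh:
        return weak
    return strong
```

A per-frame loop would feed gyroscope-derived motion estimates into this function and pass the returned strength to the stabilization unit for that frame.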