Abstract:
Methods and systems for rolling shutter removal are described. A computing device may be configured to determine, in a frame of a video, distinguishable features. The frame may include sets of pixels captured asynchronously. The computing device may be configured to determine, for a pixel representing a feature in the frame, a corresponding pixel representing the feature in a consecutive frame; and determine, for a set of pixels including the pixel in the frame, a projective transform that may represent motion of a camera used to capture the video. The computing device may be configured to determine, for the set of pixels in the frame, a mixture transform based on a combination of the projective transform and respective projective transforms determined for other sets of pixels. Accordingly, the computing device may be configured to estimate a motion path of the camera to account for distortion associated with the asynchronous capturing of the sets of pixels.
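One way to read the "mixture transform" described above: each horizontal block of rows in a rolling-shutter frame gets its own projective transform (a 3x3 homography), and the transform actually applied to a block blends the transforms of neighboring blocks. The sketch below is an illustrative assumption of such a blend; the function name, Gaussian weighting scheme, and example homographies are hypothetical, not the patented method.

```python
import numpy as np

def mixture_transform(homographies, block_idx, sigma=1.0):
    """Blend the per-block homographies around `block_idx`.

    homographies: list of 3x3 numpy arrays, one per row block.
    Returns a normalized 3x3 transform for the given block.
    """
    # Gaussian weights centered on the block of interest (an assumption).
    weights = np.array([
        np.exp(-((i - block_idx) ** 2) / (2.0 * sigma ** 2))
        for i in range(len(homographies))
    ])
    weights /= weights.sum()
    mixed = sum(w * H for w, H in zip(weights, homographies))
    return mixed / mixed[2, 2]  # keep the usual H[2, 2] == 1 convention

# Example: three row blocks, where the middle block's transform is a
# small horizontal translation (as if those rows were captured mid-pan).
H_identity = np.eye(3)
H_shift = np.array([[1.0, 0.0, 2.0],
                    [0.0, 1.0, 0.0],
                    [0.0, 0.0, 1.0]])
blocks = [H_identity, H_shift, H_identity]
H_mixed = mixture_transform(blocks, block_idx=1)
```

The blended transform moves the middle block by less than the full 2-pixel shift, because its neighbors (which saw no motion) pull the mixture back toward identity.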
Abstract:
Methods and devices for initiating, updating, and displaying the results of a search of an object-model database are disclosed. In one embodiment, a method is disclosed that includes receiving video data recorded by a camera on a wearable computing device and, based on the video data, detecting a movement corresponding to a selection of an object. The method further includes, before the movement is complete, initiating a search of an object-model database on the object. The method still further includes, during the movement, periodically updating the search and causing the wearable computing device to overlay the object with object-models from the database corresponding to results of the search.
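The "periodically updating the search" step above can be modeled as re-scoring candidate object models each time the still-incomplete gesture reveals more of the object. Everything in this sketch is an assumption for illustration: the toy set-overlap `score`, the model list, and the idea of representing each frame's evidence as a set of observed features.

```python
def score(model, partial_object):
    # Toy similarity: fraction of the model's feature "signature"
    # covered by the evidence accumulated so far.
    return len(set(model) & set(partial_object)) / len(model)

def periodic_search(object_models, gesture_frames):
    """Yield the best-matching model after each new frame of the gesture."""
    seen = set()
    for frame in gesture_frames:
        seen |= frame  # accumulate evidence before the movement completes
        ranked = sorted(object_models,
                        key=lambda m: score(m, seen),
                        reverse=True)
        yield ranked[0]  # current best guess, suitable for overlay display

# Hypothetical models and three frames of a selection gesture, each
# revealing one more feature of the object being selected.
models = ["mug", "map", "lamp"]
frames = [{"m"}, {"m", "u"}, {"m", "u", "g"}]
results = list(periodic_search(models, frames))
```

Yielding an intermediate best match per frame is what lets a wearable display overlay candidate object-models while the selection movement is still in progress.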
Abstract:
Methods and devices for initiating a search are disclosed. In one embodiment, a method is disclosed that includes causing a camera on a wearable computing device to record video data, segmenting the video data into a number of layers and, based on the video data, detecting that a pointing object is in proximity to a first layer. The method further includes initiating a first search on the first layer. In another embodiment, a wearable computing device is disclosed that includes a camera configured to record video data, a processor, and data storage comprising instructions executable by the processor to segment the video data into a number of layers and, based on the video data, detect that a pointing object is in proximity to a first layer. The instructions are further executable by the processor to initiate a first search on the first layer.
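A minimal sketch of the layer-segmentation idea above, assuming layers correspond to depth ranges in the scene and "proximity" means the pointing object's depth falls within a layer's range plus a margin. The depth-based segmentation, the equal-width layers, and all thresholds are illustrative assumptions about one way such a method could work.

```python
def segment_into_layers(depths, num_layers):
    """Split the observed depth range into equal-width layers."""
    lo, hi = min(depths), max(depths)
    width = (hi - lo) / num_layers
    return [(lo + i * width, lo + (i + 1) * width)
            for i in range(num_layers)]

def layer_in_proximity(layers, pointer_depth, margin=0.1):
    """Return the index of the first layer the pointer is near, else None."""
    for i, (near, far) in enumerate(layers):
        if near - margin <= pointer_depth <= far + margin:
            return i  # this layer would receive the first search
    return None

# Hypothetical scene depths (meters) split into three layers; the
# pointing object sits at 1.5 m, inside the nearest layer.
layers = segment_into_layers([1.0, 2.0, 3.0, 4.0], num_layers=3)
target = layer_in_proximity(layers, pointer_depth=1.5)
```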
Abstract:
Methods and devices for initiating a search of an object are disclosed. In one embodiment, a method is disclosed that includes receiving video data recorded by a camera on a wearable computing device, where the video data comprises at least a first frame and a second frame. The method further includes, based on the video data, detecting an area in the first frame that is at least partially bounded by a pointing device and, based on the video data, detecting in the second frame that the area is at least partially occluded by the pointing device. The method still further includes initiating a search on the area.
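The two-frame trigger described above (an area partially bounded by the pointing device in the first frame, then partially occluded by it in the second) can be sketched with axis-aligned boxes standing in for detected regions. The overlap test, the thresholds, and the box representation are all assumptions for illustration.

```python
def overlap_fraction(area, pointer):
    """Fraction of `area` covered by `pointer`; boxes are (x0, y0, x1, y1)."""
    x0 = max(area[0], pointer[0]); y0 = max(area[1], pointer[1])
    x1 = min(area[2], pointer[2]); y1 = min(area[3], pointer[3])
    inter = max(0, x1 - x0) * max(0, y1 - y0)
    return inter / ((area[2] - area[0]) * (area[3] - area[1]))

def should_search(area, pointer_f1, pointer_f2, occlusion_threshold=0.2):
    """Trigger when the pointer goes from bounding the area to occluding it."""
    bounded = overlap_fraction(area, pointer_f1) < 0.05   # beside, not over
    occluded = overlap_fraction(area, pointer_f2) >= occlusion_threshold
    return bounded and occluded

area = (10, 10, 20, 20)           # candidate region from the first frame
pointer_before = (0, 0, 9, 9)     # pointing device next to the area
pointer_after = (12, 12, 25, 25)  # pointing device now covering part of it
trigger = should_search(area, pointer_before, pointer_after)
```

Requiring both conditions keeps an idle hand in the frame from firing the search; only the bounded-then-occluded sequence initiates it.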
Abstract:
An easy-to-use online video stabilization system and methods for its use are described. Videos are stabilized after capture, and therefore the stabilization works on all forms of video footage, including both legacy video and freshly captured video. In one implementation, the video stabilization system is fully automatic, requiring no input or parameter settings by the user other than the video itself. The video stabilization system uses a cascaded motion model to choose the correction that is applied to different frames of a video. In various implementations, the video stabilization system is capable of detecting and correcting high-frequency jitter artifacts, low-frequency shake artifacts, and rolling shutter artifacts, and of handling significant foreground motion, poor lighting, scene cuts, and both long and short videos.
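The cascaded-motion-model idea above can be sketched as fitting motion models of increasing degrees of freedom to each frame pair and applying the most expressive model whose fit looks reliable, falling back to simpler models otherwise. The model names, the inlier-ratio stand-in, and the threshold are toy placeholders, not the described system's actual estimators.

```python
MODELS = ["translation", "similarity", "homography"]  # simple -> complex

def fit_quality(model, frame_pair):
    # Toy stand-in: pretend we fit `model` to the frame pair and
    # return an inlier ratio in [0, 1].
    return frame_pair.get(model, 0.0)

def choose_model(frame_pair, min_quality=0.8):
    """Pick the most complex model that still fits reliably."""
    chosen = MODELS[0]  # always fall back to the simplest model
    for model in MODELS:
        if fit_quality(model, frame_pair) >= min_quality:
            chosen = model
    return chosen

# A frame pair where the homography fit is unstable (e.g. significant
# foreground motion), so the cascade falls back to a similarity model.
pair = {"translation": 0.95, "similarity": 0.90, "homography": 0.40}
model = choose_model(pair)
```

Falling back to a lower-degree-of-freedom model when a fit is unreliable is what keeps a single bad estimate (from foreground motion or poor lighting) from warping the corrected frame.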
Abstract:
A method for localizing the attention of a user of a first-person point-of-view (FPPOV) device is disclosed. The method includes receiving a plurality of images of an event, each image having been captured by one of a plurality of reference cameras during a first time duration. The method further includes receiving a first user-captured image captured by the FPPOV device during the first time duration. Based on the first user-captured image, a first image of the plurality of images is selected as a best-matched image for capturing a region-of-interest.
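The best-matched-image selection above can be sketched as scoring each reference camera's image against the user-captured FPPOV image and picking the highest-scoring one. The coarse-histogram similarity here is an assumption chosen to keep the sketch self-contained; a real system would more likely match visual features between the images.

```python
def similarity(img_a, img_b):
    """Toy score: overlap of coarse intensity histograms (lists of counts)."""
    return sum(min(a, b) for a, b in zip(img_a, img_b))

def best_matched_image(reference_images, user_image):
    """Return the index of the reference image most similar to the user's."""
    scores = [similarity(ref, user_image) for ref in reference_images]
    return max(range(len(scores)), key=scores.__getitem__)

# Hypothetical 3-bin histograms for three reference cameras covering the
# same time window, plus the FPPOV device's image; camera 2 is looking
# at roughly the same (bright) region the user is.
refs = [[5, 1, 0], [2, 2, 2], [0, 1, 5]]
user = [0, 2, 5]
best = best_matched_image(refs, user)
```

Restricting the candidates to reference images from the same time duration, as the abstract specifies, is what makes the winning image a proxy for where the user's attention was at that moment.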