Abstract:
PROBLEM TO BE SOLVED: To solve the problem that individual or thumbnail display of digital photographs gives a viewer only a very limited sense of what the person who took the photographs experienced when the images were captured.
SOLUTION: A method includes the steps of: receiving a first image captured by an image capture device and first metadata related thereto, including geographical information for the contents of the first image; receiving a second image and second metadata related thereto, including geographical information for the contents of the second image; determining a viewpoint of the first image, which represents the location and orientation of the image capture device when the first image was captured; and creating a view including the first image and the second image. The placement of the first image in the view is based on the first metadata and the viewpoint of the first image, and the placement of the second image relative to the first image is based on the second metadata and the viewpoint of the first image. COPYRIGHT: (C)2011,JPO&INPIT
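As a rough illustration of the placement step, the sketch below positions a second geotagged photograph relative to the first photograph's viewpoint (camera location plus heading). The PhotoMeta fields, the place_relative helper, and the local flat-earth approximation are assumptions made for the example, not details taken from the abstract.

```python
# Minimal sketch: place a second geotagged photo in a view anchored at the
# first photo's viewpoint (camera location + heading). Names are illustrative.
from dataclasses import dataclass
import math

@dataclass
class PhotoMeta:
    lat: float      # degrees
    lon: float      # degrees
    heading: float  # camera bearing in degrees, clockwise from north

def place_relative(first: PhotoMeta, second: PhotoMeta):
    """Return (x, y) metres of the second photo in a frame whose origin is the
    first camera's location and whose +y axis is the first camera's heading."""
    # Local equirectangular approximation, adequate for nearby photos.
    r_earth = 6_371_000.0
    dlat = math.radians(second.lat - first.lat)
    dlon = math.radians(second.lon - first.lon)
    north = dlat * r_earth
    east = dlon * r_earth * math.cos(math.radians(first.lat))
    # Rotate the east/north offset into the first camera's view frame.
    h = math.radians(first.heading)
    x = east * math.cos(h) - north * math.sin(h)   # to the right of the viewpoint
    y = east * math.sin(h) + north * math.cos(h)   # in front of the viewpoint
    return x, y

# Usage example with two nearby geotagged photos.
print(place_relative(PhotoMeta(48.1374, 11.5755, 90.0),
                     PhotoMeta(48.1376, 11.5760, 45.0)))
```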
Abstract:
In some embodiments, a method of processing a video sequence may include receiving an input video sequence having an input video sequence resolution, aligning images from the input video sequence, reducing noise in the aligned images, and producing an output video sequence from the reduced noise images, wherein the output video sequence has the same resolution as the input video sequence resolution. Other embodiments are disclosed and claimed.
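A minimal sketch of such a pipeline is shown below, with translational phase-correlation alignment and temporal averaging standing in for whatever alignment and noise-reduction steps an actual embodiment would use; the function name and the five-frame window are illustrative assumptions, and the output keeps the input resolution.

```python
# Rough sketch of the described flow: align neighbouring frames to each
# reference frame, average to reduce noise, and emit frames at the same
# resolution as the input. Grayscale uint8 frames of equal size are assumed.
import cv2
import numpy as np

def denoise_sequence(frames, window=5):
    """frames: list of HxW uint8 grayscale images; returns a list of the same
    length and resolution with temporally averaged (noise-reduced) frames."""
    out = []
    for i, ref in enumerate(frames):
        ref_f = np.float32(ref)
        stack = [ref_f]
        for j in range(max(0, i - window // 2), min(len(frames), i + window // 2 + 1)):
            if j == i:
                continue
            cur = np.float32(frames[j])
            (dx, dy), _ = cv2.phaseCorrelate(ref_f, cur)   # estimate translation
            m = np.float32([[1, 0, -dx], [0, 1, -dy]])     # warp cur onto ref
            stack.append(cv2.warpAffine(cur, m, (ref.shape[1], ref.shape[0])))
        out.append(np.uint8(np.clip(np.mean(stack, axis=0), 0, 255)))
    return out
```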
Abstract:
Embodiments of the invention describe processing first image data and 3D point cloud data to extract a first planar segment from the 3D point cloud data. This first planar segment is associated with an object contained in the first image data. Second image data is received, the second image data including the object captured in the first image data. A second planar segment associated with the object is generated, the second planar segment being geometrically consistent with the object as captured in the second image data. This planar segment is generated based, at least in part, on the second image data, the first image data, and the first planar segment. Embodiments of the invention may augment the second image data with content associated with the object. This augmented image may be displayed such that the content appears geometrically consistent with the second planar segment.
Abstract:
In some embodiments, a method of processing a video sequence may include receiving an input video sequence having an input video sequence resolution, aligning images from the input video sequence, reducing noise in the aligned images, and producing an output video sequence from the reduced-noise images, wherein the output video sequence has the same resolution as the input video sequence. Other embodiments are disclosed and claimed.
Abstract:
A method for generating high-accuracy estimates of the 3D orientation of a video camera 110 within a global frame of reference comprises the following steps. A first type of orientation estimates 160 is taken from a camera-mounted orientation sensor such as an accelerometer or gyroscope; these are formatted as an orientation time series 170 and input to a low-pass filter 180. A second type of orientation estimates 130 is produced from a video sequence 120 using an image-based alignment method: successive images of the video sequence 120 are aligned using an estimated inter-frame 3D camera rotation and are subjected to a high-pass filtering operation 150. The outputs of the high-pass and low-pass filters are combined, producing a stabilized three-dimensional camera orientation 185 which is used to produce an output video sequence 195. The method of estimating 3D camera rotation preferably involves computing a Gaussian multi-resolution representation (MRR) of the successive images.
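The high-pass/low-pass combination can be read as a complementary filter. The sketch below fuses a sensor-derived yaw track (low-frequency, drift-free reference) with image-derived inter-frame yaw (high-frequency detail); the 1-D yaw simplification and the alpha constant are assumptions made for brevity, whereas the abstract describes full 3D rotations.

```python
# Complementary-filter sketch: low-pass the sensor orientation, high-pass the
# image-based inter-frame rotation, and combine into one stabilized track.
import numpy as np

def complementary_filter(sensor_yaw, vision_yaw, alpha=0.95):
    """sensor_yaw: absolute yaw (rad) per frame from the orientation sensor.
    vision_yaw:  yaw per frame integrated from image-based inter-frame rotations.
    Returns a fused, stabilized yaw track of the same length."""
    sensor_yaw = np.asarray(sensor_yaw, dtype=float)
    vision_yaw = np.asarray(vision_yaw, dtype=float)
    fused = np.empty_like(sensor_yaw)
    fused[0] = sensor_yaw[0]
    for t in range(1, len(sensor_yaw)):
        inter_frame = vision_yaw[t] - vision_yaw[t - 1]   # high-frequency detail
        # High-pass contribution from vision, low-pass contribution from the sensor.
        fused[t] = alpha * (fused[t - 1] + inter_frame) + (1 - alpha) * sensor_yaw[t]
    return fused
```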
Abstract:
Embodiments of the invention describe processing first image data and 3D point cloud data to extract a first planar segment from the 3D point cloud data. This first planar segment is associated with an object included in the first image data. Second image data is received, the second image data including the object captured in the first image data. A second planar segment related to the object is generated, where the second planar segment is geometrically consistent with the object as captured in the second image data. This planar segment is generated based, at least in part, on the second image data, the first image data, and the first planar segment. Embodiments of the invention may further augment the second image data with content associated with the object. This augmented image may be displayed such that the content appears geometrically consistent with the second planar segment.
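Two pieces of this flow can be sketched compactly: a least-squares plane fit to the 3D points associated with the object, and a homography warp that keeps augmentation content geometrically consistent with the planar segment in the second image. The corner correspondences and the function names below are assumed for illustration and are not taken from the abstract.

```python
# Illustrative sketch: (1) fit a planar segment to 3D point samples via SVD,
# (2) warp augmentation content onto that plane's quadrilateral in a new view.
import cv2
import numpy as np

def fit_plane(points):
    """points: Nx3 array of 3D samples. Returns (centroid, unit normal) of the
    least-squares plane, via SVD of the centered points."""
    centroid = points.mean(axis=0)
    _, _, vt = np.linalg.svd(points - centroid)
    return centroid, vt[-1]          # smallest singular direction = plane normal

def augment_on_plane(content, corners_in_view, view):
    """Warp `content` (HxWx3) onto the quadrilateral `corners_in_view`
    (4x2 pixel coordinates of the planar segment in the second image) over `view`."""
    h, w = content.shape[:2]
    src = np.float32([[0, 0], [w, 0], [w, h], [0, h]])
    hmat = cv2.getPerspectiveTransform(src, np.float32(corners_in_view))
    warped = cv2.warpPerspective(content, hmat, (view.shape[1], view.shape[0]))
    mask = warped.any(axis=2, keepdims=True)
    return np.where(mask, warped, view)
```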
Abstract:
Methods and systems to create an image in which objects at different focal depths all appear to be in focus. In an embodiment, all objects in the scene may appear in focus. Non-stationary cameras may be accommodated, so that variations in the scene resulting from camera jitter or other camera motion may be tolerated. An image alignment process may be used, and the aligned images may be blended using a process that may be implemented using logic that has relatively limited performance capability. The blending process may take a set of aligned input images and convert each image into a simplified Laplacian pyramid (LP). The LP is a data structure that includes several processed versions of the image, each version being of a different size. The set of aligned images is therefore converted into a corresponding set of LPs. The LPs may be combined into a composite LP, which may then undergo Laplacian pyramid reconstruction (LPR). The output of the LPR process is the final blended image.
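The sketch below follows that outline: build a Laplacian pyramid per aligned input with cv2.pyrDown/cv2.pyrUp, merge the pyramids coefficient-wise (largest magnitude is used here as a stand-in for whatever combination rule an embodiment applies), and collapse the composite pyramid to obtain the blended image. Pre-aligned grayscale float images and a fixed level count are assumptions of this example.

```python
# Compact Laplacian-pyramid focus-stacking sketch: per-image pyramids, a
# coefficient-wise merge, then pyramid reconstruction of the blended result.
import cv2
import numpy as np

def laplacian_pyramid(img, levels=4):
    pyr = []
    cur = img.astype(np.float32)
    for _ in range(levels):
        down = cv2.pyrDown(cur)
        up = cv2.pyrUp(down, dstsize=(cur.shape[1], cur.shape[0]))
        pyr.append(cur - up)          # band-pass detail at this level
        cur = down
    pyr.append(cur)                   # low-frequency residual
    return pyr

def blend_focus_stack(aligned_images, levels=4):
    pyrs = [laplacian_pyramid(im, levels) for im in aligned_images]
    fused = []
    for level in zip(*pyrs):
        stack = np.stack(level)                      # (n_images, h, w)
        pick = np.abs(stack).argmax(axis=0)          # keep the strongest detail
        fused.append(np.take_along_axis(stack, pick[None], axis=0)[0])
    # Laplacian pyramid reconstruction: collapse from coarse to fine.
    out = fused[-1]
    for detail in reversed(fused[:-1]):
        out = cv2.pyrUp(out, dstsize=(detail.shape[1], detail.shape[0])) + detail
    return out
```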