Abstract:
This technology relates to requesting missing data associated with a geographic location. The system may comprise a memory for storing lists of information elements, each list associated with one or more physical types of geographic locations, and a processor. The processor may be programmed to receive a current location of a client device and determine that the current location is proximate to a first geographic location. The processor may determine a physical location type of the first geographic location and retrieve a list of information elements associated with that physical location type. The list of information elements may be compared to a set of information elements associated with the first geographic location, and a determination of a missing information element may be made. A notification may be generated based on the missing information element and the determination that the client device is proximate to the first geographic location.
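A minimal sketch of the flow described above, under assumed data structures: the REQUIRED_ELEMENTS table, the GeoLocation class, and the maybe_notify helper are illustrative names, not part of the original disclosure.

```python
import math
from dataclasses import dataclass, field

# Required information elements per physical location type (assumed example data).
REQUIRED_ELEMENTS = {
    "restaurant": {"name", "hours", "menu", "phone"},
    "museum": {"name", "hours", "admission_fee"},
}

@dataclass
class GeoLocation:
    name: str
    location_type: str
    lat: float
    lng: float
    elements: dict = field(default_factory=dict)  # stored information elements

def distance_m(lat1, lng1, lat2, lng2):
    """Haversine distance in metres between two lat/lng points."""
    r = 6371000.0
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp, dl = math.radians(lat2 - lat1), math.radians(lng2 - lng1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def missing_elements(place: GeoLocation) -> set:
    """Compare the place's stored elements against the list for its location type."""
    return REQUIRED_ELEMENTS.get(place.location_type, set()) - set(place.elements)

def maybe_notify(device_lat, device_lng, place: GeoLocation, radius_m=50.0):
    """Return a notification string if the device is proximate and data is missing."""
    if distance_m(device_lat, device_lng, place.lat, place.lng) <= radius_m:
        gaps = missing_elements(place)
        if gaps:
            return f"You are near {place.name}. Can you add: {', '.join(sorted(gaps))}?"
    return None
```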
Abstract:
Methods, systems, and apparatus, including computer program products, for using extracted image text are provided. In one implementation, a computer-implemented method is provided. The method includes receiving an input of one or more image search terms and identifying keywords from the received one or more image search terms. The method also includes searching a collection of keywords including keywords extracted from image text, retrieving an image associated with extracted image text corresponding to one or more of the image search terms, and presenting the image.
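A hedged sketch of keyword lookup over text extracted from images. The ImageTextIndex class and normalise helper are assumptions for illustration; in practice the keywords would come from an OCR pipeline run over the image collection.

```python
from collections import defaultdict

def normalise(term: str) -> str:
    """Lower-case and strip punctuation so search terms match indexed keywords."""
    return "".join(c for c in term.lower() if c.isalnum())

class ImageTextIndex:
    def __init__(self):
        # keyword -> set of image identifiers whose extracted text contains it
        self._index = defaultdict(set)

    def add_image(self, image_id: str, extracted_text: str):
        """Index keywords extracted from an image's text."""
        for word in extracted_text.split():
            self._index[normalise(word)].add(image_id)

    def search(self, query: str) -> set:
        """Return images whose extracted text matches any of the search terms."""
        results = set()
        for term in query.split():
            results |= self._index.get(normalise(term), set())
        return results

# Usage: index images by their extracted text, then retrieve by search terms.
idx = ImageTextIndex()
idx.add_image("img_001", "Joe's Coffee open 7am to 9pm")
idx.add_image("img_002", "City Museum entrance")
print(idx.search("coffee"))  # {'img_001'}
```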
Abstract:
The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
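A simplified sketch of the annotation flow: project a user's two-dimensional selection onto a three-dimensional model, record the intersection point, and store the entered content with it. The plane-based model representation and the Annotation structure are illustrative assumptions, not the patented implementation.

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class Annotation:
    position: np.ndarray   # 3-D location on the model
    content: str           # user-entered text

def ray_plane_intersection(origin, direction, plane_point, plane_normal):
    """Return the point where a ray meets a plane, or None if parallel or behind."""
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-9:
        return None
    t = np.dot(plane_normal, plane_point - origin) / denom
    return origin + t * direction if t >= 0 else None

def annotate(camera_pos, pixel_ray_dir, model_planes, content):
    """Project the selecting ray onto the nearest model plane and store the content.

    camera_pos and pixel_ray_dir are np.ndarray vectors; model_planes is a list
    of (plane_point, plane_normal) pairs approximating the 3-D model.
    """
    best, best_dist = None, float("inf")
    for plane_point, plane_normal in model_planes:
        hit = ray_plane_intersection(camera_pos, pixel_ray_dir, plane_point, plane_normal)
        if hit is not None:
            dist = np.linalg.norm(hit - camera_pos)
            if dist < best_dist:
                best, best_dist = hit, dist
    return Annotation(position=best, content=content) if best is not None else None
```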
Abstract:
The present invention relates to using image content to facilitate navigation in panoramic image data. In an embodiment, a computer-implemented method for navigating in panoramic image data includes: (1) determining an intersection of a ray and a virtual model, wherein the ray extends from a camera viewport of an image and the virtual model comprises a plurality of facade planes; (2) retrieving a panoramic image; (3) orienting the panoramic image to the intersection; and (4) displaying the oriented panoramic image.
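A hedged sketch of the four steps above: intersect a viewing ray with facade planes, then orient a retrieved panorama toward the intersection. Facade planes are represented here as (point, normal) pairs, x is assumed east and y north with heading measured clockwise from north; all names are illustrative assumptions.

```python
import math
import numpy as np

def first_facade_hit(origin, direction, facades):
    """Step (1): nearest intersection of the camera ray with any facade plane."""
    best, best_t = None, float("inf")
    for point, normal in facades:
        denom = np.dot(normal, direction)
        if abs(denom) < 1e-9:
            continue  # ray parallel to this facade plane
        t = np.dot(normal, point - origin) / denom
        if 0 <= t < best_t:
            best, best_t = origin + t * direction, t
    return best

def yaw_towards(pano_position, target):
    """Steps (3)-(4): heading (degrees) that points the panorama at the target."""
    dx, dy = target[0] - pano_position[0], target[1] - pano_position[1]
    return math.degrees(math.atan2(dx, dy)) % 360.0
```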
Abstract:
A panorama viewer is disclosed which facilitates navigation from within the panorama of a larger, structured system such as a map. The panorama viewer presents a viewport on a portion of a panoramic image, the viewport including a three-dimensional overlay rendered with the panoramic image. As the orientation of the viewport within the panoramic image changes, the three-dimensional overlay is re-rendered with the panoramic image so that its orientation in three-dimensional space matches the change in orientation of the viewport.
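A minimal sketch of keeping a three-dimensional overlay (for example, a road arrow) aligned with the world as the viewport pans across the panorama. The PanoramaViewport class and its render interface are assumptions; the key idea shown is that the overlay's on-screen rotation is its world heading minus the viewport's current heading.

```python
class PanoramaViewport:
    def __init__(self, overlay_world_heading_deg: float):
        self.heading = 0.0                        # current viewport heading
        self.overlay_heading = overlay_world_heading_deg

    def pan_to(self, new_heading_deg: float):
        """Change the viewport orientation and recompute the overlay's rotation."""
        self.heading = new_heading_deg % 360.0
        return self.render()

    def render(self):
        # The overlay rotates opposite to the pan so it keeps pointing at the
        # same world direction while being drawn with the panoramic image.
        overlay_screen_rotation = (self.overlay_heading - self.heading) % 360.0
        return {"viewport_heading": self.heading,
                "overlay_rotation": overlay_screen_rotation}

viewer = PanoramaViewport(overlay_world_heading_deg=90.0)  # overlay points east
print(viewer.pan_to(45.0))  # overlay rendered at 45 degrees relative to the viewport
```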
Abstract:
The technology uses image content to facilitate navigation in panoramic image data. Aspects include providing a first image including a plurality of avatars, in which each avatar corresponds to an object within the first image, and determining an orientation of at least one of the plurality of avatars to a point of interest within the first image. A viewport is determined for a first avatar in accordance with its orientation relative to the point of interest, and the point of interest is included within the first avatar's viewport. In response to received user input, a second image is selected that includes at least a second avatar and the point of interest from the first image. A viewport of the second avatar in the second image is determined, and the second image is oriented to align the second avatar's viewport with the point of interest, providing navigation between the first and second images.
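A rough sketch, under assumed data structures, of the avatar-based navigation described above: select a second image containing another avatar and the same point of interest, then orient it so that avatar's viewport faces the point. The Avatar and PanoImage classes and the planar coordinate frame are assumptions for illustration.

```python
import math
from dataclasses import dataclass

@dataclass
class Avatar:
    avatar_id: str
    x: float
    y: float

@dataclass
class PanoImage:
    image_id: str
    avatars: list            # Avatar instances visible in this image
    poi_positions: dict      # poi_id -> (x, y) within this image's frame

def heading_to(avatar: Avatar, point):
    """Heading in degrees from an avatar to a point of interest (x east, y north)."""
    return math.degrees(math.atan2(point[0] - avatar.x, point[1] - avatar.y)) % 360.0

def navigate(images, current_image_id, poi_id):
    """Select a second image containing the point of interest and another avatar,
    and return the heading that aligns that avatar's viewport with the point."""
    for img in images:
        if img.image_id != current_image_id and poi_id in img.poi_positions and img.avatars:
            avatar = img.avatars[0]
            return img.image_id, heading_to(avatar, img.poi_positions[poi_id])
    return None
```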
Abstract:
A street-level imagery acquisition and selection process identifies which images are published in a street field view. An imagery database includes panoramas each corresponding to a set of images acquired from a single viewpoint. The panoramas are attached to corresponding positions on a road network graph. The graph is divided into a set of selection paths, each of which includes a topologically linear sequence of road segments. Each selection path is evaluated to select a set of panoramas to be published in the path. Panoramas of interior road segments are selected before panoramas at intersections. Selected panorama identifiers for each interior road segment of the selection paths and each intersection correspond to a position along the road network graph. The selected panorama identifiers are then published in the street field view.
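A simplified sketch of the selection pass: treat each selection path as a sequence of segments, pick panoramas for interior segments first, then fill intersections. The Panorama and RoadSegment data model and the "closest to segment midpoint" criterion are assumptions for illustration, not the disclosed selection rules.

```python
from dataclasses import dataclass

@dataclass
class Panorama:
    pano_id: str
    position: float           # distance along the selection path

@dataclass
class RoadSegment:
    start: float
    end: float
    is_intersection: bool

def select_for_segment(segment, panoramas):
    """Pick the panorama nearest the segment midpoint, if any fall inside it."""
    inside = [p for p in panoramas if segment.start <= p.position <= segment.end]
    if not inside:
        return None
    mid = (segment.start + segment.end) / 2.0
    return min(inside, key=lambda p: abs(p.position - mid))

def select_for_path(segments, panoramas):
    """Interior road segments are handled before intersections, as described above."""
    published = {}
    ordered = ([s for s in segments if not s.is_intersection]
               + [s for s in segments if s.is_intersection])
    for seg in ordered:
        pano = select_for_segment(seg, panoramas)
        if pano:
            published[(seg.start, seg.end)] = pano.pano_id
    return published
```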