Abstract:
A computer-implemented method and system for configuring a mobile device is provided. The method includes detecting location information for a location of a mobile device and determining whether the detected location information is associated with a location-based profile, wherein the location-based profile defines a set of one or more applications for display on a homepage of the mobile device based on the location of the mobile device. When the detected location information is associated with a location-based profile, the method includes selecting that location-based profile and configuring the mobile device based on the selected profile, such that activation icons of the set of one or more applications are provided for display on the homepage of the mobile device.
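
As a rough, non-authoritative sketch of the flow described above (the LocationProfile and Homepage classes and the resolve_region helper are illustrative assumptions, not part of the abstract), a minimal Python version might look like:

    from dataclasses import dataclass, field
    from typing import Callable, Dict, List, Optional, Tuple

    @dataclass
    class LocationProfile:
        name: str
        app_ids: List[str]          # applications whose icons should appear on the homepage

    @dataclass
    class Homepage:
        icons: List[str] = field(default_factory=list)

    def configure_device(location: Tuple[float, float],
                         profiles: Dict[str, LocationProfile],
                         resolve_region: Callable[[Tuple[float, float]], Optional[str]]) -> Homepage:
        """Select the profile matching the detected location and build the homepage."""
        region = resolve_region(location)   # e.g. reverse-geocode coordinates to "home" or "office"
        homepage = Homepage()
        if region is not None and region in profiles:
            homepage.icons = list(profiles[region].app_ids)   # show only this profile's app icons
        return homepage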
Abstract:
Aspects of the disclosure relate generally to using primary and secondary authentication to provide a user with access to protected information or features. To do so, a computing device may generate depth data based on a plurality of images of a user. The computing device may then compare the generated depth data to pre-stored depth data that was generated based on a pre-stored plurality of images. If authentication is successful, the user may be granted access to features of the computing device. If authentication is unsuccessful, then a secondary authentication may be performed. The secondary authentication may compare facial features of a captured image of the user to facial features of a pre-stored image of the user. If authentication is successful, then the primary authentication may be performed again. This second time, the user may be granted access if authentication is successful, or denied access if authentication is unsuccessful.
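
A minimal sketch of this primary/secondary flow, assuming caller-supplied depth_from, face_from, and matches helpers (all hypothetical names, not from the abstract), might be:

    from typing import Callable, Sequence

    def authenticate(frames: Sequence, stored_depth, stored_face,
                     depth_from: Callable, face_from: Callable, matches: Callable) -> bool:
        # Primary authentication: compare depth data generated from the captured
        # images against the pre-stored depth data.
        if matches(depth_from(frames), stored_depth):
            return True
        # Secondary authentication: compare facial features of a captured image
        # against facial features of the pre-stored image.
        if not matches(face_from(frames[0]), stored_face):
            return False
        # Secondary succeeded, so the primary check is performed once more;
        # this second result decides whether access is granted or denied.
        return matches(depth_from(frames), stored_depth)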
Abstract:
Embodiments of this invention relate to detecting and blurring images. In an embodiment, a system detects objects in a photographic image. The system includes an object detector module configured to detect regions of the photographic image that include objects of a particular type based at least on the content of the photographic image. The system further includes a false positive detector module configured to determine whether each region detected by the object detector module includes an object of the particular type based at least on information about the context in which the photographic image was taken.
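
Sketched in Python (the detect_regions and is_true_positive callables are assumed stand-ins for the object detector and false positive detector modules described above):

    from typing import Callable, List, Tuple

    Region = Tuple[int, int, int, int]   # (x, y, width, height) of a candidate region

    def detect_objects(image, context: dict,
                       detect_regions: Callable[..., List[Region]],
                       is_true_positive: Callable[..., bool]) -> List[Region]:
        """Detect candidate regions from image content, then keep only those that the
        capture context (e.g. camera location and pose) makes plausible."""
        candidates = detect_regions(image)                                # content-based detection
        return [r for r in candidates if is_true_positive(r, context)]   # context-based filtering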
Abstract:
The present invention relates to annotating images. In an embodiment, the present invention enables users to create annotations corresponding to three-dimensional objects while viewing two-dimensional images. In one embodiment, this is achieved by projecting a selecting object onto a three-dimensional model created from a plurality of two-dimensional images. The selecting object is input by a user while viewing a first image corresponding to a portion of the three-dimensional model. A location corresponding to the projection on the three-dimensional model is determined, and content entered by the user while viewing the first image is associated with the location. The content is stored together with the location information to form an annotation. The annotation can be retrieved and displayed together with other images corresponding to the location.
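
As a hedged sketch of the annotation flow, assuming a project helper that casts the 2-D selection onto the 3-D model and a visible helper that tests whether a 3-D point falls in another image's view (both hypothetical):

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    Point3D = Tuple[float, float, float]

    @dataclass
    class Annotation:
        location: Point3D   # point on the 3-D model hit by the projected selection
        content: str        # content entered by the user while viewing the first image

    def create_annotation(selection_2d, camera, model, content: str,
                          project: Callable[..., Point3D]) -> Annotation:
        """Project the user's selecting object onto the 3-D model and attach the content."""
        location = project(selection_2d, camera, model)
        return Annotation(location=location, content=content)

    def annotations_for_view(camera, annotations: List[Annotation],
                             visible: Callable[..., bool]) -> List[Annotation]:
        """Retrieve annotations whose stored locations are visible in another image."""
        return [a for a in annotations if visible(a.location, camera)]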
Abstract:
A panorama viewer is disclosed which facilitates navigation from within the panorama of a larger, structured system such as a map. The panorama viewer presents a viewport on a portion of a panoramic image, the viewport including a three-dimensional overlay rendered with the panoramic image. As the orientation of the viewport within the panoramic image changes, the three-dimensional overlay's orientation in three-dimensional space also changes as it is rendered with the panoramic image in a manner that matches the change in orientation of the viewport.
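
As an illustrative sketch only (render_pano and render_overlay are assumed rendering helpers, not part of the abstract), the key point is that the overlay is drawn with the same orientation as the viewport:

    from typing import Callable

    def render_viewport(panorama, overlay, yaw: float, pitch: float,
                        render_pano: Callable, render_overlay: Callable):
        """Render the visible portion of the panorama, then the 3-D overlay at the
        same yaw/pitch so its orientation tracks the viewport's."""
        frame = render_pano(panorama, yaw, pitch)            # warp the panorama for the viewport
        render_overlay(overlay, yaw, pitch, target=frame)    # matching orientation keeps them aligned
        return frame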
Abstract:
Methods and systems for generating panoramic images using images captured from, for instance, a camera-enabled mobile device (e.g., a smartphone, tablet, wearable computing device, or other device) are provided. More particularly, a panoramic image can be generated from images simultaneously captured from at least two cameras facing in different directions, such as a front facing camera and a rear facing camera of a camera-enabled mobile device. The images can be captured while the photographer rotates the device about an axis. The panoramic image can then be generated from the images captured from the different cameras. The images captured by the two cameras can be calibrated to account for the different positions of the cameras on the device and/or can be processed to account for the different resolutions of the images captured by the different cameras.
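
A minimal sketch of this pipeline, assuming calibrate, resample, and stitch helpers that are not part of the abstract:

    from typing import Callable, List

    def build_panorama(front_frames: List, rear_frames: List,
                       calibrate: Callable, resample: Callable, stitch: Callable):
        """Stitch a panorama from frames captured simultaneously by the front and rear cameras."""
        # Correct each set of frames for its camera's position on the device.
        front = [calibrate(f, camera="front") for f in front_frames]
        rear = [calibrate(f, camera="rear") for f in rear_frames]
        # Bring both sets to a common resolution before stitching.
        frames = [resample(f) for f in front + rear]
        return stitch(frames)   # the stitcher orders frames by capture angle about the rotation axis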
Abstract:
Systems and methods of the present disclosure provide techniques for providing user-specified ways of navigating through real-world three-dimensional geographic imagery that spans space and time. An exemplary method includes identifying a plurality of images depicting a geographic location at street level. The images are captured at the geographic location over a span of time. Using a processor, image data is associated with the plurality of images. The image data includes information representing positional data and a time dimension related to the plurality of images. Using the processor, a user's navigational intent to move backward and forward through the time dimension is predicted based on a navigational signal. The exemplary method further includes selecting a set of images from the plurality of images based on the image data and the predicted navigational intent. The set of images depicts conditions at the geographic location for one or more time periods.
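
As a rough sketch (StreetImage, predict_target_time, and fits are assumed names, not from the abstract), the selection step might look like:

    from dataclasses import dataclass
    from typing import Callable, List, Tuple

    @dataclass
    class StreetImage:
        position: Tuple[float, float]   # where the street-level image was captured
        timestamp: float                # when it was captured
        data: object = None

    def select_views(images: List[StreetImage], navigational_signal,
                     predict_target_time: Callable, fits: Callable) -> List[StreetImage]:
        """Choose which images to show, given the predicted intent to move through time."""
        target_time = predict_target_time(navigational_signal)   # e.g. from scroll direction and speed
        return [img for img in images if fits(img.position, img.timestamp, target_time)]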
Abstract:
Methods, systems, and apparatus including computer program products for using extracted image text are provided. In one implementation, a computer-implemented method is provided. The method includes receiving an input of one or more image search terms and identifying keywords from the received one or more image search terms. The method also includes searching a collection of keywords including keywords extracted from image text, retrieving an image associated with extracted image text corresponding to one or more of the image search terms, and presenting the image.
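
A minimal sketch of the keyword lookup, assuming an inverted index that maps keywords extracted from image text to image identifiers (the index and extract_keywords helper are illustrative assumptions):

    from typing import Callable, Dict, Iterable, List, Set

    def search_by_image_text(search_terms: Iterable[str],
                             index: Dict[str, Set[str]],   # keyword from extracted image text -> image ids
                             extract_keywords: Callable) -> List[str]:
        """Match keywords from the query against keywords extracted from text found in images."""
        hits: Set[str] = set()
        for keyword in extract_keywords(search_terms):
            hits |= index.get(keyword, set())
        return sorted(hits)   # identifiers of images to retrieve and present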