Abstract:
Systems and methods prevent or restrict the mining of content on a mobile device. For example, a method may include determining that content to be displayed on a screen includes content that matches a mining-restriction trigger, inserting a mining-restriction mark in the content that protects at least a portion of the content, and displaying the content with the mining-restriction mark on the screen. As another example, a method may include identifying, by a first application running on a mobile device, a mining-restriction mark in frame buffer data, the mining-restriction mark having been inserted by a second application, and determining whether the mining-restriction mark prevents mining of content. The method may also include preventing mining when the mining-restriction mark prevents mining and, when the mining-restriction mark does not prevent mining, determining a restriction for the data based on the mining-restriction mark and providing the restriction with the data for further processing.
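The second method above (identifying a mark in frame buffer data and deciding whether mining is permitted) can be illustrated with a minimal sketch. The mark encoding here, a magic byte prefix followed by a one-byte restriction code, and the names `MARK_PREFIX`, `NO_MINING`, and `REDACT_NUMBERS` are all assumptions for illustration; the abstract does not specify an encoding.

```python
# Hypothetical sketch of checking a mining-restriction mark in frame
# buffer data. The mark format (magic prefix + one restriction byte)
# is an assumption; the abstract does not define one.

MARK_PREFIX = b"\x00MRM"     # hypothetical magic bytes marking the restriction
NO_MINING = 0x00             # restriction code: mining fully blocked
REDACT_NUMBERS = 0x01        # restriction code: mining allowed, numbers redacted


def check_mining_restriction(frame_buffer: bytes):
    """Return (mining_allowed, restriction_code) for the frame buffer data."""
    idx = frame_buffer.find(MARK_PREFIX)
    if idx == -1:
        return True, None                    # no mark: mining unrestricted
    code = frame_buffer[idx + len(MARK_PREFIX)]
    if code == NO_MINING:
        return False, code                   # the mark prevents mining entirely
    return True, code                        # mining proceeds with a restriction
```

The first application would scan captured frame buffer data with a check like this before passing anything on for further processing.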
Abstract:
A method and apparatus for enabling dynamic product and vendor identification and the display of relevant purchase information are described herein. According to embodiments of the invention, a recognition process is executed on sensor data captured via a mobile computing device to identify one or more items, and to identify at least one product associated with the one or more items. Product and vendor information for the at least one product is retrieved and displayed via the mobile computing device. In the event a user gesture is detected in response to displaying the product and vendor information data, processing logic may submit a purchase order for the product (e.g., for an online vendor) or contact the vendor (e.g., for an in-store vendor).
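The final gesture-handling step can be sketched as a simple dispatch on vendor type: a purchase order for an online vendor, contact for an in-store vendor. The function name, the `type` field, and the return strings are hypothetical; the abstract does not specify a data model.

```python
# Hypothetical sketch of the gesture-handling branch: once product and
# vendor information is displayed and a confirming user gesture is
# detected, dispatch on the vendor type. Field names are assumptions.

def handle_gesture(vendor: dict) -> str:
    """Submit a purchase order (online vendor) or contact the vendor (in-store)."""
    if vendor["type"] == "online":
        return f"submitted purchase order to {vendor['name']}"
    return f"contacting {vendor['name']}"
```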
Abstract:
A system and method of identifying objects is provided. In one aspect, the system and method includes a hand-held device with a display, camera and processor. As the camera captures images (911, 921, 931) and displays them on the display, the processor compares the information retrieved in connection with one image (913-16) with information retrieved in connection with subsequent images (923-26). The processor uses the result of such comparison to determine the object that is likely to be of greatest interest to the user. The display simultaneously displays the images as they are captured, the location of the object in an image, and information retrieved for the object.
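The comparison step above can be sketched under one plausible reading: a candidate object that recurs in the information retrieved for successive images is the one most likely to interest the user. The persistence-count scoring rule is an assumed interpretation, not stated in the abstract.

```python
from collections import Counter

# Sketch of comparing results retrieved for successive captured images
# to pick the object likely of greatest interest. The scoring rule
# (count how often a candidate recurs across images) is an assumption.

def likely_object_of_interest(results_per_image):
    """results_per_image: one list of candidate objects per captured image."""
    counts = Counter()
    for candidates in results_per_image:
        counts.update(set(candidates))   # count each candidate once per image
    if not counts:
        return None
    return counts.most_common(1)[0][0]
```

A candidate seen in every frame ("mug" below) outranks candidates that appear only once.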
Abstract:
A server system receives a visual query from a client system. The visual query is an image containing text such as a picture of a document. At the receiving server or another server, optical character recognition (OCR) is performed on the visual query to produce text recognition data representing textual characters. Each character in a contiguous region of the visual query is individually scored according to its quality. The quality score of a respective character is influenced by the quality scores of neighboring or nearby characters. Using the scores, one or more high quality strings of characters are identified. Each high quality string has a plurality of high quality characters. A canonical source document matching the visual query that contains the one or more high quality textual strings is identified and retrieved. Then at least a portion of the canonical document is sent to the client system.
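The scoring and string-extraction steps can be sketched as follows: each character's quality score is blended with its neighbors' scores, then maximal runs of characters above a threshold become the high quality strings. The smoothing rule (a three-character moving average) and the 0.7 threshold are illustrative assumptions; the abstract only says neighboring scores influence each other.

```python
# Sketch of neighbor-influenced character scoring and high quality
# string extraction. The moving-average smoothing and the threshold
# value are assumptions for illustration.

def smooth_scores(raw):
    """Blend each character's raw score with its immediate neighbors' scores."""
    out = []
    for i in range(len(raw)):
        window = raw[max(0, i - 1): i + 2]
        out.append(sum(window) / len(window))
    return out


def high_quality_strings(text, raw_scores, threshold=0.7):
    """Return maximal runs of characters whose smoothed score passes threshold."""
    scores = smooth_scores(raw_scores)
    runs, current = [], []
    for ch, s in zip(text, scores):
        if s >= threshold:
            current.append(ch)
        elif current:
            runs.append("".join(current))
            current = []
    if current:
        runs.append("".join(current))
    return runs
```

Note how a single badly recognized character drags down its neighbors' smoothed scores, splitting the text into shorter high quality strings around it.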
Abstract:
Systems and methods are provided for suggesting actions for entities discovered in content on a mobile device. An example method can include running a mobile device emulator with a deep-link for a mobile application, determining a main entity for the deep link, mapping the main entity to the deep link, storing the mapping of the main entity to the deep link in a memory, and providing the mapping to a mobile device, the mapping enabling a user of the mobile device to select the deep link when the main entity is displayed on a screen of the mobile device. Another example method can include identifying at least one entity in content generated by a mobile application, identifying an action mapped to the at least one entity, the action representing a deep link into a second mobile application, and providing a control to initiate the action for the entity.
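The mapping built by the first method can be sketched as a small index: each main entity determined in the emulator is stored against its deep link, and a lookup later yields the actions to offer when that entity appears on screen. The class and the link format are hypothetical.

```python
# Hypothetical sketch of the entity-to-deep-link mapping: entries are
# stored as each deep link is run in the emulator and its main entity
# determined, then looked up when an entity is displayed on a screen.

class DeepLinkIndex:
    def __init__(self):
        self._by_entity = {}

    def map_entity(self, entity: str, deep_link: str):
        """Store the mapping of a main entity to a deep link."""
        self._by_entity.setdefault(entity, []).append(deep_link)

    def actions_for(self, entity: str):
        """Deep links to offer when this entity is displayed on screen."""
        return self._by_entity.get(entity, [])
```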
Abstract:
Methods, systems, and apparatus include computer programs encoded on a computer-readable storage medium, including a method for providing content. Snapshots associated with use of a computing device by a user are received. Each snapshot is based on content presented to the user. The snapshots are evaluated. For each respective snapshot, a respective set of entities indicated by the respective snapshot is identified. Indications of the respective set of entities and a respective timestamp indicating a respective time that the respective snapshot was captured are associated and stored. Based on a first snapshot of the snapshots, a first time to present one or more information cards to the user is determined. At the first time, entities having a time stamp that corresponds to the first time are located. An information card is generated based on the located entities. The generated information card is provided for presentation to the user.
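The association and lookup steps can be sketched as follows: each entity from a snapshot is stored with that snapshot's capture timestamp, and at presentation time the entities whose timestamps correspond to the chosen time are located. The 60-second window is an illustrative assumption; the abstract does not define what "corresponds to" means.

```python
# Sketch of indexing snapshot entities by capture timestamp and locating
# entities near a chosen presentation time. The window width is assumed.

def index_snapshots(snapshots):
    """snapshots: list of (timestamp, [entities]); returns (entity, ts) pairs."""
    records = []
    for ts, entities in snapshots:
        for entity in entities:
            records.append((entity, ts))
    return records


def entities_near(records, first_time, window=60):
    """Entities whose timestamp falls within `window` seconds of first_time."""
    return sorted({e for e, ts in records if abs(ts - first_time) <= window})
```

An information card for the first time would then be generated from the entities this lookup returns.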
Abstract:
A method and apparatus for enabling a searchable history of real-world user experiences is described. The method may include capturing media data by a mobile computing device. The method may also include transmitting the captured media data to a server computer system, the server computer system to perform one or more recognition processes on the captured media data and add the captured media data to a history of real-world experiences of a user of the mobile computing device when the one or more recognition processes find a match. The method may also include transmitting a query of the user to the server computer system to initiate a search of the history of real-world experiences, and receiving results relevant to the query that include data indicative of the media data in the history of real-world experiences.
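The server-side behavior can be sketched as a small history store: media is added only when recognition produces a match, and a text query returns the matching entries. The record fields and the substring matching rule over recognized labels are assumptions for illustration.

```python
# Hypothetical sketch of the server-side history of real-world
# experiences. Media is added only when a recognition process found a
# match (here: produced labels); queries match against those labels.

class ExperienceHistory:
    def __init__(self):
        self._records = []

    def add(self, media_id: str, labels):
        """Add media to the history only if recognition produced labels."""
        if labels:
            self._records.append({"media_id": media_id, "labels": list(labels)})

    def search(self, query: str):
        """Return media identifiers whose labels match the user's query."""
        q = query.lower()
        return [r["media_id"] for r in self._records
                if any(q in label.lower() for label in r["labels"])]
```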
Abstract:
A client system receives an image such as a photograph, a screen shot, a scanned image, or a video frame. The image has a first resolution which is likely larger than a maximum resolution for visual queries. As such, if a visual query were created from the image, some resolution would be lost. Instead, a user selects a region of interest within the image. The region of interest has a second resolution, which is smaller than the first resolution. The client system then creates a visual query from the region of interest. The visual query has a resolution no larger than a pre-defined maximum resolution for visual queries. Because the visual query is created from the region of interest rather than the entire received image, most of the resolution is concentrated specifically on the region of interest. The visual query is then sent to a server system.
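The resolution logic above can be sketched arithmetically: the region of interest is sent at full detail if it already fits within the maximum, and is uniformly downscaled otherwise, so all of the available query resolution goes to the selected region. The pixel budget value is a hypothetical stand-in for the pre-defined maximum.

```python
# Sketch of sizing a visual query built from a region of interest: keep
# the crop at full detail if it fits, else downscale uniformly to the
# maximum query resolution. MAX_QUERY_PIXELS is an assumed value.

MAX_QUERY_PIXELS = 640 * 480      # hypothetical pre-defined maximum


def visual_query_size(roi_width: int, roi_height: int):
    """Return the (width, height) actually sent for the region of interest."""
    pixels = roi_width * roi_height
    if pixels <= MAX_QUERY_PIXELS:
        return roi_width, roi_height               # ROI fits: full detail kept
    scale = (MAX_QUERY_PIXELS / pixels) ** 0.5     # uniform downscale factor
    return int(roi_width * scale), int(roi_height * scale)
```

A small region from a large photo is thus transmitted without any loss, whereas sending the whole photo would have forced a downscale across everything.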
Abstract:
A server system receives a visual query from a client system. The visual query is an image containing text such as a picture of a document. At the receiving server or another server, optical character recognition (OCR) is performed on the visual query to produce text recognition data representing textual characters. Each character in a contiguous region of the visual query is individually scored according to its quality. The quality score of a respective character is influenced by the quality scores of neighboring or nearby characters. Using the scores, one or more high quality strings of characters are identified. Each high quality string has a plurality of high quality characters. A canonical document containing the one or more high quality textual strings is retrieved. At least a portion of the canonical document is sent to the client system.