Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for scoring concept terms using a deep network. One of the methods includes receiving an input comprising a plurality of features of a resource, wherein each feature is a value of a respective attribute of the resource; processing each of the features using a respective embedding function to generate one or more numeric values; processing the numeric values to generate an alternative representation of the features of the resource, wherein processing the numeric values comprises applying one or more non-linear transformations to the numeric values; and processing the alternative representation of the input to generate a respective relevance score for each concept term in a pre-determined set of concept terms, wherein each of the respective relevance scores measures a predicted relevance of the corresponding concept term to the resource.
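The pipeline in this abstract (per-feature embedding functions, non-linear transformations, and a scorer over a fixed set of concept terms) can be pictured with a minimal NumPy sketch. The attribute names, layer sizes, and random parameters below are illustrative assumptions, not the patented implementation.

```python
# Minimal sketch of the described pipeline; all dimensions and attribute
# names are assumptions for illustration only.
import numpy as np

rng = np.random.default_rng(0)

# One embedding table per resource attribute (hypothetical attributes/sizes).
EMBED_DIM = 8
vocab_sizes = {"domain": 50, "language": 10, "content_type": 5}
embeddings = {attr: rng.normal(size=(size, EMBED_DIM))
              for attr, size in vocab_sizes.items()}

def embed_feature(attr, value_id):
    """Embedding function: map one feature value to a few numeric values."""
    return embeddings[attr][value_id]

def score_concept_terms(feature_ids, w1, w2, scorer):
    # 1. Embed each feature and concatenate the resulting numeric values.
    x = np.concatenate([embed_feature(a, v) for a, v in feature_ids.items()])
    # 2. Apply non-linear transformations (two tanh layers here).
    hidden = np.tanh(x @ w1)
    alt_repr = np.tanh(hidden @ w2)          # alternative representation
    # 3. One relevance score per concept term in the pre-determined set.
    return alt_repr @ scorer

NUM_CONCEPT_TERMS = 20
in_dim = EMBED_DIM * len(vocab_sizes)
w1 = rng.normal(size=(in_dim, 32))
w2 = rng.normal(size=(32, 16))
scorer = rng.normal(size=(16, NUM_CONCEPT_TERMS))

scores = score_concept_terms({"domain": 3, "language": 1, "content_type": 0},
                             w1, w2, scorer)
print(scores.shape)  # (20,): a relevance score for each concept term
```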
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for training a model using parameter server shards. One of the methods includes receiving, at a parameter server shard configured to maintain values of a disjoint partition of the parameters of the model, a succession of respective requests for parameter values from each of a plurality of replicas of the model; in response to each request, downloading a current value of each requested parameter to the replica from which the request was received; receiving a succession of uploads, each upload including respective delta values for each of the parameters in the partition maintained by the shard; and repeatedly updating values of the parameters in the partition maintained by the parameter server shard based on the uploads of delta values to generate current parameter values.
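A minimal single-process sketch of the shard behavior described above, assuming a simple additive rule for applying uploaded delta values; the class and method names are illustrative, not from the patent.

```python
# Sketch of a parameter server shard: serves current values on request and
# applies uploaded delta values to its disjoint partition of the parameters.
class ParameterServerShard:
    def __init__(self, partition):
        # `partition` maps parameter names (a disjoint subset of the model's
        # parameters) to their current values.
        self.values = dict(partition)

    def download(self, requested_names):
        """Return the current value of each requested parameter to a replica."""
        return {name: self.values[name] for name in requested_names}

    def upload(self, deltas):
        """Apply delta values from a replica to the maintained partition."""
        for name, delta in deltas.items():
            self.values[name] += delta

# A model replica repeatedly downloads values, computes deltas locally
# (e.g. from a gradient step), and uploads them back to the shard.
shard = ParameterServerShard({"w0": 0.5, "w1": -1.2})
params = shard.download(["w0", "w1"])
shard.upload({"w0": -0.01, "w1": 0.02})
print(shard.values)
```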
Abstract:
Methods, systems, and apparatus, including computer programs encoded on a computer storage medium, for optimizing content presentation. In one aspect, a system includes a training database that stores training data including attribute information about users and corresponding proxy metrics quantifying behavior by the users following content presentation; a content database; a model generator that accesses the training data and trains a model for content distribution; and a content distribution server that receives a content request, uses the model to select content, and transmits data identifying the selected content, wherein the model: obtains a set of attributes for a user associated with the request, receives information about a given content item, predicts a proxy metric based on the set of attributes and the information about the content item, the predicted proxy metric providing information about subject retention or awareness; and identifies the given content item for distribution if the predicted proxy metric meets a threshold.
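As a rough illustration of the selection step, the sketch below uses an assumed pre-trained linear model over a handful of user and content features to predict a proxy metric and keeps content whose prediction meets a threshold; the feature names, weights, and model form are all assumptions.

```python
# Toy sketch of proxy-metric prediction and thresholded content selection;
# the linear model and all names here are illustrative assumptions.
import math

def predict_proxy_metric(user_attrs, content_info, weights, bias):
    """Predict a proxy metric (e.g. likelihood of later retention/awareness)."""
    features = {**user_attrs, **content_info}
    z = bias + sum(weights.get(k, 0.0) * v for k, v in features.items())
    return 1.0 / (1.0 + math.exp(-z))   # squash to a 0..1 score

def select_content(user_attrs, candidates, weights, bias, threshold=0.5):
    """Identify candidate content whose predicted proxy metric meets the threshold."""
    return [c_id for c_id, info in candidates.items()
            if predict_proxy_metric(user_attrs, info, weights, bias) >= threshold]

user = {"age_bucket": 0.3, "prior_visits": 0.8}
candidates = {"content_a": {"topic_match": 0.9}, "content_b": {"topic_match": 0.1}}
weights = {"age_bucket": 0.5, "prior_visits": 1.0, "topic_match": 2.0}
print(select_content(user, candidates, weights, bias=-1.5))  # -> ['content_a']
```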
Abstract:
Methods and apparatus related to performing coreference resolution using distributed word representations. Distributed word representations, indicative of syntactic and semantic features, may be identified for one or more noun phrases. For each of the one or more noun phrases, a referring feature representation and an antecedent feature representation may be determined, where the referring feature representation includes the distributed word representation, and the antecedent feature representation includes the distributed word representation augmented by one or more antecedent features. In some implementations, the referring feature representation may be augmented by one or more referring features. Coreference embeddings of the referring and antecedent feature representations of the one or more noun phrases may be learned. Distance measures between two noun phrases may be determined based on the coreference embeddings.
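A minimal sketch of the representations and distance computation, using random vectors and projections in place of the learned distributed word representations and coreference embeddings; the specific features (a pronoun flag, a mention-distance value) are illustrative assumptions.

```python
# Sketch: build referring/antecedent feature representations from distributed
# word representations and compare them in a coreference embedding space.
import numpy as np

rng = np.random.default_rng(1)
WORD_DIM, EMB_DIM, NUM_EXTRA = 16, 8, 2

# Distributed word representations for noun-phrase head words (random stand-ins).
word_vectors = {"president": rng.normal(size=WORD_DIM),
                "he": rng.normal(size=WORD_DIM)}

def referring_representation(head_word, referring_features):
    # Word representation augmented by referring features (e.g. a pronoun flag).
    return np.concatenate([word_vectors[head_word], referring_features])

def antecedent_representation(head_word, antecedent_features):
    # Word representation augmented by antecedent features (e.g. mention distance).
    return np.concatenate([word_vectors[head_word], antecedent_features])

# Coreference embeddings: projections that would be learned; random stand-ins here.
proj_ref = rng.normal(size=(WORD_DIM + NUM_EXTRA, EMB_DIM))
proj_ant = rng.normal(size=(WORD_DIM + NUM_EXTRA, EMB_DIM))

def coreference_distance(referring, antecedent):
    """Distance between two noun phrases in the coreference embedding space."""
    return float(np.linalg.norm(referring @ proj_ref - antecedent @ proj_ant))

ref = referring_representation("he", np.array([1.0, 0.0]))
ant = antecedent_representation("president", np.array([0.0, 3.0]))
print(coreference_distance(ref, ant))
```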
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for scoring concept terms using a deep network. One of the methods includes receiving an input comprising a plurality of features of a resource, wherein each feature is a value of a respective attribute of the resource; processing each of the features using a respective embedding function to generate one or more numeric values; processing the numeric values using one or more neural network layers to generate an alternative representation of the features, wherein processing the numeric values comprises applying one or more non-linear transformations to the numeric values; and processing the alternative representation of the input using a classifier to generate a respective category score for each category in a pre-determined set of categories, wherein each of the respective category scores measures a predicted likelihood that the resource belongs to the corresponding category.
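This abstract differs from the earlier deep-network abstract mainly in its final stage: a classifier maps the alternative representation to a likelihood score per category. A minimal softmax sketch with assumed dimensions and random weights follows.

```python
# Softmax classifier over an alternative representation, assuming that
# representation was produced by embedding + non-linear layers as above.
import numpy as np

rng = np.random.default_rng(2)
REPR_DIM, NUM_CATEGORIES = 16, 5                 # assumed sizes
classifier_weights = rng.normal(size=(REPR_DIM, NUM_CATEGORIES))

def category_scores(alt_repr):
    """Map an alternative representation to a predicted likelihood per category."""
    logits = alt_repr @ classifier_weights
    exp = np.exp(logits - logits.max())          # numerically stable softmax
    return exp / exp.sum()

scores = category_scores(rng.normal(size=REPR_DIM))
print(scores, scores.sum())                      # one score per category; sums to 1
```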
Abstract:
In general, in one aspect, a method includes compiling user interaction statistics for a set of content items displayed in association with a first target media document having a non-textual portion, at least some of the content items being associated with one or more keywords; based on the interaction statistics, associating the first target media document with at least some of the keywords associated with the content items; and, based on a common attribute of the first target media document and a second target media document having a non-textual portion, associating the second target media document with at least some of the keywords associated with the first target media document. Other aspects include corresponding systems, apparatus, and computer programs stored on computer storage devices.
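One way to picture this flow is the sketch below, which uses click counts as an assumed interaction statistic and a shared site attribute as the assumed common attribute; both choices, and all names, are illustrative rather than taken from the abstract.

```python
# Sketch: derive keywords for a first media document from interaction
# statistics, then propagate them to a second document via a common attribute.
from collections import Counter

def keywords_from_interactions(impressions, min_clicks=2):
    """Aggregate clicks per keyword for content items shown with a document."""
    clicks = Counter()
    for item in impressions:
        if item["clicked"]:
            clicks.update(item["keywords"])
    return {kw for kw, count in clicks.items() if count >= min_clicks}

def propagate_by_common_attribute(doc_keywords, doc_attrs, other_attrs):
    """Associate a second media document with the first document's keywords
    when the two documents share a common attribute value."""
    shared = any(doc_attrs.get(k) == v for k, v in other_attrs.items())
    return set(doc_keywords) if shared else set()

impressions = [
    {"keywords": ["skiing", "winter"], "clicked": True},
    {"keywords": ["skiing"], "clicked": True},
    {"keywords": ["hotels"], "clicked": False},
]
first_doc_keywords = keywords_from_interactions(impressions)
second_doc_keywords = propagate_by_common_attribute(
    first_doc_keywords, {"site": "alpine.example"}, {"site": "alpine.example"})
print(first_doc_keywords, second_doc_keywords)   # {'skiing'} {'skiing'}
```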
Abstract:
Methods, systems, and apparatus, including computer programs encoded on computer storage media, for computing numeric representations of words. One of the methods includes obtaining a set of training data, wherein the set of training data comprises sequences of words; training a classifier and an embedding function on the set of training data, wherein training the embedding function comprises obtaining trained values of the embedding function parameters; processing each word in the vocabulary using the embedding function in accordance with the trained values of the embedding function parameters to generate a respective numeric representation of each word in the vocabulary in a high-dimensional space; and associating each word in the vocabulary with the respective numeric representation of the word in the high-dimensional space.
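A tiny end-to-end sketch of training an embedding function and classifier on word sequences; the skip-gram-style neighbour-prediction objective, the small dimensions, and the plain SGD loop are assumptions for illustration. The resulting table associates each vocabulary word with its numeric representation.

```python
# Sketch: learn word representations by training an embedding table plus a
# classifier to predict neighbouring words; all hyperparameters are assumed.
import numpy as np

rng = np.random.default_rng(3)
sentences = [["the", "cat", "sat"], ["the", "dog", "sat"]]
vocab = sorted({w for s in sentences for w in s})
index = {w: i for i, w in enumerate(vocab)}
V, D = len(vocab), 4                               # vocabulary size, embedding size

embedding = rng.normal(scale=0.1, size=(V, D))     # embedding function parameters
classifier = rng.normal(scale=0.1, size=(D, V))    # predicts a neighbouring word

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

lr = 0.1
for _ in range(200):                               # plain SGD over (word, neighbour) pairs
    for sent in sentences:
        for i, word in enumerate(sent):
            for j in (i - 1, i + 1):
                if 0 <= j < len(sent):
                    wi, ci = index[word], index[sent[j]]
                    probs = softmax(embedding[wi] @ classifier)
                    grad = probs.copy()
                    grad[ci] -= 1.0                # d(cross-entropy)/d(logits)
                    emb_grad = classifier @ grad
                    cls_grad = np.outer(embedding[wi], grad)
                    embedding[wi] -= lr * emb_grad # trained embedding values
                    classifier -= lr * cls_grad

# Associate each vocabulary word with its numeric representation.
representations = {w: embedding[index[w]].round(2) for w in vocab}
print(representations)
```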
Abstract:
Advertisements are selected from a plurality of advertisements and associated with an advertisement environment in a document. Advertisement request code, configured to issue an advertisement request for one of the selected advertisements for presentation in the advertisement environment, is stored in the document.
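A minimal sketch of that flow, assuming the stored request code simply records the advertisement environment and the ids of the pre-selected advertisements; the bid-based selection rule and all field names are illustrative assumptions.

```python
# Sketch: select advertisements for an environment and build the request
# payload that would be stored in the document; details are assumed.
import json

def select_advertisements(candidates, environment, limit=3):
    """Select ads from a larger pool and associate them with an ad environment."""
    ranked = sorted(candidates, key=lambda ad: ad["bid"], reverse=True)
    return {"environment": environment, "ad_ids": [ad["id"] for ad in ranked[:limit]]}

def build_request_code(selection):
    """Produce the request payload stored in the document; when executed, it
    requests one of the selected advertisements for the environment."""
    return json.dumps({"slot": selection["environment"],
                       "candidates": selection["ad_ids"]})

pool = [{"id": "ad1", "bid": 0.8}, {"id": "ad2", "bid": 1.2}, {"id": "ad3", "bid": 0.3}]
print(build_request_code(select_advertisements(pool, environment="sidebar")))
```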