REALTIME RESPONSE TO NETWORK-TRANSFERRED CONTENT REQUESTS USING STATISTICAL PREDICTION MODELS

    Publication Number: US20190034809A1

    Publication Date: 2019-01-31

    Application Number: US15662686

    Application Date: 2017-07-28

    Abstract: Techniques for leveraging existing statistical prediction models are provided. A first statistical prediction model is generated for a content item. An instruction is received to create a clone from the content item. In response to receiving the instruction, the clone is created based on attributes of the content item. A second statistical prediction model that is different than the first statistical prediction model is generated for the clone. In response to receiving a first request for content, the clone is identified as relevant to the first request. A similarity between (1) first content of the content item and (2) second content of the clone is determined. If the similarity exceeds a similarity threshold, then the first statistical prediction model is used to generate a prediction of an entity user selection rate associated with the clone. Otherwise, the second statistical prediction model is used to generate the prediction.
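The model-selection step in this abstract can be sketched in a few lines. The function below is a hypothetical illustration, not the patented implementation: `similarity_fn`, the threshold value, and the model placeholders are all assumptions introduced for the example.

```python
def select_model(content_item, clone, first_model, second_model,
                 similarity_fn, threshold=0.8):
    """Choose which statistical prediction model to use for a clone.

    If the clone's content is sufficiently similar to the original
    content item's, reuse the original's existing model; otherwise
    fall back to the model trained for the clone itself.
    (Hypothetical sketch; threshold and similarity_fn are assumed.)
    """
    similarity = similarity_fn(content_item, clone)
    if similarity > threshold:
        return first_model   # reuse the content item's model
    return second_model      # use the clone's dedicated model
```

The design intuition: a freshly created clone has little interaction history of its own, so when its content barely diverges from the original, the original's better-trained model is likely the stronger predictor.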

    Cache management for multi-node databases

    Publication Number: US10346304B2

    Publication Date: 2019-07-09

    Application Number: US15659544

    Application Date: 2017-07-25

    Abstract: Techniques related to cache management for multi-node databases are disclosed. In some embodiments, a system comprises one or more computing devices including a training component, data store, cache, filtering component, and listening component. The training component produces a plurality of models based on user interaction data. The plurality of models are stored in the data store, which responds to requests from the cache when the cache experiences cache misses. The cache stores a first subset of the plurality of models. The filtering component selects a second subset of the plurality of models based on one or more criteria. Furthermore, the filtering component sends the second subset of the plurality of models to a messaging service. The listening component retrieves the second subset of the plurality of models from the messaging service. Furthermore, the listening component causes the second subset of the plurality of models to be stored in the cache.
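The filtering and listening components described above can be sketched with an in-process queue standing in for the messaging service. This is a minimal illustration under that assumption; the component names, the `criteria` predicate, and the dict-based cache are placeholders, not the patented system.

```python
from queue import Queue

def filter_and_publish(models, criteria, bus):
    """Filtering component: select a subset of the trained models
    matching the criteria and send them to the messaging service
    (modeled here as a simple queue)."""
    selected = [m for m in models if criteria(m)]
    for m in selected:
        bus.put(m)
    return selected

def listen_and_cache(bus, cache):
    """Listening component: retrieve models from the messaging
    service and store them in the cache, warming it so fewer
    requests fall through to the data store."""
    while not bus.empty():
        m = bus.get()
        cache[m["id"]] = m
```

In this arrangement the cache is populated proactively via the messaging path, while the data store remains the fallback on a cache miss.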

    INCREMENTAL WORKFLOW EXECUTION
    Invention Application

    Publication Number: US20190087238A1

    Publication Date: 2019-03-21

    Application Number: US15706225

    Application Date: 2017-09-15

    Abstract: Techniques for incremental workflow execution are provided. In one technique, a computing job in a workflow identifies an input path that indicates a first location from which the computing job is to read input data. The computing job identifies an output path that indicates a second location to which the computing job is to write output data. The computing job performs a comparison between the input path and the output path. Based on the comparison, the computing job determines whether to read the input data from the first location. If the input path does not correspond to the output path, then the computing job reads the input data from the first location, generates particular output data based on the input data, and writes the particular output data to the second location. The computing job ceases to execute if the input path corresponds to the output path.
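The path-comparison logic above lends itself to a short sketch. The following is an assumed illustration of the idea, not the claimed method: path normalization via `os.path` and the `transform` callable are choices made for this example.

```python
import os

def run_job(input_path, output_path, transform):
    """Incremental-execution guard: if the input path corresponds to
    the output path, the output of a prior run is already current,
    so the job ceases to execute. Otherwise read, transform, write.
    Returns True if work was performed, False if skipped."""
    if os.path.normpath(input_path) == os.path.normpath(output_path):
        return False  # paths correspond: nothing to do
    with open(input_path) as src:
        data = src.read()
    output = transform(data)
    with open(output_path, "w") as dst:
        dst.write(output)
    return True
```

Skipping work when input and output locations coincide is what makes re-running a workflow incremental: already-materialized stages become no-ops.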

    CACHE MANAGEMENT FOR MULTI-NODE DATABASES
    Invention Application

    Publication Number: US20190034338A1

    Publication Date: 2019-01-31

    Application Number: US15659544

    Application Date: 2017-07-25

    Abstract: Techniques related to cache management for multi-node databases are disclosed. In some embodiments, a system comprises one or more computing devices including a training component, data store, cache, filtering component, and listening component. The training component produces a plurality of models based on user interaction data. The plurality of models are stored in the data store, which responds to requests from the cache when the cache experiences cache misses. The cache stores a first subset of the plurality of models. The filtering component selects a second subset of the plurality of models based on one or more criteria. Furthermore, the filtering component sends the second subset of the plurality of models to a messaging service. The listening component retrieves the second subset of the plurality of models from the messaging service. Furthermore, the listening component causes the second subset of the plurality of models to be stored in the cache.

    Realtime response to network-transferred content requests using statistical prediction models

    Publication Number: US11049022B2

    Publication Date: 2021-06-29

    Application Number: US15662686

    Application Date: 2017-07-28

    Abstract: Techniques for leveraging existing statistical prediction models are provided. A first statistical prediction model is generated for a content item. An instruction is received to create a clone from the content item. In response to receiving the instruction, the clone is created based on attributes of the content item. A second statistical prediction model that is different than the first statistical prediction model is generated for the clone. In response to receiving a first request for content, the clone is identified as relevant to the first request. A similarity between (1) first content of the content item and (2) second content of the clone is determined. If the similarity exceeds a similarity threshold, then the first statistical prediction model is used to generate a prediction of an entity user selection rate associated with the clone. Otherwise, the second statistical prediction model is used to generate the prediction.

    Incremental workflow execution
    Invention Grant

    Publication Number: US10409651B2

    Publication Date: 2019-09-10

    Application Number: US15706225

    Application Date: 2017-09-15

    Abstract: Techniques for incremental workflow execution are provided. In one technique, a computing job in a workflow identifies an input path that indicates a first location from which the computing job is to read input data. The computing job identifies an output path that indicates a second location to which the computing job is to write output data. The computing job performs a comparison between the input path and the output path. Based on the comparison, the computing job determines whether to read the input data from the first location. If the input path does not correspond to the output path, then the computing job reads the input data from the first location, generates particular output data based on the input data, and writes the particular output data to the second location. The computing job ceases to execute if the input path corresponds to the output path.

    DEEP NEURAL NETWORK ARCHITECTURE FOR SEARCH
    Invention Application

    Publication Number: US20190251422A1

    Publication Date: 2019-08-15

    Application Number: US15941314

    Application Date: 2018-03-30

    CPC classification number: G06N3/0454 G06F16/24578 G06N3/04 G06N3/08

    Abstract: Techniques for implementing a deep neural network architecture for search are disclosed herein. In some embodiments, the deep neural network architecture comprises: an item neural network configured to, for each one of a plurality of items, generate an item vector representation based on item data of the one of the plurality of items; a query neural network configured to generate a query vector representation for a query based on the query, the query neural network being distinct from the item neural network; and a scoring neural network configured to, for each one of the plurality of items, generate a corresponding score for a pairing of the one of the plurality of items and the query based on the item vector representation of the one of the plurality of items and the query vector representation, the scoring neural network being distinct from the item neural network and the query neural network.
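The three-network arrangement in this abstract (a distinct item network, query network, and scoring network) resembles what is often called a two-tower architecture with a learned scorer. Below is a toy NumPy sketch of that arrangement; the layer sizes, `tanh` activations, and random weights are illustrative assumptions, not the claimed architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

def mlp(dims):
    """Build a toy fully connected network as a list of weight matrices."""
    return [rng.standard_normal((a, b)) * 0.1 for a, b in zip(dims, dims[1:])]

def forward(net, x):
    for w in net:
        x = np.tanh(x @ w)
    return x

# Three distinct networks, as the abstract describes (sizes assumed).
item_net = mlp([16, 32, 8])    # item tower: item features -> item vector
query_net = mlp([10, 32, 8])   # query tower: query features -> query vector
score_net = mlp([16, 8, 1])    # scorer over the concatenated pair of vectors

def score(item_features, query_features):
    """Score one (item, query) pairing from the two tower outputs."""
    iv = forward(item_net, item_features)
    qv = forward(query_net, query_features)
    return forward(score_net, np.concatenate([iv, qv])).item()
```

A practical advantage of this split is that item vectors can be precomputed offline for the whole corpus, leaving only the query tower and scorer to run at request time.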

    SIMULATING PERFORMANCE OF A NETWORK-TRANSFERRED ELECTRONIC CONTENT ITEM

    Publication Number: US20180315082A1

    Publication Date: 2018-11-01

    Application Number: US15581836

    Application Date: 2017-04-28

    Abstract: Techniques for simulating performance of a content delivery campaign are provided. In one technique, multiple entities that satisfy one or more criteria associated with a content delivery campaign are identified. For each entity, multiple content item selection events in which that entity participated are identified and data associated with each of the content item selection events are aggregated to generate aggregated data. The aggregated data associated with each entity is combined to generate combined aggregated data. The combined aggregated data is adjusted based on an actual performance value of the content delivery campaign to generate adjusted aggregated data. In response to receiving input, a simulated performance of the content delivery campaign is determined based on the adjusted aggregated data and the input.
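The aggregate-combine-adjust pipeline above can be illustrated with a minimal sketch. All specifics here are assumptions for the example: the events are modeled as spend records, the calibration is a simple ratio against the observed performance value, and the received input is treated as a what-if quantity to scale.

```python
def simulate_performance(entities_events, actual_value, query_input):
    """Hypothetical sketch of the simulation pipeline: aggregate each
    entity's selection events, combine across entities, calibrate the
    combined figure against the campaign's actual performance, then
    answer a what-if query using the calibrated scale."""
    # Aggregate data for each entity's content item selection events.
    per_entity = [sum(ev["spend"] for ev in events)
                  for events in entities_events]
    # Combine the per-entity aggregates.
    combined = sum(per_entity)
    # Adjust based on the campaign's actual performance value.
    scale = actual_value / combined if combined else 1.0
    # Simulated performance for the received input (e.g. a candidate spend).
    return query_input * scale
```

The calibration step matters because raw aggregates over a sampled set of entities typically over- or under-state the full campaign; anchoring them to an observed performance value keeps the simulation consistent with reality.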
