Phrase extraction for optimizing digital page

    Publication No.: US11048876B2

    Publication Date: 2021-06-29

    Application No.: US16206292

    Application Date: 2018-11-30

    Abstract: Techniques for improving the accuracy, relevancy, and efficiency of a computer system of an online service by providing a user interface to optimize a digital page of a user on the online service are disclosed herein. In some embodiments, a computer system receives a plurality of phrases for a type of job, selects a group of phrases from the plurality of phrases based on a corresponding relevancy measurement and a corresponding diversity measurement for each phrase in the selected group of phrases, and generates a recommendation for a page of a first user based on the selected group of phrases, with the recommendation comprising a suggested addition of the selected group of phrases to the page of the first user.
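The phrase-selection step described in this abstract, trading off each phrase's relevancy against its redundancy with phrases already chosen, can be sketched as a greedy loop. The weighting scheme, the word-overlap similarity, and the sample data below are illustrative assumptions, not the claimed method:

```python
def select_phrases(phrases, relevance, similarity, k=3, trade_off=0.7):
    """Greedily pick k phrases, balancing per-phrase relevance against
    redundancy with the phrases already selected (an MMR-style heuristic)."""
    selected = []
    candidates = list(phrases)
    while candidates and len(selected) < k:
        def score(p):
            # Diversity term: penalize similarity to the closest selected phrase.
            redundancy = max((similarity(p, s) for s in selected), default=0.0)
            return trade_off * relevance[p] - (1 - trade_off) * redundancy
        best = max(candidates, key=score)
        selected.append(best)
        candidates.remove(best)
    return selected

# Hypothetical relevance scores and a crude word-overlap similarity.
phrases = ["machine learning", "deep learning", "project management"]
relevance = {"machine learning": 0.9, "deep learning": 0.85,
             "project management": 0.75}

def overlap(a, b):
    wa, wb = set(a.split()), set(b.split())
    return len(wa & wb) / len(wa | wb)

# "deep learning" scores higher on relevance alone, but the diversity
# penalty (shared word "learning") lets "project management" win slot two.
print(select_phrases(phrases, relevance, overlap, k=2))
```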

    ACTIVE LEARNING MODEL TRAINING FOR PAGE OPTIMIZATION

    Publication No.: US20200175394A1

    Publication Date: 2020-06-04

    Application No.: US16206387

    Application Date: 2018-11-30

    Abstract: Techniques for improving the accuracy, relevancy, and efficiency of a computer system of an online service by providing a user interface to optimize a digital page of a user on the online service are disclosed herein. In some embodiments, a computer system trains a classifier using a first plurality of training data, and then, for each one of a first plurality of sample data, generates a corresponding likelihood value indicating a likelihood that the one of the first plurality of sample data corresponds to a measurable accomplishment using the trained classifier, identifies a portion of the first plurality of sample data as corresponding to confused predictions based on the corresponding likelihood values of the portion of the first plurality of sample data and a confusion criterion, and retrains the trained classifier using a second plurality of training data that includes the portion of the first plurality of sample data.
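The active-learning cycle in this abstract (train, score unlabelled samples, flag the "confused" ones near the decision boundary, retrain on them) can be sketched with a toy 1-D threshold classifier. The classifier, the confusion band of 0.15, and the data are hypothetical stand-ins, not the patented model:

```python
def train(examples):
    """Fit a toy 1-D 'classifier': the midpoint between class means.
    Returns a function mapping x to a likelihood-like score in [0, 1]."""
    pos = [x for x, y in examples if y == 1]
    neg = [x for x, y in examples if y == 0]
    mid = (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2
    spread = max(abs(x - mid) for x, _ in examples) or 1.0
    return lambda x: min(1.0, max(0.0, 0.5 + (x - mid) / (2 * spread)))

def confused(samples, model, band=0.15):
    """Samples whose likelihood falls near 0.5 (the confusion criterion)."""
    return [x for x in samples if abs(model(x) - 0.5) < band]

# Round 1: train on a small labelled seed set (hypothetical data).
seed = [(0.0, 0), (1.0, 0), (9.0, 1), (10.0, 1)]
model = train(seed)

# Score an unlabelled pool and pull out the confused predictions.
pool = [0.5, 4.8, 5.2, 9.5]
uncertain = confused(pool, model)          # the samples nearest the boundary

# Round 2: an oracle labels the confused samples; retrain on the union.
labelled = [(x, 1 if x >= 5.0 else 0) for x in uncertain]
model = train(seed + labelled)
```

The payoff of the scheme is that labelling effort concentrates on exactly the samples the current model is least sure about.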

    Cache management for multi-node databases

    Publication No.: US10346304B2

    Publication Date: 2019-07-09

    Application No.: US15659544

    Application Date: 2017-07-25

    Abstract: Techniques related to cache management for multi-node databases are disclosed. In some embodiments, a system comprises one or more computing devices including a training component, data store, cache, filtering component, and listening component. The training component produces a plurality of models based on user interaction data. The plurality of models are stored in the data store, which responds to requests from the cache when the cache experiences cache misses. The cache stores a first subset of the plurality of models. The filtering component selects a second subset of the plurality of models based on one or more criteria. Furthermore, the filtering component sends the second subset of the plurality of models to a messaging service. The listening component retrieves the second subset of the plurality of models from the messaging service. Furthermore, the listening component causes the second subset of the plurality of models to be stored in the cache.
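The component interplay in this abstract (data store serving cache misses, a filtering component publishing a model subset to a messaging service, a listener draining it into the cache) can be sketched with an in-process queue standing in for the messaging service. Class names, the scoring criterion, and the data are assumptions for illustration:

```python
import queue
import threading

class DataStore:
    """Backing store holding all trained models; serves cache misses."""
    def __init__(self, models):
        self.models = dict(models)
    def get(self, key):
        return self.models[key]

class Cache:
    """In-memory cache that falls back to the data store on a miss."""
    def __init__(self, store):
        self.store = store
        self.entries = {}
    def get(self, key):
        if key not in self.entries:           # cache miss -> data store
            self.entries[key] = self.store.get(key)
        return self.entries[key]
    def put(self, key, model):
        self.entries[key] = model

def filter_and_publish(store, bus, criterion):
    """Filtering component: select a model subset and publish it."""
    for key, model in store.models.items():
        if criterion(key, model):
            bus.put((key, model))
    bus.put(None)                             # sentinel: end of stream

def listen(bus, cache):
    """Listening component: drain the bus and warm the cache."""
    while (item := bus.get()) is not None:
        cache.put(*item)

store = DataStore({"model_a": 0.91, "model_b": 0.42, "model_c": 0.88})
cache = Cache(store)
bus = queue.Queue()                           # stands in for the messaging service

listener = threading.Thread(target=listen, args=(bus, cache))
listener.start()
filter_and_publish(store, bus, lambda k, m: m > 0.8)   # hypothetical criterion
listener.join()

print(sorted(cache.entries))                  # pre-warmed high-scoring models
```

Pre-warming the cache this way means the selected models never incur a first-request miss against the data store.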

    INCREMENTAL WORKFLOW EXECUTION
    Invention Application

    Publication No.: US20190087238A1

    Publication Date: 2019-03-21

    Application No.: US15706225

    Application Date: 2017-09-15

    Abstract: Techniques for incremental workflow execution are provided. In one technique, a computing job in a workflow identifies an input path that indicates a first location from which the computing job is to read input data. The computing job identifies an output path that indicates a second location to which the computing job is to write output data. The computing job performs a comparison between the input path and the output path. Based on the comparison, the computing job determines whether to read the input data from the first location. If the input path does not correspond to the output path, then the computing job reads the input data from the first location, generates particular output data based on the input data, and writes the particular output data to the second location. The computing job ceases to execute if the input path corresponds to the output path.
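The control flow in this abstract (compare input path to output path, execute only when they differ) can be sketched as follows. The function name, the path comparison via normalization, and the file contents are illustrative assumptions:

```python
import os
import tempfile

def run_job(input_path, output_path, transform):
    """Run the job only when input and output paths differ; if they
    match, a prior run's output is already in place, so cease execution."""
    if os.path.normpath(input_path) == os.path.normpath(output_path):
        return False                      # paths correspond: skip the job
    with open(input_path) as src:
        data = src.read()
    with open(output_path, "w") as dst:
        dst.write(transform(data))        # generate and write output data
    return True

with tempfile.TemporaryDirectory() as d:
    inp = os.path.join(d, "in.txt")
    out = os.path.join(d, "out.txt")
    with open(inp, "w") as f:
        f.write("profile text")
    ran_first = run_job(inp, out, str.upper)   # distinct paths: executes
    ran_again = run_job(out, out, str.upper)   # same path: skipped
    result = open(out).read()
```

Skipping the re-run when the paths coincide is what makes the workflow incremental: already-materialized stages cost nothing on repeat execution.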

    CACHE MANAGEMENT FOR MULTI-NODE DATABASES
    Invention Application

    Publication No.: US20190034338A1

    Publication Date: 2019-01-31

    Application No.: US15659544

    Application Date: 2017-07-25

    Abstract: Techniques related to cache management for multi-node databases are disclosed. In some embodiments, a system comprises one or more computing devices including a training component, data store, cache, filtering component, and listening component. The training component produces a plurality of models based on user interaction data. The plurality of models are stored in the data store, which responds to requests from the cache when the cache experiences cache misses. The cache stores a first subset of the plurality of models. The filtering component selects a second subset of the plurality of models based on one or more criteria. Furthermore, the filtering component sends the second subset of the plurality of models to a messaging service. The listening component retrieves the second subset of the plurality of models from the messaging service. Furthermore, the listening component causes the second subset of the plurality of models to be stored in the cache.
