Scheduling jobs on graphical processing units

    Publication Number: US11651470B2

    Publication Date: 2023-05-16

    Application Number: US17360122

    Application Date: 2021-06-28

    CPC classification number: G06T1/20

    Abstract: Example implementations relate to scheduling jobs for a plurality of graphics processing units (GPUs) that provide concurrent processing through a plurality of virtual GPUs (vGPUs). According to an example, a computing system including one or more GPUs receives a request to schedule a new job to be executed by the computing system. The new job is allocated to one or more vGPUs, and the allocations of existing jobs on the one or more vGPUs are updated, such that the operational cost of operating the one or more GPUs and the migration cost of allocating the new job are minimized. The new job and the existing jobs are then processed by the one or more GPUs in the computing system.
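    The abstract frames placement as minimizing a combined operational cost (GPUs kept powered on) plus migration cost (existing jobs moved between vGPUs). The sketch below illustrates that objective on a toy instance; the cost constants, the per-vGPU capacity model, and the brute-force search are assumptions for illustration, not the patented scheduling method.

```python
# Toy model (assumed): each job demands a fraction of one vGPU's capacity,
# each GPU exposes a fixed number of vGPUs, and the objective is the cost of
# powered-on GPUs plus the cost of migrating existing jobs.
OP_COST_PER_GPU = 10.0   # assumed cost of keeping one GPU powered on
MIGRATION_COST = 3.0     # assumed cost of moving one existing job

def total_cost(allocation, previous_allocation, vgpus_per_gpu):
    """Cost of an allocation given as a {job: vgpu_index} mapping."""
    gpus_used = {vgpu // vgpus_per_gpu for vgpu in allocation.values()}
    migrations = sum(1 for job, vgpu in allocation.items()
                     if job in previous_allocation
                     and previous_allocation[job] != vgpu)
    return OP_COST_PER_GPU * len(gpus_used) + MIGRATION_COST * migrations

def schedule(new_job, previous_allocation, demands,
             num_gpus=2, vgpus_per_gpu=4, capacity=1.0):
    """Exhaustively place the new job and (re)place existing jobs so the
    combined cost is minimized. Suitable only for small toy instances."""
    jobs = list(previous_allocation) + [new_job]
    num_vgpus = num_gpus * vgpus_per_gpu
    best, best_cost = None, float("inf")

    def feasible(allocation):
        load = [0.0] * num_vgpus
        for job, vgpu in allocation.items():
            load[vgpu] += demands[job]
        return all(l <= capacity for l in load)

    def search(i, allocation):
        nonlocal best, best_cost
        if i == len(jobs):
            if feasible(allocation):
                cost = total_cost(allocation, previous_allocation, vgpus_per_gpu)
                if cost < best_cost:
                    best, best_cost = dict(allocation), cost
            return
        for vgpu in range(num_vgpus):
            allocation[jobs[i]] = vgpu
            search(i + 1, allocation)
            del allocation[jobs[i]]

    search(0, {})
    return best, best_cost

# Example: the search may migrate "job_b" off GPU 1 if packing everything
# onto GPU 0 is cheaper than keeping a second GPU powered on.
allocation, cost = schedule("job_new",
                            previous_allocation={"job_a": 0, "job_b": 4},
                            demands={"job_a": 0.6, "job_b": 0.5, "job_new": 0.4})
print(allocation, cost)
```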

    Burst packet preload for ABW estimation with rate limiters

    Publication Number: US11502965B2

    Publication Date: 2022-11-15

    Application Number: US17016329

    Application Date: 2020-09-09

    Abstract: Systems and methods are provided for performing burst packet preloading for Available Bandwidth (ABW) estimation, which may include: preparing a chirp train to be used for ABW estimation, the chirp train comprising a quantity of original probe packets; determining a quantity of additional probe packets that will transition the network path from a short-term mode into a long-term mode; inserting the determined quantity of additional probe packets at the beginning of the chirp train; and transmitting the chirp train, including the determined quantity of additional probe packets, on the network path to a receiver that can perform ABW estimation of the network path.
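    As a rough illustration of the mechanism in the abstract, the sketch below builds a chirp train and prepends a computed number of preload packets. The exponential rate sweep and the sizing rule (draining an assumed token-bucket burst allowance of `burst_bytes`) are assumptions, not the claimed method for determining the preload quantity.

```python
import math
from dataclasses import dataclass

@dataclass
class Probe:
    seq: int
    size_bytes: int
    send_offset_s: float      # scheduled send time relative to train start
    preload: bool = False

def build_chirp_train(num_probes=20, size_bytes=1200,
                      rate_min_bps=50e6, rate_max_bps=500e6):
    """Chirp train whose probing rate sweeps exponentially from rate_min_bps
    to rate_max_bps (illustrative construction)."""
    probes, t = [], 0.0
    for i in range(num_probes):
        rate = rate_min_bps * (rate_max_bps / rate_min_bps) ** (i / (num_probes - 1))
        probes.append(Probe(seq=i, size_bytes=size_bytes, send_offset_s=t))
        t += size_bytes * 8 / rate
    return probes

def preload_packet_count(burst_bytes, size_bytes=1200):
    """Assumed sizing rule: send enough bytes to exhaust a rate limiter's
    burst allowance so later probes observe its long-term rate."""
    return math.ceil(burst_bytes / size_bytes)

def add_burst_preload(train, burst_bytes):
    """Insert back-to-back preload packets at the beginning of the train."""
    size = train[0].size_bytes
    n = preload_packet_count(burst_bytes, size)
    preload = [Probe(seq=-(n - i), size_bytes=size, send_offset_s=0.0, preload=True)
               for i in range(n)]
    return preload + train

train = add_burst_preload(build_chirp_train(), burst_bytes=30_000)
print(len(train), "packets, first preload:", train[0])
```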

    BURST PACKET PRELOAD FOR ABW ESTIMATION WITH RATE LIMITERS

    Publication Number: US20220078127A1

    Publication Date: 2022-03-10

    Application Number: US17016329

    Application Date: 2020-09-09

    Abstract: Systems and methods are provided for performing burst packet preloading for Available Bandwidth (ABW) estimation, which may include: preparing a chirp train to be used for ABW estimation, the chirp train comprising a quantity of original probe packets; determining a quantity of additional probe packets that will transition the network path from a short-term mode into a long-term mode; inserting the determined quantity of additional probe packets at the beginning of the chirp train; and transmitting the chirp train, including the determined quantity of additional probe packets, on the network path to a receiver that can perform ABW estimation of the network path.

    AVAILABLE NETWORK BANDWIDTH ESTIMATION USING A ONE-WAY-DELAY NOISE FILTER WITH BUMP DETECTION

    Publication Number: US20210352001A1

    Publication Date: 2021-11-11

    Application Number: US17282838

    Application Date: 2018-11-01

    Abstract: Systems and methods are provided for available network bandwidth estimation using a one-way-delay noise filter with bump detection. The method includes receiving one-way delay measurements for each probe packet in a probe train sent over a telecommunications path; grouping the probe packets into a plurality of pairs based on the one-way delay measurements; for each pair, computing a respective noise threshold based on the one-way delay measurements of all the probe packets transmitted after the later-transmitted probe packet of the pair; selecting one of the pairs according to the noise thresholds and the one-way delay measurements for the probe packets of the pairs; and estimating the available bandwidth on the telecommunications path based on transmission times of the probe packets in the selected pair.
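    The sketch below mimics the pair-wise structure of the abstract: each candidate pair gets a noise threshold derived from the delays of the packets sent after it, and the first pair whose delay increase exceeds that threshold marks the congestion bump used for the estimate. The specific threshold (one standard deviation of the tail) and the final rate formula are placeholders, not the claimed filter.

```python
import statistics

def estimate_abw(send_times_s, owds_s, packet_size_bytes=1200):
    """Illustrative pair-wise noise filter with bump detection.
    send_times_s[i] is the sender timestamp of probe i, owds_s[i] its
    measured one-way delay. The threshold rule below is an assumption."""
    n = len(owds_s)
    selected = None
    for a in range(n - 1):
        b = a + 1
        tail = owds_s[b + 1:]           # delays of packets sent after the pair
        if len(tail) < 2:
            break
        noise = statistics.stdev(tail)  # assumed per-pair noise threshold
        bump = owds_s[b] - owds_s[a]    # delay increase across the pair
        if bump > noise:                # increase exceeds noise: treat as real
            selected = (a, b)
            break

    if selected is None:
        return None                     # no clear congestion bump in this train

    a, b = selected
    gap_s = send_times_s[b] - send_times_s[a]
    # The probing rate at the detected bump approximates the available bandwidth.
    return packet_size_bytes * 8 / gap_s
```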

    PROACTIVELY ACCOMMODATING PREDICTED FUTURE SERVERLESS WORKLOADS USING A MACHINE LEARNING PREDICTION MODEL AND A FEEDBACK CONTROL SYSTEM

    Publication Number: US20210184941A1

    Publication Date: 2021-06-17

    Application Number: US16714637

    Application Date: 2019-12-13

    Abstract: Example implementations relate to a proactive auto-scaling approach. According to an example, a target performance metric for an application running in a serverless framework of a private cloud is received. A machine learning prediction model is trained to forecast future serverless workloads during a window of time for the application based on historical serverless workload information. The serverless framework is monitored to obtain serverless workload observations for the application. A future serverless workload for the application at a future time is predicted by the trained machine learning prediction model based on the workload observations. A feedback control system is then used to output a new number of replicas based on a current value of the performance metric, the target performance metric, and the predicted future serverless workload. Finally, the serverless framework is caused to scale and pre-warm the number of replicas supporting the application to the new number.
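    To make the interplay between the prediction model and the feedback controller concrete, the sketch below pairs a trivial trend-extrapolation forecast with a proportional correction on the measured metric. The gain, the per-replica capacity model, and the forecast rule are illustrative stand-ins for the trained model and control system described above.

```python
from collections import deque

class ProactiveAutoscaler:
    """Illustrative sketch: a trend extrapolation stands in for the trained
    prediction model and a proportional term for the feedback control system.
    All constants are assumptions."""

    def __init__(self, target_latency_ms, per_replica_capacity_rps,
                 history_len=60, kp=0.5):
        self.target = target_latency_ms
        self.capacity = per_replica_capacity_rps
        self.history = deque(maxlen=history_len)   # observed requests/sec
        self.kp = kp

    def observe(self, workload_rps):
        self.history.append(workload_rps)

    def predict_future_workload(self):
        """Placeholder forecast: linear extrapolation of the recent trend."""
        h = list(self.history)
        if len(h) < 2:
            return h[-1] if h else 0.0
        trend = (h[-1] - h[0]) / (len(h) - 1)
        return max(0.0, h[-1] + trend * len(h))

    def new_replica_count(self, current_replicas, current_latency_ms):
        predicted = self.predict_future_workload()
        feedforward = predicted / self.capacity        # replicas the forecast needs
        error = (current_latency_ms - self.target) / self.target
        feedback = self.kp * error * current_replicas  # correction from the metric
        return max(1, round(feedforward + feedback))

# Example: scale out ahead of a rising workload while latency is still on target.
scaler = ProactiveAutoscaler(target_latency_ms=200, per_replica_capacity_rps=50)
for rps in (100, 120, 140, 160, 180):
    scaler.observe(rps)
print(scaler.new_replica_count(current_replicas=4, current_latency_ms=200))
```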

    SECURE COMPLIANCE PROTOCOLS
    Invention Application

    Publication Number: US20190312855A1

    Publication Date: 2019-10-10

    Application Number: US15947052

    Application Date: 2018-04-06

    Abstract: In some examples, a secure compliance protocol may include a virtual computing instance (VCI) deployed on a hypervisor and provisioned with hardware computing resources. In some examples, the VCI may also include a cryptoprocessor to provide cryptoprocessing for secure communication with a plurality of nodes, and a plurality of agents to generate a plurality of compliance proofs. The VCI may communicate with a server corresponding to a node of the plurality of nodes and receive a time stamp corresponding to at least one compliance proof based on a metric of a connected device.
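    The sketch below illustrates the flow of a compliance proof from an agent to a time-stamping server. The HMAC signing, the JSON record layout, and the local clock are stand-ins for the cryptoprocessor-backed signing and the server-issued time stamp described in the abstract.

```python
import hashlib
import hmac
import json
import time

def make_compliance_proof(agent_id, metric_name, metric_value, signing_key):
    """Agent side (illustrative): bind a metric reading of a connected device
    to a keyed digest. A real VCI would use its cryptoprocessor for this."""
    record = {"agent": agent_id, "metric": metric_name, "value": metric_value}
    payload = json.dumps(record, sort_keys=True).encode()
    digest = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return {"record": record, "proof": digest}

def timestamp_proof(proof, server_clock=time.time):
    """Server side (assumed): attach a time stamp to a received proof."""
    return {**proof, "timestamp": server_clock()}

def verify_proof(stamped, signing_key):
    """Recompute the digest to check the proof before trusting its record."""
    payload = json.dumps(stamped["record"], sort_keys=True).encode()
    expected = hmac.new(signing_key, payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, stamped["proof"])

key = b"shared-demo-key"   # placeholder only; real key handling would differ
stamped = timestamp_proof(make_compliance_proof("agent-1", "patch_level", "2023-05", key))
print(verify_proof(stamped, key), stamped["timestamp"])
```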
