    FLEXIBLE COMPUTING
    21.
    Invention Publication (Under Examination, Published)

    Publication No.: US20240078135A1

    Publication Date: 2024-03-07

    Application No.: US18140086

    Filing Date: 2023-04-27

    Applicant: Snowflake Inc.

    Abstract: Embodiments of the present disclosure may provide dynamic and fair assignment techniques for allocating resources on a demand basis. Assignment control may be separated into at least two components: a local component and a global component. The components may maintain an active dialog with each other; the dialog may include two aspects: 1) a demand for computing resources, and 2) a total allowed number of computing resources. The global component may allocate resources from a pool of resources to different local components, and the local components in turn may assign their allocated resources to local competing requests. The allocation may also be throttled or limited at various levels.
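
    The abstract above describes the allocation mechanism only in prose, so a small, hypothetical illustration of the demand/allowance dialog may help; it is a minimal sketch and not Snowflake's implementation. The names GlobalAllocator, report_demand, and allowances are assumptions, and proportional sharing stands in for whatever fairness policy the claims actually cover.

    # Minimal sketch (hypothetical names, assumed proportional-fairness policy)
    # of the two-aspect dialog: local components report demand, and the global
    # component answers with a total allowed number per component, never
    # exceeding the shared pool.

    class GlobalAllocator:
        def __init__(self, pool_size: int):
            self.pool_size = pool_size
            self.demands: dict[str, int] = {}

        def report_demand(self, local_id: str, demand: int) -> None:
            """Aspect 1 of the dialog: a local component states its demand."""
            self.demands[local_id] = demand

        def allowances(self) -> dict[str, int]:
            """Aspect 2 of the dialog: the allowed number per local component."""
            total_demand = sum(self.demands.values())
            if total_demand <= self.pool_size:
                return dict(self.demands)
            # Oversubscribed: share the pool in proportion to demand.
            return {
                local_id: demand * self.pool_size // total_demand
                for local_id, demand in self.demands.items()
            }

    # Example dialog: two local components compete for a pool of 10 resources.
    allocator = GlobalAllocator(pool_size=10)
    allocator.report_demand("local-a", 8)
    allocator.report_demand("local-b", 12)
    print(allocator.allowances())  # {'local-a': 4, 'local-b': 6}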

    FLEXIBLE COMPUTING
    24.
    Invention Application

    Publication No.: US20210357263A1

    Publication Date: 2021-11-18

    Application No.: US17342713

    Filing Date: 2021-06-09

    Applicant: Snowflake Inc.

    Abstract: Embodiments of the present disclosure may provide dynamic and fair assignment techniques for allocating resources on a demand basis. Assignment control may be separated into at least two components: a local component and a global component. The components may maintain an active dialog with each other; the dialog may include two aspects: 1) a demand for computing resources, and 2) a total allowed number of computing resources. The global component may allocate resources from a pool of resources to different local components, and the local components in turn may assign their allocated resources to local competing requests. The allocation may also be throttled or limited at various levels.
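
    This application shares its abstract with the publication above, so rather than repeat the same sketch, the hypothetical example below illustrates the other half of the described flow: a local component dividing the allowance it was granted among competing local requests, with an optional per-request cap standing in for the throttling the abstract mentions. The function name, the round-robin policy, and the cap are assumptions, not the patented design.

    from collections import deque

    def assign_locally(allowance: int, requests: list[tuple[str, int]],
                       per_request_cap: int | None = None) -> dict[str, int]:
        """Round-robin a granted allowance across competing local requests.

        `requests` is a list of (request_id, amount_wanted) pairs; the
        optional `per_request_cap` models throttling at the request level.
        """
        granted = {request_id: 0 for request_id, _ in requests}
        pending = deque(requests)
        while allowance > 0 and pending:
            request_id, wanted = pending.popleft()
            cap = wanted if per_request_cap is None else min(wanted, per_request_cap)
            if granted[request_id] < cap:
                granted[request_id] += 1
                allowance -= 1
                pending.append((request_id, wanted))  # still below its cap
        return granted

    # A local component granted 6 resources splits them across three requests,
    # throttled to at most 3 resources per request.
    print(assign_locally(6, [("q1", 5), ("q2", 2), ("q3", 4)], per_request_cap=3))
    # {'q1': 2, 'q2': 2, 'q3': 2}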

    Scalable query processing
    29.
    Invention Grant

    Publication No.: US12216656B2

    Publication Date: 2025-02-04

    Application No.: US18477808

    Filing Date: 2023-09-29

    Applicant: Snowflake Inc.

    Abstract: Embodiments of the present disclosure may provide a dynamic query execution model. This query execution model may provide acceleration by scaling out parallel parts of a query (each also referred to as a fragment) to additional computing resources, for example, computing resources leased from a pool of computing resources. Execution of the parts of the query may be coordinated by a parent query coordinator, where the query originated, and a fragment query coordinator.
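
    As with the entries above, a brief hypothetical sketch may clarify the coordination pattern this abstract describes: a parent coordinator splits a query into parallel fragments, leases workers from a pool, and hands the fragments to a fragment coordinator that runs them and returns partial results for the parent to combine. The class names and the use of a thread pool as a stand-in for leased computing resources are assumptions, not the patented design.

    from concurrent.futures import ThreadPoolExecutor

    def run_fragment(fragment: list[int]) -> int:
        """Stand-in for executing one parallel part (fragment) of a query."""
        return sum(fragment)

    class FragmentCoordinator:
        """Coordinates fragment execution on resources leased from a pool."""

        def __init__(self, leased_pool: ThreadPoolExecutor):
            self.leased_pool = leased_pool

        def execute(self, fragments: list[list[int]]) -> list[int]:
            return list(self.leased_pool.map(run_fragment, fragments))

    class ParentQueryCoordinator:
        """Where the query originates; splits the work and combines results."""

        def run_query(self, rows: list[int], parallelism: int) -> int:
            fragments = [rows[i::parallelism] for i in range(parallelism)]
            with ThreadPoolExecutor(max_workers=parallelism) as leased_pool:
                partials = FragmentCoordinator(leased_pool).execute(fragments)
            return sum(partials)  # combine the fragments' partial results

    # Summing 0..99 split across four leased workers.
    print(ParentQueryCoordinator().run_query(list(range(100)), parallelism=4))  # 4950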
