Scalable query processing
    63.
    Granted Patent

    Publication Number: US12216656B2

    Publication Date: 2025-02-04

    Application Number: US18477808

    Filing Date: 2023-09-29

    Applicant: Snowflake Inc.

    Abstract: Embodiments of the present disclosure may provide a dynamic query execution model. This query execution model may provide acceleration by scaling out parallel parts of a query (also referred to as fragments) to additional computing resources, for example, computing resources leased from a pool of computing resources. Execution of the parts of the query may be coordinated by a parent query coordinator, where the query originated, and a fragment query coordinator.
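
    The scale-out idea in the abstract can be sketched as follows. This is a hypothetical illustration, not Snowflake's actual implementation: the class and method names (`ResourcePool`, `ParentCoordinator`, `FragmentCoordinator`) are invented for the example, with worker threads standing in for leased compute.

```python
from concurrent.futures import ThreadPoolExecutor

class ResourcePool:
    """Stands in for a pool of leasable computing resources."""
    def __init__(self, size):
        self.executor = ThreadPoolExecutor(max_workers=size)

    def lease(self, fn, *args):
        return self.executor.submit(fn, *args)

class FragmentCoordinator:
    """Runs one parallelizable fragment of the query on leased resources."""
    def __init__(self, pool):
        self.pool = pool

    def run_fragment(self, partitions, work):
        futures = [self.pool.lease(work, p) for p in partitions]
        return [f.result() for f in futures]

class ParentCoordinator:
    """Owns the query; hands parallel parts to a fragment coordinator."""
    def __init__(self, pool):
        self.fragments = FragmentCoordinator(pool)

    def execute(self, partitions):
        # Parallel part: per-partition work (here, summing each partition).
        partials = self.fragments.run_fragment(partitions, sum)
        # Serial part: the parent combines the fragments' partial results.
        return sum(partials)

pool = ResourcePool(size=4)
result = ParentCoordinator(pool).execute([[1, 2], [3, 4], [5]])
print(result)  # 15
```

    The split mirrors the abstract's two coordinators: the parent keeps the serial portion where the query originated, while the fragment coordinator fans the parallel portion out to pooled resources.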

    FLEXIBLE COMPUTING
    64.
    Patent Application

    Publication Number: US20250021390A1

    Publication Date: 2025-01-16

    Application Number: US18822732

    Filing Date: 2024-09-03

    Applicant: Snowflake Inc.

    Abstract: Embodiments of the present disclosure may provide dynamic and fair assignment techniques for allocating resources on a demand basis. Assignment control may be separated into at least two components: a local component and a global component. The components may maintain an active dialog with each other; the dialog may include two aspects: 1) a demand for computing resources, and 2) a total allowed number of computing resources. The global component may allocate resources from a pool of resources to different local components, and the local components in turn may assign their allocated resources to local competing requests. The allocation may also be throttled or limited at various levels.
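
    One way to picture the demand/allowance dialog is a global allocator that grants pooled resources to local components round-robin, so no component starves and a per-component cap can throttle the grant. This is an illustrative sketch under those assumptions; `GlobalAllocator` and the warehouse names are hypothetical.

```python
class GlobalAllocator:
    """Global component: answers local demands with allowed resource counts."""
    def __init__(self, pool_size, per_local_cap=None):
        self.pool_size = pool_size          # total resources in the shared pool
        self.per_local_cap = per_local_cap  # optional throttle per local component

    def allocate(self, demands):
        """Fair, demand-based allocation: hand out one unit per component per round."""
        grants = {name: 0 for name in demands}
        remaining = self.pool_size
        while remaining > 0:
            progressed = False
            for name, demand in demands.items():
                cap = self.per_local_cap or demand
                if grants[name] < min(demand, cap) and remaining > 0:
                    grants[name] += 1
                    remaining -= 1
                    progressed = True
            if not progressed:  # every demand satisfied or capped
                break
        return grants

# Three local components report their demands; the pool holds 10 resources.
demands = {"warehouse_a": 5, "warehouse_b": 2, "warehouse_c": 8}
grants = GlobalAllocator(pool_size=10).allocate(demands)
print(grants)  # {'warehouse_a': 4, 'warehouse_b': 2, 'warehouse_c': 4}
```

    The fully satisfiable demand (`warehouse_b`) is met exactly, while the remaining pool is split evenly between the two oversubscribed components; each local component would then assign its grant to its own competing requests.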

    MATERIALIZED TABLE REFRESH USING MULTIPLE PROCESSING PIPELINES

    Publication Number: US20230409574A1

    Publication Date: 2023-12-21

    Application Number: US18362898

    Filing Date: 2023-07-31

    Applicant: Snowflake Inc.

    CPC classification number: G06F16/24539 G06F7/14 G06F16/24542

    Abstract: A system for a materialized table (MT) refresh using multiple processing pipelines includes at least one hardware processor coupled to memory storing instructions. The instructions cause the at least one hardware processor to perform operations including determining dependencies among a plurality of intermediate MTs generated from a source MT. The source MT uses a table definition with a query on one or more base tables and a lag duration value. A graph snapshot of dependencies among the plurality of intermediate MTs is generated. Processing pipelines are configured. Each of the processing pipelines corresponds to a subset of the plurality of intermediate MTs indicated by the graph snapshot. Responsive to detecting an instruction for a refresh operation on the source MT, refreshes on corresponding intermediate MTs of the plurality of intermediate MTs in each processing pipeline of the processing pipelines are performed to complete the refresh operation on the source MT.
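
    The pipeline construction described above can be sketched in a few lines: snapshot the dependencies among intermediate MTs, treat each independent subgraph as one pipeline, and refresh each pipeline's MTs in dependency order. All data structures and names here are illustrative assumptions, not the patented system.

```python
from collections import defaultdict

def topo_order(nodes, deps):
    """Order MTs so each appears after the MTs it depends on."""
    order, seen = [], set()
    def visit(n):
        if n in seen:
            return
        seen.add(n)
        for d in deps.get(n, []):
            visit(d)
        order.append(n)
    for n in nodes:
        visit(n)
    return order

def connected_components(nodes, deps):
    """Split the dependency graph snapshot into independent subsets."""
    adj = defaultdict(set)
    for n, ds in deps.items():
        for d in ds:
            adj[n].add(d)
            adj[d].add(n)
    seen, comps = set(), []
    for n in nodes:
        if n in seen:
            continue
        stack, comp = [n], []
        while stack:
            m = stack.pop()
            if m in seen:
                continue
            seen.add(m)
            comp.append(m)
            stack.extend(adj[m] - seen)
        comps.append(comp)
    return comps

def refresh_source_mt(nodes, deps, refresh):
    """On a refresh instruction, run each pipeline's MTs in dependency order."""
    for component in connected_components(nodes, deps):  # one pipeline each
        for mt in topo_order(component, deps):
            refresh(mt)

# Example snapshot: mt3 depends on mt1 and mt2; mt4 is independent.
deps = {"mt3": ["mt1", "mt2"]}
log = []
refresh_source_mt(["mt1", "mt2", "mt3", "mt4"], deps, log.append)
print(log)  # mt1 and mt2 are refreshed before mt3; mt4 runs in its own pipeline
```

    In this toy run the refreshes execute serially, but because the two components share no edges, a real scheduler could run the `mt4` pipeline concurrently with the `mt1`/`mt2`/`mt3` pipeline.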

    Execution and consistency model for materialized tables

    Publication Number: US11755568B1

    Publication Date: 2023-09-12

    Application Number: US17931705

    Filing Date: 2022-09-13

    Applicant: Snowflake Inc.

    CPC classification number: G06F16/2393 G06F11/3419

    Abstract: Provided herein are systems and methods for a database object (e.g., materialized table) configuration including scheduling refreshes of the materialized table. For example, a method includes determining a dependency graph for a first MT. The dependency graph comprises a second MT from which the first MT depends. The first MT includes a query on one or more base tables and a lag duration value. The lag duration value indicates a maximum time period that a result of a prior refresh of the query can lag behind a current time instance. A tick period is selected for a set of ticks based on the lag duration value. The set of ticks corresponds to a set of aligned time instances. Refresh operations are scheduled for the first and second MTs at corresponding time instances from the set of aligned time instances. The corresponding time instances are separated by the tick period.
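
    The tick-selection logic can be illustrated with a small sketch: choose a tick period no larger than the smallest lag duration along the dependency chain, then schedule both MTs at time instances aligned to multiples of that period. The candidate tick values and function names below are assumptions for the example.

```python
def choose_tick_period(lag_durations_s, candidate_ticks=(10, 30, 60, 300, 600)):
    """Pick the largest candidate tick that still satisfies every lag target."""
    max_tick = min(lag_durations_s)  # tightest lag duration bounds the tick
    feasible = [t for t in candidate_ticks if t <= max_tick]
    return max(feasible) if feasible else max_tick

def aligned_ticks(start_s, horizon_s, tick_s):
    """Aligned time instances: multiples of the tick period within a horizon."""
    first = ((start_s + tick_s - 1) // tick_s) * tick_s  # round up to a tick
    return list(range(first, start_s + horizon_s + 1, tick_s))

# First MT tolerates 120 s of lag; the MT it depends on tolerates 60 s.
tick = choose_tick_period([120, 60])
schedule = aligned_ticks(start_s=0, horizon_s=180, tick_s=tick)
print(tick)      # 60
print(schedule)  # [0, 60, 120, 180]
```

    Refreshing both MTs at the same aligned instances is what gives the consistency property: the dependent MT never reads a version of its upstream MT from a different tick.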

    FLEXIBLE COMPUTING
    70.
    Patent Application

    Publication Number: US20230079405A1

    Publication Date: 2023-03-16

    Application Number: US18050608

    Filing Date: 2022-10-28

    Applicant: Snowflake Inc.

    Abstract: Embodiments of the present disclosure may provide dynamic and fair assignment techniques for allocating resources on a demand basis. Assignment control may be separated into at least two components: a local component and a global component. The components may maintain an active dialog with each other; the dialog may include two aspects: 1) a demand for computing resources, and 2) a total allowed number of computing resources. The global component may allocate resources from a pool of resources to different local components, and the local components in turn may assign their allocated resources to local competing requests. The allocation may also be throttled or limited at various levels.
