QUERY REFRESH USING MULTIPLE PROCESSING PIPELINES

    Publication Number: US20250117382A1

    Publication Date: 2025-04-10

    Application Number: US18988025

    Application Date: 2024-12-19

    Applicant: Snowflake Inc.

    Abstract: A system includes at least one hardware processor and at least one memory storing instructions that cause the at least one hardware processor to perform operations. The operations include generating a log of changes posted to a plurality of intermediate materialized tables (MTs) during execution of a query in a network-based database system. The query is associated with a source MT that the intermediate MTs depend on. The operations include rendering the log of changes into a dependency graph. The operations include configuring a plurality of processing pipelines based on the dependency graph. The operations include performing refreshes on one or more of the plurality of intermediate MTs in at least one of the plurality of processing pipelines to complete the refresh operation. The refreshes are performed responsive to detecting an instruction for a refresh operation on the source MT.
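
    The abstract outlines an algorithmic flow: turn a log of changes into a dependency graph, then group the intermediate MTs into pipelines that can be refreshed independently. The sketch below is only an illustration of that idea under assumed names and an assumed (table, depends_on) log format; it is not the patented implementation.

```python
# Illustrative sketch only (assumed names and log format, not the patented method):
# build a dependency graph from the change log and refresh intermediate MTs in
# dependency order, level by level, where each level maps to parallel pipelines.
from collections import defaultdict

def build_dependency_graph(change_log):
    """change_log: iterable of (table, depends_on) pairs posted during query execution."""
    graph = defaultdict(set)
    for table, depends_on in change_log:
        graph[table].add(depends_on)
        graph.setdefault(depends_on, set())
    return graph

def topological_levels(graph):
    """Group tables into levels with no intra-level dependencies."""
    remaining = {t: set(deps) for t, deps in graph.items()}
    levels = []
    while remaining:
        ready = [t for t, deps in remaining.items() if not deps]
        if not ready:
            raise ValueError("cycle detected among materialized tables")
        levels.append(ready)
        for t in ready:
            del remaining[t]
        for deps in remaining.values():
            deps.difference_update(ready)
    return levels

def refresh_on_source_instruction(change_log, refresh_fn):
    """On a refresh instruction for the source MT, refresh each level in turn;
    tables within a level could run in separate processing pipelines."""
    for level in topological_levels(build_dependency_graph(change_log)):
        for table in level:      # sequential here; concurrent in a real system
            refresh_fn(table)
```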

    Query execution using materialized tables

    Publication Number: US12189616B2

    Publication Date: 2025-01-07

    Application Number: US18353317

    Application Date: 2023-07-17

    Applicant: Snowflake Inc.

    Abstract: A method includes retrieving a plurality of materialized tables (MTs). Each of the plurality of MTs includes a lag duration and refers to a corresponding base table of a plurality of base tables. The lag duration indicates a maximum time period that a result of a prior refresh of a query on the corresponding base table can lag behind a current time instance. A plurality of time instances for the MT is determined based on the lag duration and a number of prior refreshes of the corresponding base table. A plurality of aligned time instances for the plurality of MTs is determined based on the plurality of time instances for each of the plurality of MTs. Refresh operations are scheduled for the plurality of MTs at one or more of the plurality of aligned time instances that are within the maximum time period.
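
    The lag-duration scheduling described above can be pictured with a small numerical sketch. Everything below (the tuple layout, the gcd-based alignment rule, the function names) is an assumption chosen for illustration; the abstract does not specify this particular alignment strategy.

```python
# Illustrative sketch only: derive per-MT candidate refresh times from the lag
# duration, align them across MTs, and schedule only instances within each MT's
# maximum allowed staleness.
from math import gcd
from functools import reduce

def candidate_instances(last_refresh, lag, count=8):
    """Times at which an MT would refresh if driven only by its own lag duration."""
    return [last_refresh + k * lag for k in range(1, count + 1)]

def aligned_instances(mts, count=8):
    """Align refreshes on multiples of the gcd of all lag durations, so several
    MTs can be refreshed in the same pass when their deadlines coincide."""
    step = reduce(gcd, (lag for _, _, lag in mts))
    start = min(last for _, last, _ in mts)
    return [start + k * step for k in range(1, count + 1)]

def schedule(mts, count=8):
    """For each MT, keep only aligned instances that do not exceed its deadline."""
    plan = {}
    for name, last_refresh, lag in mts:
        deadline = last_refresh + lag
        plan[name] = [t for t in aligned_instances(mts, count) if t <= deadline]
    return plan

# Example: lags of 60s and 90s align on 30s boundaries.
mts = [("mt_orders", 0, 60), ("mt_customers", 0, 90)]
print(schedule(mts))  # {'mt_orders': [30, 60], 'mt_customers': [30, 60, 90]}
```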

    SHARING MATERIALIZED VIEWS IN MULTIPLE TENANT DATABASE SYSTEMS

    Publication Number: US20230418818A1

    Publication Date: 2023-12-28

    Application Number: US18463904

    Application Date: 2023-09-08

    Applicant: Snowflake Inc.

    CPC classification number: G06F16/24539

    Abstract: Systems, methods, and devices for sharing materialized views in multiple tenant database systems. A method includes defining a materialized view over a source table that is associated with a first account of a multiple tenant database. The method includes defining cross-account access rights to the materialized view to a second account such that the second account can read the materialized view without copying the materialized view. The method includes modifying the source table for the materialized view. The method includes identifying whether the materialized view is stale with respect to the source table by merging the materialized view and the source table.
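
    A rough way to picture the staleness check described above (merging the view with its source table) is sketched below. The row-as-dict representation and the is_stale helper are assumptions for illustration only, not Snowflake's implementation.

```python
# Illustrative sketch only: a materialized view is stale if merging it with the
# source table on the key column reveals missing, extra, or changed rows.
def is_stale(source_rows, mv_rows, key="id"):
    """source_rows / mv_rows: lists of dicts representing table rows."""
    src = {r[key]: r for r in source_rows}
    mv = {r[key]: r for r in mv_rows}
    if src.keys() != mv.keys():               # inserts or deletes not yet reflected
        return True
    return any(src[k] != mv[k] for k in src)  # updates not yet reflected

# The second (consumer) account reads the shared view without copying it; the
# provider account can use a signal like this to decide when to refresh.
source = [{"id": 1, "total": 10}, {"id": 2, "total": 7}]
view   = [{"id": 1, "total": 10}]
print(is_stale(source, view))  # True: row 2 is missing from the view
```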

    QUERY EXECUTION USING MATERIALIZED TABLES
    Invention Publication

    Publication Number: US20230401199A1

    Publication Date: 2023-12-14

    Application Number: US18353317

    Application Date: 2023-07-17

    Applicant: Snowflake Inc.

    CPC classification number: G06F16/2393 G06F11/3419

    Abstract: A method includes retrieving a plurality of materialized tables (MTs). Each of the plurality of MTs includes a lag duration and refers to a corresponding base table of a plurality of base tables. The lag duration indicates a maximum time period that a result of a prior refresh of a query on the corresponding base table can lag behind a current time instance. A plurality of time instances for the MT is determined based on the lag duration and a number of prior refreshes of the corresponding base table. A plurality of aligned time instances for the plurality of MTs is determined based on the plurality of time instances for each of the plurality of MTs. Refresh operations are scheduled for the plurality of MTs at one or more of the plurality of aligned time instances that are within the maximum time period.

    SCALABLE QUERY PROCESSING
    Invention Application

    Publication Number: US20220414097A1

    Publication Date: 2022-12-29

    Application Number: US17823572

    Application Date: 2022-08-31

    Applicant: Snowflake Inc.

    Abstract: Embodiments of the present disclosure may provide a dynamic query execution model. This query execution model may provide acceleration by scaling out parallel parts of a query (also referred to as a fragment) to additional computing resources, for example computing resources leased from a pool of computing resources. Execution of the parts of the query may be coordinated by a parent query coordinator, where the query originated, and a fragment query coordinator.
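
    The parent/fragment coordinator split can be illustrated with a toy sketch. The class names, the thread pool standing in for compute resources leased from a pool, and the toy aggregation are assumptions, not the disclosed design.

```python
# Illustrative sketch only: a parent query coordinator hands the parallel part of
# a query (a "fragment") to a fragment coordinator, which fans it out over leased
# workers and returns partial results for final aggregation.
from concurrent.futures import ThreadPoolExecutor

def run_fragment(fragment, partition):
    """Execute one parallel piece of the query on one data partition."""
    return sum(x for x in partition if fragment["predicate"](x))

class FragmentCoordinator:
    def __init__(self, leased_workers):
        self.leased_workers = leased_workers   # stand-in for leased compute resources

    def execute(self, fragment, partitions):
        with ThreadPoolExecutor(max_workers=self.leased_workers) as pool:
            return list(pool.map(lambda p: run_fragment(fragment, p), partitions))

class ParentQueryCoordinator:
    """Where the query originated; delegates the fragment and combines results."""
    def execute(self, fragment, partitions, leased_workers=4):
        partials = FragmentCoordinator(leased_workers).execute(fragment, partitions)
        return sum(partials)  # final aggregation of the fragment's partial results

# Toy query: sum the even values across three partitions.
fragment = {"predicate": lambda x: x % 2 == 0}
partitions = [[1, 2, 3], [4, 5, 6], [7, 8, 9]]
print(ParentQueryCoordinator().execute(fragment, partitions))  # 2 + 4 + 6 + 8 = 20
```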

    Resource provisioning in database systems

    Publication Number: US11514064B2

    Publication Date: 2022-11-29

    Application Number: US17663248

    Application Date: 2022-05-13

    Applicant: Snowflake Inc.

    Abstract: Resource provisioning systems and methods are described. In an embodiment, a system includes a plurality of shared storage devices collectively storing database data, an execution platform, and a compute service manager. The compute service manager is configured to determine a task to be executed in response to a trigger event and determine a query plan for executing the task, wherein the query plan comprises a plurality of discrete subtasks. The compute service manager is further configured to assign the plurality of discrete subtasks to one or more nodes of a plurality of nodes of the execution platform, determine whether execution of the task is complete, and in response to determining the execution of the task is complete, store a record in the plurality of shared storage devices indicating the task was completed.
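
    The control flow described above (trigger event, query plan of discrete subtasks, assignment to execution-platform nodes, completion record in shared storage) is sketched below. All names and the chunk-based "plan" are assumptions for illustration, not the patented system.

```python
# Illustrative sketch only: a compute service manager reacts to a trigger event,
# splits the task into discrete subtasks, assigns them round-robin to execution
# platform nodes, and records completion in shared storage.
class ComputeServiceManager:
    def __init__(self, nodes, shared_storage):
        self.nodes = nodes               # execution platform nodes (callables here)
        self.storage = shared_storage    # stands in for the shared storage devices

    def on_trigger(self, task):
        subtasks = self.plan(task)
        results = [self.nodes[i % len(self.nodes)](s) for i, s in enumerate(subtasks)]
        if len(results) == len(subtasks):        # execution of the task is complete
            self.storage.append({"task": task["name"], "status": "completed"})
        return results

    @staticmethod
    def plan(task):
        """Query plan: break the task into discrete subtasks (here, data chunks)."""
        data = task["data"]
        return [data[i:i + 2] for i in range(0, len(data), 2)]

shared_storage = []                      # record of completed tasks
nodes = [sum, sum]                       # two trivial "nodes"
mgr = ComputeServiceManager(nodes, shared_storage)
print(mgr.on_trigger({"name": "compact_files", "data": [1, 2, 3, 4, 5]}))  # [3, 7, 5]
print(shared_storage)                    # [{'task': 'compact_files', 'status': 'completed'}]
```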

    RESOURCE PROVISIONING IN DATABASE SYSTEMS

    Publication Number: US20220269676A1

    Publication Date: 2022-08-25

    Application Number: US17663248

    Application Date: 2022-05-13

    Applicant: Snowflake Inc.

    Abstract: Resource provisioning systems and methods are described. In an embodiment, a system includes a plurality of shared storage devices collectively storing database data, an execution platform, and a compute service manager. The compute service manager is configured to determine a task to be executed in response to a trigger event and determine a query plan for executing the task, wherein the query plan comprises a plurality of discrete subtasks. The compute service manager is further configured to assign the plurality of discrete subtasks to one or more nodes of a plurality of nodes of the execution platform, determine whether execution of the task is complete, and in response to determining the execution of the task is complete, store a record in the plurality of shared storage devices indicating the task was completed.

    SCALABLE QUERY PROCESSING
    Invention Application

    Publication Number: US20220222255A1

    Publication Date: 2022-07-14

    Application Number: US17657257

    Application Date: 2022-03-30

    Applicant: Snowflake Inc.

    Abstract: Embodiments of the present disclosure may provide a dynamic query execution model. This query execution model may provide acceleration by scaling out parallel parts of a query (also referred to as a fragment) to additional computing resources, for example computing resources leased from a pool of computing resources. Execution of the parts of the query may be coordinated by a parent query coordinator, where the query originated, and a fragment query coordinator.
