Abstract:
One or more techniques and/or computing devices are provided for utilizing snapshots for data integrity validation and/or faster application recovery. For example, a first storage controller, hosting first storage, has a synchronous replication relationship with a second storage controller hosting second storage. A snapshot replication policy rule is defined to specify that a replication label is to be used for snapshot create requests, targeting the first storage, that are to be replicated to the second storage. A snapshot creation policy is created to issue snapshot create requests comprising the replication label. Thus, a snapshot of the first storage and a replication snapshot of the second storage are created based upon a snapshot create request comprising the replication label. The snapshot and the replication snapshot may be compared for data integrity validation (e.g., to determine whether the snapshots comprise the same data) and/or used to quickly recover an application after a disaster.
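As an illustration only, the following minimal Python sketch (with hypothetical names such as REPLICATION_LABEL and handle_snapshot_create) shows how a snapshot create request carrying a replication label might produce both a local snapshot and a replication snapshot, and how the two might then be compared for data integrity validation.

# Minimal sketch (hypothetical names) of using a replication label so that a
# snapshot create request produces both a local snapshot and a replication
# snapshot, which can later be compared for data integrity validation.
from dataclasses import dataclass, field

REPLICATION_LABEL = "sync-replicate"          # label named in the policy rule

@dataclass
class Storage:
    name: str
    data: dict = field(default_factory=dict)
    snapshots: dict = field(default_factory=dict)

    def create_snapshot(self, snap_name):
        # A snapshot here is simply a point-in-time copy of the data map.
        self.snapshots[snap_name] = dict(self.data)

def handle_snapshot_create(request, primary, secondary):
    """Create a snapshot of the primary storage; if the request carries the
    replication label, also create a replication snapshot on the secondary."""
    primary.create_snapshot(request["snap_name"])
    if request.get("label") == REPLICATION_LABEL:
        secondary.create_snapshot(request["snap_name"])

def validate_integrity(primary, secondary, snap_name):
    """Compare the snapshot and the replication snapshot for identical data."""
    return primary.snapshots.get(snap_name) == secondary.snapshots.get(snap_name)

first = Storage("first", {"file.txt": "hello"})
second = Storage("second", {"file.txt": "hello"})      # kept in sync elsewhere
handle_snapshot_create({"snap_name": "hourly.0", "label": REPLICATION_LABEL},
                       first, second)
print(validate_integrity(first, second, "hourly.0"))   # True when data matches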
Abstract:
One or more techniques and/or computing devices are provided for replicating virtual machine disk clones. For example, a first storage controller, hosting first storage, may have a synchronous replication relationship with a second storage controller hosting second storage. A virtual machine, within the first storage, may be specified as having synchronous replication protection. Accordingly, virtual machine disk clones of a virtual machine disk of the virtual machine may be replicated from the first storage to the second storage. For example, virtual machine disk clones may be synchronously replicated, replicated by a resync process invoked by a hypervisor agent, and/or stored in and replicated from a clone backup directory.
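The following illustrative Python sketch (paths, file suffixes, and function names are hypothetical) outlines how VM disk clones staged in a clone backup directory might be replicated to the second storage, for example as a resync step invoked by a hypervisor agent.

# Illustrative sketch (all paths and names hypothetical) of replicating virtual
# machine disk clones staged in a clone backup directory to the second storage.
import shutil
import tempfile
from pathlib import Path

def replicate_clone_backups(clone_backup_dir: Path, secondary_dir: Path) -> list:
    """Copy any VM disk clone files found in the clone backup directory to the
    secondary storage directory, returning the clones that were replicated."""
    secondary_dir.mkdir(parents=True, exist_ok=True)
    replicated = []
    for clone in clone_backup_dir.glob("*.vmdk-clone"):    # hypothetical naming
        target = secondary_dir / clone.name
        if not target.exists():                            # resync: only missing clones
            shutil.copy2(clone, target)
            replicated.append(clone.name)
    return replicated

# Example usage with temporary directories standing in for the two storages.
with tempfile.TemporaryDirectory() as tmp:
    src = Path(tmp, "first_storage", "clone_backups")
    dst = Path(tmp, "second_storage", "clone_backups")
    src.mkdir(parents=True)
    (src / "vm1-disk0.vmdk-clone").write_bytes(b"clone data")
    print(replicate_clone_backups(src, dst))   # ['vm1-disk0.vmdk-clone']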
Abstract:
One or more techniques and/or computing devices are provided for secure data replication. For example, a first storage controller (116) may host first storage (128) within which storage resources (e.g., files, logical unit numbers (LUNs), volumes, etc.) are stored. The first storage controller (116) may establish an access policy with a second storage controller (118) to which data is to be replicated from the first storage (128). The access policy may define an authentication mechanism for the first storage controller (116) to authenticate the second storage controller (118), an authorization mechanism specifying a type of access that the second storage controller (118) has for a storage resource, and an access control mechanism specifying how the second storage controller's access to data of the storage resource is to be controlled. In this way, data replication requests may be authenticated and authorized so that data may be provided, according to the access control mechanism, in a secure manner.
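A simplified Python sketch of such an access policy is shown below; the class names, the shared-key authentication, and the "replicate-read" grant are assumptions used purely to illustrate the authenticate/authorize/access-control flow.

# Simplified sketch (hypothetical classes and keys) of an access policy applied
# to replication requests: authenticate the peer controller, check its
# authorization for the storage resource, then serve data under access control.
import hashlib
import hmac

class AccessPolicy:
    def __init__(self, shared_key: bytes, grants: dict):
        self.shared_key = shared_key          # authentication mechanism (shared secret)
        self.grants = grants                  # authorization: resource -> access type

    def authenticate(self, peer_id: str, signature: str) -> bool:
        expected = hmac.new(self.shared_key, peer_id.encode(), hashlib.sha256).hexdigest()
        return hmac.compare_digest(expected, signature)

    def authorize(self, resource: str, access_type: str) -> bool:
        return self.grants.get(resource) == access_type

def handle_replication_request(policy, peer_id, signature, resource, store):
    """Serve a replication read only if the peer authenticates and is authorized."""
    if not policy.authenticate(peer_id, signature):
        raise PermissionError("authentication failed")
    if not policy.authorize(resource, "replicate-read"):
        raise PermissionError("not authorized for resource")
    # Access control mechanism: here, a read-only copy of the resource's data.
    return dict(store[resource])

key = b"secret-shared-key"
policy = AccessPolicy(key, {"lun0": "replicate-read"})
sig = hmac.new(key, b"controller-118", hashlib.sha256).hexdigest()
print(handle_replication_request(policy, "controller-118", sig, "lun0",
                                 {"lun0": {"block0": "data"}}))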
Abstract:
A storage area network (SAN)-attached storage system architecture is disclosed. The storage system provides strongly consistent distributed storage communication protocol semantics, such as SCSI target semantics. The system includes a mechanism for presenting a single distributed logical unit, comprising one or more logical sub-units, as a single logical unit of storage to a host system by associating each of the logical sub-units that make up the single distributed logical unit with a single host visible identifier that corresponds to the single distributed logical unit. The system further includes mechanisms to maintain consistent context information for each of the logical sub-units such that the logical sub-units are not visible to a host system as separate entities from the single distributed logical unit.
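The following conceptual Python sketch (a hypothetical structure, not the disclosed implementation) illustrates how logical sub-units might all be associated with one host-visible identifier so that host I/O is routed to the owning sub-unit without exposing the sub-units as separate entities.

# Conceptual sketch (hypothetical structure) of a distributed logical unit:
# every sub-unit shares the same host-visible identifier, so a host addressing
# that identifier never sees the sub-units as separate entities.
from dataclasses import dataclass

@dataclass
class LogicalSubUnit:
    node: str            # storage node owning this slice of the logical unit
    start_block: int
    end_block: int

class DistributedLogicalUnit:
    def __init__(self, host_visible_id: str, sub_units):
        self.host_visible_id = host_visible_id
        self.sub_units = sub_units            # all share the single identifier

    def route(self, block: int) -> LogicalSubUnit:
        """Map a host I/O against the single identifier to the owning sub-unit."""
        for su in self.sub_units:
            if su.start_block <= block <= su.end_block:
                return su
        raise ValueError("block outside logical unit")

lun = DistributedLogicalUnit("naa.600a0980-0001", [
    LogicalSubUnit("node-a", 0, 4095),
    LogicalSubUnit("node-b", 4096, 8191),
])
print(lun.route(5000).node)   # node-b, hidden behind one host-visible LUN id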
Abstract:
One or more techniques and/or computing devices are provided for managing an arbitrary set of storage items using a granset. For example, a storage controller may host a plurality of storage items and/or logical unit numbers (LUNs). A subset of the storage items is grouped into a consistency group. A granset is created for tracking, managing, and/or providing access to the storage items within the consistency group. For example, the granset comprises application programming interfaces (APIs) and/or properties used to provide certain levels of access to the storage items (e.g., read access, write access, no access), redirect operations to access either data of an active file system or data of a snapshot, fence certain operations (e.g., rename and delete operations), and/or other properties that apply to each storage item within the consistency group. Thus, the granset provides a persistent on-disk layout used to manage an arbitrary set of storage items.
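A rough Python sketch of a granset follows; the field names and return values are hypothetical, but they illustrate one set of properties (access level, redirect target, fenced operations) being applied uniformly to every storage item in the consistency group.

# Rough sketch (hypothetical fields) of a granset whose properties apply to
# every storage item in the consistency group: an access level, a redirect
# target (active file system vs. a snapshot), and a set of fenced operations.
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Granset:
    items: set                                  # storage items in the consistency group
    access: str = "read-write"                  # read-write, read-only, or no-access
    redirect_to_snapshot: Optional[str] = None  # None -> active file system
    fenced_ops: set = field(default_factory=lambda: {"rename", "delete"})

    def check_op(self, item: str, op: str) -> str:
        """Decide how an operation on a member item is handled."""
        if item not in self.items:
            return "not managed by this granset"
        if op in self.fenced_ops:
            return "fenced"
        if op == "write" and self.access != "read-write":
            return "denied"
        target = self.redirect_to_snapshot or "active file system"
        return f"allowed, redirected to {target}"

gs = Granset(items={"lun1", "lun2"}, access="read-only",
             redirect_to_snapshot="snap.hourly.0")
print(gs.check_op("lun1", "read"))     # allowed, redirected to snap.hourly.0
print(gs.check_op("lun2", "rename"))   # fenced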
Abstract:
One or more techniques and/or computing devices are provided for granular replication for data protection. For example, a first storage controller may host a first volume. A consistency group, comprising a subset of files, logical unit numbers, and/or other data of the first volume, is defined through a consistency group configuration. A baseline transfer, using a baseline snapshot of the first volume, is used to create a replicated consistency group within a second volume hosted by a second storage controller. In this way, an arbitrary level of granularity is used to synchronize/replicate a subset of the first volume to the second volume. If a synchronous replication relationship is specified, then one or more incremental transfers are performed and a synchronous replication engine is implemented. If an asynchronous replication relationship is specified, then snapshots are used to identify delta data of the consistency group for updating the replicated consistency group.
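The following minimal Python sketch (with a hypothetical in-memory data model) illustrates the two transfer types: a baseline transfer that copies only the consistency group's subset of the first volume, and an asynchronous incremental update that uses two snapshots to identify the delta data.

# Minimal sketch (hypothetical data model) of granular replication for a
# consistency group: a baseline transfer copies the group's current contents,
# and an asynchronous update uses two snapshots to find the delta data to send
# to the replicated consistency group.
def baseline_transfer(volume: dict, consistency_group: set) -> dict:
    """Copy only the subset of the volume named by the consistency group."""
    return {path: data for path, data in volume.items() if path in consistency_group}

def snapshot_delta(old_snap: dict, new_snap: dict, consistency_group: set) -> dict:
    """Identify changed or new items of the consistency group between snapshots."""
    return {path: data for path, data in new_snap.items()
            if path in consistency_group and old_snap.get(path) != data}

first_volume = {"db/file1": "v1", "db/file2": "v1", "other/file": "x"}
group = {"db/file1", "db/file2"}

replica = baseline_transfer(first_volume, group)          # baseline to second volume
snap0 = dict(first_volume)
first_volume["db/file2"] = "v2"                           # change on the primary
snap1 = dict(first_volume)
replica.update(snapshot_delta(snap0, snap1, group))       # incremental async update
print(replica)    # {'db/file1': 'v1', 'db/file2': 'v2'}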