Abstract:
A technique places content, such as data, of one or more data containers on the volumes of a striped volume set (SVS). The technique allows the placement of data across the volumes of the SVS to be specified as a deterministic pattern of fixed length; that is, the pattern determines the placement of the data of a data container that is striped among the volumes of the SVS. The placement pattern distributes the stripes exactly or nearly equally among the volumes and, within any local span of a small multiple of the number of volumes, distributes them nearly equally as well. The placement pattern is also substantially similar for a plurality of SVSs having different numbers of volumes.
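
As a rough illustration of how such a fixed-length placement pattern might be expressed, the sketch below maps stripe indices to volumes through a small repeating table built from rotated rounds; the table construction, function names, and parameters are illustrative assumptions rather than the specific pattern of the technique.

    # Illustrative sketch: a fixed-length, deterministic table that maps stripe
    # indices of a data container to volumes of an SVS. The concrete pattern
    # (rotated round-robin rounds) is an assumption for illustration only.

    def build_placement_table(num_volumes, pattern_length):
        """Return a fixed-length list; entry i names the volume holding stripe i."""
        table = []
        for round_no in range(pattern_length // num_volumes):
            # Each round is a rotation of the previous one, so every volume's
            # share stays equal while the starting volume of each round varies.
            table.extend((round_no + col) % num_volumes for col in range(num_volumes))
        return table

    def volume_for_stripe(table, stripe_index):
        """The pattern repeats, so a lookup reduces to a modulo into the table."""
        return table[stripe_index % len(table)]

    if __name__ == "__main__":
        table = build_placement_table(num_volumes=4, pattern_length=16)
        print(table)
        counts = {}
        for stripe in range(32):                      # stripes 0..31
            vol = volume_for_stripe(table, stripe)
            counts[vol] = counts.get(vol, 0) + 1
        print(counts)                                 # {0: 8, 1: 8, 2: 8, 3: 8}
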
Abstract:
A dynamic parity distribution system and technique distributes parity across disks of an array. The dynamic parity distribution system includes a storage operating system that integrates a file system with a RAID system. In response to a request to store (write) data on the array, the file system determines which disks contain free blocks in the next allocated stripe of the array. There may be multiple blocks within the stripe that do not contain file system data (i.e., unallocated data blocks) and that could potentially store parity. One or more of those unallocated data blocks can be arbitrarily assigned to store parity. According to the dynamic parity distribution technique, the file system determines which blocks hold parity each time there is a write request to the stripe. The technique alternatively allows the RAID system to assign a block to contain parity when each stripe is written.
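
The sketch below illustrates the idea of choosing a parity location at write time, assuming simple XOR parity, a per-stripe allocation list, and in-memory byte blocks; these details, and the choice of the first free block, are assumptions for illustration, not the patented implementation.

    # Illustrative sketch: at write time, choose any block in the stripe that
    # holds no file system data to carry parity, then compute XOR parity over
    # the data blocks. Layout, names, and the XOR scheme are assumptions.

    def write_stripe(stripe_blocks, allocated):
        """stripe_blocks: list of equal-length bytes objects, one per disk.
        allocated: list of booleans; True means the block holds file system data."""
        free = [i for i, used in enumerate(allocated) if not used]
        if not free:
            raise ValueError("no unallocated block available to hold parity")
        parity_index = free[0]              # arbitrary choice among free blocks

        block_len = len(stripe_blocks[0])
        parity = bytearray(block_len)
        for i, block in enumerate(stripe_blocks):
            if allocated[i]:                # parity covers only the data blocks
                for j in range(block_len):
                    parity[j] ^= block[j]

        stripe_blocks[parity_index] = bytes(parity)
        return parity_index                 # recorded so reads know where parity lives

    if __name__ == "__main__":
        stripe = [b"\x01\x01", b"\x02\x02", b"\x00\x00", b"\x04\x04"]
        allocated = [True, True, False, True]
        p = write_stripe(stripe, allocated)
        print("parity stored on disk", p, "=", stripe[p])   # b'\x07\x07'
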
Abstract:
Multiple domains are created for processes of a storage server. The processes are capable of execution on a plurality of processors in the storage server. The domains include a first domain, which includes multiple threads that can execute processes in the first domain in parallel, to service data access requests. A data set managed by the storage server is logically divided into multiple subsets, and each of the subsets is assigned to exactly one of the threads in the first domain for processing of data access requests directed to that subset.
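
One way the subset-to-thread assignment could look is sketched below, assuming subsets are identified by string keys that are hashed to one of the first domain's worker threads; the queue-per-thread structure, the CRC32 hash, and all names are illustrative assumptions.

    # Illustrative sketch: each subset of the data set maps to exactly one
    # thread of the first (parallel) domain, so requests touching the same
    # subset are always serviced by the same thread and never race.

    import queue
    import threading
    import zlib

    NUM_THREADS = 4

    def thread_for_subset(subset_key):
        """Deterministically map a subset key (e.g., a volume or file id) to a thread."""
        return zlib.crc32(subset_key.encode()) % NUM_THREADS

    def service_loop(tid, work_queue):
        while True:
            request = work_queue.get()
            if request is None:             # shutdown sentinel
                break
            subset_key, op = request
            print(f"thread {tid} handling {op} on {subset_key}")

    if __name__ == "__main__":
        queues = [queue.Queue() for _ in range(NUM_THREADS)]
        workers = [threading.Thread(target=service_loop, args=(t, queues[t]))
                   for t in range(NUM_THREADS)]
        for w in workers:
            w.start()
        # Dispatch: every request for a given subset lands on the same thread.
        for req in [("vol0/fileA", "READ"), ("vol1/fileB", "WRITE"), ("vol0/fileA", "WRITE")]:
            queues[thread_for_subset(req[0])].put(req)
        for q in queues:
            q.put(None)
        for w in workers:
            w.join()
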
Abstract:
A system and method provides continuous data protection using checkpoints in a write anywhere file system. During a consistency point of a write anywhere file system, freed blocks are identified and are appended to a delete log for retention. A consistency point log is updated with a new entry associated with the consistency point. If the file system needs to retrieve its state at a particular point in time, the stored blocks of the delete log may be recovered.
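
A possible shape of the bookkeeping is sketched below, assuming blocks are identified by number, the delete log retains freed blocks verbatim, and a prior state is reconstructed by merging the retained blocks back in; the classes and the recovery step are assumptions for illustration only.

    # Illustrative sketch: at each consistency point (CP), blocks freed by the
    # file system are appended to a delete log rather than discarded, and the
    # CP log records which portion of the delete log belongs to that CP.

    class CheckpointedFS:
        def __init__(self):
            self.blocks = {}        # block number -> contents currently live
            self.delete_log = []    # retained (block number, contents) pairs
            self.cp_log = []        # (cp_id, delete_log offset at that CP)
            self.cp_id = 0

        def free_block(self, blkno):
            # Instead of discarding the block, retain it in the delete log.
            self.delete_log.append((blkno, self.blocks.pop(blkno)))

        def consistency_point(self):
            self.cp_id += 1
            self.cp_log.append((self.cp_id, len(self.delete_log)))
            return self.cp_id

        def view_at(self, cp_id):
            """Reconstruct the block map as of the given consistency point."""
            offset = dict(self.cp_log)[cp_id]
            view = dict(self.blocks)
            # Blocks freed after that CP are recovered from the delete log.
            for blkno, contents in self.delete_log[offset:]:
                view[blkno] = contents
            return view

    if __name__ == "__main__":
        fs = CheckpointedFS()
        fs.blocks = {1: b"one", 2: b"two"}
        cp1 = fs.consistency_point()
        fs.free_block(2)
        fs.consistency_point()
        print(fs.view_at(cp1))      # {1: b'one', 2: b'two'} -- block 2 recovered
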
Abstract:
A technique is disclosed for restoring data of sparse volumes, in which one or more block pointers within the file system structure are marked as ABSENT, by fetching the appropriate data from an alternate location on demand. Client data access requests to the local storage system initiate a restoration of the data from a backing store as required. A demand generator can also be used to restore the data as a background process by walking through the sparse volume and restoring the data of absent blocks. A pump module is also disclosed to regulate the access of the demand generator. Once all the data has been restored, the volume contains all of its data locally and is no longer a sparse volume.
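
The sketch below illustrates the on-demand and background restore paths, assuming an ABSENT sentinel in the block pointer table, a callable backing store, and a simple per-pass fetch budget standing in for the pump module; all names and structures are illustrative assumptions.

    # Illustrative sketch: block pointers marked ABSENT are filled from a
    # backing store, either on demand during a client read or by a background
    # demand generator that walks the sparse volume under a throttle.

    ABSENT = None   # sentinel used in place of a real block pointer

    class SparseVolume:
        def __init__(self, pointers, backing_store):
            self.pointers = pointers            # index -> block contents or ABSENT
            self.backing_store = backing_store  # callable: index -> block contents

        def read_block(self, index):
            # Client-driven restore: a hit on an ABSENT pointer triggers a fetch.
            if self.pointers[index] is ABSENT:
                self.pointers[index] = self.backing_store(index)
            return self.pointers[index]

        def demand_generator(self, pump_budget):
            """Background restore: walk the volume, fetching up to pump_budget
            absent blocks per pass so client traffic is not starved."""
            fetched = 0
            for index, ptr in enumerate(self.pointers):
                if fetched >= pump_budget:
                    break
                if ptr is ABSENT:
                    self.pointers[index] = self.backing_store(index)
                    fetched += 1
            return fetched

        def is_sparse(self):
            return any(ptr is ABSENT for ptr in self.pointers)

    if __name__ == "__main__":
        origin = lambda i: f"block-{i}".encode()
        vol = SparseVolume([ABSENT] * 6, origin)
        print(vol.read_block(3))                    # on-demand restore of one block
        while vol.is_sparse():
            vol.demand_generator(pump_budget=2)     # background restore, throttled
        print(vol.pointers)                         # fully restored, no longer sparse
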
Abstract:
A semi-static distribution technique distributes parity across disks of an array. According to the technique, parity is distributed (assigned) across the disks of the array in a manner that maintains a fixed pattern of parity blocks among the stripes of the disks. When one or more disks are added to the array, the semi-static technique redistributes parity in a way that does not require recalculation of parity or moving of any data blocks. Notably, the parity information is not actually moved; the technique merely involves a change in the assignment (or reservation) for some of the parity blocks of each pre-existing disk to the newly added disk.
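
To illustrate the kind of assignment this implies, the sketch below builds a repeating parity pattern in which adding a disk only hands some assignments of each pre-existing disk to the new disk; the particular construction is an assumption chosen to show the balance property, not the pattern defined by the technique.

    # Illustrative sketch: a repeating parity-assignment pattern in which parity
    # is spread evenly over the disks, and adding a disk only reassigns some
    # blocks of each pre-existing disk to the new disk; nothing else changes.

    def build_parity_pattern(num_disks):
        """Return a repeating pattern; entry i names the disk assigned parity
        for stripe i (the pattern repeats every len(pattern) stripes)."""
        pattern = [0]                          # a single disk holds all parity
        for new_disk in range(1, num_disks):
            n = new_disk                       # number of disks before the add
            pattern = pattern * (n + 1)        # replicate so shares divide evenly
            seen = [0] * n
            for i, d in enumerate(pattern):
                seen[d] += 1
                # Every (n+1)-th assignment of each pre-existing disk is handed
                # to the newly added disk; everything else stays where it was.
                if seen[d] % (n + 1) == 0:
                    pattern[i] = new_disk
        return pattern

    def parity_disk(stripe, pattern):
        return pattern[stripe % len(pattern)]

    if __name__ == "__main__":
        p4, p5 = build_parity_pattern(4), build_parity_pattern(5)
        for p in (p4, p5):
            print([p.count(d) / len(p) for d in range(max(p) + 1)])   # equal shares
        # Growing from 4 to 5 disks: an assignment either stays put or moves to
        # the new disk (index 4); it never moves between pre-existing disks.
        print(all(parity_disk(s, p5) in (parity_disk(s, p4), 4) for s in range(1000)))
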
Abstract:
A file system layout apportions an underlying physical volume into one or more virtual volumes (vvols) of a storage system. The underlying physical volume is an aggregate comprising one or more groups of disks, such as RAID groups, of the storage system. The aggregate has its own physical volume block number (pvbn) space and maintains metadata, such as block allocation structures, within that pvbn space. Each vvol has its own virtual volume block number (vvbn) space and maintains metadata, such as block allocation structures, within that vvbn space. Notably, the block allocation structures of a vvol are sized to the vvol, and not to the underlying aggregate, thereby allowing operations that manage data served by the storage system (e.g., snapshot operations) to work efficiently over the vvols. The file system layout extends the file system layout of a conventional write anywhere file layout system implementation, yet maintains the performance properties of the conventional implementation.
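
The separation of the two block-number spaces might be pictured as in the sketch below, which assumes bitmap-style allocation structures sized to the aggregate and to each vvol respectively; the classes, fields, and mapping are illustrative assumptions, not the on-disk format.

    # Illustrative sketch: the aggregate tracks allocation in its own pvbn
    # space, while each vvol tracks allocation in its own vvbn space sized to
    # that vvol. Classes and the bitmap representation are assumptions.

    class Aggregate:
        def __init__(self, physical_blocks):
            self.pvbn_allocated = [False] * physical_blocks   # sized to the aggregate
            self.vvols = {}

        def create_vvol(self, name, vvol_blocks):
            vvol = VirtualVolume(name, vvol_blocks, self)
            self.vvols[name] = vvol
            return vvol

        def allocate_pvbn(self):
            pvbn = self.pvbn_allocated.index(False)           # first free physical block
            self.pvbn_allocated[pvbn] = True
            return pvbn

    class VirtualVolume:
        def __init__(self, name, vvol_blocks, aggregate):
            self.name = name
            self.aggregate = aggregate
            self.vvbn_allocated = [False] * vvol_blocks       # sized to the vvol only
            self.vvbn_to_pvbn = {}                            # vvol block -> physical block

        def write_block(self):
            vvbn = self.vvbn_allocated.index(False)           # free block in vvbn space
            self.vvbn_allocated[vvbn] = True
            self.vvbn_to_pvbn[vvbn] = self.aggregate.allocate_pvbn()
            return vvbn

    if __name__ == "__main__":
        aggr = Aggregate(physical_blocks=1000)
        vol = aggr.create_vvol("vol0", vvol_blocks=100)       # structures sized to 100
        print(vol.write_block(), vol.vvbn_to_pvbn)            # 0 {0: 0}
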
Abstract:
A network caching system has a multi-protocol caching filer coupled to an origin server to provide storage virtualization of data served by the filer in response to data access requests issued by multi-protocol clients over a computer network. The multi-protocol caching filer includes a file system configured to manage a sparse volume that “virtualizes” a storage space of the data to thereby provide a cache function that enables access to data by the multi-protocol clients. To that end, the caching filer further includes a multi-protocol engine configured to translate the multi-protocol client data access requests into generic file system primitive operations executable by both the caching filer and the origin server.
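
A rough sketch of the translation and cache-miss path follows, assuming two example protocols, a pair of generic READ/WRITE primitives, and a callable origin server; the primitives, names, and classes are illustrative assumptions rather than the filer's actual interface.

    # Illustrative sketch: client requests arriving over different protocols are
    # translated into generic file system primitives; on a cache miss the sparse
    # cache forwards the request to the origin server and fills the cache.

    GENERIC_READ, GENERIC_WRITE = "READ", "WRITE"

    def translate(protocol, request):
        """Map a protocol-specific request onto a generic primitive."""
        if protocol == "NFS":
            op = GENERIC_READ if request["proc"] == "READ" else GENERIC_WRITE
        elif protocol == "CIFS":
            op = GENERIC_READ if request["command"] == "SMB_READ" else GENERIC_WRITE
        else:
            raise ValueError("unsupported protocol")
        return op, request["path"]

    class CachingFiler:
        def __init__(self, origin):
            self.origin = origin        # callable: path -> data on the origin server
            self.sparse_cache = {}      # locally cached data, filled on demand

        def serve(self, protocol, request, data=None):
            op, path = translate(protocol, request)
            if op == GENERIC_WRITE:
                self.sparse_cache[path] = data
                return data
            if path not in self.sparse_cache:        # cache miss: go to origin
                self.sparse_cache[path] = self.origin(path)
            return self.sparse_cache[path]

    if __name__ == "__main__":
        filer = CachingFiler(origin=lambda p: f"origin data for {p}".encode())
        print(filer.serve("NFS", {"proc": "READ", "path": "/vol/a"}))
        print(filer.serve("CIFS", {"command": "SMB_READ", "path": "/vol/a"}))   # cache hit
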