Abstract:
A method for surveying a data storage subsystem for latent errors before a failing disk drive of the data storage subsystem fails and recovering unreadable data usable to reconstruct data of the failing disk drive. The method includes determining that a disk drive of a plurality of disk drives of the data storage subsystem meets a threshold for being identified as a failing disk drive, and prior to failure of the failing disk drive, surveying at least a portion of the data on the remaining plurality of disk drives to identify data storage areas with latent errors. The identified data storage areas may be reconstructed utilizing, at least in part, data stored on the failing disk drive.
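As a rough illustration of the survey step, the following Python sketch assumes a single-parity layout in which any one block can be rebuilt by XOR-ing the corresponding block on every other drive, so the failing drive can still contribute while it remains readable. The Drive class, the FAILURE_THRESHOLD value, and the function names are hypothetical and are not taken from the claimed system.

```python
FAILURE_THRESHOLD = 50   # hypothetical error count that marks a drive "failing"


class Drive:
    """Toy drive: a list of blocks where None marks an unreadable (latent) sector."""

    def __init__(self, blocks, error_count=0):
        self.blocks = blocks
        self.error_count = error_count

    def read_block(self, i):
        return self.blocks[i]


def xor_reconstruct(index, bad_drive, drives, block_size=4):
    """Rebuild one block from the XOR of the same block on every other drive
    (single-parity assumption, one unreadable block per stripe)."""
    rebuilt = bytes(block_size)
    for d in drives:
        if d is bad_drive:
            continue
        rebuilt = bytes(a ^ b for a, b in zip(rebuilt, d.read_block(index)))
    return rebuilt


def survey_before_failure(drives):
    """Once a drive crosses the failure threshold, scan the *other* drives for
    latent errors and repair them before the failing drive is actually lost."""
    failing = next((d for d in drives if d.error_count >= FAILURE_THRESHOLD), None)
    if failing is None:
        return
    for drive in drives:
        if drive is failing:
            continue
        for index in range(len(drive.blocks)):
            if drive.read_block(index) is None:          # latent error found
                drive.blocks[index] = xor_reconstruct(index, drive, drives)
```

The point of running the scan at this moment is that the failing drive's readable data still participates in the XOR; once it is gone, a latent error on a peer would leave the stripe unrecoverable.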
Abstract:
A disk drive system and method capable of dynamically allocating data is provided. The disk drive system may include a RAID subsystem having a pool of storage, for example a page pool of storage that maintains a free list of RAIDs, or a matrix of disk storage blocks that maintains a null list of RAIDs, and a disk manager having at least one disk storage system controller. The RAID subsystem and disk manager dynamically allocate data across the pool of storage and a plurality of disk drives based on RAID-to-disk mapping. The RAID subsystem and disk manager determine whether additional disk drives are required, and a notification is sent if the additional disk drives are required. Dynamic data allocation and data progression allow a user to acquire a disk drive later in time when it is needed. Dynamic data allocation also allows efficient data storage of snapshots/point-in-time copies of a virtual volume pool of storage, instant data replay and data instant fusion for data backup and recovery, remote data storage, and data progression.
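The page-pool behavior might be pictured roughly as below: pages are handed out only when a volume actually writes, and a notification is raised when the free list runs low. The PagePool class, its low-water mark, and notify_add_disks are illustrative assumptions, not the product's API.

```python
class PagePool:
    """A pool of fixed-size storage pages backed by RAID devices, with a free
    list that shrinks as volumes dynamically allocate pages on demand."""

    LOW_WATER_MARK = 16          # hypothetical threshold for "add more disks"

    def __init__(self, raid_devices, pages_per_device):
        # free list of (RAID device, page index) pairs, i.e. the RAID-to-disk mapping
        self.free_list = [(dev, page)
                          for dev in raid_devices
                          for page in range(pages_per_device)]

    def allocate_page(self):
        """Hand out a page only when a volume actually writes to it."""
        if not self.free_list:
            raise RuntimeError("pool exhausted")
        if len(self.free_list) <= self.LOW_WATER_MARK:
            self.notify_add_disks()
        return self.free_list.pop()

    def notify_add_disks(self):
        # in a real system this would alert the administrator
        print("warning: free pages low, additional disk drives are required")
```

Because pages are consumed only as data is written, capacity can be purchased later, when the low-water notification fires, rather than provisioned up front.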
Abstract:
A method for handling input/output (I/O) in a data storage system comprising a RAID subsystem storing data according to a RAID level utilizing a parity scheme, where RAID stripes have been configured across a plurality of data storage devices. The method may include monitoring write requests to the RAID subsystem, identifying write requests destined for the same RAID stripe, and bundling the identified write requests for substantially simultaneous execution at the corresponding RAID stripe. Monitoring write requests to the RAID subsystem may include delaying at least some of the write requests to the RAID subsystem so as to build up a queue of write requests. In some embodiments, identifying write requests and bundling the identified write requests may include identifying and bundling a number of write requests as required to perform a full stripe write to the corresponding RAID stripe.
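One way to picture the bundling step is the sketch below, which queues incoming writes per stripe and flushes a stripe once enough writes have accumulated for a full-stripe write. CHUNK_SIZE, DATA_DRIVES, and the WriteBundler names are assumptions for illustration only.

```python
from collections import defaultdict

CHUNK_SIZE = 64 * 1024        # bytes per drive per stripe (assumption)
DATA_DRIVES = 4               # data chunks per full stripe (assumption)


def stripe_of(lba):
    """Map a logical block address to its RAID stripe number."""
    return lba // (CHUNK_SIZE * DATA_DRIVES)


class WriteBundler:
    def __init__(self, submit_stripe_write):
        self.pending = defaultdict(list)          # stripe number -> queued writes
        self.submit = submit_stripe_write         # callback that issues the I/O

    def queue(self, lba, data):
        """Delay the write briefly so other writes to the same stripe can
        accumulate into a single full-stripe write."""
        s = stripe_of(lba)
        self.pending[s].append((lba, data))
        if len(self.pending[s]) >= DATA_DRIVES:   # enough for a full stripe
            self.submit(s, self.pending.pop(s))   # one parity pass, one write

    def flush(self):
        """Issue whatever partial bundles remain (read-modify-write path)."""
        for s, writes in self.pending.items():
            self.submit(s, writes)
        self.pending.clear()
```

The design trade-off is the one the abstract describes: a short, deliberate delay in exchange for replacing several partial-stripe (read-modify-write) operations with one full-stripe write and a single parity calculation.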
Abstract:
A method for confirming replicated data at a data site, including computing, utilizing a hash function, a first hash value based on first data at a first data site, and computing, utilizing the same hash function, a second hash value based on second data at a second data site, wherein the first data had previously been replicated from the first data site to the second data site as the second data. The method also includes comparing the first and second hash values to determine whether the second data is a valid replication of the first data. In additional embodiments, the first data may be modified based on seed data prior to computing the first hash value and the second data may be modified based on the same seed data prior to computing the second hash value. The process can be repeated to increase reliability of the results.
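A minimal sketch of the seeded-hash comparison follows, assuming SHA-256 as the shared hash function and prepending the seed as the "modification" step; both choices are illustrative and not specified by the abstract.

```python
import hashlib
import os


def seeded_hash(data: bytes, seed: bytes) -> bytes:
    """Modify the data with the seed, then hash it with the shared function."""
    return hashlib.sha256(seed + data).digest()


def replication_is_valid(primary_data: bytes, replica_data: bytes,
                         rounds: int = 3) -> bool:
    """Both copies are hashed with the same seed and the digests compared;
    repeating the check with fresh seeds increases confidence in the result."""
    for _ in range(rounds):
        seed = os.urandom(16)                     # same seed shared by both sites
        if seeded_hash(primary_data, seed) != seeded_hash(replica_data, seed):
            return False                          # replica does not match source
    return True
```

In the system described, the two hashes would be computed at the two sites and only the digests would cross the network; this single-process sketch only shows the comparison logic.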
Abstract:
A method for replicating data between two or more network-connected data storage devices, the method including dynamically determining whether to compress data prior to transmitting it across the network based, at least in part, on bandwidth throughput between the network-connected data storage devices. If it has been determined to compress the data, the method involves compressing the data and transmitting the compressed data over the network. If it has been determined not to compress the data, the method involves transmitting the data, uncompressed, over the network. Dynamically determining whether to compress data may include comparing bandwidth measurements with a predetermined policy defining when compression should be utilized. In some embodiments, the policy may define that compression should be utilized when an estimated time for compressing the data and transmitting the compressed data is less than an estimated time for transmitting the data uncompressed.
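The compress-or-send-raw decision could look roughly like the following, where the link rate, compression rate, and expected compression ratio are hypothetical measured values and zlib stands in for whatever compressor the system actually uses.

```python
import zlib


def should_compress(size_bytes, link_bytes_per_s, compress_bytes_per_s,
                    expected_ratio):
    """Policy from the abstract: compress only when compressing plus sending the
    smaller payload is expected to be faster than sending the data uncompressed."""
    raw_time = size_bytes / link_bytes_per_s
    compressed_time = (size_bytes / compress_bytes_per_s
                       + (size_bytes * expected_ratio) / link_bytes_per_s)
    return compressed_time < raw_time


def prepare_payload(data: bytes, link_bytes_per_s: float) -> bytes:
    COMPRESS_RATE = 200e6        # hypothetical bytes/s this host can compress
    EXPECTED_RATIO = 0.5         # hypothetical output/input size for this data
    if should_compress(len(data), link_bytes_per_s, COMPRESS_RATE, EXPECTED_RATIO):
        return zlib.compress(data)
    return data                  # fast link: send uncompressed
```

On a fast local link the estimated compressed path loses, so data goes out raw; on a slow WAN link the reduced transfer time outweighs the compression cost, matching the policy described above.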
Abstract:
A method for managing input/output (I/O) traffic in an information handling system. The method may include receiving electronic I/O requests from a network-attached server, determining a queue depth limit, monitoring latency of processed electronic I/O requests, and processing received electronic I/O requests. The number of electronic I/O requests permitted to be processed over a period of time may be based on a mathematical combination of the queue depth limit and a latency of processed electronic I/O requests. The determined queue depth limit may be a fractional value.
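A minimal sketch of how a fractional queue depth limit might be combined with observed latency is shown below; the target latency and the adjustment rule are assumptions chosen only to illustrate the idea.

```python
class IoThrottle:
    TARGET_LATENCY_MS = 5.0      # hypothetical latency goal

    def __init__(self, queue_depth_limit=8.0):
        self.limit = queue_depth_limit      # may be a fractional value
        self.in_flight = 0

    def can_submit(self):
        """Admit a request only while the in-flight count is under the limit."""
        return self.in_flight < self.limit

    def submit(self):
        self.in_flight += 1

    def complete(self, latency_ms):
        """On completion, combine the limit with observed latency: shrink it
        when latency exceeds the target, grow it fractionally when there is
        headroom, so throughput over time tracks both quantities."""
        self.in_flight -= 1
        if latency_ms > self.TARGET_LATENCY_MS:
            self.limit = max(1.0, self.limit * 0.9)
        else:
            self.limit = self.limit + 0.25
```

Allowing the limit to be fractional lets the controller make adjustments finer than one whole outstanding request, which matters when the workload is sensitive to small changes in queue depth.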
Abstract:
A process of determining explicitly free data space in computer data storage systems with implicitly allocated data space, through the use of information provided by a hosting computer system with knowledge of what allocated space is currently in use at the time of a query, is provided. In one embodiment, a File System ("FS") is asked to identify clusters no longer in use, which are then mapped to physical disks as visible to an Operating System ("OS"). The physical disks are mapped to simulated/virtualized volumes presented by a storage subsystem. By using server information regarding the FS, for those pages that are no longer in use, point-in-time copy ("PITC") pages are marked for future PITCs and will not be coalesced forward, thereby saving significant storage.
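The reclamation flow might be summarized as in the sketch below; the fs, volume, and pitc objects and their methods (free_clusters, cluster_to_lba, lba_to_page, mark_not_in_use) are hypothetical stand-ins for the host and storage-subsystem interfaces, not an actual API.

```python
def reclaim_unused_pages(fs, volume, pitc):
    """Ask the host file system which clusters are free, translate each cluster
    down to the storage subsystem's virtualized page, and mark that PITC page
    so it is not coalesced forward into the next point-in-time copy."""
    for cluster in fs.free_clusters():        # host-side knowledge of unused space
        lba = fs.cluster_to_lba(cluster)      # cluster -> OS-visible disk block
        page = volume.lba_to_page(lba)        # disk block -> virtualized page
        if page is not None:
            pitc.mark_not_in_use(page)        # skipped during forward coalescing
```

The key point is the direction of the mapping: only the host knows which clusters the FS has released, so that knowledge is pushed down through the OS-visible disk layout to the virtualized pages the storage subsystem actually manages.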