Abstract:
A technique for allocating, from a plurality of write caches, a write cache allowed data size to each of a plurality of storage volumes; calculating a write cache utilization of the write cache for each of the respective storage volumes, wherein the write cache utilization is based on the write cache dirty data size of the write cache allocated to the respective storage volume divided by the write cache allowed data size of the write cache allocated to the respective storage volume; and adjusting the write cache allowed data size of the write cache allocated to the storage volumes based on the write cache utilization of the write cache of those storage volumes.
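The utilization ratio and the adjustment step can be pictured with a short sketch. The rebalancing rule below (grow a heavily used allowance, shrink a lightly used one) and the 0.8/0.2 cut-offs are illustrative assumptions, not the claimed method; each volume record is assumed to carry 'dirty' and 'allowed' sizes.

def write_cache_utilization(dirty_size, allowed_size):
    """Utilization = write cache dirty data size / write cache allowed data size."""
    return dirty_size / allowed_size if allowed_size else 0.0

def rebalance_allowed_sizes(volumes, step=0.1):
    """Adjust each volume's allowed data size based on its current utilization.

    `volumes` maps a volume id to a dict with 'dirty' and 'allowed' byte counts.
    """
    for vol in volumes.values():
        util = write_cache_utilization(vol["dirty"], vol["allowed"])
        if util > 0.8:                      # heavily used: grow the allowance
            vol["allowed"] = int(vol["allowed"] * (1 + step))
        elif util < 0.2:                    # lightly used: shrink the allowance
            vol["allowed"] = int(vol["allowed"] * (1 - step))
    return volumes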
Abstract:
Embodiments herein relate to selecting an accelerated path based on a number of write requests and a sequential trend. One of an accelerated path and a cache path is selected between a host and a storage device based on at least one of a number of write requests and a sequential trend. The cache path connects the host to the storage device via a cache. The number of write requests is based on a total number of random and sequential write requests from a set of outstanding requests from the host to the storage device. The sequential trend is based on a percentage of sequential read and sequential write requests from the set of outstanding requests.
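A minimal sketch of the selection rule follows; the thresholds WRITE_COUNT_LIMIT and SEQUENTIAL_TREND_LIMIT and the exact comparisons are assumptions for illustration, and each outstanding request is assumed to carry an operation type and a sequential flag.

WRITE_COUNT_LIMIT = 32          # assumed threshold on outstanding write requests
SEQUENTIAL_TREND_LIMIT = 50.0   # assumed threshold on the sequential percentage

def select_path(outstanding):
    """Return 'accelerated' or 'cache' for a set of outstanding requests.

    Each request is a dict with 'op' in {'read', 'write'} and a boolean
    'sequential' flag.
    """
    writes = sum(1 for r in outstanding if r["op"] == "write")
    sequential = sum(1 for r in outstanding if r["sequential"])
    trend = 100.0 * sequential / len(outstanding) if outstanding else 0.0
    if writes < WRITE_COUNT_LIMIT and trend < SEQUENTIAL_TREND_LIMIT:
        return "accelerated"    # bypass the cache and go straight to the device
    return "cache"              # route the request through the cache path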
Abstract:
A method that includes identifying an inaccessible portion of a first disk drive. The method also includes regenerating data corresponding to the inaccessible portion of the first disk drive and storing the regenerated data to a second disk drive. The method also includes copying data from an accessible portion of the first disk drive to the second disk drive.
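One way to picture the method is the loop below. The callables read_extent, regenerate, and write_extent are hypothetical stand-ins for drive I/O and data regeneration (for example from parity on peer drives); only the ordering of the steps follows the abstract.

def rebuild_drive(read_extent, regenerate, write_extent, extents):
    """Copy accessible extents of the first drive to the second drive and
    regenerate the data for inaccessible extents.

    `read_extent(extent)` reads the first drive and raises IOError when the
    extent is inaccessible; `regenerate(extent)` rebuilds that data;
    `write_extent(extent, data)` writes the result to the second drive.
    """
    for extent in extents:
        try:
            data = read_extent(extent)       # accessible portion: straight copy
        except IOError:
            data = regenerate(extent)        # inaccessible portion: regenerate
        write_extent(extent, data)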
Abstract:
A storage controller to receive, from a host, multiple read requests to read sets of data blocks from a data storage device. The storage controller determines whether the read requests include non-continuous addresses associated with a set of non-requested data blocks between the sets of requested data blocks, and whether the gap of non-requested data blocks is less than a pre-defined threshold. If the read requests have non-continuous addresses and the gap of non-requested data blocks is less than the pre-defined threshold, the storage controller generates a single read request to retrieve the non-requested and requested data blocks from the storage device, writes the retrieved requested data blocks directly to a host buffer, and writes the retrieved non-requested data blocks to a cache memory on the storage controller.
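The gap test can be sketched as follows; requests are assumed to be (start_lba, block_count) pairs and GAP_THRESHOLD is an assumed tunable. Dispatching the requested blocks to the host buffer and the gap blocks to the controller cache is left to the caller.

GAP_THRESHOLD = 8   # assumed maximum number of non-requested blocks to fetch

def coalesce_reads(first, second):
    """Merge two read requests into one when the gap between them is small.

    Returns a single (start_lba, block_count) read covering both requests and
    the gap, or None when the addresses are continuous or the gap is too large.
    """
    start1, count1 = first
    start2, count2 = second
    gap = start2 - (start1 + count1)
    if 0 < gap < GAP_THRESHOLD:
        return (start1, count1 + gap + count2)
    return None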
Abstract:
Embodiments herein relate to sending a request to a storage device based on a moving average. A threshold is determined based on a storage device type and a bandwidth of a cache bus connecting a cache to a controller. A moving average of throughput between the storage device and a host is measured. A request from the host to access the storage device is sent directly to the storage device if the moving average is equal to the threshold.
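A minimal sketch, assuming an exponential moving average, assumed per-device-type limits, and a meets-or-exceeds comparison against the threshold.

DEVICE_LIMITS = {"ssd": 500.0, "hdd": 150.0}   # assumed MB/s limits per device type

def threshold(device_type, cache_bus_bandwidth):
    """Threshold derived from the device type and the cache bus bandwidth."""
    return min(DEVICE_LIMITS[device_type], cache_bus_bandwidth)

class ThroughputMonitor:
    """Tracks a moving average of host-to-device throughput."""

    def __init__(self, alpha=0.2):
        self.alpha = alpha
        self.average = 0.0

    def record(self, sample_mb_s):
        # Fold a new throughput sample into the moving average.
        self.average = self.alpha * sample_mb_s + (1 - self.alpha) * self.average
        return self.average

def route(monitor, device_type, cache_bus_bandwidth):
    """Send requests directly to the device once the average reaches the threshold."""
    if monitor.average >= threshold(device_type, cache_bus_bandwidth):
        return "direct"
    return "cache"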
Abstract:
A storage management module configured to identify storage volumes to be rebuilt and remaining storage volumes that are not to be rebuilt, calculate rebuild priority information for the identified storage volumes based on storage information of those volumes, and generate rebuild requests to rebuild the identified storage volumes while processing host requests directed to the remaining and to-be-rebuilt storage volumes based on the rebuild priority information and the amount of host requests, wherein, with a relatively high amount of host requests, relatively fewer rebuild requests are generated, but not fewer than a minimum rebuild traffic percentage or more than a maximum rebuild traffic percentage.
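The throttling rule at the end of the abstract can be sketched as a clamp; the inverse relationship and the particular minimum and maximum percentages below are assumptions for illustration.

MIN_REBUILD_PCT = 10.0   # assumed minimum rebuild traffic percentage
MAX_REBUILD_PCT = 60.0   # assumed maximum rebuild traffic percentage

def rebuild_traffic_pct(host_load_pct):
    """More host requests -> fewer rebuild requests, clamped to [MIN, MAX]."""
    raw = 100.0 - host_load_pct              # inverse of the host request load
    return max(MIN_REBUILD_PCT, min(MAX_REBUILD_PCT, raw))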
Abstract:
Disclosed herein are a system, non-transitory computer readable medium, and method to reduce input and output transactions. It is determined whether a first set of dirty data, a second set of dirty data, and a number of data blocks therebetween can be flushed with one transaction.
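A minimal sketch of that determination, assuming each dirty set is a (start_block, block_count) range on the same device and that a single transaction can cover at most MAX_TRANSACTION_BLOCKS (an assumed limit).

MAX_TRANSACTION_BLOCKS = 256   # assumed size limit for one flush transaction

def can_flush_together(first, second):
    """True if both dirty sets and the blocks between them fit in one flush."""
    start1, count1 = first
    start2, count2 = second
    gap = start2 - (start1 + count1)          # blocks between the two dirty sets
    return gap >= 0 and (count1 + gap + count2) <= MAX_TRANSACTION_BLOCKS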
Abstract:
A technique for cache node processing that includes generating a cache node in response to a request to write data to storage devices. If the logical block address (LBA) of the generated cache node is adjacent to the LBA of cache nodes in a cache node list, then check whether there are cache nodes that are sequential up to a predefined boundary. If there are cache nodes that are sequential up to the predefined boundary, then flush the data of the sequential cache nodes together as a group up to the predefined boundary.
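A minimal sketch of the grouping step; cache nodes are assumed to be dicts with 'lba' and 'blocks' fields, flush is an assumed callable that writes a group to the storage devices, and the boundary value is illustrative (for example, a full stripe).

def maybe_flush_sequential(nodes, new_node, flush, boundary_blocks=1024):
    """Flush a run of LBA-sequential cache nodes as one group at the boundary.

    `nodes` is the existing cache node list; `new_node` is the node generated
    for the incoming write request.
    """
    ordered = sorted(nodes + [new_node], key=lambda n: n["lba"])
    run, span = [], 0
    for node in ordered:
        if run and node["lba"] != run[-1]["lba"] + run[-1]["blocks"]:
            run, span = [], 0                 # sequence broken: start a new run
        run.append(node)
        span += node["blocks"]
        if span >= boundary_blocks:
            flush(run)                        # flush the sequential group together
            run, span = [], 0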