Abstract:
Implementations described and claimed herein provide a system and methods for prioritizing data in a cache. In one implementation, a priority level, such as critical, high, or normal, is assigned to cached data. The priority level dictates how long the data is cached and, consequently, the order in which the data is evicted from the cache memory. Data assigned a priority level of critical will remain resident in cache memory unless heavy memory pressure causes the system to reclaim memory and all data assigned a priority level of high or normal has been evicted. High priority data is cached longer than normal priority data, with normal priority data being evicted first. Accordingly, important data assigned a priority level of critical, such as a deduplication table, is kept resident in cache memory at the expense of other data, regardless of the frequency or recency of use of that data.
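As a minimal illustration of such a policy (the PriorityCache class, the Priority levels, and the capacity-driven eviction loop below are illustrative assumptions, not the claimed implementation), eviction can pick victims from the lowest priority level first and touch critical entries only when nothing of lower priority remains:

    from enum import IntEnum
    import time

    class Priority(IntEnum):
        NORMAL = 0
        HIGH = 1
        CRITICAL = 2

    class PriorityCache:
        """Evicts normal-priority entries first, then high; critical entries
        are reclaimed only once no lower-priority data remains."""
        def __init__(self, capacity):
            self.capacity = capacity
            self.entries = {}  # key -> (priority, last_access, value)

        def put(self, key, value, priority=Priority.NORMAL):
            while len(self.entries) >= self.capacity:
                self._evict_one()
            self.entries[key] = (priority, time.monotonic(), value)

        def get(self, key):
            priority, _, value = self.entries[key]
            self.entries[key] = (priority, time.monotonic(), value)  # refresh recency
            return value

        def _evict_one(self):
            # Victim: lowest priority first, oldest access time as tie-breaker.
            victim = min(self.entries,
                         key=lambda k: (self.entries[k][0], self.entries[k][1]))
            del self.entries[victim]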
Abstract:
Aspects of the present disclosure involve a level-two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. In particular, any data that is scheduled to be evicted or otherwise removed from a level-one cache is stored in the level-two cache with corresponding metadata in a manner that is quickly retrievable.
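One way to picture this (a sketch under assumptions; the append-only SSD-backed file, the in-memory offset index, and the class and method names are not from the disclosure) is to append each evicted block to a file on the solid-state drive and keep lightweight metadata that makes a later lookup fast:

    import os

    class LevelTwoCache:
        """Data evicted from the level-one cache is appended to an SSD-backed
        file; a small in-memory index records where each block lives so it
        can be retrieved quickly on a later miss."""
        def __init__(self, path):
            self.index = {}  # key -> (offset, length) metadata for fast retrieval
            self.f = open(path, "a+b")

        def store_evicted(self, key, data):
            self.f.seek(0, os.SEEK_END)
            offset = self.f.tell()
            self.f.write(data)
            self.f.flush()
            self.index[key] = (offset, len(data))

        def read(self, key):
            offset, length = self.index[key]
            self.f.seek(offset)
            return self.f.read(length)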
Abstract:
Implementations described and claimed herein provide coordination of interdependent asynchronous reads. In one implementation, an input/output request for a target data block stored on a block device at a virtual address is received. The highest level indirect block on which the target data block depends in a hierarchical data structure pointing to the virtual address of the target data block is identified. The highest level indirect block is uncached. A context item is recorded to an input/output structure for the highest level indirect block. The context item indicates that the ultimate objective of a read request for the highest level indirect block is to retrieve the target data block. The input/output request for the target data block is asynchronously reissued upon completion of the read request for the highest level indirect block.
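A compact sketch of that coordination follows (the function, its arguments, and the dictionary used as the context item are illustrative assumptions): the read of the uncached indirect block carries a note of the ultimate objective, and the original request is reissued once that read finishes.

    async def read_with_indirect(target_addr, indirect_addr, cache, read_block):
        """If the highest level indirect block pointing at the target is
        uncached, read it first, record the target address as a context item
        on that read, and asynchronously reissue the original request for the
        target once the indirect block has been received."""
        if indirect_addr in cache:
            return await read_block(target_addr)
        context = {"objective": target_addr}  # context item for the indirect I/O
        cache[indirect_addr] = await read_block(indirect_addr)
        # The indirect block is now cached; reissue the request noted in the context.
        return await read_block(context["objective"])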
Abstract:
Aspects of the present disclosure disclose systems and methods for managing a level-two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. In particular, any data that is scheduled to be evicted or otherwise removed from a level-one cache is stored in the level-two cache with corresponding metadata in a manner that is quickly retrievable. The data contained within the level-two cache is managed using a cache list that maintains data chunk entries added to the level-two cache based on the temporal access of each data chunk.
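For instance, a cache list ordered by temporal access might look like the following sketch (the OrderedDict representation and the fixed entry limit are assumptions made for illustration):

    from collections import OrderedDict

    class L2CacheList:
        """Chunk entries are kept in order of temporal access: the most
        recently accessed entries sit at one end, and the least recently
        accessed entry is dropped when the list is full."""
        def __init__(self, max_entries):
            self.max_entries = max_entries
            self.chunks = OrderedDict()  # chunk_id -> metadata

        def add(self, chunk_id, metadata):
            if chunk_id in self.chunks:
                self.chunks.move_to_end(chunk_id)
            self.chunks[chunk_id] = metadata
            if len(self.chunks) > self.max_entries:
                self.chunks.popitem(last=False)  # drop least recently accessed

        def access(self, chunk_id):
            self.chunks.move_to_end(chunk_id)  # record the temporal access
            return self.chunks[chunk_id]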
Abstract:
Implementations described and claimed herein provide systems and methods for allocating and managing resources for a deduplication table. In one implementation, an upper limit to an amount of memory allocated to a deduplication table is established. The deduplication table has one or more checksum entries, and each checksum entry associates a checksum with unique data. A new checksum entry corresponding to new unique data is prevented from being added to the deduplication table where adding the new checksum entry would cause the deduplication table to exceed the size limit. The new unique data has a checksum that is different from the checksums in the one or more checksum entries in the deduplication table.
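The core rule can be sketched as follows (the SHA-256 checksum, the sys.getsizeof accounting, and the class name are illustrative assumptions standing in for whatever checksum and memory accounting an implementation uses):

    import hashlib
    import sys

    class DedupTable:
        """A new checksum entry is added only if doing so keeps the table
        within its established memory limit."""
        def __init__(self, max_bytes):
            self.max_bytes = max_bytes
            self.entries = {}  # checksum -> block address

        def try_add(self, data, address):
            checksum = hashlib.sha256(data).hexdigest()
            if checksum in self.entries:
                return True  # duplicate data: entry already present
            entry_cost = sys.getsizeof(checksum) + sys.getsizeof(address)
            if self._current_size() + entry_cost > self.max_bytes:
                return False  # refuse new entries that would exceed the size limit
            self.entries[checksum] = address
            return True

        def _current_size(self):
            # Rough accounting of the memory consumed by the table's entries.
            return sum(sys.getsizeof(k) + sys.getsizeof(v)
                       for k, v in self.entries.items())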
Abstract:
Aspects of the present disclosure disclose systems and methods for managing a level-two persistent cache. In various aspects, a solid-state drive is employed as a level-two cache to expand the capacity of existing caches. Any data stored in the level-two cache may be stored in a particular version or format known as “raw” data, in contrast to the “cooked” version typically stored in a level-one cache.
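Under one common reading, in which “raw” means the on-disk (for example, compressed) bytes and “cooked” means the expanded in-memory form, the split can be sketched as below; the zlib compression and the class layout are assumptions for illustration only:

    import zlib

    class TwoLevelCache:
        """The level-one cache holds cooked (expanded) data; the level-two
        cache keeps raw bytes and cooks them only on a level-two hit."""
        def __init__(self):
            self.l1 = {}  # key -> cooked bytes
            self.l2 = {}  # key -> raw bytes

        def put(self, key, cooked):
            self.l1[key] = cooked

        def demote(self, key):
            # On level-one eviction, keep the raw form in the level-two cache.
            self.l2[key] = zlib.compress(self.l1.pop(key))

        def get(self, key):
            if key in self.l1:
                return self.l1[key]
            self.l1[key] = zlib.decompress(self.l2.pop(key))  # cook on an L2 hit
            return self.l1[key]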
Abstract:
Techniques described herein relate to systems and methods of data storage, and more particularly to providing layering of file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected from legacy applications. For instance, POSIX interfaces and semantics may be layered on cloud-based storage, while providing access to data in a manner consistent with file-based access with data organization in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes.
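A toy sketch of that layering idea (the flat dictionary standing in for the cloud object interface and the path-to-key mapping are assumptions, not the disclosed design) maps hierarchical POSIX-style paths onto object keys so file-based access keeps working:

    class ObjectStore:
        """Stand-in for a cloud object interface: a flat namespace of key -> bytes."""
        def __init__(self):
            self.objects = {}

        def put(self, key, data):
            self.objects[key] = data

        def get(self, key):
            return self.objects[key]

    class FileLayer:
        """Hierarchical POSIX-style paths are mapped to object keys so legacy
        file-based access works while the data lives in the object store."""
        def __init__(self, store):
            self.store = store

        def write_file(self, path, data):
            self.store.put(path.lstrip("/"), data)  # "/a/b/c" -> object key "a/b/c"

        def read_file(self, path):
            return self.store.get(path.lstrip("/"))

        def list_dir(self, path):
            prefix = path.strip("/") + "/"
            return sorted({key[len(prefix):].split("/")[0]
                           for key in self.store.objects if key.startswith(prefix)})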
Abstract:
Techniques described herein relate to systems and methods of data storage, and more particularly to providing layering of file system functionality on an object interface. In certain embodiments, file system functionality may be layered on cloud object interfaces to provide cloud-based storage while allowing for functionality expected from legacy applications. For instance, POSIX interfaces and semantics may be layered on cloud-based storage, while providing access to data in a manner consistent with file-based access with data organization in name hierarchies. Various embodiments also may provide for memory mapping of data so that memory map changes are reflected in persistent storage while ensuring consistency between memory map changes and writes. For example, by transforming a ZFS file system disk-based storage into ZFS cloud-based storage, the ZFS file system gains the elastic nature of cloud storage.
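The memory-mapping aspect can be pictured with the sketch below, in which changes made through a mapping are tracked as dirty pages and pushed to the backing object on sync; the page size, the whole-object flush, and the dictionary standing in for cloud storage are illustrative assumptions:

    class MappedRegion:
        """Updates made through the mapping are tracked as dirty pages and
        written back to the backing object on sync(), so memory map changes
        and ordinary writes converge on one persistent view."""
        PAGE = 4096

        def __init__(self, object_store, key, size):
            self.object_store, self.key = object_store, key
            self.buf = bytearray(object_store.get(key, b"\x00" * size))
            self.dirty_pages = set()

        def write(self, offset, data):
            self.buf[offset:offset + len(data)] = data
            first = offset // self.PAGE
            last = (offset + len(data) - 1) // self.PAGE
            self.dirty_pages.update(range(first, last + 1))

        def sync(self):
            if self.dirty_pages:  # reflect mapped changes in persistent storage
                self.object_store[self.key] = bytes(self.buf)
                self.dirty_pages.clear()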