Abstract:
Systems and techniques for cache management are disclosed that provide improved cache performance by prioritizing particular storage stripes for cache flush operations. The systems and techniques may also leverage features of the storage devices to provide atomicity without the overhead of inter-controller mirroring. In some embodiments, the systems and techniques include a storage controller that stores data in a cache. The data is associated with one or more sectors of a storage stripe that is defined over a plurality of storage devices. The storage controller identifies a locality of dirty sectors of the one or more sectors, classifies the storage stripe into a category based on the locality, provides a category ordering of the category relative to at least one other category, and flushes the storage stripe from the cache to the plurality of storage devices according to the category ordering.
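Purely as an illustrative sketch (not part of the abstract), the following Python models the locality-based classification and category-ordered flush described above; the Category names, the Stripe class, and the per-sector dirty-bitmap representation are assumptions made for illustration, not the claimed implementation.

    from dataclasses import dataclass
    from enum import IntEnum

    class Category(IntEnum):
        # Hypothetical categories ordered by flush priority: fully dirty stripes
        # first (eligible for a full-stripe write), then one contiguous dirty run,
        # then scattered dirty sectors.
        FULL = 0
        SEQUENTIAL = 1
        SPARSE = 2

    @dataclass
    class Stripe:
        stripe_id: int
        dirty: list          # per-sector dirty flags for this stripe

        def classify(self) -> Category:
            if not any(self.dirty):
                return Category.SPARSE      # nothing dirty; lowest priority
            if all(self.dirty):
                return Category.FULL        # every sector dirty: full-stripe write
            first = self.dirty.index(True)
            last = len(self.dirty) - 1 - self.dirty[::-1].index(True)
            if all(self.dirty[first:last + 1]):
                return Category.SEQUENTIAL  # one contiguous run of dirty sectors
            return Category.SPARSE          # scattered dirty sectors

    def flush_order(cached_stripes):
        # Order stripes by category so the most efficient flushes are issued first.
        return sorted(cached_stripes, key=lambda s: s.classify())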
Abstract:
A method for migration of operations between CPU cores includes: processing, by a source core, one or more tasks and one or more interrupt service routines; accessing a mapping corresponding to a task of the one or more tasks and an interrupt service routine of the one or more interrupt service routines; identifying, based on the mapping, a target core that corresponds to the task and the interrupt service routine; blocking the task from being processed by the source core in response to identifying the target core; in response to identifying the target core, disabling an interrupt corresponding to the interrupt service routine; in response to identifying the target core, assigning the task and the interrupt to the target core; after assigning the interrupt to the target core, enabling the interrupt; and after assigning the task to the target core, processing the task by the target core.
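The recited ordering of migration steps can be illustrated with a small Python model; the Core, Task, and Interrupt classes and the migrate helper below are hypothetical stand-ins, not the claimed implementation.

    from dataclasses import dataclass, field

    @dataclass
    class Interrupt:
        number: int
        enabled: bool = True

    @dataclass
    class Task:
        task_id: int
        blocked: bool = False

    @dataclass
    class Core:
        core_id: int
        tasks: set = field(default_factory=set)
        irqs: set = field(default_factory=set)

    def migrate(task, irq, source, cores, mapping):
        # Follow the recited ordering: block the task, disable the interrupt,
        # reassign both to the target core, re-enable the interrupt, resume the task.
        target = cores[mapping[(task.task_id, irq.number)]]  # identify the target core

        task.blocked = True                  # block the task on the source core
        irq.enabled = False                  # disable the interrupt during the move

        source.tasks.discard(task.task_id)   # assign the task to the target core
        target.tasks.add(task.task_id)
        source.irqs.discard(irq.number)      # assign the interrupt to the target core
        target.irqs.add(irq.number)

        irq.enabled = True                   # re-enable the interrupt on the target
        task.blocked = False                 # the target core resumes the task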
Abstract:
Methods and systems for managing caching mechanisms in storage systems are provided where a global cache management function manages multiple independent cache pools and a global cache pool. As an example, the method includes: splitting a cache storage into a plurality of independently operating cache pools, each cache pool comprising storage space for a plurality of cache blocks that store data related to an input/output ("I/O") request, and metadata associated with that cache pool; receiving an I/O request for writing data; operating a hash function on the I/O request to assign the I/O request to one of the plurality of cache pools; and writing the data of the I/O request to one or more of the cache blocks associated with the assigned cache pool. In an aspect, this allows efficient I/O processing across multiple processors simultaneously.
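As an illustrative sketch only, the following Python shows one way the hash-based assignment of I/O requests to independent cache pools might look; the CachePool and CacheManager classes and the use of SHA-1 over the request's logical block address are assumptions for illustration.

    import hashlib

    class CachePool:
        # One independently operating pool with its own cache blocks and metadata.
        def __init__(self, pool_id, block_count):
            self.pool_id = pool_id
            self.block_count = block_count
            self.blocks = {}                 # logical block address -> cached data

        def write(self, lba, data):
            self.blocks[lba] = data          # cache the write in this pool's blocks

    class CacheManager:
        # Global cache management function over a set of independent pools.
        def __init__(self, num_pools, blocks_per_pool):
            self.pools = [CachePool(i, blocks_per_pool) for i in range(num_pools)]

        def _pool_for(self, lba):
            # Hash the request's address to pick a pool, so unrelated I/Os land in
            # different pools and can be serviced by different processors in parallel.
            digest = hashlib.sha1(str(lba).encode()).digest()
            return self.pools[int.from_bytes(digest[:4], "big") % len(self.pools)]

        def handle_write(self, lba, data):
            self._pool_for(lba).write(lba, data)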
Abstract:
Systems, devices, and methods are provided for sharing host resources in a multiprocessor storage array, the multiprocessor storage array running controller firmware designed for a uniprocessor environment. In some aspects, one or more virtual machines can be initialized by a virtual machine manager or a hypervisor in the storage array system. Each of the one or more virtual machines implements an instance of the controller firmware designed for a uniprocessor environment. The virtual machine manager or hypervisor can assign processing devices within the storage array system to each of the one or more virtual machines. The virtual machine manager or hypervisor can also assign virtual functions to each of the virtual machines. The virtual machines can concurrently access one or more I/O devices, such as physical storage devices, by writing to and reading from the respective virtual functions.
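A toy Python model of the described resource sharing is sketched below; the Hypervisor, VirtualMachine, and VirtualFunction classes are illustrative stand-ins for a hypervisor assigning processors and SR-IOV-style virtual functions to uniprocessor firmware instances, not the actual system.

    from dataclasses import dataclass, field

    @dataclass
    class VirtualFunction:
        # Stand-in for a virtual function exposing a shared physical device.
        vf_id: int
        device: dict = field(default_factory=dict)   # shared backing store (LBA -> data)

        def write(self, lba, data):
            self.device[lba] = data

        def read(self, lba):
            return self.device.get(lba)

    @dataclass
    class VirtualMachine:
        # Runs one instance of the uniprocessor controller firmware.
        vm_id: int
        cpu: int = -1
        vf: VirtualFunction = None

    class Hypervisor:
        def __init__(self, cpus, shared_device):
            self.cpus = list(cpus)
            self.shared_device = shared_device

        def launch(self, count):
            vms = []
            for i in range(count):
                vm = VirtualMachine(vm_id=i)
                vm.cpu = self.cpus[i]                           # assign a processing device
                vm.vf = VirtualFunction(i, self.shared_device)  # assign a virtual function
                vms.append(vm)
            return vms

    # Usage: two firmware instances share one physical device through their VFs.
    hv = Hypervisor(cpus=[0, 1], shared_device={})
    vm_a, vm_b = hv.launch(2)
    vm_a.vf.write(100, b"data")
    assert vm_b.vf.read(100) == b"data"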