Abstract:
Provided is a system and method for directing group communication in a system environment that has a plurality of discrete application nodes networked with at least one discrete memory node, establishing a shared memory that provides a passive message queue. A code library permits an application node that is a member of a group to assemble a message selected from the group consisting of send, join, leave, and read. The send, join, and leave messages permit a first application node to add a message to the queue for all members of the group, including the first application node, as of the time the message was added. The instruction set permitting the addition of the message is executed atomically.
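A minimal sketch of how such a passive queue on a memory node might look, assuming a single in-process lock stands in for the atomically executed instruction set; the class and operation names (SharedQueue, append, read) are illustrative, not the patented implementation.

```python
import threading
from dataclasses import dataclass

@dataclass
class QueueEntry:
    sender: str
    kind: str                 # "send", "join", or "leave"
    payload: object
    recipients: frozenset     # group membership at the time the entry was added

class SharedQueue:
    """Passive queue held in shared memory; it never pushes messages out."""
    def __init__(self):
        self._lock = threading.Lock()   # stands in for atomic execution on the memory node
        self._members = set()
        self._entries = []

    def append(self, sender, kind, payload=None):
        # The membership change and the enqueue happen under one lock,
        # mirroring the atomically executed instruction set.
        with self._lock:
            if kind == "join":
                self._members.add(sender)
            recipients = frozenset(self._members)
            self._entries.append(QueueEntry(sender, kind, payload, recipients))
            if kind == "leave":
                self._members.discard(sender)

    def read(self, member):
        # Reads are passive: the member polls for entries addressed to it.
        with self._lock:
            return [e for e in self._entries if member in e.recipients]
```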
Abstract:
A method of global data placement. The method includes assigning one or more workloads to one or more compute servers such that each workload flows to one compute server, assigning the data chunks that the workloads access to one or more storage servers, and determining how the workloads access the data.
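A hedged sketch of those three placement decisions, assuming a simple greedy most-free-capacity heuristic (the heuristic is my stand-in, not the claimed method): each workload is bound to exactly one compute server, each accessed data chunk is bound to a storage server, and the resulting access paths are recorded.

```python
def place(workloads, compute_servers, storage_servers):
    """workloads: {name: {"cpu": float, "chunks": {chunk: size}}}
    compute_servers: {name: cpu_capacity}; storage_servers: {name: byte_capacity}"""
    compute_free = dict(compute_servers)
    storage_free = dict(storage_servers)
    wl_to_compute, chunk_to_storage, access_paths = {}, {}, []

    for wl, spec in workloads.items():
        cs = max(compute_free, key=compute_free.get)   # each workload flows to one compute server
        if compute_free[cs] < spec["cpu"]:
            raise RuntimeError(f"no compute capacity for {wl}")
        compute_free[cs] -= spec["cpu"]
        wl_to_compute[wl] = cs
        for chunk, size in spec["chunks"].items():
            if chunk not in chunk_to_storage:          # place each data chunk once
                ss = max(storage_free, key=storage_free.get)
                if storage_free[ss] < size:
                    raise RuntimeError(f"no storage capacity for {chunk}")
                storage_free[ss] -= size
                chunk_to_storage[chunk] = ss
            # record how the workload accesses the data
            access_paths.append((wl, cs, chunk, chunk_to_storage[chunk]))
    return wl_to_compute, chunk_to_storage, access_paths
```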
Abstract:
Data storage devices of an enterprise system are tested to determine whether the enterprise system is optimally configured. Each data storage device is tested to determine whether it can satisfy a performance requirement for an assigned group of n workloads. A group of n inequalities is generated, and at most n of the inequalities need to be evaluated to determine whether the device satisfies the performance requirement for the assigned group of workloads. The inequalities are based on a phased, correlated model of I/O activity.
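The following is a sketch of evaluating at most n inequalities for a device with n assigned workloads. The specific form of the inequality used here (one workload's busy-phase demand plus the phase-overlap-weighted demand of the others, compared against device capacity) is an illustrative assumption standing in for the patent's phased, correlated model of I/O activity.

```python
def device_satisfies(capacity, workloads, overlap):
    """workloads: list of dicts with 'on_rate' (demand while in the busy phase).
    overlap[i][j]: assumed fraction of workload i's busy phase during which
    workload j is also busy (the correlation between phases)."""
    n = len(workloads)
    for i in range(n):                      # at most n inequalities are evaluated
        demand = workloads[i]["on_rate"]
        demand += sum(overlap[i][j] * workloads[j]["on_rate"]
                      for j in range(n) if j != i)
        if demand > capacity:               # inequality i is violated
            return False
    return True
```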
Abstract:
An apparatus for and a method of multi-dimensional constraint optimization in a storage system configuration. In accordance with the primary aspect of the present invention, the objective function for a storage system is determined, the workload units are selected and their standards are determined, and the storage devices are selected and their characteristics are determined. These selections and determinations are then used by a constraint-based solver, through constraint integer optimization, to generate an assignment plan for the workload units to the storage devices.
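A minimal sketch of the formulation: integer decisions (one device per workload unit), capacity constraints, and an objective function. A real embodiment would hand this model to a constraint or integer-optimization solver; the exhaustive search below is only an illustration of the same formulation on tiny inputs, and the cost function is an assumed parameter.

```python
from itertools import product

def assign(workload_units, devices, cost):
    """workload_units: {wl: demand}; devices: {dev: capacity};
    cost(wl, dev) -> objective contribution of placing wl on dev."""
    wls, devs = list(workload_units), list(devices)
    best_plan, best_obj = None, float("inf")
    for choice in product(devs, repeat=len(wls)):        # one integer decision per unit
        load = {d: 0.0 for d in devs}
        for wl, d in zip(wls, choice):
            load[d] += workload_units[wl]
        if any(load[d] > devices[d] for d in devs):      # capacity constraints
            continue
        obj = sum(cost(wl, d) for wl, d in zip(wls, choice))
        if obj < best_obj:                               # keep the best feasible plan
            best_plan, best_obj = dict(zip(wls, choice)), obj
    return best_plan, best_obj
```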
Abstract:
An embodiment of a method of estimating storage system reliability begins with a first step of modeling a storage system design in operation under a workload to determine the locations of retrieval points. The retrieval points provide sources for primary storage recovery for a plurality of failure scenarios. The method continues with a second step of finding a most recent retrieval point relative to a target recovery time that is available for recovery for a particular failure scenario. In a third step, a difference between the target recovery time and a retrieval point creation time for the most recent retrieval point is determined. The difference indicates a data loss time period.
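A short sketch of the second and third steps, assuming the modeling step has already produced retrieval points with creation times and the failure scenarios they survive: the data loss time period is the gap between the target recovery time and the newest usable retrieval point.

```python
from dataclasses import dataclass

@dataclass
class RetrievalPoint:
    created_at: float          # creation time (seconds since epoch, assumed units)
    survives: set              # failure scenarios for which this point remains available

def data_loss_period(retrieval_points, target_time, failure_scenario):
    usable = [rp for rp in retrieval_points
              if failure_scenario in rp.survives and rp.created_at <= target_time]
    if not usable:
        return None                                   # no recovery source for this scenario
    most_recent = max(usable, key=lambda rp: rp.created_at)
    return target_time - most_recent.created_at       # the data loss time period
```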
Abstract:
Provided is a method for determining a recovery schedule. The method includes accepting as input a recovery graph. The recovery graph presents one or more strategies for data recovery. In addition, at least one objective is provided and accepted. The recovery graph is formalized as an optimization problem for the provided objective. Once the problem is formalized, at least one solution technique is applied to determine at least one recovery schedule.
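A minimal sketch, assuming the recovery graph is represented as alternative strategy branches per data set, each branch being a list of recovery task durations; the objective shown (minimize total recovery time) is only one example of a provided objective, and exhaustive enumeration stands in for whatever solution technique is applied.

```python
from itertools import product

def best_schedule(recovery_graph, objective=sum):
    """recovery_graph: {dataset: [branch, ...]}, where branch = [task_duration, ...]."""
    datasets = list(recovery_graph)
    best, best_value = None, float("inf")
    for choice in product(*(recovery_graph[d] for d in datasets)):
        durations = [sum(branch) for branch in choice]   # time to recover each data set
        value = objective(durations)                     # e.g., total recovery time
        if value < best_value:
            best, best_value = dict(zip(datasets, choice)), value
    return best, best_value
```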
Abstract:
The present invention provides techniques for assignment and layout of redundant data in a data storage system. In one aspect, the data storage system stores a number M of replicas of the data. Nodes that have sufficient resources available to accommodate a requirement of data to be assigned to the system are identified. When the number of nodes is greater than M, the data is assigned to M randomly selected nodes from among those identified. The data to be assigned may include a group of data segments, and when the number of nodes is less than M, the group is divided to form a group of data segments having a reduced requirement. Nodes are then identified that have sufficient resources available to accommodate the reduced requirement. In other aspects, techniques are provided for adding a new storage device node to a data storage system having a plurality of existing storage device nodes and for removing data from a storage device node in such a data storage system.
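A hedged sketch of the replica-assignment loop described above: find nodes able to hold the segment group's requirement, place M replicas on a random subset, and divide the group into smaller groups when fewer than M candidates exist. Resource accounting is reduced to a single free-capacity number per node for illustration.

```python
import random

def assign_replicas(segments, nodes, M):
    """segments: list of segment sizes; nodes: {node: free_capacity}.
    Returns {node: [segment sizes placed on that node]}. Mutates nodes' free capacity."""
    placement = {n: [] for n in nodes}
    pending = [list(segments)]                      # groups of segments still to place
    while pending:
        group = pending.pop()
        need = sum(group)
        candidates = [n for n, free in nodes.items() if free >= need]
        if len(candidates) >= M:
            for n in random.sample(candidates, M):  # M randomly selected nodes
                placement[n].extend(group)
                nodes[n] -= need
        elif len(group) > 1:
            mid = len(group) // 2                   # divide into groups with reduced requirement
            pending.extend([group[:mid], group[mid:]])
        else:
            raise RuntimeError("no node can accommodate even a single segment M times")
    return placement
```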
Abstract:
An embodiment of a method of operating a distributed storage system includes reading m data blocks from a distributed cache. The distributed cache comprises memory of a plurality of independent computing devices that include redundancy for the m data blocks. The m data blocks and p parity blocks are stored across m plus p independent computing devices. Each of the m plus p independent computing devices stores a single block selected from the m data blocks and the p parity blocks.
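A minimal sketch of the m plus p layout, assuming p = 1 with XOR parity (the choice of XOR parity is mine; the abstract only requires p parity blocks). Each "device" holds exactly one block, and a read of the m data blocks falls back to parity reconstruction when one device is unavailable.

```python
def make_stripe(data_blocks):
    """data_blocks: list of m equal-length bytes objects. Returns m + 1 blocks."""
    parity = bytes(data_blocks[0])
    for blk in data_blocks[1:]:
        parity = bytes(x ^ y for x, y in zip(parity, blk))
    return list(data_blocks) + [parity]          # one block per device, m + p devices

def read_data(devices, failed=None):
    """devices: list of m + 1 blocks (last is parity); failed: index of a lost device."""
    m = len(devices) - 1
    if failed is None or failed == m:
        return devices[:m]                       # all data devices are available
    rebuilt = devices[m]                         # start from the parity block
    for i, blk in enumerate(devices[:m]):
        if i != failed:
            rebuilt = bytes(x ^ y for x, y in zip(rebuilt, blk))
    return [rebuilt if i == failed else devices[i] for i in range(m)]
```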
Abstract:
A computer storage system includes a controller and a storage device array. The storage device array may include a first sub-array and a fast storage device sub-array. The first sub-array includes one or more first storage devices storing data. The fast storage device sub-array includes one or more fast storage devices storing a copy of the data stored in the first sub-array.
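A hedged sketch of controller behavior one might build on this layout: writes go to both sub-arrays, and reads prefer the fast copy. The read/write policy is an assumption; the abstract only describes the mirrored structure.

```python
class ArrayController:
    def __init__(self, first_subarray, fast_subarray):
        self.first = first_subarray     # dict-like: block address -> data
        self.fast = fast_subarray       # fast devices holding a copy of the data

    def write(self, address, data):
        self.first[address] = data      # primary copy on the first sub-array
        self.fast[address] = data       # keep the fast sub-array copy in sync

    def read(self, address):
        if address in self.fast:        # serve from the fast sub-array when possible
            return self.fast[address]
        return self.first[address]
```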
Abstract:
An embodiment of a method of caching data writes data units into a write cache for eventual flushing to storage. The method sets a copy-to-read-cache flag for each particular data unit that is read from the write cache. Upon flushing each data unit to the storage, the method copies the data unit to a read cache if the flag for the data unit is set. Another embodiment of a method of caching data writes data units into a write cache. The method simulates a transfer policy for copying the data units from the write cache to a read cache to determine a performance indicator for the transfer policy. Upon flushing each data unit, the method copies the data unit to the read cache if the performance indicator exceeds a threshold and the transfer policy includes copying the data unit into the read cache.
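A minimal sketch of the first embodiment, assuming dict-backed caches and storage: a copy-to-read-cache flag is set whenever a unit is read out of the write cache, and flushing copies flagged units into the read cache. The second, simulation-driven embodiment is not shown.

```python
class WriteCache:
    def __init__(self, storage, read_cache):
        self.storage, self.read_cache = storage, read_cache
        self.units = {}                 # address -> data awaiting flush
        self.copy_flag = {}             # address -> copy-to-read-cache flag

    def write(self, address, data):
        self.units[address] = data
        self.copy_flag[address] = False

    def read(self, address):
        data = self.units[address]
        self.copy_flag[address] = True  # a read from the write cache sets the flag
        return data

    def flush(self):
        for address, data in self.units.items():
            self.storage[address] = data
            if self.copy_flag[address]:             # flagged units go to the read cache
                self.read_cache[address] = data
        self.units.clear()
        self.copy_flag.clear()
```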