Abstract:
A storage controller includes a co-access pattern mining unit configured to detect co-access patterns of data co-accessed during a particular time duration and to generate co-access groups including a plurality of pieces of data complying with the co-access patterns. The storage controller further includes a co-access group matching unit configured to select a co-access group matched with read-requested data, among the generated co-access groups, and a data placement unit configured to store the data included in the selected co-access group in a pre-fetch buffer.
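Below is a minimal Python sketch, not taken from the source, of how the three recited units could interact; the class name CoAccessPrefetcher, the time-window mining heuristic, and the backing_store parameter are illustrative assumptions only.

```python
class CoAccessPrefetcher:
    """Hypothetical sketch of the abstract's three units:
    co-access pattern mining, group matching, and data placement."""

    def __init__(self, window=1.0):
        self.window = window          # co-access time window (seconds), assumed
        self.access_log = []          # (timestamp, key) pairs
        self.groups = []              # mined co-access groups (sets of keys)
        self.prefetch_buffer = {}     # key -> data staged ahead of reads

    def record_access(self, timestamp, key):
        self.access_log.append((timestamp, key))

    def mine_groups(self):
        """Mining unit: group keys accessed within the same time window."""
        self.groups, current, start = [], set(), None
        for ts, key in sorted(self.access_log):
            if start is None or ts - start <= self.window:
                start = ts if start is None else start
                current.add(key)
            else:
                if len(current) > 1:
                    self.groups.append(current)
                current, start = {key}, ts
        if len(current) > 1:
            self.groups.append(current)

    def match_group(self, requested_key):
        """Matching unit: select a co-access group containing the read-requested key."""
        for group in self.groups:
            if requested_key in group:
                return group
        return None

    def place(self, requested_key, backing_store):
        """Placement unit: store every member of the matched group in the pre-fetch buffer."""
        group = self.match_group(requested_key)
        if group:
            for key in group:
                self.prefetch_buffer[key] = backing_store.get(key)
```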
Abstract:
Provided are an interrupt on/off management apparatus and method for a multi-core processor having a plurality of central processing unit (CPU) cores. The interrupt on/off management apparatus manages the multi-core processor such that at least one of two or more CPU cores included in a target CPU set can execute an urgent interrupt. For example, the interrupt on/off management apparatus controls the movement of each CPU core from a critical section to a non-critical section such that at least one of the CPU cores is located in the non-critical section. The critical section may include an interrupt-disabled section or a kernel non-preemptible section, and the non-critical section may include an interrupt-enabled section or both the interrupt-enabled section and a kernel preemptible section.
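A minimal sketch of the idea follows, assuming the management logic can be modeled as a gatekeeper that refuses to let the last non-critical core of the target CPU set enter a critical section; the class and method names are hypothetical, and Python threads stand in for kernel-level interrupt control.

```python
import threading

class InterruptOnOffManager:
    """Hypothetical sketch: never let every core in a target CPU set
    sit in a critical (interrupt-disabled) section at the same time."""

    def __init__(self, target_cpus):
        self.target_cpus = set(target_cpus)
        self.in_critical = set()        # cores currently in a critical section
        self.cond = threading.Condition()

    def enter_critical(self, cpu):
        """Block until at least one other target core remains non-critical."""
        with self.cond:
            while len(self.in_critical | {cpu}) >= len(self.target_cpus):
                self.cond.wait()        # someone must stay able to take an urgent interrupt
            self.in_critical.add(cpu)

    def leave_critical(self, cpu):
        """Return to the non-critical section and wake any waiting core."""
        with self.cond:
            self.in_critical.discard(cpu)
            self.cond.notify_all()
```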
Abstract:
A workload-aware distributed data processing apparatus and method for processing large data based on hardware acceleration are provided. The data processing apparatus includes a memory buffer including partitions. The data processing apparatus further includes a partition unit configured to distribute a mapping result to the partitions based on a partition proportion scheme. The data processing apparatus further includes a reduce node configured to receive content of a corresponding one of the partitions, and perform a reduction operation on the content to generate a reduce result.
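The partition proportion scheme might be sketched as below; the ProportionPartitioner class, the deficit-based assignment rule, and reduce_node are illustrative assumptions rather than the described design.

```python
class ProportionPartitioner:
    """Hypothetical partition-proportion scheme: each partition index is given
    a target share of the mapping results, e.g. {0: 0.5, 1: 0.3, 2: 0.2}."""

    def __init__(self, proportions):
        self.proportions = proportions
        self.counts = {p: 0 for p in proportions}

    def assign(self, record):
        """Send the record to the partition furthest below its target share.
        A real scheme might also hash the record's key; ignored here."""
        total = sum(self.counts.values()) + 1
        deficits = {p: self.proportions[p] - self.counts[p] / total
                    for p in self.proportions}
        target = max(deficits, key=deficits.get)
        self.counts[target] += 1
        return target

def reduce_node(partition_content, reduce_fn, initial):
    """Reduce node: fold the content of one partition into a reduce result."""
    result = initial
    for record in partition_content:
        result = reduce_fn(result, record)
    return result
```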
Abstract:
A distributed data processing apparatus and method through hardware acceleration are provided. The data processing apparatus includes a mapping node including mapping units configured to process input data in parallel to generate and output mapping results. The data processing apparatus further includes a shuffle node including shuffle units and a memory buffer, the shuffle units configured to process the mapping results output from the mapping units in parallel to generate and output shuffle results, and the shuffle node configured to write the shuffle results output from the shuffle units in the memory buffer. The data processing apparatus further includes a merge node including merge units configured to merge the shuffle results written in the memory buffer to generate merge results.
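One way to picture the mapping/shuffle/merge flow is the sketch below, which substitutes thread-pool parallelism for the hardware acceleration described above; run_pipeline, map_fn, and shuffle_fn are hypothetical names, not the apparatus's interfaces.

```python
from concurrent.futures import ThreadPoolExecutor
import heapq

def run_pipeline(input_chunks, map_fn, shuffle_fn, workers=4):
    """Hypothetical sketch of the mapping -> shuffle -> merge flow."""
    memory_buffer = []                                   # shared shuffle buffer

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Mapping node: mapping units process input chunks in parallel.
        mapping_results = list(pool.map(map_fn, input_chunks))

        # Shuffle node: shuffle units process mapping results in parallel,
        # then the node writes each (sorted) shuffle result into the buffer.
        for shuffle_result in pool.map(shuffle_fn, mapping_results):
            memory_buffer.append(sorted(shuffle_result))

    # Merge node: merge units combine the buffered shuffle results.
    return list(heapq.merge(*memory_buffer))

# Example usage with trivial stand-in map and shuffle functions:
# run_pipeline([[3, 1], [4, 2]], lambda c: [x * 2 for x in c], lambda m: m)
```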
Abstract:
A distributed storage management apparatus includes a monitoring unit configured to monitor a request pattern of each storage node of a plurality of storage nodes configured to distributively store data and at least one replica of the data; a group setting unit configured to receive a request and classify the plurality of storage nodes into a safe group and an unsafe group based on the monitored request pattern of each storage node; and a request transfer unit configured to transfer the received request to the safe group.
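A rough sketch of the three recited units follows, assuming a simple request-rate threshold stands in for whatever request-pattern analysis the apparatus actually uses; the names and the threshold value are illustrative.

```python
class DistributedStorageManager:
    """Hypothetical sketch: route a request only to 'safe' storage nodes,
    where safety is judged from each node's monitored request pattern."""

    def __init__(self, nodes, load_threshold=100):
        self.nodes = nodes                      # node id -> callable node handle
        self.load_threshold = load_threshold    # requests/sec considered risky (assumed)
        self.request_rate = {n: 0 for n in nodes}

    def monitor(self, node_id, observed_rate):
        """Monitoring unit: record the request pattern of each node."""
        self.request_rate[node_id] = observed_rate

    def classify(self):
        """Group setting unit: split the nodes into safe and unsafe groups."""
        safe = {n for n, r in self.request_rate.items() if r < self.load_threshold}
        unsafe = set(self.nodes) - safe
        return safe, unsafe

    def transfer(self, request):
        """Request transfer unit: send the received request to the safe group."""
        safe, _ = self.classify()
        if not safe:
            raise RuntimeError("no safe replica available")
        target = min(safe, key=lambda n: self.request_rate[n])
        return self.nodes[target](request)
```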
Abstract:
Provided are a storage controller that improves performance of a storage device by reducing the number of data input/output (I/O) operations, a storage device and a storage system including the storage controller, and a method of operating the storage controller. The storage controller includes a host interface configured to receive, from a host, data requested for storage and lifetime information indicating a change period of the data, and a data placement manager configured to determine a storage position of the data in a flash memory based on the lifetime information of the data.
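A minimal sketch of lifetime-based placement is shown below, assuming the lifetime information arrives as a change period in milliseconds and that the flash is divided into hot/warm/cold regions; the cutoffs and names are illustrative assumptions.

```python
class DataPlacementManager:
    """Hypothetical sketch: the host interface passes data plus lifetime
    information, and the manager picks a flash region by lifetime class."""

    def __init__(self, lifetime_cutoffs=(1_000, 60_000)):
        self.cutoffs = lifetime_cutoffs                 # ms boundaries, assumed
        self.regions = {"hot": [], "warm": [], "cold": []}

    def classify(self, lifetime_ms):
        if lifetime_ms < self.cutoffs[0]:
            return "hot"                                # frequently rewritten data
        if lifetime_ms < self.cutoffs[1]:
            return "warm"
        return "cold"                                   # data that rarely changes

    def write(self, data, lifetime_ms):
        """Group data with similar change periods in the same region so whole
        blocks tend to invalidate together, reducing copy-back I/O."""
        region = self.classify(lifetime_ms)
        self.regions[region].append(data)
        return region
```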
Abstract:
A stripe reconstituting method in a storage system, a garbage collection method employing the stripe reconstituting method, and the storage system performing the stripe reconstituting method are provided. The stripe reconstituting method includes the operations of selecting, from among stripes produced in a log-structured storage system, a target stripe in which an imbalance between valid page ratios of memory blocks included in the target stripe exceeds an initially set threshold value; and reconstituting a stripe by regrouping the memory blocks included in the target stripe such that the imbalance between the valid page ratios of the memory blocks is reduced.
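The selection and regrouping steps could look roughly like the sketch below, which assumes each block reports a valid-page ratio in [0, 1] and that target stripes share the same block count; the regrouping heuristic (sorting blocks by ratio so similar ratios land in the same stripe) is an assumption, not the claimed method.

```python
def imbalance(stripe):
    """Spread of valid-page ratios across the blocks of one stripe."""
    ratios = [blk["valid_ratio"] for blk in stripe]
    return max(ratios) - min(ratios)

def reconstitute(stripes, threshold=0.5):
    """Hypothetical sketch: pick stripes whose blocks have very unequal
    valid-page ratios and regroup the blocks to reduce that imbalance."""
    targets = [s for s in stripes if imbalance(s) > threshold]
    if not targets:
        return stripes

    # Pool the blocks of all target stripes and sort by valid-page ratio.
    pool = sorted((blk for s in targets for blk in s),
                  key=lambda blk: blk["valid_ratio"])
    width = len(targets[0])                      # blocks per stripe (assumed equal)

    # Rebuild stripes from ratio-adjacent blocks: similar ratios end up together.
    rebuilt = [pool[i:i + width] for i in range(0, len(pool), width)]
    untouched = [s for s in stripes if s not in targets]
    return untouched + rebuilt
```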
Abstract:
Provided are a memory management method, and an apparatus to perform the method, which achieve a shortened user waiting time in consideration of system performance. The method includes acquiring a deallocation unit used to deallocate an allocated memory area according to at least one attribute, and deallocating the allocated memory area using the deallocation unit.
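A minimal sketch follows, assuming the attribute is something like foreground/background status plus system load, and that the deallocation unit is a page count freed per step; the policy values and function names are illustrative.

```python
def acquire_deallocation_unit(total_pages, is_foreground, system_load):
    """Hypothetical policy: derive how many pages to free per step from
    attributes such as whether a user is waiting and how busy the system is."""
    if is_foreground:
        return max(total_pages // 4, 1)    # free aggressively, a user is waiting
    if system_load > 0.8:
        return max(total_pages // 64, 1)   # busy system: small increments
    return max(total_pages // 16, 1)

def deallocate(area_pages, unit, free_pages_fn):
    """Release the allocated area 'unit' pages at a time."""
    offset = 0
    while offset < area_pages:
        step = min(unit, area_pages - offset)
        free_pages_fn(offset, step)        # caller-supplied page release hook
        offset += step
```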
Abstract:
A data mirroring control apparatus includes a command distributing unit configured to transmit a first write command to a plurality of mirroring storage devices, the first write command including an instruction for data requested by a host to be written; and a memory lock setting unit configured to set a memory lock on the data requested by the host to be written among data stored in a host memory and configured to release the memory lock on the data after the data with the memory lock is written to the plurality of mirroring storage devices.
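The two units might be sketched as below, assuming each mirroring device exposes a blocking write call; the class name and the buffer_id bookkeeping are illustrative assumptions.

```python
class DataMirroringController:
    """Hypothetical sketch: set a memory lock on the host data, distribute the
    same write command to every mirroring device, then release the lock."""

    def __init__(self, mirror_devices):
        self.mirrors = mirror_devices      # objects assumed to expose write(data)
        self.locked_buffers = set()        # host buffers currently memory-locked

    def write_mirrored(self, buffer_id, data):
        # Memory lock setting unit: keep the host data resident and unchanged
        # while the mirrors consume it.
        self.locked_buffers.add(buffer_id)

        # Command distributing unit: the same write command goes to each mirror.
        for device in self.mirrors:
            device.write(data)

        # Release the memory lock only after every mirror has written the data.
        self.locked_buffers.discard(buffer_id)
```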