Abstract:
Provided are a memory management method, and an apparatus for performing the method, that shorten user waiting time while taking system performance into account. The method includes acquiring, according to at least one attribute, a deallocation unit to be used to deallocate an allocated memory area, and deallocating the allocated memory area using the deallocation unit.
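As a rough user-space illustration of the idea, the following C sketch releases a mapped memory area in deallocation units chosen from an attribute. The attribute names, the page-multiple unit sizes, and the use of mmap/munmap are assumptions made for the example, not the patented method.

    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Hypothetical attribute: how urgently the memory must be returned. */
    enum dealloc_attr { DEALLOC_URGENT, DEALLOC_LAZY };

    /* Pick a deallocation unit (in bytes) from the attribute: urgent requests
     * release large chunks per step, lazy ones release a single page per step. */
    static size_t acquire_dealloc_unit(enum dealloc_attr attr)
    {
        size_t page = (size_t)sysconf(_SC_PAGESIZE);
        return attr == DEALLOC_URGENT ? 64 * page : page;
    }

    /* Deallocate the mapped area one unit at a time; a real implementation
     * could yield between steps to bound user waiting time. */
    static void deallocate_in_units(char *base, size_t len, size_t unit)
    {
        for (size_t off = 0; off < len; off += unit) {
            size_t chunk = (len - off < unit) ? len - off : unit;
            munmap(base + off, chunk);
        }
    }

    int main(void)
    {
        size_t len = 1 << 20;  /* 1 MiB demo area */
        char *area = mmap(NULL, len, PROT_READ | PROT_WRITE,
                          MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (area == MAP_FAILED) {
            perror("mmap");
            return 1;
        }
        memset(area, 0, len);

        size_t unit = acquire_dealloc_unit(DEALLOC_LAZY);
        printf("deallocating %zu bytes in units of %zu bytes\n", len, unit);
        deallocate_in_units(area, len, unit);
        return 0;
    }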
Abstract:
A data mirroring control apparatus includes a command distributing unit configured to transmit a first write command to a plurality of mirroring storage devices, the first write command including an instruction to write data requested by a host to be written; and a memory lock setting unit configured to set a memory lock on the data requested by the host to be written, among data stored in a host memory, and to release the memory lock after the locked data has been written to the plurality of mirroring storage devices.
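A minimal C sketch of this locking discipline follows, assuming ordinary files stand in for the mirroring storage devices and POSIX mlock/munlock stand in for the memory lock setting unit; the file names and buffer size are illustrative only.

    #include <fcntl.h>
    #include <stdio.h>
    #include <string.h>
    #include <sys/mman.h>
    #include <unistd.h>

    /* Write one buffer to every mirror descriptor (a stand-in for distributing
     * the first write command to each mirroring storage device). */
    static int write_to_mirrors(const int *fds, int nfds,
                                const void *buf, size_t len)
    {
        for (int i = 0; i < nfds; i++) {
            if (write(fds[i], buf, len) != (ssize_t)len)
                return -1;
        }
        return 0;
    }

    int main(void)
    {
        static char host_data[4096] = "data requested by the host to be written";
        int fds[2];

        fds[0] = open("mirror0.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        fds[1] = open("mirror1.bin", O_WRONLY | O_CREAT | O_TRUNC, 0644);
        if (fds[0] < 0 || fds[1] < 0) {
            perror("open");
            return 1;
        }

        /* Set a memory lock on the host data so it cannot be paged out (and,
         * by convention here, not reused) while the mirrored writes proceed. */
        if (mlock(host_data, sizeof host_data) != 0)
            perror("mlock");

        int rc = write_to_mirrors(fds, 2, host_data, strlen(host_data));

        /* Release the memory lock only after both mirrors hold the data. */
        munlock(host_data, sizeof host_data);

        close(fds[0]);
        close(fds[1]);
        return rc == 0 ? 0 : 1;
    }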
Abstract:
Provided are an interrupt on/off management apparatus and method for a multi-core processor having a plurality of central processing unit (CPU) cores. The interrupt on/off management apparatus manages the multi-core processor such that at least one of two or more CPU cores included in a target CPU set can execute an urgent interrupt. For example, the interrupt on/off management apparatus controls the movement of each CPU core from a critical section to a non-critical section such that at least one of the CPU cores is located in the non-critical section. The critical section may include an interrupt-disabled section or a kernel non-preemptible section, and the non-critical section may include an interrupt-enabled section or both the interrupt-enabled section and a kernel preemptible section.
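The C sketch below is only a user-space analogy of the central constraint: pthreads stand in for CPU cores, a counted gate stands in for the interrupt on/off management apparatus, and the constant NCORES and the gate logic are assumptions made for the example. It keeps at most all-but-one of the cores in the critical section at any time, so at least one remains able to take an urgent event.

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    /* "Critical" stands in for an interrupt-disabled (or kernel
     * non-preemptible) section. The gate guarantees that at most
     * NCORES - 1 workers are critical at once. */
    #define NCORES 4

    static pthread_mutex_t gate_lock = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  gate_cond = PTHREAD_COND_INITIALIZER;
    static int critical_count = 0;

    static void enter_critical(void)
    {
        pthread_mutex_lock(&gate_lock);
        while (critical_count >= NCORES - 1)   /* keep one core non-critical */
            pthread_cond_wait(&gate_cond, &gate_lock);
        critical_count++;
        pthread_mutex_unlock(&gate_lock);
    }

    static void leave_critical(void)
    {
        pthread_mutex_lock(&gate_lock);
        critical_count--;
        pthread_cond_signal(&gate_cond);
        pthread_mutex_unlock(&gate_lock);
    }

    static void *core_main(void *arg)
    {
        long id = (long)arg;
        for (int i = 0; i < 3; i++) {
            enter_critical();
            printf("core %ld: critical (interrupts off)\n", id);
            usleep(1000);                      /* pretend to do work */
            leave_critical();
            printf("core %ld: non-critical (interrupts on)\n", id);
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t cores[NCORES];
        for (long i = 0; i < NCORES; i++)
            pthread_create(&cores[i], NULL, core_main, (void *)i);
        for (int i = 0; i < NCORES; i++)
            pthread_join(cores[i], NULL);
        return 0;
    }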
Abstract:
A distributed data processing apparatus and method that use hardware acceleration are provided. The data processing apparatus includes a mapping node including mapping units configured to process input data in parallel to generate and output mapping results. The data processing apparatus further includes a shuffle node including shuffle units and a memory buffer, the shuffle units configured to process the mapping results output from the mapping units in parallel to generate and output shuffle results, and the shuffle node configured to write the shuffle results output from the shuffle units in the memory buffer. The data processing apparatus further includes a merge node including merge units configured to merge the shuffle results written in the memory buffer to generate merge results.
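A toy, single-process C sketch of the three stages follows. The word-count workload, the partition count, and the fixed-size buffers are assumptions, and the parallel, hardware-accelerated mapping, shuffle, and merge units of the apparatus are replaced by plain sequential loops.

    #include <stdio.h>
    #include <string.h>

    #define MAX_PAIRS 64
    #define NPART     2          /* shuffle partitions in the memory buffer */

    struct pair { char key[16]; int value; };

    /* Mapping stage: emit a (word, 1) pair per input record. */
    static int map_stage(const char **in, int n, struct pair *out)
    {
        for (int i = 0; i < n; i++) {
            snprintf(out[i].key, sizeof out[i].key, "%s", in[i]);
            out[i].value = 1;
        }
        return n;
    }

    /* Shuffle stage: route each mapping result into a partition of the
     * in-memory buffer according to a hash of its key. */
    static unsigned hash_key(const char *k)
    {
        unsigned h = 0;
        while (*k) h = h * 31 + (unsigned char)*k++;
        return h;
    }

    static void shuffle_stage(const struct pair *in, int n,
                              struct pair buf[NPART][MAX_PAIRS],
                              int count[NPART])
    {
        for (int i = 0; i < n; i++) {
            unsigned p = hash_key(in[i].key) % NPART;
            buf[p][count[p]++] = in[i];
        }
    }

    /* Merge stage: within each partition, sum the values of equal keys. */
    static void merge_stage(struct pair buf[NPART][MAX_PAIRS],
                            const int count[NPART])
    {
        for (int p = 0; p < NPART; p++) {
            for (int i = 0; i < count[p]; i++) {
                if (buf[p][i].value == 0) continue;     /* already merged */
                for (int j = i + 1; j < count[p]; j++) {
                    if (strcmp(buf[p][i].key, buf[p][j].key) == 0) {
                        buf[p][i].value += buf[p][j].value;
                        buf[p][j].value = 0;
                    }
                }
                printf("%s: %d\n", buf[p][i].key, buf[p][i].value);
            }
        }
    }

    int main(void)
    {
        const char *input[] = { "a", "b", "a", "c", "b", "a" };
        struct pair mapped[MAX_PAIRS];
        struct pair buffer[NPART][MAX_PAIRS];
        int counts[NPART] = { 0 };

        int n = map_stage(input, 6, mapped);
        shuffle_stage(mapped, n, buffer, counts);
        merge_stage(buffer, counts);
        return 0;
    }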
Abstract:
A virtualization apparatus is provided. The virtualization apparatus includes a plurality of virtual machines (VMs); a process scheduler configured to schedule processes to be executed on the respective virtual machines; a virtual machine monitor (VMM) configured to provide each of the virtual machines with a virtualized execution environment; a virtual machine scheduler configured to schedule the virtual machines to run on the virtual machine monitor; and a synchronization unit configured to synchronize a process schedule time scheduled by the process scheduler with a virtual machine schedule time scheduled by the virtual machine scheduler, or to change the virtual machine schedule time in consideration of the process schedule time.
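One way to read "change the virtual machine schedule time in consideration of the process schedule time" is to round the VM time slice to a whole number of guest process quanta, as in the C sketch below; the rounding rule and the millisecond figures are assumptions for illustration, not the patented synchronization unit.

    #include <stdio.h>

    /* Round the requested VM slice down to a whole number of guest process
     * quanta, but never below one quantum, so a VM is not descheduled in the
     * middle of a guest process quantum. */
    static unsigned adjust_vm_slice_ms(unsigned requested_vm_slice_ms,
                                       unsigned process_quantum_ms)
    {
        unsigned quanta = requested_vm_slice_ms / process_quantum_ms;
        if (quanta == 0)
            quanta = 1;
        return quanta * process_quantum_ms;
    }

    int main(void)
    {
        unsigned process_quantum_ms = 4;   /* guest process scheduler quantum */
        unsigned vm_slice_ms = 30;         /* VMM's nominal VM schedule time  */

        unsigned synced = adjust_vm_slice_ms(vm_slice_ms, process_quantum_ms);
        printf("VM slice %u ms -> %u ms (multiple of %u ms process quantum)\n",
               vm_slice_ms, synced, process_quantum_ms);
        return 0;
    }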