Abstract:
PURPOSE: An infrastructure sharing support device between cloud systems and a method thereof are provided to efficiently utilize infrastructure resources that are idle in a cloud, thereby increasing the value of the cloud infrastructure. CONSTITUTION: A cloud infrastructure information management device(112) generates and manages cloud resource information of a cloud(130) by collecting the server resource information transmitted from the information collection agents(121) of the servers included in the cloud. The cloud infrastructure information management device also receives and stores the cloud infrastructure integration information, which includes the cloud resource information of the cloud. A cloud infrastructure sharing interface(111) transmits the cloud resource information to a cloud infrastructure integration information management device(100), receives the cloud infrastructure integration information from that device, and delivers it to the cloud infrastructure information management device. [Reference numerals] (100) Cloud infrastructure integrated information management device; (111,111a) Cloud infrastructure sharing interface; (112,112a) Cloud infrastructure information management device; (113,113a) Service movement relay module; (120,EE) System 1; (121,BB,DD,FF,HH,JJ) Information collection agent; (AA,GG) System 2; (CC,II) System N
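For illustration, a minimal sketch of the reporting and aggregation flow described above is given below; the class and field names (ResourceReport, CloudInfoManager, cpu_free, and so on) are assumptions made for the sketch, not names taken from the patent.

```python
from dataclasses import dataclass, field

@dataclass
class ResourceReport:
    server_id: str
    cpu_free: float      # e.g. free CPU cores reported by the agent
    mem_free_mb: int     # free memory in MB reported by the agent

@dataclass
class CloudInfoManager:
    cloud_id: str
    servers: dict = field(default_factory=dict)   # per-server resource info

    def on_agent_report(self, report: ResourceReport) -> None:
        # Information collection agents push per-server resource info here.
        self.servers[report.server_id] = report

    def cloud_resource_info(self) -> dict:
        # Aggregate per-server reports into cloud-level resource info that the
        # sharing interface can forward to the integrated information manager.
        return {
            "cloud_id": self.cloud_id,
            "cpu_free": sum(r.cpu_free for r in self.servers.values()),
            "mem_free_mb": sum(r.mem_free_mb for r in self.servers.values()),
        }

mgr = CloudInfoManager("cloud-130")
mgr.on_agent_report(ResourceReport("srv-1", cpu_free=3.5, mem_free_mb=4096))
mgr.on_agent_report(ResourceReport("srv-2", cpu_free=1.0, mem_free_mb=1024))
print(mgr.cloud_resource_info())
```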
Abstract:
The present invention provides a network scheduler and a network scheduling method that can efficiently manage network bandwidth by selectively supporting a work-conserving mode for each networking entity using an improved token bucket technique. The network scheduler of the present invention uses the token bucket technique to selectively support, for each NSU (each network entity), a work-conserving mode that guarantees the allocated bandwidth or allows remaining bandwidth to be used, and classifies and manages all the NSUs into a green state, a red state, a yellow state, and a black state according to the token value, whether the work-conserving mode is selected, and whether there are packet requests to process. Keywords: network, bandwidth limiting, scheduling, traffic control, work-conserving mode, token bucket
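A minimal token-bucket sketch of the bandwidth-guarantee and work-conserving behaviour described above follows; the rate/burst model and the per-NSU work_conserving flag are assumptions for illustration only.

```python
import time

class TokenBucket:
    def __init__(self, rate_bps: float, burst_bytes: float, work_conserving: bool):
        self.rate = rate_bps            # tokens (bytes) added per second
        self.burst = burst_bytes        # maximum token balance
        self.work_conserving = work_conserving
        self.tokens = burst_bytes
        self.last = time.monotonic()

    def refill(self) -> None:
        # Accrue tokens for the elapsed time, capped at the burst size.
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now

    def try_send(self, packet_len: int, spare_bandwidth: bool) -> bool:
        """Send within the guaranteed allocation, or - if this NSU selected the
        work-conserving mode - also when spare bandwidth is available."""
        self.refill()
        if self.tokens >= packet_len:
            self.tokens -= packet_len          # consume guaranteed tokens
            return True
        return self.work_conserving and spare_bandwidth
```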
Abstract:
The present invention relates to a data replication storage system for large-scale data in a read-intensive system that stores data redundantly; by modifying the placement of the data blocks stored on the disks and providing a corresponding new mapping scheme, the system keeps the data reliability that is the advantage of the conventional data replication scheme (RAID1) while also achieving the fast read performance of the data striping scheme (RAID0). The data replication storage system of the present invention consists of an original disk that stores the original data and a plurality of replica disks that store copies of the original data. As many consecutive data blocks as there are disks are grouped to create SMUs for the original disk and the replica disks; within each disk, an SMUno value is assigned to each SMU according to its order, and within each SMU, an SMUidx value is assigned to each data block according to its order. The same data blocks within SMUs having the same SMUno value are placed in a different order on each disk.
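The SMU mapping can be illustrated with a short sketch; the abstract does not fix the exact placement permutation per disk, so the (SMUidx + disk) % n_disks rotation below is an assumption that merely satisfies the stated property that same-SMUno blocks are ordered differently on each disk.

```python
def smu_location(block_no: int, disk: int, n_disks: int) -> tuple[int, int]:
    """Map a logical data block to (SMUno, slot within that SMU) on `disk`.

    Each SMU groups n_disks consecutive data blocks. SMUno is the SMU's order
    within the disk; SMUidx is the block's order within the SMU. Rotating the
    slot by the disk index gives each replica a different placement order, so
    reads of the same SMU can be spread across the original and replica disks.
    """
    smu_no = block_no // n_disks          # which SMU the block belongs to
    smu_idx = block_no % n_disks          # the block's order inside the SMU
    slot = (smu_idx + disk) % n_disks     # assumed rotation: differs per disk
    return smu_no, slot

# Example with an original disk (0) and two replica disks (1, 2):
for block in range(6):
    print(block, [smu_location(block, d, 3) for d in range(3)])
```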
Abstract:
PURPOSE: A device and a method for file-level striping are provided so that a volume manager and a file system can distribute and store the data of each file across two or more disks. CONSTITUTION: A plurality of disks(400) that actually store information are accessed through physical block numbers. The volume manager(300) forms one large logical volume(200) by logically connecting the disks, and records and manages the information needed to manage the logical volume on the disks. The file system(100) recognizes the logical volume provided by the volume manager as a single storage device. After creating a file on the logical volume, the file system issues input/output for the created file to the logical volume using logical block numbers.
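A short sketch of the logical-to-physical translation implied above, assuming plain round-robin striping with a fixed stripe unit; the constants and the function name are illustrative assumptions.

```python
STRIPE_BLOCKS = 8   # logical blocks per stripe unit (assumed)
N_DISKS = 4         # disks joined into the logical volume (assumed)

def logical_to_physical(lbn: int) -> tuple[int, int]:
    """Translate a logical block number on the logical volume (as seen by the
    file system) into (disk index, physical block number) on the member disks."""
    stripe_unit = lbn // STRIPE_BLOCKS   # which stripe unit the block is in
    offset = lbn % STRIPE_BLOCKS         # position inside the stripe unit
    disk = stripe_unit % N_DISKS         # round-robin over the disks
    pbn = (stripe_unit // N_DISKS) * STRIPE_BLOCKS + offset
    return disk, pbn

print(logical_to_physical(0))    # (0, 0)
print(logical_to_physical(8))    # (1, 0): next stripe unit lands on the next disk
print(logical_to_physical(33))   # (0, 9): wraps back to disk 0 after one round
```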
Abstract:
PURPOSE: A software structure for integrated memory services and a method for providing the integrated memory services using the software structure are provided to secure a high-capacity integrated memory hierarchy at the multi-node system level on top of the existing system memory hierarchy. CONSTITUTION: An integrated memory managing module(110) virtualizes the memories of each node and manages the virtualized memories as one integrated memory. An integrated memory service providing module(120) maps the integrated memory to a selected node through a virtual address space and provides the integrated memory services.
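One way to picture the integrated memory is sketched below: memory contributed by each node is concatenated into a single space, and an integrated address resolves to a (node, local offset) pair; the flat concatenation layout and all names are assumptions for illustration.

```python
class IntegratedMemory:
    def __init__(self, node_sizes: dict[str, int]):
        # node_sizes: bytes of memory each node contributes to the pool
        self.regions = []          # (node, start, end) in the integrated space
        start = 0
        for node, size in node_sizes.items():
            self.regions.append((node, start, start + size))
            start += size
        self.total = start

    def resolve(self, addr: int) -> tuple[str, int]:
        """Map an integrated-memory address to (owning node, offset on that node)."""
        for node, start, end in self.regions:
            if start <= addr < end:
                return node, addr - start
        raise ValueError("address outside the integrated memory")

pool = IntegratedMemory({"node-A": 4 << 30, "node-B": 8 << 30})
print(pool.resolve(5 << 30))   # falls inside node-B's contribution
```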
Abstract:
A network scheduler and a method thereof that selectively support a work-conserving mode are provided to improve the efficiency of network resources and the performance of the whole system by managing network bandwidth efficiently. Parent buckets supply tokens(14) to the lowest-level child buckets in a hierarchical structure. The work-conserving mode, which guarantees the allocated bandwidth and allows remaining bandwidth to be used, is selectively supported for each NSU, i.e., each network entity. The NSUs are classified into a green state, a red state, a yellow state, and a black state according to the token value, whether the work-conserving mode is selected, and whether there are packet requests to process.
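The abstract does not spell out the exact condition for each colour, so the classification below is only one plausible assignment, sketched for illustration.

```python
def classify_nsu(tokens: float, work_conserving: bool, has_pending_packets: bool) -> str:
    # Assumed reading of the four states; not confirmed by the abstract.
    if not has_pending_packets:
        return "BLACK"     # nothing to schedule for this NSU
    if tokens > 0:
        return "GREEN"     # can send within its guaranteed allocation
    if work_conserving:
        return "YELLOW"    # out of tokens, but may borrow spare bandwidth
    return "RED"           # out of tokens and not work-conserving: must wait

print(classify_nsu(tokens=10, work_conserving=False, has_pending_packets=True))  # GREEN
print(classify_nsu(tokens=0,  work_conserving=True,  has_pending_packets=True))  # YELLOW
```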
Abstract:
A disk I/O scheduler for a server virtualization environment and a scheduling method thereof are provided to enhance the performance of virtual systems by strictly isolating the allocated I/O resources among the virtual systems. A plurality of queues(110-1,110-2,110-3) holds I/O (Input/Output) requests. When an I/O request occurs in a virtual system, an I/O request adding unit(120) adds the request to the corresponding queue among the queues. An I/O request extracting unit(130) extracts I/O requests from the queues, and the extracted requests are added to a device queue(200). The I/O request adding unit includes a queue hash, keyed by the identifier of a virtual system, to look up the currently registered I/O queues.
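A minimal sketch of the per-virtual-system queues and the dispatch into the device queue; the dictionary-based queue hash and the round-robin extraction policy are assumptions for illustration.

```python
from collections import deque

class VmIoScheduler:
    def __init__(self):
        self.queues: dict[str, deque] = {}  # queue hash: virtual system id -> I/O queue
        self.device_queue: deque = deque()
        self._next = 0                      # round-robin pointer (assumed policy)

    def add_request(self, vm_id: str, request) -> None:
        # Look up (or register) the queue for this virtual system by its identifier.
        self.queues.setdefault(vm_id, deque()).append(request)

    def dispatch_one(self) -> None:
        # Extract one request from the per-VM queues and add it to the device queue.
        vm_ids = list(self.queues)
        for _ in range(len(vm_ids)):
            vm_id = vm_ids[self._next % len(vm_ids)]
            self._next += 1
            if self.queues[vm_id]:
                self.device_queue.append(self.queues[vm_id].popleft())
                return

sched = VmIoScheduler()
sched.add_request("vm-1", "read sector 42")
sched.add_request("vm-2", "write sector 7")
sched.dispatch_one()
sched.dispatch_one()
print(list(sched.device_queue))   # one request from each virtual system
```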
Abstract:
A method for moving a file without copying data is provided to enhance overall performance by reducing CPU load and the cost of copying data to a user area; the file is moved between a source device and a target device using a buffer page shared in the kernel area. A file move request is received from an application in the user area. A buffer page for the source device is allocated in the kernel area(401). The data of the file being moved, stored in the source device, is copied to the buffer page in DMA (Direct Memory Access) mode(402). The buffer page is removed from the cache and the page management information is changed so that the buffer page is allocated to the target device(405). The data of the buffer page is then copied to the target device in DMA mode(406).
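The order of operations can be sketched with in-memory stand-ins, where the "devices" are bytearrays and "DMA" is modelled as a slice copy; every name here is illustrative, and only the step order from the abstract is kept.

```python
PAGE = 4096

def move_file(src_dev: bytearray, dst_dev: bytearray, n_pages: int) -> None:
    for i in range(n_pages):
        # (401) allocate a buffer page for the source device in the kernel area
        page = bytearray(PAGE)
        # (402) DMA-copy the moved file's data from the source device into the page
        page[:] = src_dev[i * PAGE:(i + 1) * PAGE]
        # (405) remove the page from the cache and change the page management
        #       information so the same page now belongs to the target device
        # (406) DMA-copy the page's data to the target device
        dst_dev[i * PAGE:(i + 1) * PAGE] = page
    # The data never passes through a user-area buffer, avoiding that copy.

src = bytearray(b"x" * (2 * PAGE))
dst = bytearray(2 * PAGE)
move_file(src, dst, 2)
assert dst == src
```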