Abstract:
PURPOSE: An OS (Operating System) conversion method in an information processing system is provided to switch between OSs rapidly and to maintain the work environment by using a standby mode for the OSs. CONSTITUTION: When power is applied to the information processing system, an OS converter is executed. When the first OS receives an OS conversion command, identification information of the second OS and OS conversion information are stored in a nonvolatile memory(420). The OS converter switches from the first OS to the second OS while the first OS is placed in a standby mode(470). [Reference numerals] (410) Executing a first operating system; (420) Obtaining a command for switching the operating system to a second operating system; (430) Storing identification information of the second operating system and information indicating switching of the operating system; (441) Storing a restart address of the first operating system; (443) Storing information indicating the internal state of a processor in a memory; (445) Stopping input and output devices of the system; (450) Is the second operating system in a standby mode?; (460) Restarting the second operating system; (470) Booting the second operating system; (AA) Start; (BB) Yes; (CC) No; (DD) End
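As a rough illustration of the flow in steps 410 through 470, the sketch below mirrors the decision at step 450: resume the second OS if it was left in standby, otherwise boot it. The nvram structure, the constants, and second_os_in_standby() are hypothetical scaffolding, not the patent's actual interfaces.

#include <stdio.h>
#include <stdbool.h>
#include <stdint.h>

/* Hypothetical nonvolatile area used to pass information across the switch. */
struct nvram {
    int      target_os_id;   /* identification information of the second OS    */
    bool     switch_pending; /* information indicating an OS switch (step 430)  */
    uint64_t resume_address; /* restart address of the first OS (step 441)      */
};

static bool second_os_in_standby(int os_id) {
    /* Placeholder for step 450: a real system would inspect the saved state
       of the target OS; here we simply pretend it is suspended. */
    (void)os_id;
    return true;
}

/* Sketch of the switch sequence, steps 430-470 of the figure. */
static void switch_os(struct nvram *nv, int second_os_id) {
    nv->target_os_id   = second_os_id;           /* step 430 */
    nv->switch_pending = true;
    nv->resume_address = 0x1000;                 /* step 441 (illustrative value) */
    printf("saving processor state, stopping I/O devices\n"); /* steps 443/445 */

    if (second_os_in_standby(second_os_id))      /* step 450 */
        printf("resuming OS %d from standby\n", second_os_id);   /* step 460 */
    else
        printf("booting OS %d\n", second_os_id);                 /* step 470 */
}

int main(void) {
    struct nvram nv = {0};
    switch_os(&nv, 2); /* request a switch from the first OS to OS #2 */
    return 0;
}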
Abstract:
The present invention relates to a policy-based data processing system and a method thereof. According to the present invention, a pattern analysis unit of the data processing system generates pattern handlers based on patterns and schedules the generated pattern handlers on a policy basis to perform data filtering and grouping. In addition, the processing functions corresponding to the processing type of each data type of event data are modularized as objects and are handled and used through the pattern handlers. Policy, Pattern, Filter, RFID, USN
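A minimal sketch of object-modularized processing functions handled through pattern handlers might look like the following; the data types, function names, and the fixed handler table standing in for policy-based scheduling are all illustrative assumptions, not the invention's actual interfaces.

#include <stdio.h>
#include <string.h>

/* A pattern handler pairs a data type with the processing function that the
   policy assigns to event data of that type (illustrative names only). */
typedef void (*process_fn)(const char *event);

struct pattern_handler {
    const char *data_type;
    process_fn  process;
};

static void filter_rfid(const char *event)  { printf("filter RFID event: %s\n", event); }
static void group_sensor(const char *event) { printf("group USN event:   %s\n", event); }

/* Policy-ordered handler table: this fixed order stands in for the
   policy-based scheduling of generated pattern handlers. */
static struct pattern_handler handlers[] = {
    { "rfid",   filter_rfid  },
    { "sensor", group_sensor },
};

static void dispatch(const char *data_type, const char *event) {
    for (size_t i = 0; i < sizeof(handlers) / sizeof(handlers[0]); i++) {
        if (strcmp(handlers[i].data_type, data_type) == 0) {
            handlers[i].process(event); /* handle via the matching pattern handler */
            return;
        }
    }
    printf("no handler for type %s, event dropped\n", data_type);
}

int main(void) {
    dispatch("rfid",   "tag=0xA1");
    dispatch("sensor", "temp=21.5");
    return 0;
}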
Abstract:
PURPOSE: A page table synchronization apparatus and a method thereof are provided to perform delayed synchronization, as needed, between a shadow page table and a page table of a guest OS(Operating System) on an ARM(Advanced RISC Machine) computing architecture. CONSTITUTION: A page table synchronization unit(220) reflects a synchronization target list in a virtual second page table. The page table provides a process control function to the guest OS. The page table synchronization unit synchronizes the first page table of the guest OS with the second page table. If the cause of a data fault is not one of the predetermined errors, the page table synchronization unit transmits information about the cause to the guest OS; otherwise, it synchronizes the first page table with the second page table.
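A toy sketch of the delayed synchronization idea follows, under the assumption that only the faulting entry is copied and only for page-table-related fault causes; the table sizes, the fault_cause enum, and handle_data_fault() are illustrative, not the apparatus's real interfaces.

#include <stdio.h>
#include <stdint.h>

#define NUM_ENTRIES 4

/* Illustrative page tables: the guest's own table (first page table) and the
   hypervisor-maintained shadow table (second page table). */
static uint32_t guest_pt[NUM_ENTRIES]  = { 0x1000, 0x2000, 0x3000, 0x4000 };
static uint32_t shadow_pt[NUM_ENTRIES] = { 0 };   /* initially unsynchronized */

/* Hypothetical classification of a data fault. */
enum fault_cause { FAULT_SHADOW_STALE, FAULT_GUEST_ERROR };

/* Delayed synchronization: only the faulting entry is copied, and only when
   the fault is one of the "predetermined" page-table-related causes. */
static void handle_data_fault(unsigned idx, enum fault_cause cause) {
    if (cause == FAULT_SHADOW_STALE) {
        shadow_pt[idx] = guest_pt[idx];               /* sync first -> second */
        printf("synced entry %u -> 0x%x\n", idx, (unsigned)shadow_pt[idx]);
    } else {
        printf("forwarding fault on entry %u to the guest OS\n", idx);
    }
}

int main(void) {
    handle_data_fault(1, FAULT_SHADOW_STALE);  /* lazily fill one shadow entry */
    handle_data_fault(2, FAULT_GUEST_ERROR);   /* not our fault: pass it on    */
    return 0;
}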
Abstract:
PURPOSE: An apparatus and a method for virtualizing memory are provided to improve the performance of a virtualization system by rapidly mapping memory when the page tables of a guest operating system change. CONSTITUTION: Shadow page tables(62) are generated for the respective guest operating systems. A guest page table is provided by one of the guest operating systems as a reference to physical memory pages. A processor processes mapping information that maps the guest page table to one of the shadow page tables. The mapping information consists of the machine address information at which the guest operating system is loaded and the virtual address information of the guest page table.
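The mapping information might be pictured with a small lookup structure like the one below; the field names, addresses, and the find_shadow() helper are assumptions made purely for illustration.

#include <stdio.h>
#include <stdint.h>

/* One record of the mapping information described above: which shadow page
   table serves which guest page table, identified by the machine address at
   which the guest is loaded and the guest-virtual address of its page table
   (field names are illustrative). */
struct shadow_map {
    uint64_t guest_machine_base;  /* machine address where the guest is loaded */
    uint64_t guest_pt_vaddr;      /* virtual address of the guest page table   */
    int      shadow_pt_id;        /* index of the matching shadow page table   */
};

static struct shadow_map maps[] = {
    { 0x40000000ULL, 0xC0004000ULL, 0 },
    { 0x80000000ULL, 0xC0008000ULL, 1 },
};

/* Look up the shadow page table for a guest page table; -1 if unmapped. */
static int find_shadow(uint64_t machine_base, uint64_t pt_vaddr) {
    for (size_t i = 0; i < sizeof(maps) / sizeof(maps[0]); i++)
        if (maps[i].guest_machine_base == machine_base &&
            maps[i].guest_pt_vaddr == pt_vaddr)
            return maps[i].shadow_pt_id;
    return -1;
}

int main(void) {
    printf("shadow table: %d\n", find_shadow(0x80000000ULL, 0xC0008000ULL));
    return 0;
}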
Abstract:
PURPOSE: A method and a system for policy-based data filtering are provided to perform data filtering and grouping by applying patterns based on a policy, rather than by simple pattern matching, thereby reflecting the data filtering and grouping requirements of an application service. CONSTITUTION: A pattern handler(171) handles the processing functions that apply the corresponding pattern to each item of event data to be filtered. When a pattern is input, a pattern handler generator(172) generates a pattern handler according to the input pattern. A scheduler unit(173) determines the application order of the input patterns based on a policy and controls the start and termination of the pattern handlers according to the determined application order.
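One way to picture the scheduler unit's role is a sketch like the following, in which a numeric policy_priority field (an assumption, not the patent's policy model) fixes the application order and the handlers are then started and stopped in that order.

#include <stdio.h>
#include <stdlib.h>

/* A generated pattern handler with the priority its policy assigns it
   (names and the priority field are illustrative, not the patent's API). */
struct pattern_handler {
    const char *pattern;
    int         policy_priority;   /* lower value = applied earlier */
};

static int by_priority(const void *a, const void *b) {
    const struct pattern_handler *x = a, *y = b;
    return x->policy_priority - y->policy_priority;
}

int main(void) {
    struct pattern_handler handlers[] = {
        { "group-by-location", 2 },
        { "filter-duplicates", 1 },
        { "drop-out-of-range", 3 },
    };
    size_t n = sizeof(handlers) / sizeof(handlers[0]);

    /* The scheduler decides the application order from the policy, then
       drives and terminates each handler in that order. */
    qsort(handlers, n, sizeof(handlers[0]), by_priority);
    for (size_t i = 0; i < n; i++) {
        printf("start handler '%s'\n", handlers[i].pattern);
        printf("stop  handler '%s'\n", handlers[i].pattern);
    }
    return 0;
}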
Abstract:
A network scheduler and a method thereof for selectively supporting a work conserving mode are provided to improve the efficiency of network resources and the performance of the whole system, and to manage network bandwidth efficiently. Parent buckets supply tokens(14) to the lowest-level child buckets in the hierarchical structure. The work conserving mode, which uses spare bandwidth while guaranteeing the allocated bandwidth, is selectively supported for each NSU (network subject). NSUs are classified into a green state, a red state, a yellow state and a black state according to the token value and the presence or absence of packet requests to be processed.
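A simplified sketch of a hierarchical token bucket with an optional work conserving mode is given below; the state classification rules and the borrow-from-parent logic are illustrative guesses at the behavior described above, not the scheduler's actual algorithm.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical per-NSU bucket: token count, queued packet requests, and a
   flag saying whether this NSU opted into the work conserving mode. */
struct bucket {
    int  tokens;
    int  pending_packets;
    bool work_conserving;
    struct bucket *parent;        /* parent bucket that supplies tokens */
};

/* Classify an NSU in the spirit of the abstract: state depends on whether it
   has tokens and whether it has packet requests to process (the exact rules
   here are illustrative, not the patent's definition). */
static const char *nsu_state(const struct bucket *b) {
    if (b->pending_packets == 0) return b->tokens > 0 ? "green" : "yellow";
    return b->tokens > 0 ? "red" /* busy but within allocation */ : "black";
}

/* Send one packet: spend an own token, or borrow from the parent only when
   the work conserving mode is selected for this NSU. */
static bool send_packet(struct bucket *b) {
    if (b->pending_packets == 0) return false;
    if (b->tokens > 0)                      { b->tokens--; }
    else if (b->work_conserving && b->parent && b->parent->tokens > 0)
                                            { b->parent->tokens--; }
    else                                    { return false; }
    b->pending_packets--;
    return true;
}

int main(void) {
    struct bucket parent = { .tokens = 5 };
    struct bucket child  = { .tokens = 0, .pending_packets = 2,
                             .work_conserving = true, .parent = &parent };
    printf("state before: %s\n", nsu_state(&child));
    while (send_packet(&child)) { /* borrow spare tokens from the parent */ }
    printf("state after:  %s, parent tokens left: %d\n",
           nsu_state(&child), parent.tokens);
    return 0;
}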
Abstract:
A disk I/O scheduler for a server virtualization environment and a scheduling method thereof are provided to enhance the performance of virtual systems by strictly isolating the allocated I/O resources among the virtual systems. A plurality of queues(110-1, 110-2, 110-3) hold I/O(Input/Output) requests. When an I/O request occurs in a virtual system, an I/O request adding unit(120) adds the request to the corresponding queue among the queues. An I/O request extracting unit(130) extracts I/O requests from the queues, and the extracted requests are added to a device queue(200). The I/O request adding unit includes a queue hash, keyed by the identifier of a virtual system, for looking up the currently registered I/O queues.
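A user-space sketch of the queue layout might look like the following; the fixed-size arrays, the trivial hash on the VM identifier, and the round-robin extraction are simplifying assumptions rather than the scheduler's real data structures.

#include <stdio.h>

#define MAX_VMS   4     /* illustrative hash size              */
#define QUEUE_CAP 8     /* illustrative per-VM queue capacity  */

/* One per-virtual-system I/O queue, looked up by hashing the VM identifier. */
struct vm_queue {
    int vm_id;
    int requests[QUEUE_CAP];
    int head, tail;
};

static struct vm_queue queues[MAX_VMS];
static int device_queue[QUEUE_CAP * MAX_VMS];
static int device_len;

/* I/O request adding unit: hash on the virtual system identifier to find the
   queue (collision handling omitted for brevity) and append the request. */
static void add_request(int vm_id, int sector) {
    struct vm_queue *q = &queues[vm_id % MAX_VMS];
    q->vm_id = vm_id;
    if (q->tail < QUEUE_CAP) q->requests[q->tail++] = sector;
}

/* I/O request extracting unit: pull one request per VM queue in round-robin
   order and move it to the device queue, isolating the VMs from each other. */
static void extract_round(void) {
    for (int i = 0; i < MAX_VMS; i++) {
        struct vm_queue *q = &queues[i];
        if (q->head < q->tail)
            device_queue[device_len++] = q->requests[q->head++];
    }
}

int main(void) {
    add_request(1, 100); add_request(1, 101);  /* VM 1 issues two requests */
    add_request(2, 200);                       /* VM 2 issues one          */
    extract_round(); extract_round();
    for (int i = 0; i < device_len; i++) printf("device <- %d\n", device_queue[i]);
    return 0;
}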
Abstract:
A communication interface apparatus between application programs on virtual machines using a shared memory, and a method therefor, are provided to write data directly into the shared memory during data transmission, thereby improving communication performance. The communication interface apparatus comprises a request divergence unit(420), a TCP(Transmission Control Protocol) socket connection unit(430) and a shared memory connection unit(440). The shared memory connection unit establishes a shared memory connection through the established TCP socket connection. The shared memory connection unit transmits and receives data to and from the second socket application program through the established shared memory connection, according to the socket request information for data transmission and reception diverted by the request divergence unit.
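The divergence between the two paths can be sketched roughly as below; peer_supports_shm(), the plain buffer standing in for the shared memory region, and the function names are hypothetical.

#include <stdio.h>
#include <string.h>
#include <stdbool.h>

#define SHM_SIZE 256

/* A stand-in for the shared memory region negotiated over the TCP control
   connection (a plain buffer here; in the real apparatus this would be a
   region shared between the two virtual machines). */
static char shared_region[SHM_SIZE];

/* Hypothetical check used by the request divergence unit: can this peer be
   reached through shared memory (e.g. same physical host)? */
static bool peer_supports_shm(const char *peer) {
    return strcmp(peer, "local-vm") == 0;
}

static void send_via_shm(const char *data) {
    /* Write the payload directly into the shared region: no extra copies. */
    strncpy(shared_region, data, SHM_SIZE - 1);
    printf("shm  -> %s\n", shared_region);
}

static void send_via_tcp(const char *peer, const char *data) {
    /* Fallback path; a real implementation would use a connected socket. */
    printf("tcp  -> %s: %s\n", peer, data);
}

/* Request divergence unit: pick the shared memory path when available,
   otherwise the ordinary TCP socket path. */
static void send_request(const char *peer, const char *data) {
    if (peer_supports_shm(peer)) send_via_shm(data);
    else                         send_via_tcp(peer, data);
}

int main(void) {
    send_request("local-vm",  "hello over shared memory");
    send_request("remote-vm", "hello over TCP");
    return 0;
}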
Abstract:
The present invention relates to a file system for hardware to which I/O acceleration technology is applied and a data processing method in the file system. The method manager of the file system receives a method that requires access to a file stored on disk, selects either block device processing or character device processing for the I/O-accelerated hardware, and controls access to the file. The slow-path manager performs block device processing for the I/O-accelerated hardware under the control of the method manager. The fast-path manager performs character device processing for the I/O-accelerated hardware under the control of the method manager. Under the control of the slow-path manager, the cache manager accesses files stored on disk through a cache that caches part of the data stored on the disk. Under the control of the fast-path manager, the I/O acceleration manager controls copying of data stored on disk into a memory area for I/O acceleration through the I/O-accelerated hardware. According to the present invention, hardware using I/O acceleration technology can also be supported through the standard interface of a conventional UNIX file system. File system, Linux, I/O acceleration, EXT2, EXT3, character device, PMEM
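A rough user-space sketch of the method manager's path selection is shown below; the size threshold, function names, and print statements are illustrative stand-ins for the block-device (slow-path) and character-device (fast-path) processing described above.

#include <stdio.h>
#include <stdbool.h>

/* Hypothetical decision used by the method manager: whether this access
   should take the accelerated character-device path. */
static bool use_fast_path(size_t length) {
    return length >= 4096;   /* illustrative threshold, not from the patent */
}

/* Slow path: block device processing through the cache manager. */
static void slow_path_read(const char *file, size_t length) {
    printf("slow path: read %zu bytes of %s through the cache\n", length, file);
}

/* Fast path: character device processing; the I/O acceleration manager copies
   disk data into the acceleration memory area (e.g. a PMEM-style region). */
static void fast_path_read(const char *file, size_t length) {
    printf("fast path: copy %zu bytes of %s into the acceleration memory area\n",
           length, file);
}

/* Method manager: receives a method that needs file access and selects block
   or character device processing for the I/O-accelerated hardware. */
static void access_file(const char *file, size_t length) {
    if (use_fast_path(length)) fast_path_read(file, length);
    else                       slow_path_read(file, length);
}

int main(void) {
    access_file("/data/small.log", 512);
    access_file("/data/large.bin", 1 << 20);
    return 0;
}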