Abstract:
PURPOSE: A policy-based network system for minimizing the loss of policy enforcement, and a method for operating the same, are provided. The system minimizes the loss of policy enforcement by supplying policy data to a policy client through a caching technique even when the policy server fails to access the policy warehouse. CONSTITUTION: The policy-based network system comprises a PDP(Policy Decision Point)(104), a policy warehouse(105), and a PEP(Policy Enforcement Point)(102). The PEP(102), a policy client with two interfaces, one for a sending terminal(101) and one for a receiving terminal(103), transfers data from the sending terminal(101) to the receiving terminal(103). The PDP(104), a policy server, decides the policy of the PEP(102) over a persistent connection with the PEP(102). In response to a policy decision request from the PEP(102), the PDP(104) searches the policy warehouse(105) for suitable policy data using LDAP(Lightweight Directory Access Protocol). The PDP(104) includes a cache warehouse(114) that stores high-priority policy data for policy decisions.
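The following is a minimal C sketch, not the patented implementation, of the caching idea described above: a policy decision first tries the policy warehouse and, when that lookup fails, falls back to a local cache of high-priority policy data. All names and data are illustrative assumptions.

```c
/* Minimal sketch (not the patented implementation): a PDP-side lookup that
 * falls back to a local cache of high-priority policies when the LDAP
 * policy warehouse is unreachable. All names are illustrative. */
#include <stdio.h>
#include <string.h>

#define CACHE_SIZE 4

struct policy {
    char name[32];
    char action[32];   /* e.g. "permit", "deny", "mark-dscp-46" */
};

/* Hypothetical cache warehouse (114): high-priority policies kept in memory. */
static struct policy cache[CACHE_SIZE] = {
    { "voip",    "mark-dscp-46" },
    { "default", "permit"       },
};

/* Stand-in for an LDAP search of the policy warehouse (105).
 * Returns 0 on success, -1 when the warehouse cannot be reached. */
static int warehouse_lookup(const char *name, struct policy *out)
{
    (void)name; (void)out;
    return -1;                      /* simulate an unreachable warehouse */
}

static int cache_lookup(const char *name, struct policy *out)
{
    for (int i = 0; i < CACHE_SIZE; i++) {
        if (strcmp(cache[i].name, name) == 0) {
            *out = cache[i];
            return 0;
        }
    }
    return -1;
}

/* Decide a policy for a PEP request: warehouse first, then the cache. */
static int decide_policy(const char *name, struct policy *out)
{
    if (warehouse_lookup(name, out) == 0)
        return 0;
    return cache_lookup(name, out);  /* enforcement survives warehouse failure */
}

int main(void)
{
    struct policy p;
    if (decide_policy("voip", &p) == 0)
        printf("policy %s -> %s\n", p.name, p.action);
    else
        printf("no policy found\n");
    return 0;
}
```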
Abstract:
PURPOSE: A method for controlling the clock synchronization of a digital process phase locked loop(PLL) is provided to reduce the time required to synchronize the self-generated clock with an external reference clock. CONSTITUTION: The method for controlling the clock synchronization of a digital process phase locked loop(PLL) includes the steps of: (a) determining(S401) whether the PLL is being performed for the first time; (b) if the PLL is performed for the first time in step (a), performing the PLL while storing(S405), in a non-volatile memory, the shift time for moving to the following step and the digital/analog control data; and (c) if the PLL is not being performed for the first time in step (a), performing(S409) the PLL in a time shorter than the processing time of the PLL in step (b) by using the shift time for moving to the following step and the digital/analog control data stored in step (b).
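As a rough illustration of steps (a) to (c), the C sketch below assumes a calibration record (shift time and digital/analog control word) held in non-volatile memory: the first run performs the full PLL procedure and stores the record, and later runs reuse it for a shorter lock time. All identifiers and values are hypothetical.

```c
/* Minimal sketch (assumptions, not the patented procedure): on the first run
 * the full PLL process is performed and the shift time and D/A control word
 * are written to non-volatile storage; later runs reload them and lock in a
 * shorter time. All functions below are illustrative stubs. */
#include <stdbool.h>
#include <stdio.h>

struct pll_cal {
    bool     valid;        /* set once step (b) has completed            */
    unsigned shift_time;   /* time to move to the following step (ticks) */
    unsigned dac_word;     /* digital/analog control data                */
};

static struct pll_cal nvram;   /* stand-in for non-volatile memory */

static void full_pll_run(struct pll_cal *cal)
{
    /* Step (b): long search; record the values that produced lock. */
    cal->shift_time = 500;     /* illustrative result */
    cal->dac_word   = 0x2A3;
    cal->valid      = true;
}

static void fast_pll_run(const struct pll_cal *cal)
{
    /* Step (c): preload stored values, so lock takes less time than (b). */
    printf("warm start: shift=%u dac=0x%X\n", cal->shift_time, cal->dac_word);
}

void pll_sync(void)
{
    if (!nvram.valid)          /* step (a): first execution? */
        full_pll_run(&nvram);  /* step (b)                   */
    else
        fast_pll_run(&nvram);  /* step (c)                   */
}

int main(void)
{
    pll_sync();   /* first run: full search, calibration stored */
    pll_sync();   /* second run: shorter warm start             */
    return 0;
}
```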
Abstract:
PURPOSE: A method for separating the routing function and the forwarding function in a router switch apparatus, and an apparatus therefor, are provided to separate the routing function from the forwarding function so that general data, other than routing data, is processed independently in the part handling the forwarding function. CONSTITUTION: A routing information management unit(100) receives routing frames, generates and stores routing information, and receives and stores ARP(Address Resolution Protocol) information. Forwarding units(205,211) receive and store routing information from the routing information management unit(100) and receive communication frames from an external communication network. If a communication frame is a routing frame, the forwarding units(205,211) provide the routing frame to the routing information management unit(100). If the communication frame is an ARP frame, the forwarding units(205,211) analyze the ARP frame, generate and store ARP information, and provide the ARP information to the routing information management unit(100). If the communication frame is a general data frame, the forwarding units(205,211) extract the destination of the general data frame on the basis of the routing information and the ARP information, and transmit the general data frame to the destination. A switch unit(210) relays data communication between the forwarding units(205,211) and between the forwarding units(205,211) and the routing information management unit(100).
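A minimal C sketch of the frame classification performed in a forwarding unit is given below; the unit names and stub functions are illustrative assumptions, not the actual apparatus.

```c
/* Minimal sketch (illustrative names, not the actual apparatus): how a
 * forwarding unit (205, 211) might classify an incoming frame and either
 * hand it to the routing information management unit (100), update its ARP
 * information, or forward it using locally stored routing/ARP information. */
#include <stdio.h>

enum frame_type { FRAME_ROUTING, FRAME_ARP, FRAME_DATA };

struct frame {
    enum frame_type type;
    unsigned dest_ip;
};

/* Stubs standing in for the units described in the abstract. */
static void send_to_rim_unit(const struct frame *f)     { (void)f; printf("to RIM unit: routing frame\n"); }
static void store_and_report_arp(const struct frame *f) { (void)f; printf("ARP learned, reported to RIM unit\n"); }
static int  lookup_next_hop(unsigned dest_ip)           { return (int)(dest_ip % 4); /* illustrative table */ }
static void transmit(const struct frame *f, int port)   { (void)f; printf("data frame -> port %d\n", port); }

void forwarding_unit_receive(const struct frame *f)
{
    switch (f->type) {
    case FRAME_ROUTING:                 /* routing protocol traffic goes up */
        send_to_rim_unit(f);
        break;
    case FRAME_ARP:                     /* learn locally, then inform RIM   */
        store_and_report_arp(f);
        break;
    case FRAME_DATA:                    /* forwarded without involving RIM  */
        transmit(f, lookup_next_hop(f->dest_ip));
        break;
    }
}

int main(void)
{
    struct frame data = { FRAME_DATA, 0x0A000102 };
    forwarding_unit_receive(&data);
    return 0;
}
```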
Abstract:
PURPOSE: A method for controlling flow by using an active network technique in multipoint communication is provided to offer scalability even when the number of receivers increases or the receivers are spread over a wide area, and to minimize packet loss even when congestion occurs within the network. CONSTITUTION: Data packets arrive at receivers and intermediate nodes(201). It is determined whether a packet is the first packet of the multicast communication to arrive(202). If so, a soft store in the node is initialized to start the multicast flow control included in the program(203). Data packets arriving at the node are saved in a buffer(204). An output port through which transmission is possible is searched for(207). If a data packet has priority for transmission to the corresponding output port(208), the data packet is duplicated and transmitted to the downstream nodes when the number of credits allowed for the corresponding downstream node is larger than 0(209). If the current node is a final receiver(205), packets are transmitted to the application layer and a counter is increased by 1(206). When the counted value reaches the flow control reference count and a multiple of the credit update unit(211), the current node decides the amount of packets receivable from the upstream node based on the size of the virtual buffer for each downstream node and sends credit packets containing the related information to the upstream node(213). It is determined whether the data packets have been transmitted to all corresponding output ports or to the application layer(214).
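The credit-based forwarding decision at an intermediate node might look like the following C sketch; the port count, credit values, and function names are illustrative assumptions.

```c
/* Minimal sketch (all names and numbers illustrative): credit-based
 * forwarding at an intermediate node. A packet is duplicated to a downstream
 * port only while that port still has credits; credits are replenished by
 * credit packets that the downstream node sends upstream. */
#include <stdio.h>

#define NUM_PORTS 3

static int credits[NUM_PORTS] = { 2, 0, 1 };   /* per-downstream-node credit */

/* Forward one buffered packet to every output port that has credit left. */
void forward_packet(int packet_id)
{
    for (int port = 0; port < NUM_PORTS; port++) {
        if (credits[port] > 0) {               /* step 209 in the abstract  */
            credits[port]--;
            printf("packet %d duplicated to port %d\n", packet_id, port);
        } else {
            printf("packet %d held for port %d (no credit)\n", packet_id, port);
        }
    }
}

/* Called when a credit packet arrives from a downstream node (step 213 on
 * the downstream side): the advertised amount reflects its virtual buffer. */
void credit_update(int port, int amount)
{
    credits[port] += amount;
}

int main(void)
{
    forward_packet(1);
    credit_update(1, 4);       /* downstream node 1 opens its window */
    forward_packet(2);
    return 0;
}
```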
Abstract:
PURPOSE: A packet transferring device and a method using a dual-port RAM are provided to speed up the transfer of IP packets by using a separate local bus and a dual-port RAM. CONSTITUTION: The packet transferring device comprises: a cell transferring element for transferring ATM(Asynchronous Transfer Mode) cells; a cell processing element that reassembles the ATM cells received from the cell transferring element into IP packets to send to a packet transferring element, and divides the IP packets received from the packet transferring element into ATM cells to transmit to the cell transferring element; a packet transferring element that saves the IP packets transferred from the cell processing element in a dual-port RAM packet memory and then transfers them to a packet processing element via an IP packet receiving queue of a local memory; and a packet processing element for processing the IP packets transferred from the dual-port RAM packet memory according to the processing functions of the upper-layer IP protocols. The packet transferring element also saves the IP packets processed by the packet processing element in the dual-port RAM packet memory and then transmits them to the cell processing element via an IP packet transmitting buffer queue of the local memory.
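The following C sketch illustrates, under assumed names and sizes, how a descriptor queue in local memory can hand packets held in a dual-port RAM from the packet transferring element to the packet processing element without an extra copy.

```c
/* Minimal sketch (illustrative layout, not the actual device): the packet
 * payload sits in a dual-port RAM shared by the packet transferring element
 * and the packet processing element, while a small receive queue in local
 * memory carries only descriptors (offset + length) between them. */
#include <stdio.h>
#include <string.h>

#define DPRAM_SIZE  4096
#define QUEUE_DEPTH 8

static unsigned char dpram[DPRAM_SIZE];        /* dual-port packet memory   */

struct descriptor { unsigned offset, length; };

static struct descriptor rx_queue[QUEUE_DEPTH];/* IP packet receiving queue */
static unsigned rx_head, rx_tail;

/* Packet transferring element: copy a reassembled IP packet into the
 * dual-port RAM and enqueue a descriptor for the packet processing element. */
int enqueue_packet(const unsigned char *pkt, unsigned len, unsigned offset)
{
    if ((rx_tail + 1) % QUEUE_DEPTH == rx_head || offset + len > DPRAM_SIZE)
        return -1;                             /* queue or memory full      */
    memcpy(&dpram[offset], pkt, len);
    rx_queue[rx_tail] = (struct descriptor){ offset, len };
    rx_tail = (rx_tail + 1) % QUEUE_DEPTH;
    return 0;
}

/* Packet processing element: pull the descriptor and read the payload
 * directly from the dual-port RAM, without a second copy over the bus. */
int dequeue_packet(struct descriptor *d)
{
    if (rx_head == rx_tail)
        return -1;                             /* queue empty               */
    *d = rx_queue[rx_head];
    rx_head = (rx_head + 1) % QUEUE_DEPTH;
    return 0;
}

int main(void)
{
    unsigned char pkt[] = { 0x45, 0x00, 0x00, 0x14 };  /* start of an IP header */
    enqueue_packet(pkt, sizeof pkt, 0);
    struct descriptor d;
    if (dequeue_packet(&d) == 0)
        printf("packet at offset %u, %u bytes\n", d.offset, d.length);
    return 0;
}
```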
Abstract:
PURPOSE: A method for controlling a dynamic combined use timer based call connection in ATM(asynchronous transfer mode) adaptation layer 2 is provided to minimize bandwidth consumption by dynamically controlling the Time_CU value, thereby reducing the number of timeouts and using the remaining bandwidth for available bit rate/unspecified bit rate traffic. CONSTITUTION: When a call is requested(S501), the cell assembly delay time is tested(S503). The Time_CU is increased(S505) and the cell assembly delay time is tested again(S507). The Time_CU is reduced to its previous value(S509) and the call request is accepted(S511). The Time_CU is compared with MAX_TCU(S513). When the Time_CU is greater than MAX_TCU, the call request is accepted(S511). When the Time_CU is less than MAX_TCU, steps S505 to S513 are performed again. The Time_CU is reduced(S515) and the cell assembly delay time is tested again(S517). The Time_CU is compared with MIN_TCU(S519). When the Time_CU is greater than MIN_TCU, the call request is rejected(S521). When the Time_CU is less than MIN_TCU, steps S515 to S519 are performed again.
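A simplified C sketch of adjusting Time_CU between MIN_TCU and MAX_TCU is given below; the delay test and the thresholds are stubs assumed for illustration and do not reproduce the exact flow of steps S501 to S521.

```c
/* Minimal sketch (illustrative logic only): adjusting Time_CU between
 * MIN_TCU and MAX_TCU depending on whether the measured cell assembly
 * delay stays acceptable, in the spirit of steps S505-S521. The delay
 * test itself is a stub. */
#include <stdbool.h>
#include <stdio.h>

#define MIN_TCU  2
#define MAX_TCU 32

static int time_cu = 8;

/* Stand-in for the cell assembly delay test (S503/S507/S517). */
static bool delay_acceptable(int tcu)
{
    return tcu <= 16;          /* illustrative threshold */
}

/* Returns true when the call request can be accepted. */
bool handle_call_request(void)
{
    if (delay_acceptable(time_cu)) {
        /* Delay has margin: grow Time_CU while it stays acceptable (S505-S513). */
        while (time_cu < MAX_TCU && delay_acceptable(time_cu + 1))
            time_cu++;
        return true;
    }
    /* Delay too large: shrink Time_CU down toward MIN_TCU (S515-S519). */
    while (time_cu > MIN_TCU && !delay_acceptable(time_cu))
        time_cu--;
    return delay_acceptable(time_cu);   /* otherwise reject (S521) */
}

int main(void)
{
    printf("call %s, Time_CU=%d\n", handle_call_request() ? "accepted" : "rejected", time_cu);
    return 0;
}
```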
Abstract:
1. Technical field of the claimed invention: The present invention relates to an ATM host adapting apparatus. 2. Technical problem to be solved: An object of the present invention is to provide an ATM host adapting apparatus that prevents the phenomenon in which an ATM host instantaneously bursts cells at every predetermined period and thereby impairs the traffic characteristics on a shared medium. 3. Summary of the solution: The present invention comprises storage means for storing packet information, received cells, and parameters; network connection control means having system connection means, master means, slave means, processing means, matching means, connection means, and arbitration means; and sub-connection means for connecting the storage means and the network connection control means. 4. Important use of the invention: The present invention can be employed not only in general ATM hosts but also in ATM devices connected to a small-scale shared-medium ATM network.
Abstract:
The present invention aims to arbitrate and handle memory access requests among the functional blocks when several functional blocks in a circuit such as an ASIC use an external memory, and to provide a single interface regardless of the speed of the external memory. The present invention is divided into a read circuit, a write circuit, and a read/write arbitration circuit. The read circuit is further divided into a read arbitration circuit, a read timing control circuit, and a data transfer circuit, and the write circuit is further divided into a write arbitration circuit and a write timing control circuit. The read circuit and the write circuit each have a FIFO for speed matching with the external memory.
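Although the invention itself is a hardware block, the following C sketch models its behavior under an assumed fixed-priority scheme (the abstract does not specify the arbitration policy): blocks raise requests, one is granted per cycle, and a FIFO decouples the granted transfer from the external memory speed.

```c
/* Minimal behavioral sketch in C of a hardware arbiter: several functional
 * blocks raise memory requests, a fixed-priority arbiter (an assumption)
 * grants one per cycle, and a small FIFO provides speed matching with the
 * external memory. All names and sizes are illustrative. */
#include <stdbool.h>
#include <stdio.h>

#define NUM_BLOCKS 3
#define FIFO_DEPTH 4

struct fifo { unsigned data[FIFO_DEPTH]; unsigned head, tail, count; };

static bool fifo_push(struct fifo *f, unsigned v)
{
    if (f->count == FIFO_DEPTH) return false;
    f->data[f->tail] = v;
    f->tail = (f->tail + 1) % FIFO_DEPTH;
    f->count++;
    return true;
}

/* Fixed-priority arbitration (an assumption): grant the lowest-numbered
 * block that is currently requesting access to the external memory. */
static int arbitrate(const bool request[NUM_BLOCKS])
{
    for (int i = 0; i < NUM_BLOCKS; i++)
        if (request[i]) return i;
    return -1;                           /* no block is requesting */
}

int main(void)
{
    struct fifo write_fifo = { .head = 0, .tail = 0, .count = 0 };
    bool req[NUM_BLOCKS] = { false, true, true };  /* blocks 1 and 2 request */

    /* The granted block's write data goes through the FIFO, which matches
     * the blocks' speed to the external memory's speed. */
    int granted = arbitrate(req);
    if (granted >= 0 && fifo_push(&write_fifo, 0xAB))
        printf("block %d granted, data queued (%u in FIFO)\n", granted, write_fifo.count);
    return 0;
}
```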