Abstract:
Disclosed are a call pattern generation apparatus for media gateway performance testing, a media gateway performance testing apparatus, and a method thereof. Sample call patterns are combined to generate a call pattern for media gateway performance testing; call setup and release procedures are composed according to the generated call pattern; and the corresponding MEGACO protocol messages are then generated according to the composed procedures. In a virtual network including a virtual media gateway and a virtual media gateway controller, call setup and call release are requested of the media gateway under test according to the MEGACO protocol messages. As a result, the performance of a media gateway can be measured accurately using call patterns that resemble a real operating environment. Keywords: call pattern, MEGACO protocol, media gateway, performance test
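The composition step above can be sketched as follows. This is a minimal illustration, not the patented implementation: the sample pattern names and the use of MEGACO commands (Add, Modify, Subtract) to stand for setup, media change, and release are assumptions for the example.

```python
# Hypothetical sketch: combining sample call patterns into one test pattern
# that is later rendered as MEGACO protocol messages. All names are
# illustrative assumptions, not taken from the patent.

SAMPLE_PATTERNS = {
    "basic": ["Add", "Modify", "Subtract"],           # setup, media change, release
    "hold":  ["Add", "Modify", "Modify", "Subtract"], # setup, hold, resume, release
}

def build_call_pattern(samples, counts):
    """Combine sample patterns into one test pattern: `counts` maps a
    sample pattern name to how many calls of that shape to generate."""
    pattern = []
    for name, n in counts.items():
        for call_id in range(n):
            for command in samples[name]:
                # Each entry would later become one MEGACO message.
                pattern.append((name, call_id, command))
    return pattern

pattern = build_call_pattern(SAMPLE_PATTERNS, {"basic": 2, "hold": 1})
```

Each tuple in `pattern` represents one protocol message to be sent toward the media gateway under test in the virtual network.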
Abstract:
The present invention relates to a technique for providing packet-based differentiated services (DiffServ) on the Internet, and in particular to an apparatus and method for updating a class in a differentiated services network. The apparatus of the invention comprises: a timer task module that updates a timer table according to a predetermined algorithm and issues a message requesting the generation of a marker packet; a differentiated services control module for managing various tables and lists; an input module for processing packets arriving from the previous node; a marker-generation and class-update module that generates a marker packet upon receiving the message from the timer task module and, following a predetermined procedure, updates the class of a packet received through the input module by consulting the tables and lists; and a scheduler module that schedules input packets according to the output of the marker-generation and class-update module and transmits them to the next node. Accordingly, using the differentiated services class updater of the present invention on the Internet makes it possible to satisfy the per-flow end-to-end delay and loss requirements agreed between subscribers and service providers, and provides the advantage of enabling dynamic differentiated services and class load balancing according to the current load.
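The interaction between the timer task module and the marker-generation module can be sketched with a simple message queue. This is a hedged illustration under assumed names (the message tag, marker fields, and queue mechanism are not from the patent).

```python
# Illustrative sketch (names are assumptions): a timer task that updates a
# timer table and posts marker-generation requests, and a marker module
# that consumes those requests.
import queue

def timer_task(msg_queue, ticks):
    """Update the timer table each tick and request a marker packet."""
    timer_table = {}
    for t in range(ticks):
        timer_table[t] = "expired"           # stand-in for the timer update
        msg_queue.put(("GEN_MARKER", t))     # message to the marker module

def marker_module(msg_queue):
    """Generate one marker packet per request received from the timer task."""
    markers = []
    while not msg_queue.empty():
        msg, t = msg_queue.get()
        if msg == "GEN_MARKER":
            markers.append({"type": "marker", "tick": t, "delay_sum": 0})
    return markers

q = queue.Queue()
timer_task(q, 3)
markers = marker_module(q)
```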
Abstract:
The present invention provides a Session Initiation Protocol (SIP)-based load balancing apparatus and method. The apparatus, positioned in front of a plurality of proxy servers connected in parallel, receives a message transmitted from a user and decodes it. If the decoded message is a REGISTER message, the apparatus selectively performs addition, renewal, or deletion of user information according to an expiration field of the header and transmits the decoded message to a proxy server. If the decoded message is an INVITE message, the apparatus searches for the proxy server that will handle the destination address, increases that server's load, and transmits the decoded message to it. If the decoded message is a BYE message, the apparatus examines the proxy server of the destination address and transmits the decoded message to that server.
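The per-method dispatch described above can be sketched as a small routing function. This is a hedged sketch, not the patented implementation: the registry and load tables, the least-loaded selection rule, and the proxy names are all assumptions for illustration.

```python
# Illustrative sketch of the SIP load-balancing dispatch. Data structures
# and the proxy-selection policy are assumptions, not the patented design.

registry = {}                        # user -> proxy handling that user
load = {"proxy1": 0, "proxy2": 0}    # per-proxy load counters

def dispatch(message):
    """Route a decoded SIP message to a proxy server by its method."""
    method = message["method"]
    user = message["to"]
    expires = message.get("expires", 3600)
    if method == "REGISTER":
        if expires == 0:
            registry.pop(user, None)          # deletion of user information
            return None
        proxy = min(load, key=load.get)       # assumed: pick least-loaded proxy
        registry[user] = proxy                # addition / renewal
        return proxy
    if method == "INVITE":
        proxy = registry[user]                # proxy handling the destination
        load[proxy] += 1                      # increase that proxy's load
        return proxy
    if method == "BYE":
        return registry[user]                 # same proxy that handled the call
```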
Abstract:
PURPOSE: A method of automatically recognizing network node shape management information by using an SNMP(Simple Network Management Protocol) is provided to automatically recognize the shape management information of a recognized network management target node by using the SNMP, and to associate the information with the target node, thereby easily performing the shape management function. CONSTITUTION: A network management system stores shape management information of a network management target node in a storage(S201). When the target node is recognized(S202), the system decides whether the target node supports the SNMP(S203). If so, the system inquires of the target node about its system object ID management information(S204). The target node responds to the inquiry based on the SNMP, and the system receives the response(S205). The system decides whether the corresponding shape management information exists in the storage(S206). If so, the system includes the information in the management information of the target node(S208). If a manager requests the shape management information to be updated(S209), the system updates the shape management information(S207).
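The discovery flow (S201–S208) can be sketched with the SNMP query mocked out. This is an assumption-laden illustration: the storage layout, field names, and the use of sysObjectID (1.3.6.1.2.1.1.2.0) as the lookup key are inferred from the abstract, not specified by it.

```python
# Minimal sketch of the discovery flow, with the SNMP GET mocked out.
# Table layout and field names are illustrative assumptions.

# S201: stored shape-management information, keyed by system object ID.
storage = {"1.3.6.1.4.1.9": {"vendor": "example-router", "icon": "router"}}

def query_sys_object_id(node):
    """Stand-in for the SNMP GET of sysObjectID (1.3.6.1.2.1.1.2.0)."""
    return node.get("sysObjectID")

def discover(node):
    """Attach stored shape-management information to a recognized node."""
    if not node.get("snmp"):                 # S203: node must support SNMP
        return node
    oid = query_sys_object_id(node)          # S204-S205: query and response
    info = storage.get(oid)                  # S206: look up stored information
    if info:
        node["management"] = dict(info)      # S208: include in node information
    return node

node = discover({"snmp": True, "sysObjectID": "1.3.6.1.4.1.9"})
```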
Abstract:
PURPOSE: A fast page mode DRAM accelerator using a buffer cache is provided to maximize performance of an embedded system using the fast page mode DRAM by reducing the speed difference between a processor memory controller of a fast synchronous mode and a fast page mode DRAM offering a fast page mode burst cycle. CONSTITUTION: A processor interface controller(402) receives a memory control signal and memory cycle address information from a processor. An address comparing part(405) generates a confirmation signal if the processor starts an operation filling an internal cache, by receiving the memory cycle address information from the processor interface controller. The buffer cache(408) temporarily stores the data of the fast page mode DRAM. A buffer cache controller(406) updates the buffer cache by executing a burst cycle according to the confirmation signal from the address comparing part. A fast page mode DRAM address controller(404) transfers the memory cycle address needed for a read or write cycle of the fast page mode DRAM to the fast page mode DRAM.
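The address-comparison behavior can be modeled in a few lines: the comparing part signals a hit when the requested address falls in the currently cached burst line, and the controller issues a burst refill on a miss. The line size and field names below are assumptions for illustration, not values from the patent.

```python
# Toy model of the accelerator's buffer cache: sequential reads within one
# burst line hit the cache; crossing a line boundary triggers one fast-page-
# mode burst refill. LINE_WORDS is an assumed parameter.

LINE_WORDS = 8  # words fetched per burst cycle (assumption)

class BufferCache:
    def __init__(self):
        self.base = None      # start address of the cached burst line
        self.refills = 0      # burst cycles issued to the DRAM

    def read(self, addr):
        line_base = addr - (addr % LINE_WORDS)
        if self.base != line_base:    # address comparison: miss
            self.base = line_base
            self.refills += 1         # burst cycle refills the line
        return ("data", addr)         # served from the buffer cache

cache = BufferCache()
for a in range(16):                   # 16 sequential reads
    cache.read(a)
```

With sequential access, 16 reads cost only two DRAM burst cycles, which is the speed-matching effect the abstract describes.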
Abstract:
PURPOSE: A router building a VoIP gateway therein is provided to build the VoIP gateway therein without a separate gateway, and to add a line card when a gateway function is needed without a separate network interface, thereby easily expanding the system at a low price. CONSTITUTION: One or more line cards(302) are connected to other communication networks, and perform a line interface function and a forwarding function of a network for inputting/outputting packets. A switch fabric(306) switches the packets exchanged between the line cards(302). A gateway(206) is connected to a PSTN, and relays call control signals and voice signals between the PSTN and the line cards(302). A main processor(305) controls the gateway(206) and the line cards(302). The gateway(206) is connected to the PSTN and a digital data communication network, and relays the signals between the PSTN and the digital data communication network without passing through the switch fabric(306).
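The two data paths described above can be sketched as a routing decision: PSTN traffic is relayed by the built-in gateway and bypasses the switch fabric, while packet traffic between line cards traverses the fabric. Component names below mirror the abstract; the packet fields are assumptions.

```python
# Toy sketch of the two paths through the router: gateway-relayed PSTN
# traffic versus fabric-switched packet traffic. Field names are assumed.

def route(packet):
    """Return the sequence of components a packet passes through."""
    if "pstn" in (packet["src"], packet["dst"]):
        return ["gateway"]                                  # bypasses the fabric
    return ["line_card_in", "switch_fabric", "line_card_out"]

voice = route({"src": "pstn", "dst": "line_card"})
data = route({"src": "line_card", "dst": "line_card"})
```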
Abstract:
PURPOSE: An apparatus and a method for updating a flow class in a differentiated services network are provided to prevent burst congestion due to a specific class and to balance load. CONSTITUTION: An inputted packet is processed(401). It is determined whether a delay calculation request message is received from a timer task module, and if the message arrives, a marker is generated(403), delay is calculated(404), the delay sum is inserted, and a corresponding table is updated(405). If there is no delay calculation request, the kind of packet is discriminated(406). If the packet is a marker packet, it is determined whether the current node is an edge node(408). If it is an edge node and the 'D' bit is '1', the class is updated(410). If the 'D' bit is '0', the delay sum is inserted and the table is updated(412), and then the marker packet is returned(413).
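The edge-node marker handling (steps 408–413) can be sketched as below. The class-increment rule, the delay bound that sets the 'D' bit, and the field names are illustrative assumptions; only the D-bit branching itself comes from the abstract.

```python
# Sketch of edge-node marker handling. The condition for setting the 'D'
# bit (delay_sum exceeding a bound) and the class-change rule are assumed
# for illustration.

def handle_marker(packet, is_edge, measured_delay, delay_bound=100):
    """At an edge node, update the class if the marker's D bit is set;
    otherwise accumulate the delay sum and return the marker."""
    if not is_edge:
        return packet
    if packet["D"] == 1:
        packet["cls"] += 1                     # step 410: move flow to another class
        packet["D"] = 0
    else:
        packet["delay_sum"] += measured_delay  # step 412: insert delay sum
        if packet["delay_sum"] > delay_bound:
            packet["D"] = 1                    # request a class update (assumed rule)
    return packet                              # step 413: return the marker packet
```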
Abstract:
PURPOSE: A method for supplying DiffServ(Differentiated Service)-based VoIP QoS information through a router is provided to add VoIP flow and QoS information to a flow table for performing a packet forwarding process, and to transmit the added information to an intermediate router, thereby supplying QoS information on a VoIP packet by sharing session information. CONSTITUTION: An SIP(Session Initiation Protocol) server(620) and a router(631) establish TCP connections to an active QoS control server(610) by using a TCP port(S601,S602,S605). The router(631) informs the QoS control server(610) of its router configuration information(S603,S606). The SIP server(620) informs the QoS control server(610) of its configuration information. When the router configuration information or the configuration information of the SIP server(620) changes, the router(631) or the SIP server(620) transmits a changed configuration information message to the QoS control server(610) again(S607,S608,S609). The QoS control server(610) updates the configuration information.
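The control-plane exchange (S601–S609) amounts to the QoS control server keeping the latest configuration reported by each peer. The message format and peer names below are assumptions for illustration.

```python
# Hedged sketch of the QoS control server's configuration handling:
# each configuration (or changed-configuration) message overwrites the
# stored entry for that peer. Message fields are assumed.

class QoSControlServer:
    def __init__(self):
        self.config = {}    # peer name -> latest reported configuration

    def on_message(self, msg):
        """Handle a configuration or changed-configuration message."""
        self.config[msg["peer"]] = msg["conf"]   # update stored configuration

server = QoSControlServer()
server.on_message({"peer": "router-631", "conf": {"ifaces": 4}})
server.on_message({"peer": "sip-620", "conf": {"port": 5060}})
server.on_message({"peer": "router-631", "conf": {"ifaces": 8}})  # changed config
```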
Abstract:
PURPOSE: A network management method using availability is provided to compare an availability prediction value, calculated through a reliability analysis before operating a network management system, with an availability measurement value calculated during network operation, and to reset a suitable maintenance time. CONSTITUTION: A network manager calculates an MTBF(Mean Time Between Failures) for each network management agent, and calculates a predicted availability value using the MTBF(S301). If a fault occurs in a specific network management agent before the maintenance time set from the predicted availability value elapses(S303,S304), the network manager receives notification of the fault and calculates an MTBF measurement value for that agent(S305). The network manager recovers the fault of the agent(S306). The network manager calculates an MTTR(Mean Time To Repair) measurement value covering the time until recovery is completed(S307), and calculates a measured availability value for the agent(S308). The network manager compares the predicted availability value with the measured availability value, and sets the smaller value as the new maintenance time basis(S309,S310,S314).
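The availability figures in this method follow the standard steady-state formula A = MTBF / (MTBF + MTTR). The numbers below are made-up examples; only the formula and the keep-the-smaller-value rule come from the abstract.

```python
# Availability comparison as described in the method; input hours are
# invented example values.

def availability(mtbf_hours, mttr_hours):
    """Steady-state availability from mean time between failures (MTBF)
    and mean time to repair (MTTR)."""
    return mtbf_hours / (mtbf_hours + mttr_hours)

predicted = availability(1000.0, 2.0)   # from pre-operation reliability analysis
measured = availability(800.0, 4.0)     # from observed fault and repair times

# The manager adopts the smaller (more conservative) of the two values
# when resetting the maintenance schedule.
conservative = min(predicted, measured)
```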