Abstract:
PURPOSE: An accompaniment/voice separation device using an independent component analysis algorithm in a secondary omnidirectional network, and a method thereof, are provided. The independent component analysis algorithm estimates the mixing process of acoustic signals in which accompaniment signals are mixed with voice signals, so that the voice signals can be separated from the accompaniment signals in the shortest possible time. CONSTITUTION: An independent analyzer (110) receives input signals consisting of accompaniment signals and voice signals, and outputs first, second, third, and fourth multiplication coefficients. An accompaniment signal selector (120) outputs a multiplex control signal that takes a first logical state when the most significant bit of the second multiplication coefficient is in a second logical state, and takes a second logical state when the most significant bit of the third multiplication coefficient is in a second logical state. A filtering unit (130) generates a first output signal by summing the R- and L-channel signals processed with the first and third multiplication coefficients, and generates a second output signal by summing the signals processed with the second and fourth multiplication coefficients. A multiplexer (140) selectively outputs the first or the second output signal.
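The separation described above hinges on estimating an unmixing matrix (the four multiplication coefficients) from the two-channel mixture. The following is a minimal sketch of that idea, using a plain FastICA-style fixed-point iteration on synthetic signals; the function names, signal shapes, and mixing matrix are illustrative, not the patent's actual circuit:

```python
import numpy as np

def whiten(x):
    # Center, then decorrelate the channels and scale them to unit variance.
    x = x - x.mean(axis=1, keepdims=True)
    d, e = np.linalg.eigh(np.cov(x))
    return (e @ np.diag(d ** -0.5) @ e.T) @ x

def fast_ica(x, n_iter=200, tol=1e-8):
    """Estimate the unmixing (multiplication) coefficients with a
    FastICA-style fixed-point iteration using the tanh nonlinearity."""
    z = whiten(x)
    n = z.shape[0]
    w = np.linalg.qr(np.random.default_rng(0).normal(size=(n, n)))[0]
    for _ in range(n_iter):
        g = np.tanh(w @ z)
        g_prime = 1.0 - g ** 2
        w_new = (g @ z.T) / z.shape[1] - np.diag(g_prime.mean(axis=1)) @ w
        # Symmetric decorrelation keeps the estimated rows orthonormal.
        u, _, vt = np.linalg.svd(w_new)
        w_new = u @ vt
        if np.max(np.abs(np.abs(np.diag(w_new @ w.T)) - 1.0)) < tol:
            w = w_new
            break
        w = w_new
    return w @ z  # recovered sources, up to permutation and sign

# Demo: a sine ("voice" stand-in) and a square wave ("accompaniment"
# stand-in) mixed into two channels (R/L) by an unknown 2x2 matrix.
t = np.linspace(0.0, 8.0, 4000)
s = np.vstack([np.sin(2 * np.pi * 1.3 * t),
               np.sign(np.sin(2 * np.pi * 0.7 * t))])
mixed = np.array([[0.8, 0.6], [0.4, 0.9]]) @ s
est = fast_ica(mixed)
# Each recovered source should correlate strongly with one true source.
c = np.abs(np.corrcoef(np.vstack([s, est]))[:2, 2:])
print(c)
```

In this sketch the estimated unmixing matrix plays the role of the four multiplication coefficients; the hardware selector and multiplexer logic of the abstract are not modeled.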
Abstract:
A line-based image matching method in which a model image indexed by shape descriptors similar to those of a query image is retrieved from an image database indexed by line-based shape descriptors. The line-based image matching method involves: collecting line information from a query image and model images; defining binary relations between the lines of the query image and the lines of the model images; measuring the compatibility coefficients of the node-label pairs of the query and model images based on the binary relations; and measuring the similarity between the query and model images by continuous relaxation labeling using the compatibility coefficients.
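The steps above can be sketched as a toy relaxation-labeling matcher: query lines act as nodes, model lines as labels, and the binary relations are taken here to be relative angle and log length ratio, with a Rosenfeld-style multiplicative update. These relation and compatibility definitions are assumptions for illustration, not the patent's exact formulas:

```python
import numpy as np

def line_relations(lines):
    """Binary relations between every pair of lines: relative angle and
    log length ratio (assumed stand-ins for the abstract's relations)."""
    pts = np.asarray(lines, dtype=float)        # (n, 2, 2) endpoints
    d = pts[:, 1] - pts[:, 0]
    ang = np.arctan2(d[:, 1], d[:, 0])
    loglen = np.log(np.hypot(d[:, 0], d[:, 1]))
    rel_ang = np.abs(ang[:, None] - ang[None, :])
    rel_len = np.abs(loglen[:, None] - loglen[None, :])
    return rel_ang, rel_len

def relaxation_similarity(query_lines, model_lines, n_iter=200):
    """Similarity in (0, 1] via continuous relaxation labeling."""
    qa, ql = line_relations(query_lines)
    ma, ml = line_relations(model_lines)
    n, m = len(query_lines), len(model_lines)
    # Compatibility r[i, lam, j, mu]: near 1 when the relation between
    # query lines (i, j) matches that between model lines (lam, mu).
    r = np.exp(-3.0 * (np.abs(qa[:, None, :, None] - ma[None, :, None, :])
                       + np.abs(ql[:, None, :, None] - ml[None, :, None, :])))
    p = np.full((n, m), 1.0 / m)                # uniform label probabilities
    for _ in range(n_iter):
        q = np.einsum('abcd,cd->ab', r, p) / n  # support for each pairing
        p = p * (1.0 + q)
        p /= p.sum(axis=1, keepdims=True)       # renormalize per node
    q = np.einsum('abcd,cd->ab', r, p) / n
    # Average local consistency: 1.0 for a perfectly compatible matching.
    return float((p * q).sum(axis=1).mean())

def seg(angle, length):
    return ((0.0, 0.0), (length * np.cos(angle), length * np.sin(angle)))

model = [seg(0.0, 2.0), seg(1.0, 2.0), seg(1.5, 2.0)]
same = relaxation_similarity(model, model)
diff = relaxation_similarity([seg(0.2, 0.5), seg(0.25, 2.0), seg(0.3, 8.0)],
                             model)
print(same, diff)
```

A query identical to the model scores near 1, while a set of near-parallel lines with mismatched lengths scores lower, so the score can rank model images against a query.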
Abstract:
The present invention provides a method for separating a person from the background in a video image, which may comprise: estimating a person region and a background region in a first frame; generating a Gaussian mixture model for each of the estimated person region and background region; a first updating step of combining the Gaussian mixture model for the person region with the Gaussian mixture model for the background region to generate a Gaussian mixture model for the whole image, and updating the whole-image Gaussian mixture model using information from a second frame; and generating an energy function using the updated whole-image Gaussian mixture model, and separating the person region of the second frame by minimizing the generated energy function.
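The pipeline above can be sketched in a toy grayscale rendition: per-region Gaussian mixtures fitted with plain EM, then a decision on the second frame. For brevity this sketch replaces the energy-minimization step with a per-pixel likelihood-ratio test and omits the model-update step; all names and parameters are illustrative:

```python
import numpy as np

def fit_gmm(x, k=2, n_iter=40, seed=0):
    """Fit a 1-D Gaussian mixture to pixel intensities with plain EM."""
    rng = np.random.default_rng(seed)
    mu = rng.choice(x, size=k, replace=False).astype(float)
    var = np.full(k, x.var() + 1e-6)
    w = np.full(k, 1.0 / k)
    for _ in range(n_iter):
        # E-step: responsibility of each component for each pixel.
        d = x[:, None] - mu[None, :]
        p = w * np.exp(-0.5 * d ** 2 / var) / np.sqrt(2 * np.pi * var)
        p = p / (p.sum(axis=1, keepdims=True) + 1e-12)
        # M-step: re-estimate weights, means, and variances.
        nk = p.sum(axis=0) + 1e-12
        w = nk / len(x)
        mu = (p * x[:, None]).sum(axis=0) / nk
        var = (p * (x[:, None] - mu) ** 2).sum(axis=0) / nk + 1e-6
    return w, mu, var

def gmm_loglik(x, gmm):
    w, mu, var = gmm
    d = x[:, None] - mu[None, :]
    p = w * np.exp(-0.5 * d ** 2 / var) / np.sqrt(2 * np.pi * var)
    return np.log(p.sum(axis=1) + 1e-12)

def segment(frame, person_gmm, bg_gmm):
    """Label each pixel: True = person region, False = background."""
    x = frame.ravel()
    mask = gmm_loglik(x, person_gmm) > gmm_loglik(x, bg_gmm)
    return mask.reshape(frame.shape)

# Demo: bright "person" pixels and dark background estimated in frame 1,
# then used to separate the person region in a second frame.
rng = np.random.default_rng(1)
person_gmm = fit_gmm(rng.normal(0.8, 0.05, 500))
bg_gmm = fit_gmm(rng.normal(0.2, 0.05, 500))
frame2 = np.vstack([rng.normal(0.8, 0.05, (10, 20)),
                    rng.normal(0.2, 0.05, (10, 20))])
mask = segment(frame2, person_gmm, bg_gmm)
print(mask[:10].mean(), mask[10:].mean())
```

In the actual method the two region models would also be merged into a whole-image mixture, refreshed with second-frame information, and fed into an energy function minimized over the whole frame (e.g., with spatial smoothness terms) rather than decided pixel by pixel.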
Abstract:
An image retrieval method with improved retrieval performance, achieved by appropriately combining color features and texture features, is disclosed. The image retrieval method according to the present invention comprises: (a) obtaining the color distance and the texture distance between a query image and a data image; (b) weighting the obtained color distance and texture distance with predetermined weights; (c) obtaining a distance that combines the weighted color distance and texture distance in consideration of human visual characteristics; and (d) determining images similar to the query image using the combined feature distance. According to the present invention, by jointly using the color and texture features extracted from region images, retrieval results better suited to human visual perception can be obtained. In particular, region-based retrieval makes it possible to retrieve the many objects and much of the information in an image more precisely with less computation.
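Steps (a) through (d) can be sketched as follows; the histogram and gradient features, the weight values, and the function names are illustrative stand-ins for the patent's region-based color and texture descriptors:

```python
import numpy as np

def color_hist(img, bins=8):
    # Color feature: normalized intensity histogram (a stand-in; a real
    # system would histogram color channels such as H/S/V jointly).
    h, _ = np.histogram(img, bins=bins, range=(0.0, 1.0))
    return h / h.sum()

def texture_feat(img):
    # Texture feature: mean absolute horizontal and vertical gradients.
    gy, gx = np.gradient(img)
    return np.array([np.abs(gx).mean(), np.abs(gy).mean()])

def combined_distance(query, data, w_color=0.7, w_texture=0.3):
    """Steps (a)-(c): color and texture distances, weighted and combined."""
    dc = np.abs(color_hist(query) - color_hist(data)).sum()  # L1 on hists
    dt = np.linalg.norm(texture_feat(query) - texture_feat(data))
    return w_color * dc + w_texture * dt

def retrieve(query, database, k=1):
    """Step (d): rank database images by combined distance to the query."""
    d = [combined_distance(query, img) for img in database]
    return np.argsort(d)[:k]

# Demo: a flat bright query against a similar flat image, a textured
# image of similar brightness, and a flat dark image.
query = np.full((16, 16), 0.8)
checker = np.indices((16, 16)).sum(axis=0) % 2 * 0.2 + 0.7
database = [np.full((16, 16), 0.78), checker, np.full((16, 16), 0.2)]
print(retrieve(query, database, k=3))
```

The weights `w_color` and `w_texture` correspond to the predetermined weights of step (b); the patent tunes them to human visual characteristics, whereas the values here are arbitrary.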