Abstract:
The present invention relates to an automatic interpretation system for a hands-free automatic interpretation service. The hands-free automatic interpretation system may include a hands-free device, a terminal device, and an interpretation server. The terminal device may include an interpretation environment initialization unit that initializes the interpretation environment at the request of the hands-free device, an interpretation mediation unit that relays the interpretation results of the user and the counterpart, and an interpretation processing unit that synthesizes the counterpart's interpretation result into speech based on the configured interpretation environment and transmits it to the hands-free device. By providing the interpretation service through a hands-free device in this way, user convenience can be improved.
Abstract:
The present invention relates to a method of retraining an acoustic model by supplementing insufficient speech data for a particular language, without changing the structure of the acoustic model, using joint phones included in multilingual speech data. In general, speech data for each language must be fully prepared in order to create the acoustic model of a multilingual continuous speech recognition device, which consumes considerable cost and time. The present invention defines common phonemes for phoneme symbols that are acoustically identical across multiple languages. On this basis, a selective retraining method is provided that uses the common phonemes of a language with a large amount of speech data in order to train a language with insufficient speech data.
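The core idea of sharing acoustically identical phones across languages can be illustrated with a small sketch. This is not the patented procedure itself; the phone inventory and mapping table below are invented for illustration.

```python
# Hypothetical sketch: map language-specific phones to shared (common)
# phone symbols, so training data from a resource-rich language can be
# reused when retraining a resource-poor language's acoustic model.

# Common phones defined as acoustically identical across languages
# (illustrative entries only).
COMMON_PHONES = {
    ("en", "s"): "S", ("ko", "s"): "S",   # both map to shared phone "S"
    ("en", "m"): "M", ("ko", "m"): "M",
}

def to_shared(lang, phone):
    """Return the shared phone symbol if one is defined; otherwise a
    language-tagged phone that stays private to that language."""
    return COMMON_PHONES.get((lang, phone), f"{lang}:{phone}")

def shared_sequence(lang, phones):
    """Rewrite a phone transcription in terms of shared symbols, so
    utterances from either language contribute to the shared models."""
    return [to_shared(lang, p) for p in phones]

print(shared_sequence("en", ["s", "m", "t"]))  # ['S', 'M', 'en:t']
```

Under this scheme, retraining the models for `S` and `M` can draw on data from the language that has plenty of it, while `en:t` remains trained only on English data.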
Abstract:
The present invention relates to an apparatus and method for classifying the sentence type of a speech recognition result. It recognizes speech uttered by a speaker, performs morphological analysis on the text sentence produced by speech recognition, classifies the sentence type from the morphological analysis result, and includes a semantic analysis module that appends a punctuation mark to the end of the sentence according to the classified type; the punctuated sentence is then translated into the target language and the corresponding synthesized speech is output. According to the present invention, within the overall automatic interpretation process, in the stage running from the speech recognition result to the machine translation that takes it as input, the sentence type of the recognized text is classified; identifying the sentence type of the speech recognition result provides more accurate sentence-type information for the translation input, and ultimately allows automatic interpretation from the source language to the target language to proceed smoothly.
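The classify-then-punctuate step can be sketched as follows. This is an illustrative stand-in, not the patented method: the cue word lists below are hypothetical substitutes for real morphological analysis.

```python
# Illustrative sketch: classify a recognized sentence's type from
# simple lexical cues and append the matching punctuation before the
# sentence is passed to machine translation.

QUESTION_CUES = ("what", "where", "when", "who", "why", "how",
                 "is", "are", "do", "does")
IMPERATIVE_CUES = ("please", "stop", "open", "close")

def classify(sentence):
    """Return 'interrogative', 'imperative', or 'declarative'."""
    words = sentence.lower().split()
    if not words:
        return "declarative"
    if words[0] in QUESTION_CUES:
        return "interrogative"
    if words[0] in IMPERATIVE_CUES:
        return "imperative"
    return "declarative"

def punctuate(sentence):
    """Append the punctuation mark that matches the sentence type."""
    mark = {"interrogative": "?", "imperative": "!", "declarative": "."}
    return sentence + mark[classify(sentence)]

print(punctuate("where is the station"))     # where is the station?
print(punctuate("the train leaves at ten"))  # the train leaves at ten.
```

The appended punctuation is what gives the downstream translator explicit sentence-type information that raw speech recognition output lacks.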
Abstract:
PURPOSE: A word meaning recognition device using n-grams is provided to recognize the meanings of words in input sentences by setting semantic information for the words. CONSTITUTION: A semantic information data managing unit(100) divides sentence corpora into n-gram units, generates n-gram characteristic values, and generates relative characteristic values of the n-gram units according to semantic information. A meaning recognizing unit(300) adds a meaning tag to each word of the input sentence based on a comparison between the semantic information and the relative characteristic values. [Reference numerals] (110) Meaning information establishing unit; (120) Data storage unit; (130) Overlapped data removing unit; (140) N-gram information generating unit; (150) N-gram analyzing unit; (200) Sentence input unit; (300) Meaning recognizing unit
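A rough sketch of the n-gram sense-tagging idea: count the n-grams surrounding each sense of a word in a tagged corpus, then tag a new occurrence with the sense whose n-gram profile overlaps its context most. The corpus counts and sense labels below are invented for illustration and are not from the patent.

```python
from collections import Counter

def context_ngrams(words, i, n=2):
    """Bigrams within a small window around position i."""
    lo, hi = max(0, i - n), min(len(words), i + n + 1)
    window = words[lo:hi]
    return [tuple(window[j:j + 2]) for j in range(len(window) - 1)]

# Hypothetical sense profiles: bigram counts that would be gathered
# from sense-tagged corpus sentences.
profiles = {
    "bank/finance": Counter({("the", "bank"): 3, ("bank", "loan"): 2}),
    "bank/river": Counter({("river", "bank"): 3, ("bank", "side"): 1}),
}

def tag(words, i):
    """Tag the word at position i with the best-overlapping sense."""
    grams = context_ngrams(words, i)
    return max(profiles, key=lambda s: sum(profiles[s][g] for g in grams))

sent = "the bank loan was approved".split()
print(tag(sent, 1))  # bank/finance
```

The same comparison generalizes to longer n-grams and to the relative characteristic values the abstract describes; bigram counts keep the sketch short.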
Abstract:
PURPOSE: A light detection device and a manufacturing method thereof are provided to simplify manufacturing processes by forming a self-aligned light absorption layer and a first conductive pattern without an additional pattern. CONSTITUTION: An insulating pattern(106) includes a groove which exposes a part of a first conductive pattern(102). A light absorption layer(110) fills the groove of the insulating pattern. The light absorption layer has an upper surface which is arranged higher than the upper surface of the insulating pattern. A second conductive pattern(112) is arranged on the light absorption layer. Connection terminals(116) are respectively and electrically connected to the first conductive pattern and the second conductive pattern.
Abstract:
PURPOSE: An acoustic model generating apparatus and a method thereof are provided to automatically search for a penalty value for the complexity of an acoustic model under the MDL (Minimum Description Length) criterion. CONSTITUTION: A binary tree generating unit(101) generates a binary tree by repeatedly merging Gaussian components of an HMM (Hidden Markov Model) state based on distance criteria. An information generating unit(102) generates the maximum scale information of the acoustic model according to a platform(111) that includes a speech recognition unit(112). A binary tree reduction unit(103) reduces the binary tree according to the maximum scale information of the acoustic model.
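The build-then-reduce structure can be illustrated with a simplified sketch. These are assumptions, not the patent's algorithm: one-dimensional Gaussian means stand in for full Gaussian components, and absolute mean distance stands in for a proper distribution distance.

```python
# Build a binary tree by greedily merging the two closest Gaussian
# means, then cut the tree back to at most `max_leaves` representatives
# to shrink the model to the scale a target platform allows.

def node_mean(node):
    """Mean of a leaf, or the average of a merged node's children."""
    if node[0] == "leaf":
        return node[1]
    return (node_mean(node[1]) + node_mean(node[2])) / 2

def build_tree(means):
    """Greedy agglomerative merge of 1-D Gaussian means."""
    nodes = [("leaf", m) for m in means]
    while len(nodes) > 1:
        # Find the closest pair by |difference of means|.
        i, j = min(
            ((a, b) for a in range(len(nodes)) for b in range(a + 1, len(nodes))),
            key=lambda ab: abs(node_mean(nodes[ab[0]]) - node_mean(nodes[ab[1]])),
        )
        merged = ("node", nodes[i], nodes[j])
        nodes = [n for k, n in enumerate(nodes) if k not in (i, j)] + [merged]
    return nodes[0]

def cut(node, max_leaves):
    """Expand the tree top-down until at most max_leaves remain."""
    frontier = [node]
    while len(frontier) < max_leaves and any(n[0] == "node" for n in frontier):
        n = next(n for n in frontier if n[0] == "node")
        frontier.remove(n)
        frontier += [n[1], n[2]]
    return sorted(node_mean(n) for n in frontier)

tree = build_tree([0.0, 0.1, 5.0, 5.2, 9.0])
print(cut(tree, 3))  # three representative means
```

Cutting at different depths trades model size against resolution, which is the knob the maximum scale information from the platform would control.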
Abstract:
PURPOSE: An interpretation method using interactive communication between two communication terminals, and an apparatus therefor, are provided to improve speech recognition performance by automatically establishing the interpretation target language. CONSTITUTION: A communication unit(200) transmits and receives data. A speech recognition unit(202) recognizes the user's speech input. An interpretation unit(214) interprets the language requested by the recognized speech. A speech synthesis unit(204) synthesizes the interpreted sentence into speech. A control unit(206) establishes the interpretation target language with the other interpretation terminal through the communication unit, and controls the interpretation of the input speech and the speech output.