Abstract:
This disclosure relates to an automatic interpretation system for a hands-free automatic interpretation service. The hands-free automatic interpretation system may include a hands-free device, a terminal device, and an interpretation server. The terminal device may include an interpretation environment initialization unit that initializes the interpretation environment at the request of the hands-free device, an interpretation mediation unit that relays the interpretation results between the user and the counterpart, and an interpretation processing unit that synthesizes the counterpart's interpretation result into speech based on the configured interpretation environment and transmits it to the hands-free device. By providing the interpretation service through a hands-free device in this way, user convenience can be improved.
Abstract:
The present invention relates to an apparatus and method for classifying the sentence pattern of a speech recognition result sentence. It includes a semantic analysis module that recognizes the speech uttered by a speaker, performs morphological analysis on the text sentence produced by speech recognition, classifies the sentence pattern of the sentence from the morphological analysis result, and appends a punctuation mark to the end of the sentence according to the classification result; the punctuated sentence is then translated into the target language, and a corresponding synthesized speech output is produced. According to the invention, in the portion of the overall automatic interpretation pipeline that runs from speech recognition to the machine translation that takes the recognition result as input, the sentence pattern of the recognized text sentence is classified. Identifying the sentence pattern of the speech recognition result provides more accurate sentence-pattern information for the translation input, and ultimately allows automatic interpretation from the source language to the target language to proceed more smoothly.
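The punctuation-appending step above can be sketched as follows. This is a minimal illustration: a few invented Korean sentence-ending cues stand in for the patent's morphological analysis and classification rules.

```python
# Hypothetical sketch: classify the sentence pattern of a recognized
# (unpunctuated) Korean sentence from simple ending cues and append
# the matching punctuation mark. The cue lists are illustrative, not
# the patent's actual rules.

ENDING_CUES = {
    "?": ("까", "니", "나요"),     # interrogative endings
    "!": ("구나", "군요", "어라"),  # exclamative / imperative endings
}

def add_punctuation(sentence: str) -> str:
    """Append a sentence-final punctuation mark based on the ending."""
    for mark, endings in ENDING_CUES.items():
        if sentence.endswith(endings):
            return sentence + mark
    return sentence + "."  # default: declarative

print(add_punctuation("지금 몇 시입니까"))  # → 지금 몇 시입니까?
print(add_punctuation("오늘 날씨가 좋다"))  # → 오늘 날씨가 좋다.
```

A real system would classify from morpheme tags rather than raw string suffixes, but the control flow is the same.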
Abstract:
PURPOSE: A word sense recognition device using n-grams is provided to recognize the senses of the words in an input sentence by setting semantic information for the words. CONSTITUTION: A semantic information data managing unit (100) divides sentence corpora into n-gram units, generates n-gram feature values, and generates relative feature values of the n-gram units according to the semantic information. A sense recognizing unit (300) attaches a semantic tag to each word of the input sentence based on a comparison between the semantic information and the relative feature values. [Reference numerals] (110) Meaning information establishing unit; (120) Data storage unit; (130) Overlapped data removing unit; (140) N-gram information generating unit; (150) N-gram analyzing unit; (200) Sentence input unit; (300) Meaning recognizing unit
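A rough sketch of the n-gram idea follows, assuming a toy sense-tagged corpus and character-bigram counts in place of the patent's feature values; the corpus, sense labels, and scoring are invented for illustration.

```python
from collections import Counter

def char_ngrams(text, n=2):
    """Split text into overlapping character n-grams."""
    return [text[i:i + n] for i in range(len(text) - n + 1)]

def build_sense_profiles(tagged_contexts):
    """Build a per-sense n-gram Counter from (context, sense) pairs."""
    profiles = {}
    for context, sense in tagged_contexts:
        profiles.setdefault(sense, Counter()).update(char_ngrams(context))
    return profiles

def tag_word(context, profiles):
    """Tag with the sense whose n-gram profile overlaps the context most."""
    grams = Counter(char_ngrams(context))
    def overlap(sense):
        return sum((grams & profiles[sense]).values())  # Counter intersection
    return max(profiles, key=overlap)

corpus = [
    ("deposit money at the bank", "bank/finance"),
    ("the bank approved the loan", "bank/finance"),
    ("fishing on the river bank", "bank/river"),
    ("the muddy bank of the stream", "bank/river"),
]
profiles = build_sense_profiles(corpus)
print(tag_word("she opened an account at the bank", profiles))  # → bank/finance
```

The patent's "relative feature values" would replace the raw overlap score here; the comparison-then-tag flow is the same.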
Abstract:
PURPOSE: An interpretation method using interactive communication between two communication terminals, and an apparatus therefor, are provided to improve voice recognition performance by automatically establishing the interpretation target language. CONSTITUTION: A communication unit (200) transmits and receives data. A voice recognition unit (202) recognizes the voice input of a user. An interpretation unit (214) interprets the language requested by the recognized voice. A voice synthesis unit (204) synthesizes the interpreted sentence into voice. A control unit (206) establishes the interpretation target language with the other interpretation terminal through the communication unit, and controls the interpretation of the input voice and the voice output.
Abstract:
PURPOSE: An automatic interpretation device and method are provided to enhance interpretation performance by calculating and using the similarity between an interpreted sentence and the first-language sentence that is the speech recognition result. CONSTITUTION: A speech recognizer (S100) receives first-language speech and generates a first-language sentence through speech recognition. A language processor (S110) extracts the elements included in the first-language sentence. A similarity calculation part (S120) compares the extracted elements with the elements included in the interpreted sentence and calculates the similarity between the first-language sentence and the interpreted sentence. A sentence translation part (S130) translates the first-language sentence into a second-language sentence according to the similarity. A speech synthesis part (S140) extracts and synthesizes the speech data corresponding to the second-language sentence and outputs a speech signal.
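The similarity calculation (S120) might be sketched as below. Here the "elements" are plain word tokens and the score is Jaccard overlap; both are assumptions, and in practice the interpreted sentence would be mapped back into the first language (or into a shared representation) before comparison.

```python
# Illustrative sketch: score the overlap between the elements of the
# recognized first-language sentence and those of a candidate
# interpreted sentence. Tokenization and Jaccard scoring are stand-ins
# for the patent's language processor and similarity measure.

def extract_elements(sentence: str) -> set:
    """Stand-in for the language processor: lowercase word tokens."""
    return set(sentence.lower().split())

def similarity(first_lang: str, interpreted: str) -> float:
    """Jaccard similarity between the two element sets."""
    a, b = extract_elements(first_lang), extract_elements(interpreted)
    return len(a & b) / len(a | b) if a | b else 0.0

# The candidate with the higher similarity to the recognition result
# would be preferred for translation.
print(similarity("book a table for two", "book a table for two people"))
```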
Abstract:
PURPOSE: A sentence pattern classification method, and a keyword extraction method for the same, are provided to rapidly classify the sentence pattern of a sentence through simple keyword matching, without complex computation. CONSTITUTION: The sentence pattern of an input sentence is classified by using keywords in a database (S450). The sentence pattern is first classified by using a sentence-end keyword; if the sentence pattern cannot be classified from the sentence-end keyword, it is classified by using a mid-sentence keyword.
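The two-stage matching described above (sentence-end keyword first, mid-sentence keyword as fallback) can be sketched as follows, with invented English keyword tables standing in for the database.

```python
# Minimal sketch of two-stage keyword matching. The keyword tables are
# illustrative, not the patent's database contents.

END_KEYWORDS = {
    "please": "request",   # e.g. "..., please"
    "right": "question",   # tag question, e.g. "..., right"
}
MID_KEYWORDS = {
    "what": "question",
    "how": "question",
    "let's": "suggestion",
}

def classify(sentence: str) -> str:
    words = sentence.lower().rstrip(".?!").split()
    # Stage 1: sentence-end keyword.
    if words and words[-1] in END_KEYWORDS:
        return END_KEYWORDS[words[-1]]
    # Stage 2: mid-sentence keyword, only if stage 1 was inconclusive.
    for w in words:
        if w in MID_KEYWORDS:
            return MID_KEYWORDS[w]
    return "statement"  # default class

print(classify("Could you open the window, please"))  # → request
print(classify("What time does the train leave"))     # → question
print(classify("The meeting starts at noon"))         # → statement
```

Because each stage is a dictionary lookup, classification stays cheap regardless of how many pattern classes the database defines.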
Abstract:
PURPOSE: A pattern database apparatus and method, and a voice recognition apparatus, are provided to output a corrected voice recognition result by using pattern-based semantic representation. CONSTITUTION: The pattern database apparatus (100) performs morphological analysis on the voice recognition result and then analyzes the syntax. The apparatus recognizes and extracts additional information. Volume expressions, meaningless expressions, and the additional information are converted after a class conversion is performed. The corrected voice recognition result is output after the sentence is generated.
Abstract:
PURPOSE: A method and device for voice recognition using a domain ontology are provided to build a domain ontology of the voice recognition target, generate voice recognition grammar by applying the built domain ontology, and recognize voice through that grammar, thereby improving the performance of the voice recognition device. CONSTITUTION: When a voice signal is input through a microphone, a feature extraction unit extracts frame-level feature vectors from the voice signal (S401). An acoustic model unit provides an acoustic model to a voice recognition unit by modeling the signal characteristics of the voice signal (S403). The voice recognition unit performs voice recognition by using the acoustic model, a voice recognition dictionary (S405), and the voice recognition grammar (S407)(S409).
Abstract:
A method and an apparatus for generating an extendable CFG-type voice recognition grammar based on a corpus are provided, so that a CFG-type voice recognition grammar can be described and extended even when the corpus for continuous voice recognition in a specific domain is small, thereby improving the accuracy and efficiency of voice recognition. The method for generating an extendable CFG (Context-Free Grammar)-type voice recognition grammar based on a corpus comprises the following steps: converting the corpus into a CFG-type voice recognition grammar pattern by using a thesaurus or conversion rules (S200); extending the CFG-type voice recognition grammar pattern by adding at least one of conversational-style expressions, lower-level words included in the thesaurus, words used in the corresponding voice recognition domain, and synonyms of declinable words (S300); and removing patterns that are semantically impossible from the extended CFG-type voice recognition grammar pattern (S400).
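Steps S200 and S300 can be sketched as follows; the class symbols and the tiny thesaurus are invented for illustration, and the semantic filtering of step S400 is omitted.

```python
from itertools import product

# Rough sketch: corpus sentences are converted into CFG-style patterns
# by replacing thesaurus words with class symbols (S200), and each
# pattern is then extended by expanding the class symbols with the
# thesaurus members (S300). Thesaurus and symbols are invented.

THESAURUS = {
    "$CITY": ["seoul", "busan", "daejeon"],
    "$VERB_GO": ["go", "travel", "head"],
}

def to_pattern(sentence: str) -> list:
    """S200: replace known thesaurus words with their class symbol."""
    word_to_class = {w: c for c, ws in THESAURUS.items() for w in ws}
    return [word_to_class.get(w, w) for w in sentence.split()]

def expand(pattern: list) -> list:
    """S300: expand each class symbol into all of its member words."""
    choices = [THESAURUS.get(tok, [tok]) for tok in pattern]
    return [" ".join(p) for p in product(*choices)]

pattern = to_pattern("i want to go to seoul")
print(pattern)               # → ['i', 'want', 'to', '$VERB_GO', 'to', '$CITY']
print(len(expand(pattern)))  # → 9  (3 verbs x 3 cities)
```

This is how one corpus sentence can cover many recognizable utterances: a single pattern with two three-member classes already yields nine surface sentences.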