Abstract:
A method and a system for providing a service that changes the background of moving pictures in a mobile environment are provided. In moving-picture data photographed in real time, the background behind a target object is replaced with a background image chosen by the user of the mobile terminal. A framing-area selection is received from the user of the mobile terminal(S610). A preview screen of the selected framing area is provided(S620). An object matching the selected framing area is extracted from the moving-picture data(S640). The extracted object is synthesized with the background picture chosen by the user and provided to the mobile terminal(S650).
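As an illustration of the compositing step(S650), the sketch below assumes the object has already been segmented into a binary mask; the function name and array layout are assumptions, not taken from the patent.

```python
import numpy as np

def composite_object_on_background(frame, object_mask, background):
    """Keep the extracted object from the live frame and replace everything
    else with the background image chosen by the user.

    frame, background: HxWx3 uint8 arrays of the same size.
    object_mask: HxW boolean array, True where the extracted object lies.
    """
    mask = object_mask[..., None]                 # broadcast over colour channels
    return np.where(mask, frame, background).astype(np.uint8)
```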
Abstract:
The present invention relates to an apparatus and method for multi-stage transformation through a multi-dimensional arrangement of a plurality of basic blocks. Its purpose is to improve the compression efficiency of video data compression that uses block-wise DCT transform coefficients of an original or difference image, by gathering the transform coefficients of adjacent blocks and performing an additional transform. The invention discloses a multi-stage transform method through the multi-dimensional arrangement of a plurality of basic blocks, comprising: DCT-transforming input image data and selecting R DCT-transformed blocks of a predetermined size (R being a natural number of 2 or more); arranging the transform coefficients of the same frequency from each of the selected R blocks in one dimension; and applying an additional one-dimensional transform to the coefficients arranged in one dimension. Keywords: multi-dimensional transform, multi-stage transform, multi-dimensional arrangement, two-dimensional transform.
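A minimal sketch of the two-stage idea, assuming 8x8 blocks, a 2-D DCT as the first stage and a 1-D DCT-II as the additional transform over same-frequency coefficients; the names and the choice of transform type are illustrative, not fixed by the patent.

```python
import numpy as np
from scipy.fft import dct, dctn

def first_stage(image_blocks):
    """First stage: block-wise 2-D DCT of R adjacent NxN image blocks."""
    return np.stack([dctn(b, norm='ortho') for b in image_blocks])

def multistage_transform(dct_blocks):
    """Second stage: gather coefficients of the same frequency (i, j) across
    the R blocks (axis 0) and apply an additional 1-D transform to them."""
    dct_blocks = np.asarray(dct_blocks, dtype=float)
    return dct(dct_blocks, type=2, axis=0, norm='ortho')

# e.g. R = 4 neighbouring 8x8 blocks:
# coeffs = multistage_transform(first_stage(np.random.rand(4, 8, 8)))
```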
Abstract:
An apparatus and method for multiplexing data through prediction in DMB(Digital Multimedia Broadcasting) are provided to deliver more interactive contents by using the transmission bandwidth efficiently through prediction of empty packet regions. A data management unit(110) stores supplementary data to be additionally transmitted through the DMB. A data multiplexing unit(120) analyzes the empty packet region in a TS(Transport Stream) input from a digital multimedia broadcasting device to predict the empty packet region in the TS to be input next, inserts the stored supplementary data into the predicted empty packet region, and multiplexes it. The data management unit sequentially transfers the supplementary data to be transmitted according to requests from the data multiplexing unit. The data multiplexing unit includes a TS analyzing unit for analyzing the empty packet region, a data packetizing unit for packetizing the supplementary data, a data inserting unit for inserting the packetized supplementary data and multiplexing it, and an insertion controller for predicting the empty packet region and controlling the transfer of the supplementary data and the data insertion.
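The sketch below illustrates empty-packet handling on an MPEG-2 transport stream, assuming a packet-aligned buffer; the simple averaging predictor and all function names are assumptions, not the patented prediction scheme.

```python
TS_PACKET_SIZE = 188
NULL_PID = 0x1FFF                     # PID of MPEG-2 TS null (stuffing) packets

def find_null_packets(ts: bytes):
    """Return byte offsets of null packets in a packet-aligned TS buffer."""
    offsets = []
    for off in range(0, len(ts) - TS_PACKET_SIZE + 1, TS_PACKET_SIZE):
        pkt = ts[off:off + TS_PACKET_SIZE]
        if pkt[0] != 0x47:            # sync byte missing; skip this position
            continue
        pid = ((pkt[1] & 0x1F) << 8) | pkt[2]
        if pid == NULL_PID:
            offsets.append(off)
    return offsets

def predict_empty_region(null_counts, depth=4):
    """Toy predictor: average the null-packet counts of the last few TS chunks
    to estimate how much supplementary data the next chunk can carry."""
    recent = null_counts[-depth:]
    return sum(recent) // max(len(recent), 1)

def insert_supplementary(ts: bytearray, supplementary_packets):
    """Overwrite null packets with pre-packetized 188-byte supplementary packets."""
    for off, pkt in zip(find_null_packets(bytes(ts)), supplementary_packets):
        ts[off:off + TS_PACKET_SIZE] = pkt
    return ts
```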
Abstract:
An apparatus and method for audio encoding and decoding using warped linear prediction coding are provided. The warped linear prediction coding removes the redundancy of the original signal in the audio encoding process, the resulting error signal is fed to the audio encoder as its input, and the psychoacoustic model is adapted to the error signal, thereby increasing the efficiency of audio signal compression. An error signal calculating unit(110) performs warped linear prediction coding of an externally input audio signal in the time domain to calculate an error signal. A frequency domain converting unit(120) converts the error signal obtained by the error signal calculating unit into a frequency-domain signal. A masking threshold calculating unit(131,132) calculates the masking threshold used in encoding the error signal, using the original signal and the coding information applied in the warped linear prediction coding of the original signal. A perceptual encoding unit(140) perceptually encodes the error signal converted by the frequency domain converting unit, using the calculated masking threshold.
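A hedged sketch of a warped-LPC analysis that produces such an error signal: the unit delay is replaced by a first-order all-pass section, the warped autocorrelation is solved for the predictor, and the prediction is subtracted from the input. The warping factor 0.756 (roughly a Bark-scale fit at 44.1 kHz) and the predictor order are illustrative choices, not values from the patent.

```python
import numpy as np
from scipy.signal import lfilter
from scipy.linalg import solve_toeplitz

def warped_autocorrelation(x, order, lam):
    """Autocorrelation on the warped frequency axis: correlate the input with
    versions of itself passed through k cascaded all-pass sections
    D(z) = (z^-1 - lam) / (1 - lam * z^-1)."""
    r = np.empty(order + 1)
    y = np.asarray(x, dtype=float)
    r[0] = np.dot(x, y)
    for k in range(1, order + 1):
        y = lfilter([-lam, 1.0], [1.0, -lam], y)   # one more all-pass "delay"
        r[k] = np.dot(x, y)
    return r

def warped_lpc_error(x, order=16, lam=0.756):
    """Return the warped-LPC coefficients and the error (residual) signal."""
    x = np.asarray(x, dtype=float)
    r = warped_autocorrelation(x, order, lam)
    a = solve_toeplitz(r[:-1], r[1:])              # normal equations R a = r
    pred, y = np.zeros_like(x), x.copy()
    for ak in a:                                   # prediction from warped delays
        y = lfilter([-lam, 1.0], [1.0, -lam], y)
        pred += ak * y
    return a, x - pred                             # coefficients, error signal
```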
Abstract:
A coding/decoding apparatus using DCT(Discrete Cosine Transform) coefficient scanning adaptive to pixel similarity, and a method thereof, are provided to enhance the compression rate of intra coding by applying the most efficient scanning method according to pixel similarity when coding or decoding images. A coding apparatus using DCT coefficient scanning comprises a mode selection part(10), an intra prediction part(20), a DCT and quantization part(30), and an entropy coding part(40). The mode selection part(10) selects the optimum mode for intra prediction. The intra prediction part(20) executes intra prediction for an input image based on the selected mode. The DCT and quantization part(30) executes DCT and quantization on the residual coefficients output from the intra prediction part(20). The entropy coding part(40) performs entropy coding on the quantized DCT coefficients using a scanning mode determined according to the pixel similarity of the residual coefficients.
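A small sketch of the scan selection, assuming 4x4 residual blocks and three candidate scans; the similarity measure and the thresholds are illustrative, not the rule defined in the patent.

```python
import numpy as np

def scan_orders(n=4):
    """Candidate scan orders for an n x n block: zig-zag, row-major, column-major."""
    idx = [(i, j) for i in range(n) for j in range(n)]
    zigzag = sorted(idx, key=lambda p: (p[0] + p[1],
                                        p[0] if (p[0] + p[1]) % 2 else -p[0]))
    return {'zigzag': zigzag, 'horizontal': idx, 'vertical': [(j, i) for i, j in idx]}

def pick_scan(residual, ratio=2.0):
    """Pick a scan from the directional pixel similarity of the residual block."""
    dh = np.abs(np.diff(residual, axis=1)).mean()  # variation along rows
    dv = np.abs(np.diff(residual, axis=0)).mean()  # variation along columns
    if dv * ratio < dh:        # much smoother vertically -> energy in the first row
        return 'horizontal'
    if dh * ratio < dv:        # much smoother horizontally -> energy in the first column
        return 'vertical'
    return 'zigzag'

def scan(coeffs, order):
    """Serialize quantized DCT coefficients in the chosen scan order."""
    return np.array([coeffs[i, j] for i, j in order])
```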
Abstract:
The present invention relates to a learning support system and method that, in a web-based collaborative learning environment, monitor learner activity in place of the teacher in order to increase interaction among participants, analyze the monitoring results, and automatically generate advice. By applying a collaborative learning action reference model that defines the learners' collaborative learning actions, the learners' actions are structured and stored in a database, and advice is generated automatically according to rules for each collaborative learning advice type for promoting collaborative learning. This improves collaborative learning and the efficiency of e-learning operation, and can serve as an alternative to part of the teacher's workload. Keywords: web-based collaborative learning, collaborative learning monitoring, automatic collaborative learning advice generation, collaborative learning action reference model, e-learning.
Abstract:
A web-based collaborative learning support system for promoting collaborative learning, and a method thereof, are provided to promote interaction among learners and to support both individual and collaborative learning by monitoring the learners' collaborative learning actions and automatically generating advice based on the monitoring results. A monitoring module(210) collects and stores collaborative learning action information by monitoring the actions of the collaborating learners. A workplace database(220) stores the collected collaborative learning action information. An automatic advice generating module(230) automatically generates and delivers advice for the current learner state from the collected collaborative learning action information, based on a rule for each collaborative learning advice type. The monitoring module collects the learners' collaborative learning actions through packet filtering by using a predetermined collaborative learning action reference model. The workplace database is built by collecting the learners' collaborative learning actions based on the collaborative learning action reference model.
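The advice generation can be pictured as a small rule engine over the monitored action counts, as in the sketch below; the LearnerState fields, thresholds and messages are invented for illustration and do not come from the patent's advice-type rules.

```python
from dataclasses import dataclass
from typing import Callable, List, Tuple

@dataclass
class LearnerState:
    posts: int = 0
    replies: int = 0
    logins_last_week: int = 0

# (advice type, condition on the monitored state, advice message)
ADVICE_RULES: List[Tuple[str, Callable[[LearnerState], bool], str]] = [
    ("participation", lambda s: s.logins_last_week == 0,
     "You have not visited the workspace this week; please check your team's progress."),
    ("interaction", lambda s: s.posts > 0 and s.replies == 0,
     "Try replying to your teammates' posts to keep the discussion going."),
]

def generate_advice(state: LearnerState) -> List[str]:
    """Return the messages of all advice rules whose conditions match the state."""
    return [msg for _, cond, msg in ADVICE_RULES if cond(state)]

# generate_advice(LearnerState(posts=3, replies=0, logins_last_week=2))
```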
Abstract:
The present invention relates to an apparatus and method for providing SCORM-based learning content services in a digital broadcasting system, enabling SCORM, the PC-based learning scheme currently accepted as the e-learning industry standard, to be implemented on an MPEG-2 based digital broadcasting system. The invention comprises: a set-top box which, once the user is authenticated, transmits terminal type information over the return channel and then plays the learning TS received over the broadcast network on a digital TV according to the received XML synchronization information; a learning management system which transmits, over the return channel, an API adapter supported by the set-top box according to the terminal type information sent from the set-top box; and a digital TV transmitter which packages the learning video data, learning data and sequencing information sent from the learning management system to generate a learning TS, generates XML synchronization information, and transmits the learning TS and the XML synchronization information over the broadcast network. Keywords: e-Learning, T-Learning, SCORM, MPEG-2, digital broadcasting, education.
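As a rough illustration only, the snippet below builds XML synchronization information that maps SCORM items to presentation times in the learning TS; the element and attribute names are assumptions and not the schema used by the invention.

```python
import xml.etree.ElementTree as ET

def build_sync_xml(items):
    """items: list of (item_id, start_seconds, resource_href) tuples describing
    when each SCORM item should be presented alongside the learning TS."""
    root = ET.Element("syncInfo")
    for item_id, start, href in items:
        node = ET.SubElement(root, "item", id=item_id, start=f"{start:.3f}")
        ET.SubElement(node, "resource", href=href)
    return ET.tostring(root, encoding="unicode")

# build_sync_xml([("sco-1", 0.0, "lesson1.html"), ("sco-2", 95.5, "quiz1.html")])
```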
Abstract:
Disclosed is a method of designing a watermark whose power spectral density is optimized so that the detection performance is improved when whitening-filtered detection is employed after the Wiener attack. The power spectral density of the watermark is designed using an optimization method that improves the overall detection performance by reflecting the gain of the whitening filter applied after the Wiener attack. A higher detection gain is obtained by using the whitening filter after the Wiener attack, and the expected value of the difference between the test statistics of the two hypotheses, that the watermark exists and that it does not, is maximized to optimize the detection performance. Taking this expected difference of the test statistics as the objective function, the optimal power spectral density of the watermark is obtained, by the Lagrange multiplier method, as the point where the derivative of the objective with respect to the watermark's power spectral density vanishes.
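Schematically, and in generic notation not taken from the patent (S_w(f) for the watermark power spectral density, t for the detector's test statistic, P_w for the watermark power budget), the optimization reads:

```latex
\max_{S_w(f)\ge 0}\; J(S_w)=\mathbb{E}\{t\mid H_1\}-\mathbb{E}\{t\mid H_0\}
\quad\text{subject to}\quad \int S_w(f)\,df = P_w,
\qquad
\mathcal{L}(S_w,\lambda)=J(S_w)-\lambda\!\left(\int S_w(f)\,df-P_w\right),
\qquad
\frac{\partial \mathcal{L}}{\partial S_w(f)}=0
\;\Longrightarrow\;
\frac{\partial J}{\partial S_w(f)}=\lambda .
```

In this sketch the stationarity condition equalizes the marginal detection gain across frequencies, and the multiplier λ is then fixed by the power constraint.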
Abstract:
PURPOSE: A device and method for extracting character features for recognizing multilingual printed-character documents are provided to enhance the character recognition rate for multilingual printed documents by extracting features of the character portion and geometric properties of the input character image from a fixed-size mesh. CONSTITUTION: An input device(101) inputs the characters needed for character feature extraction. A standard character set constructing device(102) constructs a standard input character set by printing multilingual characters of various fonts in various character sizes in a fixed form. A database constructing device(103) constructs a standard character image database by scanning the standard input character set at different resolutions and densities. A size normalizing device(104) normalizes the input character image to a fixed size. A converting device(105) converts the size-normalized character image into a 16x16 mesh feature through a 3x3 mask operation. A stroke feature extracting device(106) extracts features of the character portion from the geometric information of each character in the mesh. A non-stroke feature extracting device(107) extracts features of the background portion from the geometric information of each character in the mesh. A character feature extracting device(108) extracts the character features from the stroke and non-stroke features. A storing device(109) stores the extracted information.
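A compact sketch of the mesh-feature path, assuming a binary character image already normalized to a fixed size (e.g. 48x48) and using a 3x3 uniform mask before the 16x16 meshing; the thresholding into stroke and non-stroke parts is a simplification of devices (106)-(108).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def mesh_features(char_img, mesh=16):
    """Smooth a size-normalized binary character image with a 3x3 mask and
    reduce it to a mesh x mesh map of stroke-pixel fractions."""
    img = (np.asarray(char_img) > 0).astype(float)
    img = uniform_filter(img, size=3)               # simple 3x3 mask operation
    h, w = img.shape
    ch, cw = h // mesh, w // mesh
    cells = img[:ch * mesh, :cw * mesh].reshape(mesh, ch, mesh, cw)
    return cells.mean(axis=(1, 3))                  # one value per mesh cell

def character_features(mesh_map, thresh=0.5):
    """Combine stroke (character) and non-stroke (background) mesh features
    into a single feature vector."""
    stroke = np.where(mesh_map >= thresh, mesh_map, 0.0)            # character portion
    nonstroke = np.where(mesh_map < thresh, 1.0 - mesh_map, 0.0)    # background portion
    return np.concatenate([stroke.ravel(), nonstroke.ravel()])
```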