Abstract:
An apparatus for analyzing a video is provided. The video analyzing apparatus comprises a generating unit for generating at least one spatiotemporal pattern by performing pixel sampling on a plurality of frames in an input video; an extracting unit for extracting at least one region of interest having a sinusoidal pattern from the at least one spatiotemporal pattern; and an analysis unit for performing a frequency analysis on the at least one region of interest to determine whether the input video includes a predetermined type of content.
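A minimal sketch of this pipeline in Python, assuming grayscale numpy frames, a single sampled pixel column as the spatiotemporal pattern, and a spectral peak-to-total power ratio as the frequency test; the column choice and the threshold are illustrative assumptions, not values from the abstract.

```python
import numpy as np

def spatiotemporal_pattern(frames, col):
    """Stack one sampled pixel column per frame into a 2D (time x space) pattern."""
    return np.stack([f[:, col] for f in frames], axis=0)

def has_sinusoidal_content(pattern, power_ratio_thresh=0.5):
    """Frequency-analyze each spatial position over time; flag the pattern when a
    single non-DC frequency dominates, suggesting a sinusoidal component."""
    spectrum = np.abs(np.fft.rfft(pattern - pattern.mean(axis=0), axis=0))
    peak = spectrum[1:].max(axis=0)             # strongest non-DC bin per position
    total = spectrum[1:].sum(axis=0) + 1e-9
    return np.mean(peak / total) > power_ratio_thresh

frames = [np.random.rand(120, 160) for _ in range(64)]   # stand-in video
pattern = spatiotemporal_pattern(frames, col=80)
print("predetermined content detected:", has_sinusoidal_content(pattern))
```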
Abstract:
PURPOSE: A device for generating a text collage message and a method thereof are provided to generate a text image corresponding to input text for a text collage message, so that the message can be sent to the mobile terminal of a counterpart. CONSTITUTION: A text recognizing unit(11) recognizes text in an image. A text image generating unit(12) generates a text image from a fixed area containing the recognized text. A control unit(16) outputs text entered through an input unit(14) to an output unit(15). The control unit replaces at least some of the entered text that corresponds to a text image stored in a storage unit with that text image.
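A toy sketch of the replacement step, assuming Pillow and a dictionary of already-recognized text images keyed by character; `build_collage_message` and the glyph size are hypothetical names, and unmatched characters are simply left blank here.

```python
from PIL import Image

def build_collage_message(text, text_images, glyph_size=(32, 32)):
    """Compose a message image, replacing each character that has a stored
    text image with that image; unmatched characters are left blank."""
    canvas = Image.new("RGB", (glyph_size[0] * len(text), glyph_size[1]), "white")
    for i, ch in enumerate(text):
        tile = text_images.get(ch)
        if tile is not None:
            canvas.paste(tile.resize(glyph_size), (i * glyph_size[0], 0))
    return canvas

# Stand-in "text images" that would normally be cut from a photographed scene.
glyphs = {"h": Image.new("RGB", (32, 32), "red"),
          "i": Image.new("RGB", (32, 32), "blue")}
build_collage_message("hi", glyphs).save("collage.png")
```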
Abstract:
The present invention relates to an apparatus and method for extracting a text region using calculation of character stroke widths. To this end, the present invention generates a binarized image containing text candidate regions from an original image, extracts character outlines from the text candidate regions to obtain character outline information for the extracted outlines, uses the obtained outline information to set a representative character stroke width and a representative character angle at each pixel constituting an outline, and determines the region where characters exist within a text candidate region by checking the ratio of valid representative stroke widths and valid angles relative to the total length of the outline, thereby making it possible to determine effectively whether a text candidate region actually contains characters.
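A simplified sketch of the decision rule, assuming the per-pixel stroke widths and angles along the outline have already been measured; the width range, angle tolerance, and valid-pixel ratio are illustrative assumptions.

```python
import numpy as np

def contains_text(stroke_widths, stroke_angles,
                  width_range=(2, 20), angle_tol=0.35, valid_ratio=0.6):
    """Declare a candidate region as text when enough outline pixels carry a
    plausible stroke width and an angle close to the representative angle."""
    widths = np.asarray(stroke_widths, dtype=float)
    angles = np.asarray(stroke_angles, dtype=float)
    rep_width = np.median(widths)               # representative stroke width
    rep_angle = np.median(angles)               # representative stroke angle
    valid_w = ((widths > width_range[0]) & (widths < width_range[1])
               & (np.abs(widths - rep_width) < 0.5 * rep_width))
    valid_a = np.abs(angles - rep_angle) < angle_tol
    # Compare the count of valid pixels against the total outline length.
    return (valid_w & valid_a).mean() > valid_ratio

widths = np.random.uniform(3, 6, 200)           # stand-in outline measurements
angles = np.random.normal(0.0, 0.1, 200)
print(contains_text(widths, angles))
```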
Abstract:
Provided is an image processing device which includes: a first calculation part which calculates a first position of at least one first point sampled from a real 3D object when a 3D image is acquired; a second calculation part which calculates, using at least one second parameter related to a reception part and obtained from the 3D image, a second position of at least one second point that the reception part provides in correspondence with the first point; and a decision part which decides at least one first parameter, related to a transmission part, to be provided to the reception part by acquiring the 3D image so as to minimize the difference between the first position and the second position.
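A schematic sketch of the decision step, with `predict_second` as a placeholder for the receiver-side geometry and a grid search standing in for whatever optimization the device uses; both are assumptions for illustration only.

```python
import numpy as np

def predict_second(first_points, tx_param, rx_param):
    """Hypothetical receiver-side model mapping first points to second points."""
    return first_points * rx_param + tx_param   # placeholder geometry

def decide_tx_param(first_points, rx_param, candidates):
    """Pick the transmission parameter minimizing the first/second position difference."""
    errors = [np.sum((predict_second(first_points, p, rx_param) - first_points) ** 2)
              for p in candidates]
    return candidates[int(np.argmin(errors))]

first = np.array([[0.1, 0.2, 1.0], [0.4, 0.1, 1.5]])   # sampled 3D points
print(decide_tx_param(first, rx_param=1.0, candidates=np.linspace(-1, 1, 41)))
```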
Abstract:
PURPOSE: An apparatus and a method for efficient viewer-centric depth adjustment based on virtual fronto-parallel planar projection in stereoscopic images are provided to adjust the depth of an image rapidly and effectively without allocating resources to computing an accurate binocular disparity map. CONSTITUTION: A depth adjusting unit (120) sets a second point by moving the coordinates of a first point of a first image toward the viewer by an amount corresponding to a depth adjustment value. The depth adjusting unit sets a third point by projecting the second point along the direction heading from the user viewpoint associated with the first image toward the second point. An image synthesizing unit (130) determines the color value of the third point and generates a second image by adjusting the depth of at least a partial region of the first image. [Reference numerals] (110) Plane determining unit; (120) Depth adjusting unit; (130) Image synthesizing unit
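A geometric sketch of the two steps, assuming a pinhole viewpoint at the origin and a fronto-parallel plane at z = plane_z; the plane model and all names are illustrative assumptions.

```python
import numpy as np

def adjust_depth(p1, viewpoint, depth_delta, plane_z):
    """Move the first point toward the viewer by depth_delta (second point),
    then re-project it from the viewpoint onto the image plane (third point)."""
    to_viewer = viewpoint - p1
    p2 = p1 + depth_delta * to_viewer / np.linalg.norm(to_viewer)
    ray = p2 - viewpoint
    t = (plane_z - viewpoint[2]) / ray[2]       # intersect ray with plane z = plane_z
    return viewpoint + t * ray                  # third point on the image plane

p3 = adjust_depth(np.array([0.2, 0.1, 2.0]), np.zeros(3), 0.3, plane_z=2.0)
print(p3)   # the first image's color value would be sampled at this point
```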
Abstract:
PURPOSE: A video summarization method using visual features in images and a system thereof are provided to calculate the significance of key frames in a video by using its visual features and to select key frames based on that significance, thereby summarizing the video. CONSTITUTION: A video summarization system detects one or more video shots from the video and detects a key frame that represents each detected shot(S210). The system detects faces in the detected key frames and detects characters by grouping the key frames in which faces are detected(S220). The system detects key frame groups and calculates the significance of each key frame in a detected group(S230). The system selects and extracts key frames corresponding to the requested number of key frames(S240). [Reference numerals] (AA) Start; (BB) End; (S210) Detect one or more video shots from the video using a low-level feature vector and detect a key frame that represents each detected shot; (S220) Detect faces of characters in the detected key frames, group the key frames in which faces are detected, and detect the characters; (S230) Detect key frame groups whose key frames lie closer together than a predetermined threshold and calculate the significance of each key frame in the detected group; (S240) Select and extract, in order of significance, key frames corresponding to the number requested by a user
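A compact sketch of steps S230 to S240, assuming each key frame has already been reduced to a feature vector; the grouping rule, the centroid-based significance score, and the threshold are illustrative assumptions.

```python
import numpy as np

def summarize(features, threshold=0.5, n_requested=3):
    """Group key frames whose pairwise distance is below the threshold,
    score each frame by closeness to its group centroid, return the top frames."""
    feats = np.asarray(features)
    groups, assigned = [], [False] * len(feats)
    for i in range(len(feats)):
        if assigned[i]:
            continue
        group = [j for j in range(len(feats))
                 if not assigned[j] and np.linalg.norm(feats[i] - feats[j]) < threshold]
        for j in group:
            assigned[j] = True
        groups.append(group)
    scores = np.empty(len(feats))
    for group in groups:
        centroid = feats[group].mean(axis=0)
        for j in group:
            scores[j] = -np.linalg.norm(feats[j] - centroid)  # closer = more significant
    return np.argsort(scores)[::-1][:n_requested]             # indices of selected frames

print(summarize(np.random.rand(10, 8)))
```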