Abstract:
Video and corresponding metadata are accessed. Events of interest within the video are identified based on the corresponding metadata, and best scenes are identified based on the identified events of interest. A video summary can be generated that includes one or more of the identified best scenes. The video summary can be generated using a video summary template with slots corresponding to video clips selected from among sets of candidate video clips. Best scenes can also be identified by receiving, from a user during capture of the video, an indication of an event of interest within the video. Metadata patterns representing activities identified within video clips can be identified within other videos, which can subsequently be associated with the identified activities.
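A minimal sketch of the flow described above, in Python. The metadata fields (time, acceleration), the threshold, the fixed clip length, and the slot-filling rule are hypothetical illustrations, not the patented method:

```python
# Identify events of interest from metadata, derive best scenes, and fill
# a summary template with candidate clips (all parameters hypothetical).

def find_events_of_interest(metadata, accel_threshold=2.5):
    """Flag timestamps whose (hypothetical) acceleration metadata spikes."""
    return [m["time"] for m in metadata if m.get("acceleration", 0) > accel_threshold]

def best_scene(event_time, clip_length=4.0):
    """Center a fixed-length scene on the event time (scoring simplified)."""
    return (max(0.0, event_time - clip_length / 2), event_time + clip_length / 2)

def fill_template(template_slots, candidate_scenes):
    """Assign the top candidate scene to each template slot."""
    summary = []
    for slot, candidates in zip(template_slots, candidate_scenes):
        if candidates:
            summary.append({"slot": slot, "scene": candidates[0]})
    return summary

metadata = [{"time": 1.0, "acceleration": 0.4},
            {"time": 7.5, "acceleration": 3.1},
            {"time": 12.2, "acceleration": 2.9}]
events = find_events_of_interest(metadata)
scenes = [best_scene(t) for t in events]
print(fill_template(["intro", "action"], [scenes[:1], scenes[1:]]))
```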
Abstract:
A particular behavior or process is induced, among a set of pre-determined behaviors or processes, by implicitly signaling the selection of that behavior or process through a particular combination of specific values of the information data that drive the process. Typically, this enables re-using parameters already carried with a signal to signal how those parameters are used by a specific process.
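The idea can be illustrated with a small lookup: a reserved combination of parameter values that are already carried in the signal selects one of the pre-determined processes, with no extra syntax element. The parameter names and the combination table below are hypothetical:

```python
# Implicit signaling sketch: a combination of already-carried parameter
# values selects the process (table and names are hypothetical).

PROCESS_TABLE = {
    # (param_a, param_b) -> selected behavior/process
    (0, 0): "process_default",
    (0, 1): "process_alternate",
    (1, 0): "process_legacy",
}

def select_process(param_a, param_b):
    """Decode which process the parameter combination implicitly signals."""
    return PROCESS_TABLE.get((param_a, param_b), "process_default")

print(select_process(0, 1))  # -> process_alternate
```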
Abstract:
To allow the contents of image processing performed in one image processing apparatus to be reproduced in another image processing apparatus, a provisional color grading apparatus 300 determines normalizing points (CodeValues that serve as references for normalizing) of input image signals according to format information and normalizes the input image signals. The provisional color grading apparatus 300 records values (normalizing information), obtained by converting the normalizing points into numerical values independent of devices, in association with the parameters of color grading.
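A minimal sketch of this normalization-and-record step, assuming a hypothetical 10-bit format and example normalizing points; the grading parameters shown are placeholders:

```python
# Normalize input code values against format reference points, then record
# device-independent normalizing information alongside the grading
# parameters so another apparatus can reproduce the processing.

def normalize(code_values, black_point, white_point):
    """Map code values to [0, 1] using the format's normalizing points."""
    span = white_point - black_point
    return [(v - black_point) / span for v in code_values]

def record_session(black_point, white_point, grading_params, bit_depth=10):
    """Store normalizing points as device-independent fractions of full scale."""
    full_scale = (1 << bit_depth) - 1
    return {
        "normalizing_info": {"black": black_point / full_scale,
                             "white": white_point / full_scale},
        "grading_params": grading_params,
    }

signal = normalize([64, 512, 940], black_point=64, white_point=940)
session = record_session(64, 940, {"lift": 0.02, "gain": 1.1})
print(signal, session)
```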
Abstract:
A method and a computing device for providing Augmented Reality (AR) are provided. The method of providing AR includes detecting at least one physical object from a real scene obtained through a camera of the computing device, rendering at least one virtual object at a desired position relative to the detected at least one physical object on the real scene provided on a display, enabling communication through a command for interaction between the rendered virtual objects, and enabling a virtual object to perform an action in response to command communication between the virtual objects.
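A minimal sketch of virtual objects anchored to detected physical objects and exchanging commands. Detection is stubbed out, and the object and command names are hypothetical:

```python
# Virtual objects are placed at positions of detected physical objects and
# communicate commands to each other; a receiver acts on the command.

class VirtualObject:
    def __init__(self, name, position):
        self.name = name
        self.position = position  # anchored to a detected physical object

    def send(self, other, command):
        """Communicate a command to another virtual object."""
        other.receive(self, command)

    def receive(self, sender, command):
        """Perform an action in response to the communicated command."""
        print(f"{self.name} received '{command}' from {sender.name}")

def detect_physical_objects(frame):
    """Stub for camera-based detection; returns (label, position) pairs."""
    return [("table", (0.0, 0.0)), ("cup", (0.2, 0.1))]

positions = dict(detect_physical_objects(frame=None))
pet = VirtualObject("virtual_pet", positions["table"])
ball = VirtualObject("virtual_ball", positions["cup"])
pet.send(ball, "bounce")
```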
Abstract:
An architecture includes a system to create an augmented reality environment in which images are projected onto a scene and user movement within the scene is captured. In addition to these primary visual stimuli, the architecture introduces a secondary form of sensory feedback into the environment to enhance the user experience. The secondary sensory feedback may be tactile feedback and/or olfactory feedback, and it is provided to the user in coordination with the visual activity occurring within the scene.
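One way to picture the coordination is an event map from visual events to secondary feedback channels; the event names and effects below are hypothetical:

```python
# Secondary sensory feedback (tactile/olfactory) triggered in coordination
# with visual events in the projected scene (mapping is hypothetical).

FEEDBACK_MAP = {
    "explosion": [("tactile", "rumble"), ("olfactory", "smoke")],
    "flower_bloom": [("olfactory", "floral")],
}

def on_visual_event(event):
    """Trigger the secondary feedback registered for a visual event."""
    for channel, effect in FEEDBACK_MAP.get(event, []):
        print(f"{channel} feedback: {effect}")

on_visual_event("explosion")
```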
Abstract:
A device, method, and program may properly perform gamut conversion of content and may be applied to a gamut conversion device. A restoration conversion state confirming unit confirms the gamut conversion state of image data read out from an optical disc and whether restoration metadata exists. An information exchange unit communicates with an output device via a communication unit and exchanges information such as whether restoration processing functionality and gamut conversion functionality exist. A determining unit determines whether or not restoration processing is performed by the playing device based on information obtained by the restoration conversion state confirming unit and the information exchange unit, and similarly determines whether or not gamut conversion processing is performed by the playing device based on that information.
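A minimal sketch of the determining unit's decision logic: the playing device performs restoration and/or gamut conversion only when the disc's state calls for it and the output device cannot do it itself. The capability flags are hypothetical simplifications:

```python
# Decide where restoration and gamut conversion run, based on the disc's
# conversion state and the output device's capabilities (flags hypothetical).

def plan_processing(restoration_metadata_present, already_converted,
                    output_can_restore, output_can_convert):
    restore_here = restoration_metadata_present and not output_can_restore
    convert_here = not already_converted and not output_can_convert
    return {"restore_on_player": restore_here, "convert_on_player": convert_here}

print(plan_processing(restoration_metadata_present=True,
                      already_converted=False,
                      output_can_restore=False,
                      output_can_convert=True))
```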
Abstract:
A proof information processing apparatus adds a plurality of types of annotative information to a proof image by using a plurality of input modes for inputting respective different types of annotative information. A proof information processing method is carried out using the proof information processing apparatus. A recording medium stores a program for performing the functions of the proof information processing apparatus. An electronic proofreading system includes the proof information processing apparatus and a remote server. At least one of a text input mode, a stylus input mode, a color information input mode, and a speech input mode is selected depending on the characteristics of the image in an indicated region of interest.
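A minimal sketch of selecting an input mode from region characteristics; the region attributes and the selection heuristics are hypothetical:

```python
# Choose an annotation input mode (text / stylus / color / speech) from
# the characteristics of the indicated region of interest.

def select_input_mode(region):
    """Pick an input mode for a region of interest (heuristics hypothetical)."""
    if region.get("contains_text"):
        return "text"
    if region.get("color_critical"):
        return "color"
    if region.get("fine_detail"):
        return "stylus"
    return "speech"

print(select_input_mode({"color_critical": True}))  # -> color
```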
Abstract:
Methods and systems for comparing and organizing color themes and word tag associations. One embodiment comprises a method for determining color themes associated with an identified color theme: the distance between the identified color theme and each color theme in a collection is determined, where each distance includes a color-based distance, and the subset of associated color themes is selected from the collection based at least in part on the calculated distances from the identified color theme. Another embodiment comprises a method that allows an application to suggest tags for an identified color theme based on its similarity to the color themes and associated tags in the collection. Another embodiment suggests color themes based on an identified tag, and yet another embodiment suggests tags based on an identified tag.
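A minimal sketch of the distance-then-suggest flow. Themes are represented as lists of RGB tuples, and the per-swatch Euclidean distance stands in for whatever color-based distance an implementation would actually use:

```python
# Rank color themes by a simplified color-based distance to an identified
# theme, then suggest tags drawn from the nearest themes.

def theme_distance(theme_a, theme_b):
    """Sum of per-swatch Euclidean RGB distances (simplified example)."""
    return sum(sum((a - b) ** 2 for a, b in zip(ca, cb)) ** 0.5
               for ca, cb in zip(theme_a, theme_b))

def suggest_tags(identified, collection, k=2):
    """Collect tags from the k themes closest to the identified theme."""
    ranked = sorted(collection, key=lambda t: theme_distance(identified, t["colors"]))
    tags = []
    for entry in ranked[:k]:
        tags.extend(entry["tags"])
    return tags

collection = [
    {"colors": [(200, 40, 40), (240, 240, 240)], "tags": ["warm", "bold"]},
    {"colors": [(30, 60, 200), (220, 230, 250)], "tags": ["cool", "calm"]},
]
print(suggest_tags([(210, 50, 45), (235, 235, 235)], collection))
```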