Abstract:
The invention describes a method for controlling a display, which method comprises the steps of displaying a portion of a visual presentation (VP) on the display (4) and aiming a pointing device (1) comprising a camera (2) at a target area (A) to indicate a target (PT) in the visual presentation (VP), whereby the target (PT) may be inside or outside of a portion (14) of the visual presentation (VP) currently visible on the display (4). An image (3) of the target area (A) aimed at by the pointing device (1) is generated and interpreted to determine the location of the target (PT) within the visual presentation (VP). The visual presentation (VP) is adjusted as necessary to display that portion of the visual presentation (VP) which encompasses the target (PT). The invention also describes a pointing device (1), a display control interface (8, 8′, 8″) and a system comprising such a pointing device (1) and display control interface (8, 8′, 8″) suitable for applying this method.
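The adjustment step described above, in which the visible portion is shifted so that it encompasses a target that may lie outside it, can be sketched as follows. The function name, coordinate conventions, and clamping behaviour are assumptions for illustration; the abstract does not specify an API.

```python
# Hypothetical sketch: shift the visible viewport of a visual presentation
# so that a target point (as indicated by the pointing device) lies inside it.
def adjust_viewport(target, viewport, presentation_size):
    """Return a new (x, y) viewport origin bringing `target` into view.

    target: (tx, ty) target location in presentation coordinates.
    viewport: (x, y, w, h) currently visible portion.
    presentation_size: (W, H) full extent of the visual presentation.
    """
    tx, ty = target
    x, y, w, h = viewport
    W, H = presentation_size
    # Shift each axis only as far as needed to include the target.
    if tx < x:
        x = tx
    elif tx >= x + w:
        x = tx - w + 1
    if ty < y:
        y = ty
    elif ty >= y + h:
        y = ty - h + 1
    # Clamp so the viewport stays within the presentation.
    x = max(0, min(x, W - w))
    y = max(0, min(y, H - h))
    return (x, y)
```

If the target is already inside the visible portion, the viewport is returned unchanged, matching the "adjusted as necessary" wording of the abstract.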
Abstract:
The invention describes a method for contesting at least two interactive systems (2A, 2B) against each other, where a message (M) of a first of the interactive systems (2A) is output by the first interactive system in the form of an audio-visual and/or tactile expression (AE, VE, TE) and where the audio-visual and/or tactile expression (AE, VE, TE) is detected by a second of the interactive systems (2B) as an input signal (ISA, ISV). The input signal (ISA, ISV) is analyzed by the second interactive system to derive the content of the message (M) and, depending on the content of the message (M) and on given competition rules, a reaction of the second system is triggered. Moreover, the invention describes an interactive system (2A, 2B) usable for taking part in a competition according to this method and an interactive system competition arrangement (1) with at least two such interactive systems (2A, 2B).
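The emit-detect-react loop between the two systems can be sketched minimally as below. The expression encoding and the competition rules are invented for the example; the abstract covers audio-visual and tactile expressions, which are reduced here to a plain string.

```python
# Illustrative competition rules (assumed, not from the abstract): each
# received message content maps to the reaction it triggers.
RULES = {"challenge": "accept", "accept": "play", "play": "score"}

def emit(message):
    """System A: output the message content as a simple textual expression."""
    return f"EXPR:{message}"

def react(expression):
    """System B: detect the expression as an input signal, derive the
    message content, and trigger a reaction per the competition rules."""
    if not expression.startswith("EXPR:"):
        return None  # input signal not recognised as a message
    content = expression[len("EXPR:"):]
    return RULES.get(content)
```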
Abstract:
The invention relates to a method for recording content on a record medium (2) that contains a desired content descriptor (3), comprising the steps of reading said desired content descriptor (3) from said record medium (2), scanning the content (10, 12) of at least one multimedia source (6, 7) for desired content that matches said desired content descriptor (3), and recording said desired content on said record medium (2). Said record medium (2) is preferably a Digital Versatile Disc (DVD), said desired content descriptor (3) is preferably a keyword contained in a blank of said DVD, and said at least one multimedia source (6, 7) is preferably a television receiver. The DVD with the keyword contained therein thus triggers the recording, on said DVD, of content from the television receiver that matches said keyword. The invention further relates to a computer program product, a device and a record medium.
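The scan-and-match step can be sketched as a simple keyword filter over items offered by the multimedia source. The item fields (`title`, `description`) and case-insensitive substring matching are assumptions; the abstract only specifies that content matching the descriptor is recorded.

```python
# Hypothetical sketch: the keyword read from the record medium is matched
# against the descriptions of items offered by a multimedia source, and
# the matching items are selected for recording.
def scan_and_record(keyword, source_items):
    """Return titles of items whose description matches the keyword."""
    recorded = []
    for item in source_items:
        if keyword.lower() in item["description"].lower():
            recorded.append(item["title"])
    return recorded
```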
Abstract:
A method for transmitting a user-specific program to the user of a program content transmission system (1) is described, in which first a part of the program contents (P) of the program is transmitted to a first terminal unit (A) of the user and the program transmission to the first terminal unit (A) is stopped when a first defined event occurs in accordance with a pre-determined procedural sequence. Subsequently, when a second defined event occurs, the program transmission is continued by a further transmission of program contents (P′) of the program to a second terminal unit (B) of the user in accordance with a pre-determined procedural sequence. Moreover, a respective program content transmission system (1) and a terminal unit (A, B) for use in this type of transmission method are described.
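The hand-over between terminal units can be sketched as a small stateful object that remembers the position reached on the first terminal and resumes from it on the second. The class name, event names, and segment representation are assumptions made for the example.

```python
# Illustrative sketch: program transmission pauses at a first defined event
# on terminal A and resumes at a second defined event on terminal B, keeping
# the position within the program contents across the hand-over.
class ProgramTransmission:
    def __init__(self, program):
        self.program = program        # list of content segments
        self.position = 0             # next segment to transmit
        self.active_terminal = None

    def start(self, terminal):
        self.active_terminal = terminal

    def deliver_next(self):
        """Transmit the next segment to the currently active terminal."""
        segment = self.program[self.position]
        self.position += 1
        return (self.active_terminal, segment)

    def on_event(self, event, terminal=None):
        if event == "stop":           # first defined event: pause transmission
            self.active_terminal = None
        elif event == "resume":       # second defined event: continue elsewhere
            self.active_terminal = terminal
```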
Abstract:
The invention relates to a method and a device for the transcription of spoken and written utterances. To this end, the utterances undergo speech or text recognition, and the recognition result (ME) is combined with a manually created transcription (MT) of the utterances in order to obtain the transcription. The additional information contributed by the recognition result (ME) in this combination enables the transcriber to work relatively roughly, and therefore quickly, on the manual transcription. When using a keyboard (25), the transcriber can, for example, restrict himself to hitting the keys of only one row and/or can omit some keystrokes completely. In addition, the manual transcribing can also be accelerated by suggesting continuations (31) of the text input so far (30), which continuations are anticipated by virtue of the recognition result (ME).
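The continuation-suggestion aspect can be sketched very simply: the text typed so far is looked up as a prefix of the recognition result, and the remainder is offered as a completion. This ignores the rough-typing tolerance described above (one-row typing, omitted keystrokes) and is purely illustrative.

```python
# Hypothetical sketch: anticipate the continuation of the text input so far
# from the speech-recognition result, so the transcriber can accept it
# instead of typing it out in full.
def suggest_continuation(typed_so_far, recognition_result):
    """Return the remainder of the recognition result after the typed
    prefix, or None if the typed text does not match the result."""
    if recognition_result.startswith(typed_so_far):
        return recognition_result[len(typed_so_far):]
    return None
```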
Abstract:
A system and a method are described for generating sequences of audio or video contents (A . . . I). Contents (A . . . I) are available in a stored or otherwise readable form. Additional data (for example, genre, type, time, date, play time, costs, etc.) are provided together with the contents (A . . . I). While taking a user profile (P) into account, a play sequence (S) is composed from the contents (A . . . I), using selection means (20). By matching the additional data with selection criteria of the user profile (P), a content evaluation number is determined first. For play sequences of a plurality of contents (A . . . I), a sequence evaluation number is then determined, in which the content evaluation numbers of the contents (A . . . I) arranged therein and preferably also correlation values between the contents and/or costs for requesting the contents are taken into account. A sequence (S) is selected in accordance with its sequence evaluation number.
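The two-stage scoring described above, a content evaluation number per item followed by a sequence evaluation number per candidate sequence, can be sketched as follows. The field names (`tags`, `cost`), the tag-overlap scoring, and the cost weight are assumptions; the abstract leaves the matching of additional data against the user profile unspecified.

```python
# Illustrative sketch of content and sequence evaluation numbers.
def content_score(item, profile):
    """Content evaluation number: count of the item's additional-data tags
    that match the selection criteria (preferred tags) of the user profile."""
    return len(set(item["tags"]) & set(profile["preferred_tags"]))

def sequence_score(sequence, profile, cost_weight=1.0):
    """Sequence evaluation number: sum of content scores minus the
    weighted costs for requesting the contents."""
    total = sum(content_score(item, profile) for item in sequence)
    return total - cost_weight * sum(item.get("cost", 0) for item in sequence)

def best_sequence(candidates, profile):
    """Select the play sequence with the highest sequence evaluation number."""
    return max(candidates, key=lambda seq: sequence_score(seq, profile))
```

Correlation values between adjacent contents, which the abstract says are preferably also taken into account, could be added as a further term in `sequence_score`.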
Abstract:
The method enables a user of a client (2) to invoke predefined information units in a communications network by speech input. For this purpose, a client (2) downloads from a server (6) a private information unit (27) that enables speech input, a speech recognizer (8) produces a recognition result from an uttered speech input, and with the recognition result a link (44-46, 48) to an information unit is determined in a data file (5), to which information unit a word (41-43, 47) is assigned that correlates with the recognition result. Furthermore, with a method of implementing a speech input possibility in private information units (27) for speech-based navigation in a communications network (4), a registration information unit (19) is downloaded from a server (6) by means of a client (1), by means of which registration information unit (19) user-specific links (46) are assigned to predefined words (41-43); the assignment (25, 26) is transmitted with a user identifier (IDn) to a data file (5), and the user identifier (IDn) and an address of a speech recognizer (8), which can each be combined with a private information unit (27), are transmitted to the client (1).
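The core lookup, determining in the data file the link whose assigned word correlates with the recognition result, can be sketched as a dictionary lookup. The entries and the exact-match (lower-cased) correlation are invented for the example; a real recognizer would likely return scored hypotheses rather than a single word.

```python
# Hypothetical sketch of the data file: words assigned to links.
DATA_FILE = {
    "news": "http://example.com/news",        # illustrative entries only
    "weather": "http://example.com/weather",
}

def resolve_link(recognition_result):
    """Return the link to the information unit whose assigned word
    correlates with the recognition result, or None if no word matches."""
    word = recognition_result.strip().lower()
    return DATA_FILE.get(word)
```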
Abstract:
A distributed pattern recognition system includes at least one user station and a server station. The server station and the user station are connected via a network, such as the Internet. The server station includes different recognition models of a same type. As part of a recognition enrolment, the user station transfers model improvement data associated with a user of the user station to the server station. The server station selects a recognition model from the different recognition models of a same type in dependence on the model improvement data. For each recognition session, the user station transfers an input pattern representative of time-sequential input generated by the user to the server station. The server station retrieves the recognition model selected for the user and provides the retrieved recognition model to a recognition unit for recognising the input pattern using the retrieved recognition model.
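The server-side model selection can be sketched as choosing, from several models of the same type, the one closest to the user's enrolment data. The model names, their characteristic values (here a single pitch-like number), and the nearest-value criterion are all assumptions; the abstract does not specify how the model improvement data drives the selection.

```python
# Illustrative sketch: several recognition models of the same type, each
# characterised by a single assumed value (e.g. a typical pitch in Hz).
MODELS = {"low_pitch": 110.0, "mid_pitch": 165.0, "high_pitch": 220.0}

def select_model(model_improvement_data):
    """Select the model whose characteristic value is nearest the mean of
    the user's enrolment measurements."""
    user_value = sum(model_improvement_data) / len(model_improvement_data)
    return min(MODELS, key=lambda name: abs(MODELS[name] - user_value))
```

The selected model name would then be stored per user, so that each recognition session can retrieve it without repeating the enrolment.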
Abstract:
For speech recognition, a new word is represented based on a stored inventory of models of sub-word units. First, a plurality of utterances is presented that should all conform to the word. For building a word model from the utterances, these are represented by sequences of feature vectors. First, the utterances are used to train a whole-word model that is independent of the models of the sub-word units. The length of the whole-word model equals the average length of the utterances. Next, the sequence of Markov states and associated probability densities of acoustic events of the whole-word model is interpreted as a reference template represented by a string of averaged feature vectors. Finally, the string is recognized by matching it to models in the inventory, and a recognition result is stored as a model of the utterances.
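Two of the steps above can be sketched in a strongly simplified form: averaging equal-length utterances into a reference template of feature vectors, and matching each template frame against an inventory of sub-word unit models by Euclidean distance. A real system would train HMM states and use time alignment rather than frame-by-frame nearest-neighbour matching; this is a didactic reduction, and the unit names and vectors are invented.

```python
# Illustrative sketch: build an averaged template and match it to unit models.
def average_template(utterances):
    """Average feature vectors position by position across utterances,
    which are assumed here to already have equal length."""
    length = len(utterances[0])
    dim = len(utterances[0][0])
    return [
        [sum(u[t][d] for u in utterances) / len(utterances) for d in range(dim)]
        for t in range(length)
    ]

def match_units(template, inventory):
    """For each template frame, pick the inventory unit whose model vector
    is nearest in squared Euclidean distance."""
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(inventory, key=lambda name: dist(frame, inventory[name]))
            for frame in template]
```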