Abstract:
Some embodiments of the invention provide a multi-pass encoding method that encodes several images (e.g., several frames of a video sequence). The method iteratively performs an encoding operation that encodes these images (Figure 1, 110). The encoding operation is based on a nominal quantization parameter, which the method uses to compute quantization parameters for the images (132). During several different iterations of the encoding operation, the method uses several different nominal quantization parameters (125). The method stops its iterations (140) when it reaches a terminating criterion (e.g., it identifies an acceptable encoding of the images).
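As a rough illustration of such an iterative search over nominal quantization parameters, the following Python sketch encodes a set of images, compares the resulting size to a bit budget, and adjusts the nominal quantization parameter until a terminating criterion is met. The encoder stub, the mapping from nominal QP to per-image QPs, and the 5% tolerance are assumptions made for illustration, not the disclosed method.

# Minimal sketch of a multi-pass encoding loop in the spirit of the abstract.
# The encoder, the QP mapping, and the target bit budget are hypothetical stand-ins.

def encode_image(image, qp):
    """Hypothetical encoder: returns a simulated bit count for one image."""
    # Pretend that a higher QP (coarser quantization) yields fewer bits.
    return int(image["complexity"] * 1000 / qp)

def compute_image_qps(nominal_qp, images):
    """Derive per-image QPs from the nominal QP (here: a simple per-image offset)."""
    return [max(1, nominal_qp + img.get("qp_offset", 0)) for img in images]

def multi_pass_encode(images, target_bits, nominal_qp=26, max_passes=10):
    for _ in range(max_passes):                      # iterate the encoding operation
        qps = compute_image_qps(nominal_qp, images)  # per-image QPs from the nominal QP
        total = sum(encode_image(img, qp) for img, qp in zip(images, qps))
        if abs(total - target_bits) / target_bits < 0.05:
            break                                    # terminating criterion: acceptable encoding
        nominal_qp += 1 if total > target_bits else -1  # choose a new nominal QP for the next pass
    return nominal_qp, total

if __name__ == "__main__":
    frames = [{"complexity": c, "qp_offset": o} for c, o in [(5, 0), (8, 1), (3, -1)]]
    print(multi_pass_encode(frames, target_bits=600))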
Abstract:
Techniques for encoding data based at least in part upon an awareness of the decoding complexity of the encoded data and the ability of a target decoder to decode the encoded data are disclosed. In some embodiments, a set of data is encoded based at least in part upon a state of a target decoder to which the encoded set of data is to be provided. In some embodiments, a set of data is encoded based at least in part upon the states of multiple decoders to which the encoded set of data is to be provided.
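The sketch below illustrates one way such decoder-awareness could look in practice: encoding parameters are chosen from an estimate of the target decoder's remaining capacity. The DecoderState fields, the thresholds, and the parameter sets are illustrative assumptions, not the disclosed technique.

# Illustrative only: map an assumed decoder state to coding tools whose
# decoding cost roughly matches what the target decoder can handle.

from dataclasses import dataclass

@dataclass
class DecoderState:
    cpu_budget_ms_per_frame: float   # time the decoder can spend per frame (assumed metric)
    frames_buffered: int             # how far ahead the decoder currently is

def choose_encoding_profile(state: DecoderState) -> dict:
    """Pick lower-complexity coding tools when the decoder is constrained."""
    if state.cpu_budget_ms_per_frame < 10 or state.frames_buffered < 2:
        # Constrained decoder: cheaper tools, lower decoding complexity.
        return {"deblocking": False, "b_frames": 0, "entropy_coder": "cavlc"}
    return {"deblocking": True, "b_frames": 3, "entropy_coder": "cabac"}

if __name__ == "__main__":
    print(choose_encoding_profile(DecoderState(cpu_budget_ms_per_frame=8, frames_buffered=1)))
    print(choose_encoding_profile(DecoderState(cpu_budget_ms_per_frame=25, frames_buffered=6)))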
Abstract:
A framework for providing multi-device collaboration is described herein. In one embodiment, a method for providing multi-device collaboration between first and second devices can include transferring an initializing function call to create a session object. The function call specifies a mode of the session object, a service type, and a service name. The session object can include functions to discover the second device, connect with the second device, and provide data transport between the connected first and second devices. The service name can include a truncated name, a unique identification, and a state of service of a software application associated with the first device. The method can include detecting a network and advertising the service type and the service name via the network. The service type and service name can be advertised prior to establishing the connection between the first and second devices.
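A hypothetical sketch of the session-object pattern described above follows; the class and method names are assumptions used only to make the flow concrete (initialize with a mode, service type, and a service name built from a truncated application name, a unique identifier, and a state; advertise before any connection exists; then discover, connect, and transport data).

# Hypothetical sketch of the session-object pattern from the abstract.
# CollaborationSession and its methods are illustrative, not the framework's actual API.

import uuid

def make_service_name(app_name: str, state: str, max_len: int = 15) -> str:
    """Build a service name from a truncated app name, a unique id, and a service state."""
    return f"{app_name[:max_len]}:{uuid.uuid4().hex[:8]}:{state}"

class CollaborationSession:
    def __init__(self, mode: str, service_type: str, service_name: str):
        self.mode = mode                    # e.g. "server", "client", or "peer"
        self.service_type = service_type    # e.g. "_collab._tcp"
        self.service_name = service_name
        self.peers = []

    def advertise(self, network: str):
        """Advertise the service type and name before any connection is established."""
        print(f"[{network}] advertising {self.service_type} / {self.service_name}")

    def discover(self, network: str):
        """Return peers advertising the same service type (stubbed here)."""
        return [f"peer-on-{network}"]

    def connect(self, peer: str):
        self.peers.append(peer)

    def send(self, peer: str, data: bytes):
        print(f"sending {len(data)} bytes to {peer}")

if __name__ == "__main__":
    name = make_service_name("WhiteboardApp", state="editing")
    session = CollaborationSession("peer", "_collab._tcp", name)
    session.advertise("wifi0")              # advertise first...
    peer = session.discover("wifi0")[0]     # ...then discover, connect, and transport data
    session.connect(peer)
    session.send(peer, b"stroke data")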
Abstract:
Systems, apparatuses, and methods whereby coded bitstreams are delivered to downstream end-user devices having various performance capabilities. A head-end encoder/video store generates a primary coded bitstream and metadata for delivery to an intermediate re-encoding system. The re-encoding system recodes the primary coded bitstream to generate secondary coded bitstreams based on coding parameters in the metadata. Each secondary coded bitstream is matched to a conformance point of a downstream end-user device. Coding parameters for each conformance point can be derived by having the head-end encoder encode the original source video to generate the secondary coded bitstreams and extracting information from the coding process and results. The metadata can then be communicated as part of the primary coded bitstream (e.g., as SEI) or can be communicated separately. As a result, the complexity of each secondary coded bitstream is appropriately scaled to match the capabilities of the downstream end-user device to which it is delivered.
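The sketch below illustrates, under an assumed metadata layout and assumed device classes, how a re-encoding system might look up the coding parameters for a conformance point and produce a correspondingly scaled secondary bitstream; the recode() stub merely stands in for an actual transcoder.

# Illustrative only: per-conformance-point parameters carried as metadata,
# used by an intermediate system to recode a primary bitstream.

CONFORMANCE_POINTS = {
    # Parameters assumed to have been derived by the head-end encoder.
    "phone":  {"max_resolution": (640, 360),   "max_bitrate_kbps": 800,  "b_frames": 0},
    "tablet": {"max_resolution": (1280, 720),  "max_bitrate_kbps": 2500, "b_frames": 2},
    "settop": {"max_resolution": (1920, 1080), "max_bitrate_kbps": 8000, "b_frames": 3},
}

def recode(primary_bitstream: bytes, params: dict) -> bytes:
    """Stub re-encoder: in practice this would transcode the primary bitstream."""
    return primary_bitstream[: params["max_bitrate_kbps"]]  # placeholder for a real recode

def serve_device(primary_bitstream: bytes, metadata: dict, device_class: str) -> bytes:
    params = metadata[device_class]           # conformance point for this device class
    return recode(primary_bitstream, params)  # complexity scaled to the device

if __name__ == "__main__":
    primary = bytes(10_000)
    secondary = serve_device(primary, CONFORMANCE_POINTS, "phone")
    print(len(secondary), "bytes for the phone-class stream")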
Abstract:
Some embodiments provide an architecture for establishing a multi-participant conference. In this architecture, one participant's computer acts as a central content distributor for the conference. The central distributor receives data (e.g., video and/or audio streams) from the computer of each other participant, and distributes the received data to the computers of all participants. In some embodiments, the central distributor receives A/V data from the computers of the other participants. From such received data, the central distributor of some embodiments generates composite data (e.g., composite image data and/or composite audio data) that the central distributor distributes back to the participants. The central distributor in some embodiments can implement a heterogeneous audio/video conference. In such a conference, different participants can participate in the conference differently. For instance, different participants might use different audio or video codecs. Moreover, in some embodiments, one participant might participate in only the audio aspect of the conference, while another participant might participate in both audio and video aspects of the conference.
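The following sketch models, under simplifying assumptions, a central distributor that collects streams from the participants, builds a composite, and distributes it back, with an audio-only participant receiving only the audio portion of the composite; the Participant fields and the composite format are illustrative, not the architecture's actual data structures.

# Minimal sketch: streams are treated as labeled byte blobs, and compositing is
# simple concatenation, purely to make the distribution pattern concrete.

from dataclasses import dataclass

@dataclass
class Participant:
    name: str
    sends_video: bool          # heterogeneous conference: some participants are audio-only
    audio: bytes = b""
    video: bytes = b""

def composite(streams: list) -> dict:
    """Combine received audio (and video, where present) into one composite."""
    return {
        "audio": b"".join(p.audio for p in streams),
        "video": b"".join(p.video for p in streams if p.sends_video),
    }

def distribute(central: Participant, others: list) -> None:
    mix = composite([central] + others)
    for p in others:
        # An audio-only participant receives only the audio part of the composite.
        payload = mix if p.sends_video else {"audio": mix["audio"]}
        print(f"{central.name} -> {p.name}: {sorted(payload)}")

if __name__ == "__main__":
    alice = Participant("alice", sends_video=True, audio=b"a", video=b"A")
    bob = Participant("bob", sends_video=True, audio=b"b", video=b"B")
    carol = Participant("carol", sends_video=False, audio=b"c")
    distribute(alice, [bob, carol])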