Abstract:
Systems and methods are provided for presenting content to a user (202). An exemplary method involves establishing (302, 404, 406) a relationship between a first device (204) and the user (202), wherein, based on the relationship, one or more instances of secondary content are automatically excluded (306) from display by the first device (204) while primary content is displayed (230, 412) by the first device (204). The method continues by presenting (240, 308, 416) an instance of secondary content to the user (202) in a manner that is influenced by the relationship.
Abstract:
Pixels in a provided image for which the content has been provided (301) in error are identified (302). This image content is processed (303) to provide a version of the image wherein the error is at least partially concealed, while also creating (304) ancillary information regarding the errored pixel(s) and the spatial location to which such pixel(s) correspond, thereby providing a record that describes which pixels in the image content were provided in error. An optional user-selectable option (305) can permit displaying either the aforementioned corrected version of the image, wherein the error is at least partially concealed, or a version of the image wherein the ancillary information is used to depict the errored pixel(s), such that provided-in-error pixels are readily distinguished from correctly-provided pixels.
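The conceal-and-record scheme above can be sketched as follows. This is a toy illustration only: the neighbour-average concealment and the sentinel-based error display are assumptions, since the abstract does not fix a concealment or depiction method.

```python
def conceal_and_record(image, error_coords):
    """Conceal errored pixels and keep an ancillary record of their locations.

    image: 2-D list of pixel values; error_coords: iterable of (y, x)
    positions that were provided in error. Concealment here is a simple
    average of the valid 4-neighbours (an assumption, not the patent's method).
    """
    h, w = len(image), len(image[0])
    errored = set(error_coords)
    concealed = [row[:] for row in image]
    for (y, x) in errored:
        neighbours = []
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h and 0 <= nx < w and (ny, nx) not in errored:
                neighbours.append(image[ny][nx])
        if neighbours:
            concealed[y][x] = sum(neighbours) / len(neighbours)
    # The ancillary record: which pixels were provided in error.
    return concealed, errored

def render(image, concealed, record, show_errors):
    """User-selectable display: the concealed image, or errors highlighted."""
    if show_errors:
        out = [row[:] for row in image]
        for (y, x) in record:
            out[y][x] = -1  # sentinel marking a provided-in-error pixel
        return out
    return concealed
```

Toggling `show_errors` switches between the concealed version and a view in which the record distinguishes errored from correctly-provided pixels.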
Abstract:
A first video signal processor (103) receives a first encoded video signal from which a video unit (201) generates a second encoded video signal, where the second encoded video signal is a reduced-data-rate version of the first encoded video signal. An error encoder (203) generates error redundancy data for the second encoded video signal, and a multiplexer (207) generates output video data comprising the first encoded video signal and the error redundancy data but not the second encoded video signal. A second video signal processor (105) receives the output video data, and a video unit (303) regenerates the second encoded video signal from the first encoded video signal. An error unit (305) detects errors for at least a first segment of the second encoded video signal in response to the error redundancy data. A combiner (307) then generates combined video data by combining corresponding segments of the first encoded video signal and the second encoded video signal.
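The transmit/regenerate flow above can be sketched in Python. Everything concrete here is an assumption for illustration: byte decimation stands in for real data-rate reduction, per-segment CRC32 stands in for the error redundancy data, and the 4-byte segment length is arbitrary.

```python
import zlib

def reduce_rate(signal):
    # Hypothetical rate reduction: keep every other byte
    # (a stand-in for real transrating; the abstract fixes no method).
    return signal[::2]

def make_redundancy(reduced, seg_len=4):
    # Per-segment CRC32 acts as the error redundancy data.
    return [zlib.crc32(reduced[i:i + seg_len])
            for i in range(0, len(reduced), seg_len)]

def transmit(first_signal, seg_len=4):
    # Output carries the first signal plus the redundancy data,
    # but not the second (reduced) signal itself.
    reduced = reduce_rate(first_signal)
    return first_signal, make_redundancy(reduced, seg_len)

def receive(first_signal, redundancy, seg_len=4):
    # Regenerate the second signal from the first, then detect
    # errored segments by re-checking the redundancy data.
    reduced = reduce_rate(first_signal)
    bad = [i for i, crc in enumerate(redundancy)
           if zlib.crc32(reduced[i * seg_len:(i + 1) * seg_len]) != crc]
    return reduced, bad
```

Segments flagged in `bad` are the ones a combiner would replace with (or blend against) the corresponding segments of the first signal.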
Abstract:
Disclosed is an image encoder that divides (1000) a digital image into a set of "macroblocks." If appropriate, a macroblock is "downsampled" (1004) to a lower resolution. The lower-resolution macroblock is then encoded by applying spatial (and possibly temporal) prediction (1006). The "residual" of the macroblock is calculated (1010) as the difference between the predicted and actual contents of the macroblock. The low-resolution residual is then either transmitted to an image decoder or stored for later use (1010). In some embodiments, the encoder calculates (1008) the rate-distortion costs of encoding the original-resolution macroblock and the lower-resolution macroblock and then only encodes (1010) the lower-resolution macroblock if its cost is lower. When a decoder receives (1104) a lower-resolution residual, it recovers the lower-resolution macroblock using standard prediction techniques (1106). Then, the macroblock is "upsampled" (1110) to its original resolution by interpolating the values left out by the encoder. The macroblocks are then joined (1114) to form the original digital image.
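The downsample/predict/residual/RD-cost loop above can be sketched as follows. The 2x2 averaging, nearest-neighbour upsampling, DC (block-mean) prediction, and the lambda-weighted cost are all illustrative assumptions; the abstract does not specify these operators.

```python
def downsample(block):
    # Halve resolution by averaging 2x2 neighbourhoods.
    n = len(block)
    return [[(block[2*y][2*x] + block[2*y][2*x+1] +
              block[2*y+1][2*x] + block[2*y+1][2*x+1]) / 4.0
             for x in range(n // 2)] for y in range(n // 2)]

def upsample(block):
    # Recover original resolution by sample replication
    # (nearest-neighbour interpolation; a simple stand-in).
    return [[block[y // 2][x // 2] for x in range(2 * len(block[0]))]
            for y in range(2 * len(block))]

def dc_predict(block):
    # Toy spatial prediction: predict every sample as the block mean.
    flat = [v for row in block for v in row]
    mean = sum(flat) / len(flat)
    return [[mean] * len(block[0]) for _ in block]

def residual(block, predicted):
    # Residual = actual contents minus predicted contents.
    return [[b - p for b, p in zip(br, pr)]
            for br, pr in zip(block, predicted)]

def rd_cost(res, lam=0.1):
    # Crude rate-distortion proxy: distortion = sum of squared
    # residuals, rate ~ number of residual samples.
    distortion = sum(v * v for row in res for v in row)
    rate = sum(len(row) for row in res)
    return distortion + lam * rate

def encode_macroblock(block):
    # Compare RD costs at original and reduced resolution; keep the
    # lower-resolution residual only when its cost is lower.
    full_res = residual(block, dc_predict(block))
    low = downsample(block)
    low_res = residual(low, dc_predict(low))
    if rd_cost(low_res) < rd_cost(full_res):
        return ("low", low_res)
    return ("full", full_res)
```

For a smooth macroblock both residuals vanish, so the lower-resolution encoding wins on rate alone, which mirrors the intuition that downsampling pays off on low-detail regions.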
Abstract:
A device for use with a frame generating portion that is arranged to receive picture data corresponding to a plurality of pictures and to generate encoded video data for transmission across a transmission channel having an available bandwidth. The frame generating portion can generate a frame for each of the plurality of pictures to create a plurality of frames. The encoded video data is based on the received picture data. The device includes a distortion estimating portion, an inclusion determining portion, and an extracting portion. The distortion estimating portion can estimate a distortion. The inclusion determining portion can establish an inclusion boundary based on the estimated distortion. The extracting portion can extract a frame from the plurality of frames based on the inclusion boundary.
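One way to read the estimate-boundary-extract pipeline above is as a greedy fit to the available bandwidth. The sketch below is an assumption-laden illustration: the abstract only says the boundary is based on the estimated distortion, so the greedy highest-distortion-first selection and the (size, distortion) frame model are hypothetical.

```python
def inclusion_boundary(frames, available_bandwidth):
    """Choose a distortion threshold such that the frames whose
    estimated distortion (if dropped) meets it fit the channel.

    frames: list of (size_bits, est_distortion_if_dropped) tuples.
    """
    order = sorted(range(len(frames)), key=lambda i: -frames[i][1])
    used, boundary = 0, float("inf")
    for i in order:
        size, dist = frames[i]
        if used + size > available_bandwidth:
            break
        used += size
        boundary = dist  # boundary = distortion of last included frame
    return boundary

def extract_frames(frames, boundary):
    # Extract the frames whose estimated distortion meets the boundary.
    return [i for i, (_, dist) in enumerate(frames) if dist >= boundary]
```

Frames whose omission would cost the most distortion are admitted first, and the boundary is wherever the bandwidth runs out.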
Abstract:
A method and apparatus for providing communication between a sending terminal and one or more receiving terminals in a communication network. The media content of a signal transmitted by the sending terminal is detected, and one or more of a voice stream, an avatar control parameter stream, and a video stream are generated from the media content. At least one of the voice stream, the avatar control parameter stream, and the video stream is selected as an output to be transmitted to the receiving terminal. The network server may be operable to generate synthetic video from the voice input, a natural video input, and/or incoming avatar control parameters. Figure 7 is a flow chart of a method for providing hybrid audio visual communication consistent with some embodiments of the invention.
Abstract:
A scalable video compression system (100) having an encoder (120), bit extractor (140), and decoder (160) for efficiently encoding and decoding a scalable embedded bitstream (130) at different video resolution, framerate, and video quality levels is provided. Bits can be extracted in order of refinement layer (136), followed by temporal level (132), followed by spatial layer (134), wherein each bit extracted provides an incremental improvement in video decoding quality. Bit extraction can be truncated at a position in the embedded bitstream corresponding to a maximum refinement layer, a maximum temporal level, and a maximum spatial layer. For a given refinement layer, bits are extracted from all spatial layers in a lower temporal level prior to extracting bits from spatial layers in a higher temporal level for prioritizing coding gain to increase video decoding quality, and prior to moving to a next refinement layer.
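The extraction ordering described above (refinement layer outermost, then temporal level, then spatial layer, with truncation at the maxima) can be written directly as nested loops. The tuple representation of bitstream units is an assumption for illustration.

```python
def extraction_order(num_refinement, num_temporal, num_spatial):
    """Enumerate bitstream units so that, within a refinement layer,
    all spatial layers of a lower temporal level come before any
    spatial layer of a higher temporal level, and the next refinement
    layer starts only after the current one is exhausted."""
    order = []
    for r in range(num_refinement):        # refinement layer (136)
        for t in range(num_temporal):      # temporal level (132)
            for s in range(num_spatial):   # spatial layer (134)
                order.append((r, t, s))
    return order

def extract(order, max_r, max_t, max_s):
    # Truncate extraction at the requested maximum refinement layer,
    # temporal level, and spatial layer.
    return [(r, t, s) for (r, t, s) in order
            if r <= max_r and t <= max_t and s <= max_s]
```

Each successive unit in `order` corresponds to an incremental improvement in decoding quality, and decoders with different capabilities simply truncate at different maxima.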
Abstract:
A system (100) and method (200) for efficient video adaptation of an input video (102) are provided. The method can include segmenting (210) the input video into a plurality of video shots (142) using a video trace (111) to exploit a temporal structure of the input video, selecting (220) a subset of frames (144) for the video shots that minimizes a distortion of the adapted video (152) using the video trace, and selecting transcoding parameters (122) for the subset of frames to produce an optimal video quality of the adapted video under frame rate, bit rate, and viewing time constraints. The video trace is a compact representation of temporal and spatial distortions for frames in the input video. A spatio-temporal rate-distortion model (320) provides selection of the transcoding parameters during adaptation.
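The segment-then-select steps above can be sketched with a scalar per-frame trace. This is a toy: the threshold rule for shot boundaries and the lowest-distortion-first frame selection are assumptions standing in for the trace-driven optimization the abstract describes.

```python
def segment_shots(trace, threshold):
    """Split the video into shots wherever the frame-to-frame trace
    difference exceeds a threshold (trace: one distortion value per
    frame; the threshold rule is an assumption)."""
    shots, start = [], 0
    for i in range(1, len(trace)):
        if abs(trace[i] - trace[i - 1]) > threshold:
            shots.append((start, i))
            start = i
    shots.append((start, len(trace)))
    return shots

def select_frames(trace, shots, budget_per_shot):
    """Within each shot, keep the frames with the lowest trace
    distortion, up to a per-shot frame budget."""
    selected = []
    for (a, b) in shots:
        idx = sorted(range(a, b), key=lambda i: trace[i])
        selected.extend(sorted(idx[:budget_per_shot]))
    return selected
```

A per-shot budget is one simple way to respect a viewing-time or frame-rate constraint while spending the frame allowance where distortion is lowest.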