Abstract:
This disclosure describes a multi-stage tessellation technique for tessellating a curve during graphics rendering. In particular, a first tessellation stage tessellates the curve into a first set of line segments that each represents a portion of the curve. A second tessellation stage further tessellates the portion of the curve represented by each of the line segments of the first set into additional line segments that more finely represent the shape of the curve. In this manner, each portion of the curve that was represented by only one line segment after the first tessellation stage is represented by more than one line segment after the second tessellation stage. In some instances, more than two tessellation stages may be performed to tessellate the curve.
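The two-stage flow above can be sketched in Python. This is a minimal illustration, not the disclosed implementation: it assumes a quadratic Bezier curve, a first stage that splits the parameter range into coarse intervals, and a second stage that subdivides each coarse interval so every portion covered by one first-stage segment is covered by several finer segments. All function and parameter names are illustrative.

```python
def bezier(p0, p1, p2, t):
    """Evaluate a quadratic Bezier curve at parameter t."""
    x = (1 - t) ** 2 * p0[0] + 2 * (1 - t) * t * p1[0] + t ** 2 * p2[0]
    y = (1 - t) ** 2 * p0[1] + 2 * (1 - t) * t * p1[1] + t ** 2 * p2[1]
    return (x, y)

def tessellate(p0, p1, p2, coarse=4, fine=3):
    """Stage 1 splits [0, 1] into `coarse` intervals (one line segment
    each); stage 2 splits each interval into `fine` sub-intervals, so a
    portion represented by one segment after stage 1 is represented by
    `fine` segments after stage 2."""
    ts = []
    for i in range(coarse):
        a = i / coarse
        b = (i + 1) / coarse
        for j in range(fine):  # second tessellation stage
            ts.append(a + (b - a) * j / fine)
    ts.append(1.0)
    return [bezier(p0, p1, p2, t) for t in ts]
```

With `coarse=4` and `fine=3`, the curve ends up approximated by 12 line segments (13 points) instead of the 4 segments produced by the first stage alone.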
Abstract:
A speech processing system modifies various aspects of input speech according to a user-selected one of various preprogrammed voice fonts. Initially, the speech converter receives a formants signal representing an input speech signal and a pitch signal representing the input speech signal's fundamental frequency. One or both of the following may also be received: a voicing signal comprising an indication of whether the input speech signal is voiced, unvoiced, or mixed, and/or a gain signal representing the input speech signal's energy. The speech converter also receives user selection of one of multiple preprogrammed voice fonts, each specifying a manner of modifying one or more of the received signals (i.e., formants, voicing, pitch, gain). The speech converter modifies at least one of the formants, voicing, pitch, and/or gain signals as specified by the selected voice font.
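The selection-and-modification step can be sketched as follows. This is a hypothetical illustration only: it assumes each voice font is a table of scale factors applied to the formants, pitch, and gain signals (the voicing signal is omitted for brevity), and the font names and factor values are invented for the example.

```python
# Hypothetical voice fonts: each specifies how to modify the received
# signals. Scale factors here are illustrative, not from the disclosure.
VOICE_FONTS = {
    "deep":  {"formant_scale": 0.875, "pitch_scale": 0.5, "gain_scale": 1.0},
    "child": {"formant_scale": 1.25,  "pitch_scale": 2.0, "gain_scale": 1.0},
}

def apply_voice_font(formants, pitch, gain, font_name):
    """Modify the formants, pitch, and gain signals as specified by the
    user-selected voice font."""
    font = VOICE_FONTS[font_name]
    new_formants = [f * font["formant_scale"] for f in formants]
    new_pitch = pitch * font["pitch_scale"]
    new_gain = gain * font["gain_scale"]
    return new_formants, new_pitch, new_gain
```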
Abstract:
Generally stated, a method and an accompanying apparatus provide for a voice recognition system with a programmable front end processing unit. The front end processing unit requests and receives different configuration files at different times for processing voice data in the voice recognition system. The configuration files are communicated to the front end processing unit via a communication link for configuring the front end processing unit. A microprocessor may provide the front end configuration files on the communication link at different times.
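A toy sketch of this arrangement, under stated assumptions: the `FrontEndUnit`, `Link`, and `frame_size` names are invented for illustration, the "configuration file" is modeled as a dictionary, and processing is reduced to framing the voice samples. The disclosed apparatus is hardware; this only mirrors the request/receive/configure flow.

```python
class Link:
    """Stub communication link holding preloaded configuration files."""
    def __init__(self, files):
        self.files = files

    def fetch(self, name):
        return self.files[name]

class FrontEndUnit:
    """Hypothetical programmable front end: its processing parameters
    come from configuration files fetched over a communication link."""
    def __init__(self, link):
        self.link = link
        self.config = None

    def request_config(self, name):
        # Request a different configuration file, e.g. at a later time.
        self.config = self.link.fetch(name)

    def process(self, samples):
        # Frame the incoming voice samples per the configured frame size.
        size = self.config["frame_size"]
        return [samples[i:i + size] for i in range(0, len(samples), size)]
```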
Abstract:
A method for interactive image editing by an electronic device is described. The method includes detecting at least one facial feature location in an image. The method further includes generating, based on the at least one facial feature location, an image mesh that comprises at least one vertex corresponding to the at least one facial feature location. The method further includes obtaining at least one user input, and determining at least one editing action based on the at least one user input, wherein the editing action provides shifting information for at least one vertex of the image mesh and provides a pixel map that maps an image vertex pixel, at a vertex of the image mesh, from a first location in the image to a second location in an edited image based on the shifting information. The method additionally includes generating the edited image based on the image mesh, the at least one editing action, and the image.
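The shifting-and-mapping step can be illustrated with a minimal sketch. Assumptions: vertices are keyed by invented feature names, the editing action is reduced to per-vertex (dx, dy) shifts, and the pixel map is a dictionary from old to new vertex locations; the full method would also warp the pixels between vertices.

```python
def apply_editing_action(mesh_vertices, shifts):
    """Shift image-mesh vertices per an editing action and build a pixel
    map from each vertex's first (original) location to its second
    (edited) location.

    mesh_vertices: {vertex_id: (x, y)}; shifts: {vertex_id: (dx, dy)}."""
    edited = {}
    pixel_map = {}
    for vid, (x, y) in mesh_vertices.items():
        dx, dy = shifts.get(vid, (0, 0))  # unshifted vertices stay put
        edited[vid] = (x + dx, y + dy)
        pixel_map[(x, y)] = edited[vid]
    return edited, pixel_map
```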
Abstract:
An electronic device is described. The electronic device includes a memory. The electronic device also includes a very long instruction word (VLIW) circuit. The VLIW circuit includes an asynchronous memory controller. The asynchronous memory controller is configured to asynchronously access the memory to render different levels of detail. The electronic device may include a non-uniform frame buffer controller configured to dynamically access different subsets of a frame buffer. The different subsets may correspond to the different levels of detail.
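The described device is hardware, but the relationship between frame-buffer subsets and levels of detail can be illustrated with a small sketch. Assumption (not from the abstract): each successive level of detail halves the resolution of its frame-buffer subset, so coarser levels occupy smaller subsets.

```python
def buffer_subset(level, width, height):
    """Return the dimensions of the frame-buffer subset holding the given
    level of detail (level 0 = full resolution; each level halves both
    dimensions under this illustrative scheme)."""
    scale = 2 ** level
    return (width // scale, height // scale)
```

An asynchronous memory controller could then fetch these subsets independently, rendering coarse levels without waiting on full-resolution accesses.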
Abstract:
Techniques and systems are provided for performing predictive random access using a background picture. For example, a method of decoding video data includes obtaining an encoded video bitstream comprising a plurality of pictures. The plurality of pictures include a plurality of predictive random access pictures. A predictive random access picture is at least partially encoded using inter-prediction based on at least one background picture. The method further includes determining, for a time instance of the video bitstream, a predictive random access picture of the plurality of predictive random access pictures with a time stamp closest in time to the time instance. The method further includes determining a background picture associated with the predictive random access picture, and decoding at least a portion of the predictive random access picture using inter-prediction based on the background picture.
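The picture-selection step can be sketched directly. This is an illustrative fragment under assumed data shapes (each picture is a dictionary with invented `timestamp` and `background_id` keys); the actual inter-prediction decode is out of scope here.

```python
def select_pra_picture(pra_pictures, seek_time):
    """Pick the predictive random access (PRA) picture whose time stamp
    is closest in time to the requested time instance. The decoder would
    then decode it using inter-prediction from its associated background
    picture (identified here by 'background_id')."""
    return min(pra_pictures, key=lambda p: abs(p["timestamp"] - seek_time))
```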
Abstract:
A method for three-dimensional face generation is described. An inverse depth map is calculated based on a depth map and an inverted first matrix. The inverted first matrix is generated from two images in which pixels are aligned vertically and differ horizontally. The inverse depth map is normalized to correct for distortions in the depth map caused by image rectification. A three-dimensional face model is generated based on the inverse depth map and one of the two images.
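The inversion-and-normalization step can be sketched as follows. Assumptions: the depth map is a 2-D list of depths from the rectified pair, "inverse depth" is the per-pixel reciprocal, and the normalization is a simple min-max rescale, which is one plausible reading of correcting rectification-induced distortions, not necessarily the disclosed one.

```python
def inverse_depth_map(depth_map, eps=1e-6):
    """Invert a depth map (per-pixel reciprocal) and min-max normalize
    the result to [0, 1]. `eps` guards against division by zero."""
    inv = [[1.0 / max(d, eps) for d in row] for row in depth_map]
    lo = min(min(row) for row in inv)
    hi = max(max(row) for row in inv)
    return [[(v - lo) / (hi - lo) for v in row] for row in inv]
```

The normalized map, together with one of the two rectified images for texture, would then drive generation of the three-dimensional face model.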
Abstract:
A method performed by an electronic device is described. The method includes determining a haziness confidence level based on multiple modalities. The method also includes determining whether to perform an action based on the haziness confidence level. The method may include performing the action, including performing haziness reduction based on the haziness confidence level.
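The multi-modality fusion and decision can be sketched minimally. Assumptions: each modality yields a haze score in [0, 1], the confidence level is a weighted average, and the action triggers at a threshold; the scores, weights, and threshold are all illustrative.

```python
def haziness_confidence(scores, weights):
    """Fuse per-modality haze scores into one confidence level via a
    weighted average (fusion scheme assumed for illustration)."""
    total = sum(w * s for s, w in zip(scores, weights))
    return total / sum(weights)

def should_reduce_haze(scores, weights, threshold=0.5):
    """Decide whether to perform the action (haziness reduction) based
    on the fused haziness confidence level."""
    return haziness_confidence(scores, weights) >= threshold
```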