Abstract:
Systems and techniques of voice personalization for machine reading are described herein. A message with textual content may be received. A sender of the message may be identified. A voice model that corresponds to the sender may be identified. An audio representation of the textual content may be rendered using the voice model.
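As a concrete illustration of the receive/identify/look-up/render flow, here is a minimal Python sketch. The names (Message, DummyVoice, VoicePersonalizedReader) and the fallback to a default voice are illustrative assumptions, not details from the abstract.

from dataclasses import dataclass

@dataclass
class Message:
    sender: str
    text: str

class DummyVoice:
    """Stand-in for a per-sender voice model exposing a synthesize() method."""
    def __init__(self, name):
        self.name = name
    def synthesize(self, text):
        # A real voice model would return synthesized audio; returning labeled
        # bytes keeps the example runnable without a TTS engine.
        return f"[{self.name}] {text}".encode()

class VoicePersonalizedReader:
    def __init__(self, voice_models, default_voice):
        self.voice_models = voice_models      # sender identifier -> voice model
        self.default_voice = default_voice

    def read_aloud(self, message):
        sender = message.sender                                    # identify the sender
        voice = self.voice_models.get(sender, self.default_voice)  # find the matching voice model
        return voice.synthesize(message.text)                      # render the textual content as audio

reader = VoicePersonalizedReader({"alice@example.com": DummyVoice("alice")},
                                 DummyVoice("default"))
audio = reader.read_aloud(Message("alice@example.com", "See you at noon."))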
Abstract:
Methods and systems are disclosed for deep learning network execution using an execution pipeline on a multi-processor platform. In one example, a network workload analyzer receives a workload, analyzes a computation distribution of the workload, and groups the network nodes into groups. A network executor assigns each group to a processing core of the multi-core platform so that the respective processing core handles computation tasks of the received workload for the respective group.
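A hedged sketch of the grouping-and-assignment idea follows; the greedy balancing heuristic and the names below are illustrative assumptions, not the disclosed implementation.

def group_network_nodes(nodes, num_groups):
    """nodes: list of (node_name, estimated_cost) pairs describing the workload's
    computation distribution. Returns num_groups lists of node names whose total
    estimated cost is roughly balanced (greedy longest-processing-time heuristic)."""
    groups = [[] for _ in range(num_groups)]
    loads = [0.0] * num_groups
    for name, cost in sorted(nodes, key=lambda nc: -nc[1]):
        i = loads.index(min(loads))      # place the node in the least-loaded group
        groups[i].append(name)
        loads[i] += cost
    return groups

def assign_groups_to_cores(groups):
    """Pin each group to one processing core; that core then handles the
    computation tasks of its group."""
    return {f"core_{i}": group for i, group in enumerate(groups)}

workload = [("conv1", 8.0), ("conv2", 6.0), ("fc1", 3.0), ("fc2", 1.0), ("softmax", 0.5)]
assignment = assign_groups_to_cores(group_network_nodes(workload, num_groups=2))
# e.g. {'core_0': ['conv1', 'fc2', 'softmax'], 'core_1': ['conv2', 'fc1']}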
Abstract:
Methods, apparatus, systems and articles of manufacture are disclosed to improve deep learning resource efficiency. An example apparatus includes a graph monitor to select a candidate operation node in response to receiving an operation graph, the operation graph including one or more other operation nodes, a node rule evaluator to evaluate the candidate operation node based on an operating principle, the operating principle to determine an output storage destination of the candidate operation node based on a topology of the operation graph, and a tag engine to tag the candidate operation node with a memory tag value based on the determined output storage destination.
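The sketch below illustrates the tagging step under assumed specifics: the "operating principle" here is a simple fan-out rule (a single consumer keeps the output in fast local memory, multiple consumers spill it to shared memory), and the tag values are illustrative strings; the disclosed rule and tags may differ.

def tag_operation_nodes(op_graph):
    """op_graph: dict mapping each operation node to the list of nodes that
    consume its output. Returns a dict of node -> memory tag value."""
    tags = {}
    for candidate in op_graph:                      # graph monitor: select a candidate node
        consumers = op_graph.get(candidate, [])
        # node rule evaluator: choose an output storage destination from the topology
        destination = "LOCAL_SCRATCHPAD" if len(consumers) <= 1 else "SHARED_DRAM"
        tags[candidate] = destination               # tag engine: attach the memory tag value
    return tags

graph = {"conv1": ["relu1"], "relu1": ["branch_a", "branch_b"],
         "branch_a": ["concat"], "branch_b": ["concat"], "concat": []}
print(tag_operation_nodes(graph))
# {'conv1': 'LOCAL_SCRATCHPAD', 'relu1': 'SHARED_DRAM', 'branch_a': 'LOCAL_SCRATCHPAD', ...}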
Abstract:
Various embodiments are generally directed to techniques for employing a hybrid of sequential and parallel processing to perform random sample and consensus (RANSAC). A device to perform RANSAC includes a derivation component to derive a first set of proposed models in parallel from a first set of minimal sample sets of a data set; and a comparison component to recalculate a required quantity of proposed models to derive an accurate model if a proposed model of the first set of proposed models better fits the data set than any proposed model derived prior to derivation of the first set of proposed models, and to determine whether to derive a second set of proposed models following derivation of the first set of proposed models based on a comparison of the required quantity to a quantity of previously derived proposed models that includes the first set. Other embodiments are described and claimed.
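A sketch of the hybrid batched RANSAC loop, under stated assumptions (2-D line fitting with two-point minimal sample sets, an arbitrary batch size and threshold, and vectorization standing in for parallel derivation). Each pass derives a batch of proposed models; whenever a better-fitting model appears, the standard RANSAC stopping bound is recalculated and compared with the quantity of models derived so far to decide whether another batch is needed.

import numpy as np

def ransac_line_batched(points, batch=32, p=0.99, thresh=0.05, rng=None):
    """points: (N, 2) array of samples. Returns (best_model, inlier_count)."""
    if rng is None:
        rng = np.random.default_rng(0)
    n = len(points)
    best_inliers, best_model = 0, None
    required = np.inf        # required quantity of proposed models
    derived = 0              # quantity of proposed models derived so far
    while derived < required:
        # Derive a batch of proposed models from random minimal sample sets (2 points each).
        idx = rng.integers(0, n, size=(batch, 2))
        p1, p2 = points[idx[:, 0]], points[idx[:, 1]]
        d = p2 - p1
        norms = np.hypot(d[:, 0], d[:, 1])
        valid = norms > 1e-9                        # discard degenerate duplicate-point samples
        diff = points[:, None, :] - p1[None, :, :]  # (n, batch, 2)
        dists = np.abs(d[None, :, 0] * diff[:, :, 1]
                       - d[None, :, 1] * diff[:, :, 0]) / np.where(valid, norms, 1.0)
        inlier_counts = np.where(valid, (dists < thresh).sum(axis=0), 0)
        derived += batch
        k = int(inlier_counts.argmax())
        if inlier_counts[k] > best_inliers:
            # A better-fitting model was found: recalculate the required quantity.
            best_inliers, best_model = int(inlier_counts[k]), (p1[k], p2[k])
            w = best_inliers / n                            # inlier-ratio estimate
            required = np.log(1 - p) / np.log(1 - w ** 2)   # minimal sample size m = 2
    return best_model, best_inliers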
Abstract:
A system for performing single Gaussian skin detection is described herein. The system includes a memory and a processor. The memory is configured to receive image data. The processor is coupled to the memory. The processor is to generate a single Gaussian skin model based on a skin dominant region associated with the image data and a single Gaussian non-skin model based on a second region associated with the image data and to classify individual pixels associated with the image data via a discriminative skin likelihood function based on the single Gaussian skin model and the single Gaussian non-skin model to generate skin label data associated with the image data.
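A minimal numpy sketch of the idea (not the patented classifier): fit one Gaussian to pixels from a skin-dominant region and one to pixels from a second, non-skin region, then label each pixel by a log-likelihood-ratio test between the two models. The color space, regularization, and threshold are assumptions.

import numpy as np

def fit_gaussian(pixels):
    """pixels: (N, C) array of color samples. Returns mean and (regularized) covariance."""
    mean = pixels.mean(axis=0)
    cov = np.cov(pixels, rowvar=False) + 1e-6 * np.eye(pixels.shape[1])
    return mean, cov

def log_gaussian(pixels, mean, cov):
    """Gaussian log density up to an additive constant that cancels in the ratio."""
    inv, logdet = np.linalg.inv(cov), np.linalg.slogdet(cov)[1]
    d = pixels - mean
    return -0.5 * (np.einsum("ij,jk,ik->i", d, inv, d) + logdet)

def skin_labels(image, skin_region, nonskin_region, threshold=0.0):
    """image: (H, W, C). Regions are boolean masks selecting the training pixels."""
    skin_mu, skin_cov = fit_gaussian(image[skin_region].astype(float))
    bg_mu, bg_cov = fit_gaussian(image[nonskin_region].astype(float))
    flat = image.reshape(-1, image.shape[-1]).astype(float)
    # Discriminative skin likelihood: positive where the skin model explains the pixel better.
    score = log_gaussian(flat, skin_mu, skin_cov) - log_gaussian(flat, bg_mu, bg_cov)
    return (score > threshold).reshape(image.shape[:2])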
Abstract:
A method is described that performs an image integral calculation by creating a second vector and a third vector. The second vector is created by executing a first instruction that adds alternating elements of a first vector to respective neighboring elements of the first vector and presents the resulting summations into said second vector. The first instruction also passes through the respective neighboring elements to said second vector. The third vector is created by executing a second instruction that adds elements of one side of the second vector to an element of the other side of the second vector and passes through that other side of the second vector.
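The following numpy emulation illustrates one reading of the two vector instructions on a 4-element vector. The lane width and the exact lane selection are assumptions made to keep the example concrete; the point is that the two steps together yield the row prefix sum used when building an integral image.

import numpy as np

def first_instruction(v):
    """Add each odd-lane element to its left neighbor and present the sums in the
    odd lanes, while passing the even-lane (neighboring) elements through:
    [a, b, c, d] -> [a, a+b, c, c+d]."""
    out = v.copy()
    out[1::2] = v[0::2] + v[1::2]
    return out

def second_instruction(v):
    """Add the last element of the lower half to each element of the upper half,
    passing the lower half through: [a, a+b, c, c+d] -> [a, a+b, a+b+c, a+b+c+d]."""
    out = v.copy()
    half = len(v) // 2
    out[half:] = v[half:] + v[half - 1]
    return out

row = np.array([3, 1, 4, 1])
third_vector = second_instruction(first_instruction(row))
assert np.array_equal(third_vector, np.cumsum(row))   # [3, 4, 8, 9]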
Abstract:
Apparatuses, methods and storage medium associated with computing, including processing of image frames, are disclosed herein. In embodiments, an apparatus may include an accelerometer and an image processing engine having an object tracking function. The object tracking function may be arranged to track an object from one image frame to another image frame. The object tracking function may use acceleration data output by the accelerometer to assist in locating the object in an image frame. Other embodiments may be described and claimed.
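An illustrative sketch only: the abstract does not specify a motion model or matching method. Here, accelerometer output is integrated into an assumed pixel-space velocity estimate that shifts the tracker's search window, and the object is then located by simple template matching inside that window; the scale factor and window radius are arbitrary.

import numpy as np

def predict_center(prev_center, velocity_px, accel_xy, dt, px_per_unit_accel=5.0):
    """Use acceleration data to predict where the tracked object should appear next."""
    velocity_px = velocity_px + np.asarray(accel_xy, float) * px_per_unit_accel * dt
    return np.asarray(prev_center, float) + velocity_px * dt, velocity_px

def locate_object(frame, template, center, radius=24):
    """Exhaustive sum-of-absolute-differences match restricted to a window around
    the predicted center; returns the top-left corner of the best match."""
    h, w = template.shape
    cx, cy = int(center[0]), int(center[1])
    best, best_pos = np.inf, (cx, cy)
    for y in range(max(0, cy - radius), min(frame.shape[0] - h, cy + radius) + 1):
        for x in range(max(0, cx - radius), min(frame.shape[1] - w, cx + radius) + 1):
            score = np.abs(frame[y:y + h, x:x + w].astype(float) - template).sum()
            if score < best:
                best, best_pos = score, (x, y)
    return best_pos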
Abstract:
Skin smoothing is applied to images using a bilateral filter and aided by a skin map. In one example a method includes receiving an image having pixels at an original resolution. The image is buffered. The image is downscaled from the original resolution to a lower resolution. A bilateral filter is applied to pixels of the downscaled image. The filtered pixels of the downscaled image are blended with pixels of the image having the original resolution, and the blended image is produced.
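A sketch of the described flow using OpenCV primitives (OpenCV and the specific parameter values are assumptions; the abstract does not name a library): downscale, bilateral-filter the small image, upscale the result, and blend it with the full-resolution image using the skin map as the per-pixel blend weight.

import cv2
import numpy as np

def smooth_skin(image_bgr, skin_map, scale=0.5, d=9, sigma_color=75, sigma_space=75):
    """image_bgr: uint8 HxWx3 image. skin_map: float HxW in [0, 1], where 1 = skin."""
    h, w = image_bgr.shape[:2]
    small = cv2.resize(image_bgr, None, fx=scale, fy=scale, interpolation=cv2.INTER_AREA)
    filtered_small = cv2.bilateralFilter(small, d, sigma_color, sigma_space)
    filtered = cv2.resize(filtered_small, (w, h), interpolation=cv2.INTER_LINEAR)
    alpha = skin_map[..., None].astype(np.float32)          # per-pixel blend weight
    blended = alpha * filtered.astype(np.float32) + (1 - alpha) * image_bgr.astype(np.float32)
    return np.clip(blended, 0, 255).astype(np.uint8)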