Personalized gesture recognition for user interaction with assistant systems
Abstract:
In one embodiment, a method includes:
- accessing a plurality of input tuples associated with a first user from a data store, wherein each input tuple comprises a gesture-input and a corresponding speech-input;
- determining, by a natural-language understanding (NLU) module, a plurality of intents corresponding to the plurality of speech-inputs, respectively;
- generating a plurality of feature representations for the plurality of gesture-inputs based on one or more machine-learning models;
- determining a plurality of gesture identifiers for the plurality of gesture-inputs, respectively, based on their respective feature representations;
- associating the plurality of intents with the plurality of gesture identifiers, respectively; and
- training a personalized gesture-classification model for the first user based on the plurality of feature representations of the gesture-inputs and the associations between the plurality of intents and their respective gesture identifiers.
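The pipeline described above can be sketched in miniature. Everything below is a hypothetical illustration, not the patented implementation: `featurize` stands in for the machine-learning feature-representation models, `nlu_intent` stands in for the NLU module, and the gesture-identifier step is collapsed directly into intent labels for brevity. The personalized model is shown as a simple nearest-centroid classifier over per-user gesture features.

```python
import numpy as np

def featurize(gesture):
    # Toy stand-in for the ML feature-representation models:
    # summarize a raw gesture signal as a fixed-length vector.
    v = np.asarray(gesture, dtype=float)
    return np.array([v.mean(), v.std(), v.min(), v.max()])

def nlu_intent(speech):
    # Toy stand-in for the NLU module: keyword lookup only.
    if "light" in speech:
        return "TOGGLE_LIGHTS"
    if "music" in speech:
        return "PLAY_MUSIC"
    return "UNKNOWN"

class PersonalizedGestureClassifier:
    """Nearest-centroid classifier over one user's gesture features."""

    def fit(self, feats, intents):
        # Associate each intent with the mean feature vector of its gestures.
        self.centroids = {
            intent: np.mean(
                [f for f, i in zip(feats, intents) if i == intent], axis=0
            )
            for intent in set(intents)
        }
        return self

    def predict(self, gesture):
        f = featurize(gesture)
        return min(
            self.centroids,
            key=lambda i: np.linalg.norm(f - self.centroids[i]),
        )

# Stored (gesture-input, speech-input) tuples for the first user.
tuples = [
    ([0.1, 0.2, 0.1], "turn on the light"),
    ([0.9, 1.1, 1.0], "play some music"),
    ([0.0, 0.3, 0.2], "lights please"),
]
feats = [featurize(g) for g, _ in tuples]       # feature representations
intents = [nlu_intent(s) for _, s in tuples]    # intents from the NLU module
model = PersonalizedGestureClassifier().fit(feats, intents)
```

Once trained, the personalized model maps a new gesture alone (no speech) to the intent previously associated with similar gestures by this user.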