Recognition and processing of gestures in a graphical user interface using machine learning
Abstract:
In an embodiment, a computer-implemented method comprises: displaying a continuous content stream of individually actionable content items; automatically recognizing, while the continuous content stream is being displayed, a mode change from a control mode to a signal mode; receiving a touch input after the mode change is recognized and, in response, using a neural network to generate output data indicating a gesture classification for the touch input, wherein the touch input is received in relation to a particular actionable content item that is in a visible portion of the continuous content stream; performing, according to the output data, an action for the particular actionable content item; wherein the method is performed by one or more computing devices.
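The claimed flow can be sketched in code: a mode switch from control to signal, followed by neural-network classification of a touch input received on a visible, actionable item, and an action keyed to the resulting gesture class. This is a minimal illustrative sketch, not the patented implementation; the class names, the gesture labels, and the toy linear classifier standing in for the trained neural network are all assumptions.

```python
from dataclasses import dataclass
from typing import Callable, List

# Hypothetical gesture labels; illustrative, not taken from the patent.
GESTURES = ["tap", "swipe_up", "swipe_down", "long_press"]


@dataclass
class ContentItem:
    """An individually actionable item in the continuous content stream."""
    item_id: str
    visible: bool  # whether the item is in the visible portion of the stream


class GestureUI:
    """Sketch of the claimed method: recognize a control->signal mode
    change, then classify subsequent touch input with a network and
    perform an action for the targeted item."""

    def __init__(self, classifier: Callable[[List[float]], List[float]]):
        self.mode = "control"
        self.classifier = classifier  # stands in for the trained neural network

    def recognize_mode_change(self) -> None:
        # In the patent this is recognized automatically while the
        # stream is displayed; here it is triggered explicitly.
        self.mode = "signal"

    def handle_touch(self, touch_features: List[float], item: ContentItem) -> str:
        if self.mode != "signal":
            raise RuntimeError("touch input is only classified in signal mode")
        if not item.visible:
            raise ValueError("item must be in the visible portion of the stream")
        scores = self.classifier(touch_features)  # network output data
        gesture = GESTURES[max(range(len(scores)), key=scores.__getitem__)]
        return f"{gesture}:{item.item_id}"  # action = gesture applied to item


def toy_classifier(x: List[float]) -> List[float]:
    """Fixed linear layer used as a stand-in for a trained network."""
    weights = [[1, 0, 0], [0, 1, 0], [0, 0, 1], [0.5, 0.5, 0.5]]
    return [sum(w * v for w, v in zip(row, x)) for row in weights]
```

For example, after `recognize_mode_change()`, a touch with features `[1, 0, 0]` on a visible item `item42` classifies as `tap` and yields the action `tap:item42`; the same touch before the mode change raises an error, matching the claim that input is classified only after the mode change is recognized.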