Abstract:
Systems, articles, and methods for performing gesture identification with improved robustness against variations in use parameters and without requiring a user to undergo an extensive training procedure are described. A wearable electromyography (“EMG”) device includes multiple EMG sensors, an on-board processor, and a non-transitory processor-readable storage medium that stores data and/or processor-executable instructions for performing gesture identification. The wearable EMG device detects, determines, and ranks features in the signal data provided by the EMG sensors and generates a digit string based on the ranked features. The permutation of the digit string is indicative of the gesture performed by the user, which is identified by testing the permutation of the digit string against multiple sets of defined permutation conditions. A single reference gesture may be performed by the user to (re-)calibrate the wearable EMG device before and/or during use.
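The ranked-feature approach above can be illustrated with a minimal sketch. This is not the patented implementation; the channel count, the gestures, and the `GESTURE_CONDITIONS` predicates are all hypothetical stand-ins for the "defined permutation conditions" the abstract describes.

```python
# Hypothetical sketch: rank per-channel EMG features, encode the ranking as a
# digit string, and test the string's permutation against per-gesture conditions.

def rank_to_digit_string(features):
    """Return a digit string giving each channel's rank by feature magnitude."""
    order = sorted(range(len(features)), key=lambda i: features[i], reverse=True)
    ranks = [0] * len(features)
    for rank, channel in enumerate(order):
        ranks[channel] = rank
    return "".join(str(r) for r in ranks)

# Each gesture is defined by permutation conditions: predicates on the digit
# string. These example conditions are illustrative only.
GESTURE_CONDITIONS = {
    "fist": lambda d: d[0] == "0" and d[1] in "01",   # channel 0 most active
    "wave_out": lambda d: d[3] == "0",                # channel 3 most active
}

def identify(features):
    digits = rank_to_digit_string(features)
    for gesture, condition in GESTURE_CONDITIONS.items():
        if condition(digits):
            return gesture
    return None
```

Because only the ordering of channel activations matters, not their absolute magnitudes, this style of matching is comparatively insensitive to variations in use parameters such as how tightly the band is worn.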
Abstract:
Systems, articles, and methods perform gesture identification with limited computational resources. A wearable electromyography (“EMG”) device includes multiple EMG sensors, an on-board processor, and a non-transitory processor-readable memory that stores data and/or processor-executable instructions for performing gesture identification. The wearable EMG device detects and determines features of signals when a user performs a physical gesture, and processes the features by performing a decision tree analysis. The decision tree analysis invokes a decision tree stored in the memory, where the decision tree may be stored and executed using limited computational resources. The outcome of the decision tree analysis is a probability vector that assigns a respective probability score to each gesture in a gesture library. The accuracy of the gesture identification may be enhanced by performing multiple iterations of the decision tree analysis across multiple time windows of the EMG signal data and combining the resulting probability vectors.
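The per-window voting scheme above can be sketched as follows. The "decision tree" here is a deliberately tiny stand-in (a single threshold on mean amplitude), and the gesture library is hypothetical; the point is only to show probability vectors from multiple time windows being combined.

```python
# Hypothetical sketch: evaluate a (toy) decision tree on each time window and
# combine the per-window probability vectors by averaging.

GESTURES = ["fist", "wave_in", "wave_out", "rest"]

def tiny_tree(window_features):
    """Stand-in for a stored decision tree: map features to a probability vector."""
    mean_amplitude = sum(window_features) / len(window_features)
    if mean_amplitude > 0.5:
        return [0.7, 0.1, 0.1, 0.1]
    return [0.05, 0.05, 0.1, 0.8]

def identify(windows):
    """Average the probability vectors over all windows; return the top gesture."""
    combined = [0.0] * len(GESTURES)
    for w in windows:
        probs = tiny_tree(w)
        combined = [c + p for c, p in zip(combined, probs)]
    combined = [c / len(windows) for c in combined]
    return GESTURES[combined.index(max(combined))], combined
```

Averaging across windows means a single noisy window is outvoted by the others, which is one way the multi-iteration scheme improves accuracy without adding much computation.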
Abstract:
Systems, devices, and methods adapt established concepts from natural language processing for use in gesture identification algorithms. A gesture identification system includes sensors, a processor, and a non-transitory processor-readable memory that stores data and/or instructions for performing gesture identification. A gesture identification system may include a wearable gesture identification device. The gesture identification process involves segmenting signals from the sensors into data windows, assigning a respective “window class” to each data window, and identifying a user-performed gesture based on the corresponding sequence of window classes. Each window class exclusively characterizes at least one data window property and is analogous to a “letter” of an alphabet. Under this model, each gesture is analogous to a “word” made up of a particular combination of window classes.
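The letters-and-words analogy above can be made concrete with a short sketch. The window classifier, the four-letter "alphabet", and the `GESTURE_WORDS` lexicon are all hypothetical; matching by edit distance is one plausible NLP-style technique, not necessarily the one the abstract's method uses.

```python
# Hypothetical sketch: assign each data window a "letter" (window class) and
# match the resulting "word" against a gesture lexicon via edit distance.

def window_class(window):
    """Toy classifier: the letter is chosen by the dominant channel."""
    return "abcd"[window.index(max(window))]

GESTURE_WORDS = {"fist": "aab", "wave_out": "ddc"}  # illustrative lexicon

def edit_distance(a, b):
    """Levenshtein distance via dynamic programming."""
    dp = [[i + j if i * j == 0 else 0 for j in range(len(b) + 1)]
          for i in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            dp[i][j] = min(dp[i - 1][j] + 1,          # deletion
                           dp[i][j - 1] + 1,          # insertion
                           dp[i - 1][j - 1] + (a[i - 1] != b[j - 1]))  # substitution
    return dp[len(a)][len(b)]

def identify(windows):
    word = "".join(window_class(w) for w in windows)
    return min(GESTURE_WORDS, key=lambda g: edit_distance(word, GESTURE_WORDS[g]))
```

Tolerant matching (here, nearest word by edit distance) is what makes the model robust: a misclassified window corrupts one "letter" without making the whole "word" unrecognizable.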
Abstract:
Systems, articles, and methods perform gesture identification with limited computational resources. A wearable electromyography (“EMG”) device includes multiple EMG sensors, an on-board processor, and a non-transitory processor-readable memory storing data and/or instructions for performing gesture identification. The wearable EMG device detects signals when a user performs a physical gesture and characterizes a signal vector s⃗ based on features of the detected signals. A library of gesture template vectors G is stored in the memory of the wearable EMG device, and a respective property of each respective angle θᵢ formed between the signal vector s⃗ and respective ones of the gesture template vectors g⃗ᵢ is analyzed to match the direction of the signal vector s⃗ to the direction of a particular gesture template vector g⃗*. The accuracy of the gesture identification may be enhanced by performing multiple iterations across multiple time-synchronized portions of the EMG signal data.
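The angle-based matching above reduces to finding the template whose direction is closest to the signal vector's. A minimal sketch, assuming the "respective property of each respective angle" being compared is its cosine (larger cosine means smaller angle); the template library is hypothetical.

```python
# Hypothetical sketch: match the direction of signal vector s to the gesture
# template vector g* that forms the smallest angle with it, i.e. the largest
# cosine similarity.
import math

def cosine(u, v):
    """Cosine of the angle between vectors u and v."""
    dot = sum(a * b for a, b in zip(u, v))
    norm_u = math.sqrt(sum(a * a for a in u))
    norm_v = math.sqrt(sum(b * b for b in v))
    return dot / (norm_u * norm_v)

def best_match(s, templates):
    """templates: dict mapping gesture name -> template vector g_i."""
    return max(templates, key=lambda g: cosine(s, templates[g]))
```

Comparing directions rather than magnitudes means the match is unaffected by overall signal strength, and the cosine needs only multiplications, additions, and one square root per vector, which suits a device with limited computational resources.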
Abstract:
Systems, devices, and methods that implement state machine models in wearable electronic devices are described. A wearable electronic device stores processor-executable gesture identification instructions that, when executed by an on-board processor, enable the wearable electronic device to identify one or more gesture(s) performed by a user. The wearable electronic device also stores processor-executable state determination instructions that, when executed by the processor, cause the wearable electronic device to enter into and transition between various operational states depending on signals detected by on-board sensors. The state machine models described herein enable the wearable electronic devices to identify and automatically recover from operational errors, malfunctions, or crashes with minimal intervention from the user.
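The state machine model above can be sketched in a few lines. The state names, events, and transition table here are hypothetical examples chosen to show the key property the abstract describes: an error event routes the device into a recovery state from which it returns to normal operation without user intervention.

```python
# Hypothetical sketch: a minimal state machine for a wearable device that
# transitions between operational states and recovers automatically from errors.

class DeviceStateMachine:
    # (current_state, event) -> next_state; example transitions only.
    TRANSITIONS = {
        ("idle", "worn"): "calibrating",
        ("calibrating", "reference_gesture"): "active",
        ("active", "removed"): "idle",
        ("active", "error"): "recovering",
        ("recovering", "recovered"): "active",
    }

    def __init__(self):
        self.state = "idle"

    def handle(self, event):
        # Unknown (state, event) pairs leave the state unchanged.
        self.state = self.TRANSITIONS.get((self.state, event), self.state)
        return self.state
```

Because every transition is looked up in an explicit table, the device can never wander into an undefined state, and the ("active", "error") → "recovering" → "active" path is the automatic-recovery loop.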