Abstract:
A speech input method editor can include a speech toolbar (102) having at least a microphone state/toggle button (104). The speech input method editor can also include a selectable dictation window area (108) used as a temporary dictation target until dictation text is transferred to a target application and a selectable correction window area (112) having at least one among an alternate list (120) for correcting dictated words, an alphabet (114), a spacebar (116), a spell mode reminder (118), or a virtual keyboard (122). The speech input method editor can remain active while using the selectable correction window and while transferring dictation text to the target application. The speech input method editor can further include an alternate input method editor window (112b) used to allow non-speech editing into at least one among the dictation window or to the target application while using the speech input method editor.
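The workflow the abstract describes can be sketched as follows. This is an illustrative model only, assuming a word-buffer representation; the class and method names (SpeechInputMethodEditor, dictate, transfer, etc.) are hypothetical and not taken from the patent.

```python
# Hypothetical sketch of the dictation-window workflow: text is dictated
# into a temporary buffer (the dictation window), optionally corrected via
# the correction window, then transferred to the target application.
# All names are illustrative assumptions.

class SpeechInputMethodEditor:
    def __init__(self):
        self.microphone_on = False
        self.dictation_buffer = []  # temporary dictation target

    def toggle_microphone(self):
        # microphone state/toggle button on the speech toolbar
        self.microphone_on = not self.microphone_on

    def dictate(self, word: str):
        if self.microphone_on:
            self.dictation_buffer.append(word)

    def correct(self, index: int, alternate: str):
        # correction window: replace a dictated word with an alternate;
        # the editor stays active during correction
        self.dictation_buffer[index] = alternate

    def transfer(self, target: list):
        # move the buffered text to the target application;
        # the editor remains active after the transfer
        target.extend(self.dictation_buffer)
        self.dictation_buffer.clear()
```

The key property the abstract emphasizes is that correction and transfer do not deactivate the editor, which the sketch models by leaving the object usable after `transfer`.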
Abstract:
A method for discriminating between an instance of a voice command and an instance of speech dictation can include identifying a focus point in a user interface; defining a surrounding region about the focus point; identifying user interface objects in the surrounding region; further identifying among the identified user interface objects those user interface objects which are configured to accept speech dictated text and those user interface objects which are not configured to accept speech dictated text; computing a probability based upon those user interface objects which have been further identified as being configured to accept speech dictated text and those user interface objects which have been further identified as not being configured to accept speech dictated text; receiving speech input; and, biasing a determination of whether the speech input is a voice command or speech dictation based upon the computed probability.
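The steps above can be sketched in code. This is a minimal illustration under stated assumptions: the `UIObject` class, the fraction-based probability, and the multiplicative biasing rule are all choices made here for clarity; the abstract specifies neither the data structures nor the biasing formula.

```python
# Sketch of the command/dictation discrimination: compute a prior from the
# UI objects in the region around the focus point, then bias the
# recognizer's raw scores by that prior. Names are illustrative.
from dataclasses import dataclass

@dataclass
class UIObject:
    name: str
    accepts_dictation: bool  # True if the object can receive dictated text

def dictation_probability(objects_in_region):
    """Estimate P(dictation) as the fraction of nearby UI objects
    that are configured to accept speech-dictated text."""
    if not objects_in_region:
        return 0.5  # no evidence either way
    accepting = sum(1 for o in objects_in_region if o.accepts_dictation)
    return accepting / len(objects_in_region)

def classify(command_score, dictation_score, p_dictation):
    """Bias the recognizer's raw scores by the UI-derived prior and
    return the more likely interpretation."""
    biased_dictation = dictation_score * p_dictation
    biased_command = command_score * (1.0 - p_dictation)
    return "dictation" if biased_dictation >= biased_command else "command"
```

For example, if two of the three objects near the focus point accept dictated text, the prior is 2/3 and otherwise ambiguous speech input is resolved as dictation. Multiplicative biasing is only one simple way to realize the "biasing a determination" step.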
Abstract:
A method for discriminating between an instance of a voice command and an instance of speech dictation can include identifying a focus point in a user interface; defining a surrounding region about the focus point; identifying user interface objects in the surrounding region; further identifying among the identified user interface objects those user interface objects which are configured to accept speech dictated text and those user interface objects which are not configured to accept speech dictated text; computing a probability based upon those user interface objects which have been further identified as being configured to accept speech dictated text and those user interface objects which have been further identified as not being configured to accept speech dictated text; receiving speech input; and, biasing a determination of whether the speech input is a voice command or speech dictation based upon the computed probability. Additionally, the method can include identifying a focus point outside of the user interface; and, biasing a determination of whether the speech input is a voice command or speech dictation based upon a default probability.
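The additional fallback this variant adds, using a default probability when the focus point lies outside the user interface, can be sketched as below. The constant and function names are assumptions; the abstract names no default value.

```python
# Fallback sketch: when the focus point is outside the user interface,
# bias the command/dictation decision with a default prior instead of
# the region-derived probability. DEFAULT_P_DICTATION is an assumed value.
DEFAULT_P_DICTATION = 0.5

def prior_for_focus(focus_in_ui: bool, region_probability: float) -> float:
    """Use the UI-derived probability when focus is in the interface,
    otherwise fall back to the default probability."""
    return region_probability if focus_in_ui else DEFAULT_P_DICTATION
```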
Claims:
1. A method for searching for matching text in an electronic document, comprising: identifying a focus point of a user's gaze in a user interface; defining a surrounding region about said focus point, said surrounding region including a body of text within a user interface object configured to receive speech-dictated text; receiving a voice command to select specified text within the electronic document; and searching said body of text included in the surrounding region for a match with said specified text, limiting said search to said body of text of said surrounding region.
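The region-limited search in this claim can be illustrated with a short sketch. The character-offset model of the surrounding region and the function name are assumptions made here; the claim does not specify how the region is represented.

```python
# Sketch of the claimed search: look for the user-specified text only
# within the body of text inside the region surrounding the gaze focus
# point, never in the rest of the document.

def find_in_region(document: str, region_start: int, region_end: int,
                   target: str) -> int:
    """Search only document[region_start:region_end] for `target`.
    Return the absolute offset of the match, or -1 if `target` does
    not occur inside the region."""
    body = document[region_start:region_end]
    idx = body.find(target)
    return region_start + idx if idx >= 0 else -1
```

Limiting the search to the surrounding region means a match elsewhere in the document is deliberately ignored, which is the point of the claim.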