Abstract:
Systems and processes for operating an intelligent dictation system based on gaze are provided. An example method includes, at an electronic device having one or more processors and memory, detecting a gaze of a user; determining, based on the detected gaze of the user, whether to enter a dictation mode; and in accordance with a determination to enter the dictation mode: receiving an utterance; determining, based on the detected gaze of the user and the utterance, whether to enter an editing mode; and in accordance with a determination not to enter the editing mode, displaying a textual representation of the utterance on a screen of the electronic device.
Abstract:
Disclosed herein are systems and methods that enable low-vision users to interact with touch-sensitive secondary displays. An example method includes: displaying, on a primary display, a first user interface for an application and displaying, on a touch-sensitive secondary display, a second user interface that includes a plurality of application-specific affordances that control functions of the application. Each respective affordance is displayed with a first display size. The method also includes: detecting, via the secondary display, an input that contacts at least one application-specific affordance. In response to detecting the input and while it remains in contact with the secondary display, the method includes: (i) continuing to display the first user interface on the primary display and (ii) displaying, on the primary display, a zoomed-in representation of the at least one application-specific affordance. The zoomed-in representation is displayed with a second display size that is larger than the first display size.
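The zoom behavior described above can be sketched as state updates on the primary display. This is a hedged illustration only: the 3x magnification factor and all type and field names are assumptions, not details from the source.

```python
from dataclasses import dataclass
from typing import Optional

ZOOM_FACTOR = 3.0  # assumed magnification for the zoomed-in representation


@dataclass
class Affordance:
    label: str
    display_size: float  # the first display size, e.g. in points


@dataclass
class PrimaryDisplayState:
    app_ui_visible: bool = True               # the first user interface
    zoomed_affordance: Optional[Affordance] = None
    zoomed_size: float = 0.0                  # the second display size


def on_secondary_display_touch(state: PrimaryDisplayState,
                               touched: Affordance,
                               in_contact: bool) -> None:
    # While the input remains in contact with the secondary display:
    # (i) continue displaying the first user interface, and
    # (ii) display a zoomed-in representation of the touched affordance
    #      at a second display size larger than the first.
    state.app_ui_visible = True
    if in_contact:
        state.zoomed_affordance = touched
        state.zoomed_size = touched.display_size * ZOOM_FACTOR
    else:
        state.zoomed_affordance = None
        state.zoomed_size = 0.0
```

A usage note: on lift-off (`in_contact=False`), this sketch simply dismisses the zoomed-in representation while leaving the application's user interface in place.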