Abstract:
An electronic device outputs a first caption of a plurality of captions while a first segment of a video is being played, where the first segment of the video corresponds to the first caption. While outputting the first caption, the device receives a first user input. In response to receiving the first user input, the device determines a second caption in the plurality of captions, distinct from the first caption, that meets predefined caption selection criteria; determines a second segment of the video that corresponds to the second caption; sends instructions to change from playing the first segment of the video to playing the second segment of the video; and outputs the second caption.
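The caption-to-segment navigation described above can be sketched as follows. This is a minimal illustration, not the patented implementation; the `Caption` type, the "next distinct caption" selection criterion, and all function names are assumptions.

```python
# Hypothetical sketch of caption-driven video navigation.
# All names and the selection criterion are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Caption:
    text: str
    start: float  # start time of the corresponding video segment, in seconds
    end: float    # end time of the corresponding video segment, in seconds

def select_next_caption(captions, current_index):
    """Return the index of the next caption, distinct from the current one,
    that meets the (assumed) selection criteria."""
    for i in range(current_index + 1, len(captions)):
        if captions[i].text != captions[current_index].text:
            return i
    return None

def handle_user_input(captions, current_index):
    """On user input, determine the second caption and the video segment
    the player should switch to."""
    nxt = select_next_caption(captions, current_index)
    if nxt is None:
        return None
    target = captions[nxt]
    # A real player would send seek instructions here and output the caption.
    return nxt, (target.start, target.end)
```

With captions stored alongside their segment boundaries, switching playback reduces to a lookup followed by a seek.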
Abstract:
The present disclosure generally relates to communicating between computer systems, and more specifically to techniques for communicating user interface content.
Abstract:
An electronic device with a display and a touch-sensitive surface displays, on the display, a first visual indicator that corresponds to a virtual touch. The device receives a first input from an adaptive input device. In response to receiving the first input from the adaptive input device, the device displays a first menu on the display. The first menu includes a virtual touches selection icon. In response to detecting selection of the virtual touches selection icon, the device displays a menu of virtual multitouch contacts.
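The menu flow above can be modeled as a small state machine. The state and event names below are assumptions chosen for illustration, not terms from the abstract.

```python
def next_screen(state, event):
    """Assumed state machine for the adaptive-input menu flow:
    an adaptive-device input opens the first menu, and selecting the
    virtual touches selection icon opens the multitouch contacts menu."""
    transitions = {
        ("idle", "adaptive-input"): "first-menu",
        ("first-menu", "select-virtual-touches"): "multitouch-contacts-menu",
    }
    # Unrecognized events leave the current screen unchanged.
    return transitions.get((state, event), state)
```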
Abstract:
The present disclosure generally relates to providing time feedback on an electronic device, and in particular to providing non-visual time feedback on the electronic device. Techniques for providing non-visual time feedback include detecting an input and, in response to detecting the input, initiating output of a first type of non-visual indication of a current time or a second type of non-visual indication of the current time based on which set of non-visual time output criteria is met by the input. Techniques for providing non-visual time feedback also include, in response to detecting that a current time has reached a first predetermined time of a set of one or more predetermined times, outputting a first non-visual alert or a second non-visual alert based on a type of watch face that the electronic device is configured to display.
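The two selection decisions above can be sketched as simple dispatch functions. The specific criteria (tap vs. long press), watch-face types, and output names are assumptions for illustration only.

```python
# Illustrative sketch of the non-visual time feedback decisions.
# Input types, watch-face names, and outputs are assumed, not the patent's.
def non_visual_time_indication(input_type):
    """Choose the first or second type of non-visual time indication
    based on which (assumed) set of criteria the input meets."""
    if input_type == "tap":
        return "haptic-time"   # first type: e.g. a haptic pattern encoding the time
    if input_type == "long-press":
        return "spoken-time"   # second type: e.g. synthesized speech
    return None

def scheduled_alert(current_time, predetermined_times, watch_face):
    """At a predetermined time, choose the first or second non-visual alert
    based on the configured watch face type (assumed mapping)."""
    if current_time not in predetermined_times:
        return None
    return "chime" if watch_face == "analog" else "beep"
```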
Abstract:
An electronic device includes a display, a rotatable input mechanism, one or more processors, and memory. The electronic device displays content on the display and detects a first user input. In response to detecting the first user input, the electronic device displays an enlarged view of the content that includes displaying an enlarged first portion of the content without displaying a second portion of the content. While displaying the enlarged first portion of the content, in response to detecting a rotation of the rotatable input mechanism, the electronic device performs different tasks based on the operational state of the electronic device.
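The state-dependent rotation handling can be sketched as below. The two operational states ("zooming" and "scrolling") and the numeric behavior are assumptions; the abstract only says that different tasks are performed per state.

```python
def handle_rotation(state, rotation_amount, zoom, scroll_offset):
    """Assumed dispatch for rotatable-input rotation: in a 'zooming' state the
    rotation changes the magnification; in a 'scrolling' state it pans the
    enlarged first portion so the second portion can be revealed."""
    if state == "zooming":
        # Clamp so the view never shrinks below the unmagnified size.
        return max(1.0, zoom + rotation_amount), scroll_offset
    # Assumed scrolling state: translate rotation into a pan distance.
    return zoom, scroll_offset + rotation_amount * 10
```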
Abstract:
While an electronic device with a display and a touch-sensitive surface is in a screen reader accessibility mode, the device displays an application launcher screen including a plurality of application icons. A respective application icon corresponds to a respective application stored in the device. The device detects a sequence of one or more gestures on the touch-sensitive surface that correspond to one or more characters. A respective gesture that corresponds to a respective character is a single finger gesture that moves across the touch-sensitive surface along a respective path that corresponds to the respective character. The device determines whether the detected sequence of one or more gestures corresponds to a respective application icon of the plurality of application icons, and, in response to determining that the detected sequence of one or more gestures corresponds to the respective application icon, performs a predefined operation associated with the respective application icon.
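The determination step above — matching a sequence of drawn characters to an application icon — can be sketched as a prefix match over application names. The matching rule and the "announce and focus" operation are assumptions for illustration.

```python
# Hypothetical sketch of matching drawn character gestures to app icons.
def match_app_icons(typed_chars, app_names):
    """Return the app names whose start matches the character sequence
    drawn so far (assumed: case-insensitive prefix match)."""
    prefix = "".join(typed_chars).lower()
    return [name for name in app_names if name.lower().startswith(prefix)]

def perform_predefined_operation(typed_chars, app_names):
    """When the gesture sequence corresponds to exactly one icon, perform
    the (assumed) predefined operation associated with it."""
    matches = match_app_icons(typed_chars, app_names)
    if len(matches) == 1:
        return f"announce-and-focus:{matches[0]}"
    return None
```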
Abstract:
The present disclosure generally relates to techniques and interfaces for generating synthesized speech outputs. For example, a user interface for a text-to-speech service can include ranked and/or categorized phrases, which can be selected to enter as text. A synthesized speech output is then generated to deliver any entered text, for example, using a personalized voice model.
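Phrase ranking for such an interface might look like the sketch below. Ranking by usage frequency is an assumed criterion; the abstract does not specify how phrases are ranked or categorized.

```python
def ranked_phrases(phrases, usage_counts, category=None, categories=None):
    """Rank candidate phrases for the text-to-speech interface.
    Assumptions: ranking is by usage frequency, and an optional category
    filter narrows the list before ranking."""
    categories = categories or {}
    if category is not None:
        phrases = [p for p in phrases if categories.get(p) == category]
    # Most frequently used phrases first; unseen phrases rank last.
    return sorted(phrases, key=lambda p: usage_counts.get(p, 0), reverse=True)
```

A selected phrase would then be entered as text and delivered via the synthesized (e.g. personalized) voice.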
Abstract:
The present disclosure generally relates to detecting text. The present disclosure describes, among other things, methods for managing a text detection mode, identifying targeted text, and managing modes of a computer system.