Abstract:
In accordance with some embodiments, a method is performed at an electronic device with a display, a touch-sensitive surface, and one or more sensors for detecting intensities of contacts on the touch-sensitive surface. The method comprises displaying a user interface that includes an editable content area that has a plurality of characters, and a content deletion control; detecting a deletion input that includes detecting a contact at a location on the touch-sensitive surface that corresponds to the content deletion control on the display; and in response to detecting the deletion input, deleting content in the editable content area based on a duration and a characteristic intensity of the contact. The method includes: in accordance with a determination that the contact was maintained for a first time period without the characteristic intensity of the contact increasing above a first intensity threshold, deleting the content in the editable content area by sequentially deleting a plurality of sub-units of the content of a first type of sub-unit of the content at a rate that does not vary based on the characteristic intensity of the contact; in accordance with a determination that the contact was maintained for a second time period that is longer than the first time period without the characteristic intensity of the contact increasing above the first intensity threshold, switching to deleting the content in the editable content area by sequentially deleting a plurality of sub-units of the content of a second type of sub-unit of the content at a rate that does not vary based on the characteristic intensity of the contact, wherein the second type of sub-unit is different from the first type of sub-unit; and in accordance with a determination that the characteristic intensity of the contact increased above the first intensity threshold, deleting the content in the editable content area by sequentially deleting a plurality of sub-units of the content at a rate that varies based on the characteristic intensity of the contact.
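The duration/intensity branching described above can be illustrated with a minimal sketch. The type names (SubUnit, DeletionMode, DeletionPolicy) and all threshold and rate values below are assumptions for illustration, not details from the abstract.

```swift
import Foundation

// Names and numeric values below are illustrative assumptions.
enum SubUnit { case character, word }            // first and second sub-unit types

enum DeletionMode {
    case fixedRate(subUnit: SubUnit, rate: Double)   // rate does not vary with intensity
    case intensityScaled(rate: Double)               // rate varies with intensity
}

struct DeletionPolicy {
    let firstTimePeriod: TimeInterval = 0.5          // assumed, in seconds
    let secondTimePeriod: TimeInterval = 2.0         // assumed, in seconds
    let firstIntensityThreshold: Double = 0.8        // assumed, normalized intensity
    let baseRate: Double = 10.0                      // sub-units deleted per second

    func mode(contactDuration: TimeInterval,
              characteristicIntensity: Double) -> DeletionMode {
        // Intensity rose above the first threshold: rate scales with intensity.
        if characteristicIntensity > firstIntensityThreshold {
            return .intensityScaled(rate: baseRate * characteristicIntensity)
        }
        // Contact held past the second, longer time period: switch to the
        // second type of sub-unit (words), still at a fixed rate.
        if contactDuration > secondTimePeriod {
            return .fixedRate(subUnit: .word, rate: baseRate)
        }
        // Contact held for the first time period: delete the first type of
        // sub-unit (characters) at a fixed rate.
        return .fixedRate(subUnit: .character, rate: baseRate)
    }
}
```

Polling mode(contactDuration:characteristicIntensity:) while the contact persists would yield character deletion after a short hold, word deletion after a longer hold, and an intensity-scaled rate once the press exceeds the threshold.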
Abstract:
In an exemplary process, while a device is in a locked state, a lock screen interface including a camera icon is displayed on a touch-sensitive display. A gesture is detected on the touch-sensitive display. In response to a determination that the gesture is on the camera icon and meets predetermined activation criteria, the lock screen interface ceases to be displayed and an interface for a camera application is displayed. In response to a determination that the gesture starts at a location on the touch-sensitive display other than the camera icon and includes movement in a first direction, the lock screen interface ceases to be displayed and an unlocked user interface with access to a plurality of applications is displayed.
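A minimal sketch of the two-branch gesture handling, assuming the gesture has already been classified as either a press on the camera icon or a gesture starting elsewhere; the type names (LockScreenGesture, SwipeDirection, LockScreenDestination) are hypothetical and the activation criteria are reduced to a single Bool.

```swift
// Hypothetical types; the activation criteria are reduced to a Bool.
struct Point { let x: Double; let y: Double }

enum SwipeDirection { case up, down, left, right }

enum LockScreenGesture {
    case onCameraIcon(meetsActivationCriteria: Bool)
    case startsElsewhere(at: Point, movement: SwipeDirection)
}

enum LockScreenDestination {
    case cameraApplication    // lock screen dismissed, camera interface shown
    case unlockedInterface    // lock screen dismissed, access to applications
    case stayOnLockScreen
}

func resolve(_ gesture: LockScreenGesture,
             firstDirection: SwipeDirection = .right) -> LockScreenDestination {
    switch gesture {
    case .onCameraIcon(let meetsActivationCriteria):
        // A gesture on the camera icon that meets the activation criteria
        // replaces the lock screen with the camera application's interface.
        return meetsActivationCriteria ? .cameraApplication : .stayOnLockScreen
    case .startsElsewhere(_, let movement):
        // A gesture starting elsewhere that moves in the first direction
        // replaces the lock screen with the unlocked user interface.
        return movement == firstDirection ? .unlockedInterface : .stayOnLockScreen
    }
}
```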
Abstract:
Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display are disclosed herein. In one aspect, the method includes executing, on the electronic device, an application in response to an instruction from a user of the electronic device. While executing the application, the method further includes collecting usage data. The usage data at least includes one or more actions performed by the user within the application. The method also includes: automatically, without human intervention, obtaining at least one trigger condition based on the collected usage data and associating the at least one trigger condition with a particular action of the one or more actions performed by the user within the application. Upon determining that the at least one trigger condition has been satisfied, the method includes providing an indication to the user that the particular action associated with the trigger condition is available.
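A minimal sketch of the collect-then-trigger flow, assuming trigger conditions are derived from the hour of day at which an action recurs; all type and member names (UsageRecord, TriggerCondition, SuggestionEngine) are hypothetical, and a real system could derive conditions from many other signals.

```swift
import Foundation

// Hypothetical names; trigger derivation is simplified to a time-of-day check.
struct UsageRecord {
    let application: String
    let action: String        // an action performed by the user in the app
    let timestamp: Date
}

struct TriggerCondition {
    let associatedAction: String
    let isSatisfied: (Date) -> Bool
}

final class SuggestionEngine {
    private var usageData: [UsageRecord] = []
    private var triggers: [TriggerCondition] = []

    // Collect usage data while the application is executing.
    func record(_ entry: UsageRecord) {
        usageData.append(entry)
        // Automatically derive a trigger once an action has recurred,
        // associating it with the hour of day at which it was performed.
        let occurrences = usageData.filter { $0.action == entry.action }.count
        guard occurrences >= 2 else { return }
        let hour = Calendar.current.component(.hour, from: entry.timestamp)
        triggers.append(TriggerCondition(associatedAction: entry.action) { now in
            Calendar.current.component(.hour, from: now) == hour
        })
    }

    // When a trigger condition is satisfied, surface the associated action
    // to the user as an available suggestion.
    func availableSuggestions(at date: Date = Date()) -> [String] {
        triggers.filter { $0.isSatisfied(date) }.map { $0.associatedAction }
    }
}
```

Each call to record(_:) both logs usage data and, once an action recurs, registers a trigger; availableSuggestions(at:) then returns the actions whose conditions are currently satisfied.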
Abstract:
Disclosed herein are systems and methods that allow activation of and intuitive interactions with a companion-display mode for a first electronic device. An example method includes: receiving an instruction to operate the first electronic device in a companion-display mode in which user interfaces generated by a second electronic device are displayed at the first electronic device, wherein the second electronic device is separate from the first electronic device; in response to receiving the instruction to operate in the companion-display mode: concurrently displaying, on a touch-sensitive display of the first electronic device: a user interface generated by the second electronic device; and a plurality of user interface objects, including a first user interface object associated with a first function of a plurality of functions for controlling the touch-sensitive display of the first electronic device while it is operating in the companion-display mode, and a second user interface object associated with a second function of the plurality of functions.
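A minimal sketch of the concurrent composition in companion-display mode, assuming the second device's user interface arrives as rendered frames; the names (RemoteFrame, ControlObject, CompanionDisplayScreen) and the two placeholder control functions are hypothetical.

```swift
// Hypothetical names; the control functions are placeholders.
struct RemoteFrame {
    let sourceDeviceID: String   // the second, separate electronic device
    let pixels: [UInt8]          // rendered user interface content
}

struct ControlObject {
    let title: String
    let perform: () -> Void      // a function for controlling the display
}

struct CompanionDisplayScreen {
    // User interface generated by the second electronic device.
    let remoteUserInterface: RemoteFrame
    // User interface objects displayed concurrently on the first device's
    // touch-sensitive display while in companion-display mode.
    let controls: [ControlObject]
}

func enterCompanionDisplayMode(receiving frame: RemoteFrame) -> CompanionDisplayScreen {
    CompanionDisplayScreen(
        remoteUserInterface: frame,
        controls: [
            ControlObject(title: "First function", perform: {}),
            ControlObject(title: "Second function", perform: {})
        ]
    )
}
```

Receiving the instruction to enter companion-display mode would call enterCompanionDisplayMode(receiving:) with the latest frame from the second device, and the returned screen would render that frame together with the control objects.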
Abstract:
An example process includes: while displaying a user interface different from a digital assistant user interface, receiving a user input; in accordance with a determination that the user input satisfies a criterion for initiating a digital assistant: displaying, over the user interface, the digital assistant user interface, the digital assistant user interface including: a digital assistant indicator displayed at a first portion of the display; and a response affordance displayed at a second portion of the display, where: a portion of the user interface remains visible at a third portion of the display; and the third portion is between the first portion and the second portion.
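A minimal sketch of the three-portion layout, modeling the display as stacked vertical regions; the names (Region, AssistantOverlayLayout) and the proportions are assumptions, not values from the abstract.

```swift
// Hypothetical names and assumed proportions.
struct Region {
    let y: Double        // top edge of the region
    let height: Double
}

struct AssistantOverlayLayout {
    let digitalAssistantIndicator: Region   // first portion of the display
    let responseAffordance: Region          // second portion of the display
    let visibleUnderlyingUI: Region         // third portion, between the other two
}

func layoutAssistantOverlay(displayHeight: Double) -> AssistantOverlayLayout {
    // Assumed proportions: the response affordance near the top, the
    // indicator at the bottom, and part of the underlying user interface
    // remaining visible in between.
    let responseAffordance = Region(y: 0, height: displayHeight * 0.35)
    let digitalAssistantIndicator = Region(y: displayHeight * 0.9,
                                           height: displayHeight * 0.1)
    let visibleUnderlyingUI = Region(
        y: responseAffordance.height,
        height: digitalAssistantIndicator.y - responseAffordance.height
    )
    return AssistantOverlayLayout(
        digitalAssistantIndicator: digitalAssistantIndicator,
        responseAffordance: responseAffordance,
        visibleUnderlyingUI: visibleUnderlyingUI
    )
}
```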
Abstract:
Electronic devices with improved methods and interfaces for messaging are disclosed, including improved ways to: acknowledge messages; edit previously sent messages; express what a user is trying to communicate; display private messages; synchronize viewing of content between users; incorporate handwritten inputs; quickly locate content in a message transcript; integrate a camera; integrate search and sharing; integrate interactive applications; integrate stickers; make payments; interact with avatars; make suggestions; navigate among interactive applications; manage interactive applications; translate foreign language text; combine messages into a group; and flag messages.