Abstract:
An electronic device displays, in a first user interface, a media item that corresponds to a sequence of images in a first display mode, which is one of a plurality of user-selectable display modes for the media item. In response to detecting an input, the device displays a display-mode selection user interface that concurrently displays a plurality of representations of the media item, including a second representation of the media item that corresponds to a second display mode. The device detects an input on the second representation in the plurality of representations of the media item. In response, the device selects, from the plurality of user-selectable display modes for the media item, the second display mode, which corresponds to the second representation of the media item.
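For illustration, a minimal Swift sketch of selecting among user-selectable display modes follows; the mode names, the MediaItem type, and the select(representationIndex:for:) helper are assumptions for this example, not the claimed implementation.

```swift
// Illustrative sketch only; mode names and types are assumptions.
enum DisplayMode: CaseIterable {
    case stillImage      // first display mode: a single representative frame
    case loopedSequence  // second display mode: the image sequence, looped
    case longExposure    // another user-selectable mode
}

struct MediaItem {
    let imageSequence: [String]          // placeholder frame identifiers
    var currentMode: DisplayMode = .stillImage
}

// Tapping a representation in the display-mode selection UI picks the
// corresponding mode for the media item.
func select(representationIndex: Int, for item: inout MediaItem) {
    let modes = DisplayMode.allCases
    guard modes.indices.contains(representationIndex) else { return }
    item.currentMode = modes[representationIndex]
}
```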
Abstract:
The present disclosure generally relates to user interfaces. In some examples, the electronic device transitions between user interfaces for capturing photos based on data received from a first camera and a second camera. In some examples, the electronic device provides enhanced zooming capabilities that produce visually pleasing results for a displayed digital viewfinder and for captured videos. In some examples, the electronic device provides user interfaces for transitioning a digital viewfinder from a first camera with an applied digital zoom to a second camera with no digital zoom. In some examples, the electronic device prepares to capture media at various magnification levels. In some examples, the electronic device provides enhanced capabilities for navigating through a plurality of values.
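As an illustration of the viewfinder handoff described above, the following Swift sketch switches from a digitally zoomed first camera to a second camera with no digital zoom at a crossover magnification; the 2.0 crossover value and the camera names are assumptions.

```swift
// Illustrative sketch of a dual-camera handoff; threshold and names assumed.
enum Camera { case wideAngle, telephoto }

struct ViewfinderState {
    var activeCamera: Camera
    var digitalZoom: Double   // 1.0 means no digital zoom
}

// Below the crossover magnification, use the first camera with digital zoom
// applied; at or above it, hand off to the second camera with no digital zoom.
func viewfinderState(forMagnification magnification: Double,
                     crossover: Double = 2.0) -> ViewfinderState {
    if magnification < crossover {
        return ViewfinderState(activeCamera: .wideAngle, digitalZoom: magnification)
    } else {
        return ViewfinderState(activeCamera: .telephoto,
                               digitalZoom: magnification / crossover)
    }
}
```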
Abstract:
An electronic device detects a first input while in a display-off state and, in response, activates the display of the device. The device then displays a first user interface that corresponds to a display-on state of the device. While displaying the first user interface that corresponds to the display-on state of the device, the device detects a swipe gesture. In accordance with a determination that the device is in a locked mode of the display-on state and the swipe gesture is in a first direction, the device replaces display of the first user interface with display of a second user interface displaying a first content; and in accordance with a determination that the device is in an unlocked mode of the display-on state and the swipe gesture is in the first direction, the device replaces display of the first user interface with display of the second user interface.
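A compact Swift sketch of the described swipe handling follows; the Screen and LockState types are hypothetical, and the content shown for the unlocked case is an assumption used only to illustrate the branching.

```swift
// Illustrative sketch; types and content strings are assumptions.
enum LockState { case locked, unlocked }

enum Screen {
    case first                   // first user interface of the display-on state
    case second(content: String) // second user interface
}

func screenAfterSwipe(inFirstDirection: Bool, lockState: LockState) -> Screen {
    guard inFirstDirection else { return .first }   // other directions not covered here
    switch lockState {
    case .locked:
        return .second(content: "first content")                    // limited content while locked
    case .unlocked:
        return .second(content: "first content plus additional content") // assumption for the unlocked case
    }
}
```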
Abstract:
A method includes: displaying a first view of a first application; detecting a first portion of a first input; if the first portion of the first input meets application-switching criteria, concurrently displaying portions of the first application view and a second application view; while concurrently displaying the portions of the application views, detecting a second portion of the first input; if the second portion of the first input meets first-view display criteria (liftoff of the contact is detected in a first region), ceasing to display the portion of the second application view and displaying the first application view; and if the second portion of the first input meets multi-view display criteria (liftoff of the contact is detected in a second region), maintaining concurrent display of a portion of the first application view and a portion of the second application view on the display after detecting the liftoff of the contact.
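The liftoff-region branching can be illustrated with the short Swift sketch below; the region and result names are assumptions, and the two criteria are reduced to the liftoff location for brevity.

```swift
// Illustrative sketch; names are assumptions, not the claimed method.
enum DisplayResult {
    case firstAppOnly                  // first-view display criteria met
    case firstAndSecondAppPortions     // multi-view display criteria met
}

enum LiftoffRegion { case firstRegion, secondRegion }

// After the first portion of the input has triggered the concurrent display,
// the liftoff region of the second portion decides what remains on screen.
func resolveSecondPortion(liftoffIn region: LiftoffRegion) -> DisplayResult {
    switch region {
    case .firstRegion:  return .firstAppOnly
    case .secondRegion: return .firstAndSecondAppPortions
    }
}
```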
Abstract:
Systems and methods for proactively identifying and surfacing relevant content on an electronic device with a touch-sensitive display are disclosed herein. In one aspect, the method includes executing, on the electronic device, an application in response to an instruction from a user of the electronic device. While executing the application, the method further includes collecting usage data. The usage data at least includes one or more actions performed by the user within the application. The method also includes: automatically, without human intervention, obtaining at least one trigger condition based on the collected usage data and associating the at least one trigger condition with a particular action of the one or more actions performed by the user within the application. Upon determining that the at least one trigger condition has been satisfied, the method includes providing an indication to the user that the particular action associated with the trigger condition is available.
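A minimal Swift sketch of deriving and checking a trigger condition from collected usage data follows; the time-of-day heuristic, the minOccurrences parameter, and the type names are assumptions, not the disclosed method.

```swift
// Illustrative sketch; the heuristic and types are assumptions.
struct UsageEvent {
    let action: String          // e.g. an action performed within the application
    let hourOfDay: Int          // 0...23, when the user performed it
}

struct TriggerCondition {
    let action: String
    let hourRange: ClosedRange<Int>
    func isSatisfied(atHour hour: Int) -> Bool { hourRange.contains(hour) }
}

// Derive a trigger from collected usage data: if the same action keeps
// happening in a narrow time window, surface it when that window recurs.
func deriveTrigger(from events: [UsageEvent], minOccurrences: Int = 3) -> TriggerCondition? {
    guard let action = events.first?.action else { return nil }
    let hours = events.filter { $0.action == action }.map { $0.hourOfDay }
    guard hours.count >= minOccurrences,
          let lo = hours.min(), let hi = hours.max(), hi - lo <= 1 else { return nil }
    return TriggerCondition(action: action, hourRange: lo...hi)
}
```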
Abstract:
An electronic device with a touch-sensitive surface, a display, and one or more sensors to detect intensity of contacts: displays a plurality of user interface objects in a first user interface; detects a contact while a focus selector is at a location of a first user interface object; and, while the focus selector is at the location of the first user interface object: detects an increase in a characteristic intensity of the contact to a first intensity threshold; in response, visually obscures the plurality of user interface objects, other than the first user interface object, while maintaining display of the first user interface object; detects that the characteristic intensity of the contact continues to increase above the first intensity threshold; and, in response, dynamically increases the amount of visual obscuring of the plurality of user interface objects, other than the first user interface object.
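The intensity-to-obscuring mapping can be sketched in Swift as follows; the threshold value, the maximum blur radius, and the linear mapping are illustrative assumptions.

```swift
// Illustrative sketch; threshold, maximum blur, and mapping are assumptions.
struct ObscuringState {
    let blurRadius: Double      // applied to every object except the pressed one
    let pressedObjectID: Int    // stays fully visible
}

// Below the first intensity threshold nothing is obscured; above it, the blur
// applied to the other objects grows with the contact's characteristic intensity.
func obscuringState(forIntensity intensity: Double,
                    pressedObjectID: Int,
                    firstThreshold: Double = 0.3,
                    maxBlur: Double = 20.0) -> ObscuringState {
    let excess = max(0.0, intensity - firstThreshold)
    let normalized = min(1.0, excess / (1.0 - firstThreshold))
    return ObscuringState(blurRadius: maxBlur * normalized, pressedObjectID: pressedObjectID)
}
```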
Abstract:
At an electronic device with a touch-sensitive display, a remote camera control user interface may be displayed. In some examples, a user may provide input through a gesture at a location on the touch-sensitive display and/or through a rotation of a rotatable input mechanism to control a camera of an external device. Camera control may include control of the external device's camera features, including image capture, zoom settings, focus settings, flash settings, and timer settings, for example, and may also include access to the external device's library of previously captured images.
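An illustrative Swift sketch of the kinds of commands such a remote camera control interface might send to the external device follows; the RemoteCameraCommand cases and the zoom bounds are assumptions.

```swift
// Illustrative sketch; the command set and bounds are assumptions.
enum RemoteCameraCommand {
    case capture
    case setZoom(factor: Double)          // e.g. driven by the rotatable input mechanism
    case setFocus(x: Double, y: Double)   // normalized tap location in the viewfinder
    case setFlash(enabled: Bool)
    case setTimer(seconds: Int)
    case openImageLibrary
}

// Map a rotation of the rotatable input mechanism onto a zoom command.
func zoomCommand(currentZoom: Double, rotationDelta: Double) -> RemoteCameraCommand {
    let newZoom = min(10.0, max(1.0, currentZoom + rotationDelta))
    return .setZoom(factor: newZoom)
}
```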
Abstract:
Techniques for displaying user interface screens of a calendar application include displaying different screens based on an input modality. The calendar application may respond differently to inputs from a touch-sensitive screen, inputs from a rotatable input mechanism, inputs having higher intensities, inputs having lower intensities, and so forth.
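A short Swift sketch of modality-dependent handling follows; the CalendarScreen cases, the deep-press threshold, and the mapping from rotation direction to screens are assumptions used only to illustrate the idea.

```swift
// Illustrative sketch; screens, threshold, and mappings are assumptions.
enum CalendarInput {
    case touch(intensity: Double)
    case rotation(delta: Double)
}

enum CalendarScreen { case dayView, monthView, eventDetail, contextMenu }

// The same logical selection can resolve to different screens depending on
// the input modality and, for touches, on the input's intensity.
func screen(for input: CalendarInput, deepPressThreshold: Double = 0.5) -> CalendarScreen {
    switch input {
    case .touch(let intensity):
        return intensity >= deepPressThreshold ? .contextMenu : .eventDetail
    case .rotation(let delta):
        return delta > 0 ? .dayView : .monthView
    }
}
```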
Abstract:
An electronic device with a touch-sensitive surface and display can execute a messaging application. The messaging application provides options for sending a message with a large attachment. One option allows a message with a large attachment to be sent by uploading and storing the attachment on a cloud server, embedding a link to the storage location in the message, and sending the message without the attachment. The messaging application may also include, in the message, a UI element that includes a status indicator for the stored attachment. Furthermore, the messaging application may embed in the message a smaller-sized version of the attachment before sending the message. The status indicator may display whether the link to the storage location has expired or whether the attachment has previously been retrieved from the cloud server.
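A minimal Swift sketch of the link-plus-status flow described above follows; the AttachmentLink fields, the status cases, and the expiration check are assumptions for illustration.

```swift
import Foundation

// Illustrative sketch; fields and status values are assumptions.
struct AttachmentLink {
    let url: String
    let expiresAt: Date
    var retrieved: Bool = false
}

enum AttachmentStatus { case available, expired, alreadyRetrieved }

struct OutgoingMessage {
    var body: String
    var inlinePreview: Data?      // smaller-sized version embedded in the message
    var link: AttachmentLink?     // replaces the large attachment itself
}

// Status shown by the UI element for the stored attachment.
func status(of link: AttachmentLink, now: Date = Date()) -> AttachmentStatus {
    if now >= link.expiresAt { return .expired }
    return link.retrieved ? .alreadyRetrieved : .available
}
```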