Abstract:
An electronic device with a touch-sensitive surface, a display, and one or more sensors to detect intensity of contacts: displays a plurality of user interface objects in a first user interface; detects a contact while a focus selector is at a location of a first user interface object; and, while the focus selector is at the location of the first user interface object: detects an increase in a characteristic intensity of the contact to a first intensity threshold; in response, visually obscures the plurality of user interface objects, other than the first user interface object, while maintaining display of the first user interface object; detects that the characteristic intensity of the contact continues to increase above the first intensity threshold; and, in response, dynamically increases the amount of visual obscuring of the plurality of user interface objects, other than the first user interface object.
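For illustration only, a minimal Swift sketch of the intensity-driven obscuring behavior described above; the normalized intensity scale, threshold value, and the name ObscuringModel are assumptions, not part of the abstract.

    struct ObscuringModel {
        // Assumed: characteristic contact intensity is normalized to 0...1; the abstract gives no scale.
        static let firstIntensityThreshold = 0.5

        /// Amount of visual obscuring (0 = none, 1 = maximum) applied to every user interface
        /// object other than the one under the focus selector.
        static func obscuring(forCharacteristicIntensity intensity: Double) -> Double {
            // Below the first intensity threshold, nothing is obscured.
            guard intensity >= firstIntensityThreshold else { return 0 }
            // Above the threshold, obscuring increases dynamically as intensity continues to rise.
            let progress = (intensity - firstIntensityThreshold) / (1.0 - firstIntensityThreshold)
            return min(max(progress, 0), 1)
        }
    }

    // Example: an intensity of 0.75, halfway between threshold and maximum, yields 0.5 obscuring.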
Abstract:
An electronic device provides, to a display, data to present a user interface with a plurality of user interface objects that includes a first user interface object and a second user interface object. A current focus is on the first user interface object. The device receives an input that corresponds to a request to move the current focus; and, in response, provides, to the display, data to: move the first user interface object from a first position towards the second user interface object and/or tilt the first user interface object from a first orientation towards the second user interface object; and, after moving and/or tilting the first user interface object, move the current focus from the first user interface object to the second user interface object, and move the first user interface object back towards the first position and/or tilt the first user interface object back towards the first orientation.
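For illustration only, a minimal Swift sketch of the staged focus-change sequence described above; the phase names are assumptions rather than terms from the abstract.

    enum FocusChangePhase {
        case moveOrTiltTowardTarget   // first object moves and/or tilts toward the second object
        case transferFocus            // current focus moves from the first object to the second object
        case settleBack               // first object moves/tilts back toward its original position and orientation
    }

    /// Returns the ordered phases performed in response to a request to move the current focus.
    func focusChangePhases() -> [FocusChangePhase] {
        [.moveOrTiltTowardTarget, .transferFocus, .settleBack]
    }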
Abstract:
An electronic device includes a camera. While in a first media acquisition mode for the camera, the device displays a live preview on a display. While displaying the live preview, the device detects activation of a shutter button. In response to detecting activation of the shutter button, the device groups a plurality of images acquired by the camera in temporal proximity to the activation of the shutter button into a sequence of images. The sequence of images includes: a plurality of images acquired by the camera prior to detecting activation of the shutter button; a representative image that represents the sequence of images and was acquired by the camera after one or more of the other images in the sequence; and a plurality of images acquired by the camera after acquiring the representative image. The application further describes different ways of navigating within the sequence of images, e.g., by dragging or by detecting the pressure of a touch input.
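For illustration only, a minimal Swift sketch of grouping frames around shutter activation into a sequence with a representative image; the buffer sizes and the choice of representative frame are assumptions.

    import Foundation

    struct ImageSequence {
        let imagesBeforeShutter: [Date]        // acquired prior to detecting activation of the shutter button
        let representativeImage: Date          // represents the sequence; acquired after the earlier images
        let imagesAfterRepresentative: [Date]  // acquired after the representative image
    }

    /// Groups frame timestamps captured in temporal proximity to the shutter activation.
    /// `preCount` and `postCount` are assumed buffer sizes, not values from the application.
    func groupFrames(buffer: [Date], shutterTime: Date,
                     preCount: Int = 10, postCount: Int = 10) -> ImageSequence? {
        let before = buffer.filter { $0 <= shutterTime }.suffix(preCount)
        let after = buffer.filter { $0 > shutterTime }.prefix(postCount)
        guard let representative = before.last else { return nil }
        return ImageSequence(imagesBeforeShutter: Array(before.dropLast()),
                             representativeImage: representative,
                             imagesAfterRepresentative: Array(after))
    }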
Abstract:
A computer system displays a first user interface object with a first appearance at a first position in a first view of a three-dimensional environment that is at least partially shared between a first user and a second user. While displaying the first user interface object, the computer system detects a first user input by the first user. In response to detecting the first user input: in accordance with a determination that the second user is not currently interacting with the first user interface object, the computer system performs a first operation; and in accordance with a determination that the second user is currently interacting with the first user interface object, the computer system displays a visual indication that the first user interface object is not available for interaction, including changing an appearance or position of the first user interface object, and forgoes performing the first operation.
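For illustration only, a minimal Swift sketch of the availability check for the shared object; the enum and function names are assumptions.

    enum SharedObjectResponse {
        case performFirstOperation        // second user is not interacting with the object
        case showNotAvailableIndication   // second user is interacting: change appearance/position, forgo the operation
    }

    /// Decides how to respond to the first user's input on the shared user interface object.
    func respond(toFirstUserInputWhileSecondUserInteracting isInteracting: Bool) -> SharedObjectResponse {
        isInteracting ? .showNotAvailableIndication : .performFirstOperation
    }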
Abstract:
A computing system displays, via a first display generation component, a first computer-generated environment and concurrently displays, via a second display generation component: a visual representation of a portion of a user of the computing system who is in a position to view the first computer-generated environment via the first display generation component, and one or more graphical elements that provide a visual indication of content in the first computer-generated environment. The computing system changes the visual representation of the portion of the user to represent changes in an appearance of the user over a respective period of time and changes the one or more graphical elements to represent changes in the first computer-generated environment over the respective period of time.
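For illustration only, a minimal Swift sketch of updating the second display so it tracks both the user's appearance and the computer-generated environment over time; the types are placeholders, not the computing system's data model.

    struct OuterDisplayFrame {
        var userRepresentation: String        // visual representation of the portion of the user
        var environmentIndicators: [String]   // graphical elements indicating content in the CG environment
    }

    /// Produces the next frame for the second display generation component, reflecting
    /// changes in the user's appearance and in the first computer-generated environment.
    func updatedOuterDisplayFrame(userAppearance: String,
                                  environmentSummary: [String]) -> OuterDisplayFrame {
        OuterDisplayFrame(userRepresentation: userAppearance,
                          environmentIndicators: environmentSummary)
    }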
Abstract:
An electronic device displays a home button configuration user interface with a plurality of different tactile output settings for the home button. While displaying the home button configuration user interface, the device detects selection of a respective tactile output setting. In response to detecting a first input of a first type on the home button (while the respective tactile output setting is selected), the device determines whether the respective tactile output setting is a first or a second tactile output setting for the home button. If the respective tactile output setting is the first tactile output setting, the device provides a first tactile output without dismissing the home button configuration user interface. If the respective tactile output setting is the second tactile output setting, the device provides a second tactile output without dismissing the home button configuration user interface.
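For illustration only, a minimal Swift sketch of previewing the selected tactile output without dismissing the configuration interface; the type and setting names are assumptions.

    enum TactileOutputSetting { case first, second }

    struct HomeButtonConfigurationUI {
        var selectedSetting: TactileOutputSetting

        /// Handles a first input of a first type on the home button while this interface is displayed:
        /// provides the tactile output matching the selected setting and keeps the interface on screen.
        func handleHomeButtonInput(provideTactileOutput: (TactileOutputSetting) -> Void) {
            provideTactileOutput(selectedSetting)   // first setting -> first output, second setting -> second output
            // Note: the home button configuration user interface is not dismissed here.
        }
    }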
Abstract:
A computer system, while displaying a three-dimensional computer-generated environment, detects a first event that corresponds to a request to present first computer-generated content, and in response: in accordance with a determination that the first event corresponds to a respective request to present the first computer-generated content with a first level of immersion, the computer system displays the first visual content and outputs the first audio content using a first audio output mode; and in accordance with a determination that the first event corresponds to a respective request to present the first computer-generated content with a second level of immersion different from the first level of immersion, the computer system displays the first visual content and outputs the first audio content using a second audio output mode, different from the first audio output mode, that changes a level of immersion of the first audio content.
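For illustration only, a minimal Swift sketch of selecting the audio output mode from the requested level of immersion; the case names are placeholders.

    enum ImmersionLevel { case first, second }
    enum AudioOutputMode { case firstMode, secondMode }   // e.g. non-spatial vs. spatial audio (assumed)

    /// Chooses the audio output mode used while displaying the first visual content,
    /// based on the level of immersion requested by the detected event.
    func audioOutputMode(for level: ImmersionLevel) -> AudioOutputMode {
        switch level {
        case .first:  return .firstMode
        case .second: return .secondMode   // a different mode changes the audio's level of immersion
        }
    }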
Abstract:
A computer system detects a wrist, and in accordance with a determination that first criteria are met, the computer system displays a plurality of representations corresponding to different applications in a first region. The computer system detects a first input at a first location on the wrist that meets predetermined selection criteria. In accordance with a determination that the first location corresponds to a first portion of the wrist and that at least a portion of a palm is facing toward a viewpoint, the computer system causes a display generation component to display a user interface of a first application. In accordance with a determination that the first location corresponds to a second portion of the wrist and that at least a portion of the palm is facing toward the viewpoint, the computer system causes the display generation component to display a user interface of a second application.
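For illustration only, a minimal Swift sketch of the selection logic; modeling the wrist portion as an enum, the palm check as a Boolean, and the application identifiers as strings is an assumption.

    enum WristPortion { case first, second }

    /// Returns an identifier for the application whose user interface should be displayed,
    /// or nil when the palm is not facing the viewpoint (treating that case as "do nothing" is assumed).
    func applicationToDisplay(touchedPortion: WristPortion, palmFacingViewpoint: Bool) -> String? {
        guard palmFacingViewpoint else { return nil }
        switch touchedPortion {
        case .first:  return "firstApplication"
        case .second: return "secondApplication"
        }
    }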
Abstract:
Methods and apparatus organize a plurality of haptic output variations into a cohesive semantic framework that uses various information about the alert condition and trigger, application context, and other conditions to provide a system of haptic outputs that share characteristics between related events. In some embodiments, an event class or application class provides the basis for a corresponding haptic output. In some embodiments, whether an alert-salience setting is on provides the basis for adding an increased salience haptic output to the standard haptic output for the alert. In some embodiments, consistent haptics provide for branding of the associated application class, application, and/or context.
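For illustration only, a minimal Swift sketch of deriving a haptic output from an event class and an alert-salience setting; the class names and output identifiers are assumptions.

    enum EventClass { case message, calendar, system }   // assumed event/application classes

    /// Maps an event class to a base haptic output so related events share characteristics,
    /// and prepends an increased-salience component when the alert-salience setting is on.
    func hapticOutput(for eventClass: EventClass, alertSalienceOn: Bool) -> [String] {
        let base: String
        switch eventClass {
        case .message:  base = "haptic.message"
        case .calendar: base = "haptic.calendar"
        case .system:   base = "haptic.system"
        }
        return alertSalienceOn ? ["haptic.increasedSalience", base] : [base]
    }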