Abstract:
A computing system displays, via a first display generation component, a first computer-generated environment and concurrently displays, via a second display generation component: a visual representation of a portion of a user of the computing system who is in a position to view the first computer-generated environment via the first display generation component, and one or more graphical elements that provide a visual indication of content in the first computer-generated environment. The computing system changes the visual representation of the portion of the user to represent changes in an appearance of the user over a respective period of time and changes the one or more graphical elements to represent changes in the first computer-generated environment over the respective period of time.
Abstract:
In some embodiments, a cursor interacts with user interface objects on an electronic device. In some embodiments, an electronic device selectively displays a cursor in a user interface. In some embodiments, an electronic device displays a cursor while manipulating objects in the user interface. In some embodiments, an electronic device dismisses or switches applications using a cursor. In some embodiments, an electronic device displays user interface elements in response to requests to move a cursor beyond an edge of the display.
Abstract:
The present disclosure generally relates to managing display usage. In some embodiments, a device modifies various aspects of a displayed user interface as the device transitions from operating in a first device mode to operating in a second device mode. In some embodiments, the modifications involve altering the content included in a user interface and varying how the content is displayed.
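The mode transition described above can be illustrated with a minimal sketch. The element names, roles, and the hide-and-dim rule below are assumptions for the example only, not the patented method: in a hypothetical low-power mode, secondary elements are removed and the remaining ones are dimmed, altering both what content appears and how it is displayed.

```python
def render_for_mode(elements, mode):
    """Return (element, brightness) pairs for the given device mode.

    elements: list of (name, role) tuples, role in {"primary", "secondary"}.
    In "low_power" mode, secondary elements are hidden entirely and the
    remaining primary elements are dimmed (illustrative rule).
    """
    if mode == "active":
        return [(name, 1.0) for name, _ in elements]
    # low-power mode: drop secondary content, dim what remains
    return [(name, 0.4) for name, role in elements if role == "primary"]


elements = [("time", "primary"), ("seconds", "secondary"),
            ("heart_rate", "secondary"), ("date", "primary")]
active_view = render_for_mode(elements, "active")
low_power_view = render_for_mode(elements, "low_power")
```

The single dispatch on `mode` keeps both the content change (dropping elements) and the presentation change (brightness) in one place, mirroring how the abstract pairs the two kinds of modification.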
Abstract:
The present disclosure generally relates to selecting text. An example method includes displaying a focus indicator at a first location; while displaying the focus indicator, detecting a gesture at a first touch location that corresponds to the focus indicator; while detecting the gesture, detecting movement of the gesture to a second touch location; in response to detecting movement of the gesture to the second touch location: in accordance with a determination that the second touch location is in a first direction, moving the focus indicator to a second indicator location; in accordance with a determination that the second touch location is in a second direction, moving the focus indicator to a third location; while the focus indicator is at a fourth location, detecting liftoff of the gesture; and in response to detecting the liftoff, maintaining display of the focus indicator at the fourth location.
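The gesture sequence above can be sketched in one dimension. The class name, the single-step movement, and the equality test for "corresponds to the focus indicator" are simplifying assumptions for illustration; the abstract itself does not specify step sizes or hit-testing.

```python
class FocusIndicator:
    """Toy model of a text-selection focus indicator on a 1-D line."""

    def __init__(self, location):
        self.location = location
        self._tracking = False

    def touch_down(self, touch_location):
        # Track only if the gesture lands on the indicator itself.
        self._tracking = (touch_location == self.location)

    def touch_moved(self, touch_location):
        if not self._tracking:
            return
        # First (positive) direction moves the indicator forward,
        # second (negative) direction moves it backward.
        if touch_location > self.location:
            self.location += 1
        elif touch_location < self.location:
            self.location -= 1

    def lift_off(self):
        # Liftoff stops tracking; the indicator stays where it is.
        self._tracking = False
        return self.location
```

A short run: an indicator at position 5 that receives a touch at 5, a move toward 8, and a liftoff ends up maintained at 6, matching the abstract's "maintaining display of the focus indicator" after liftoff.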
Abstract:
The present disclosure generally relates to selecting and opening applications. An electronic device includes a display and a rotatable input mechanism rotatable around a rotation axis substantially perpendicular to a normal axis that is normal to a face of the display. The device detects a user input, and in response to detecting the user input, displays a first subset of application views of a set of application views. The first subset of application views is displayed along a first dimension of the display substantially perpendicular to both the rotation axis and the normal axis. The device detects a rotation of the rotatable input mechanism, and in response to detecting the rotation, displays a second subset of application views of the set of application views. Displaying the second subset of application views includes moving the set of application views on the display along the first dimension of the display.
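A rough sketch of the paging behavior described above, assuming each detent of the rotatable input advances the visible window of application views by one position along the display's first dimension. The window size, step, and view names are illustrative assumptions.

```python
def visible_views(views, offset, window=3):
    """Return the subset of application views shown at a given offset.

    The offset is clamped so the window never scrolls past either end
    of the set of application views.
    """
    offset = max(0, min(offset, len(views) - window))
    return views[offset:offset + window]


views = ["Mail", "Maps", "Music", "News", "Photos"]
first_subset = visible_views(views, 0)    # before any rotation
second_subset = visible_views(views, 2)   # after two detents of rotation
```

Because both subsets are slices of the same list, advancing the offset visually amounts to moving the whole set of views along that dimension, as the abstract describes.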
Abstract:
The present disclosure generally relates to managing user interfaces. In a method, a scrollable list of affordances associated with physical activities is displayed. A first change workout metrics affordance corresponding to a first affordance of the scrollable list of affordances is displayed. User input is received. In accordance with a determination that the user input is detected at the first affordance in the scrollable list of affordances, a physical activity tracking function associated with the selected first affordance is launched. In accordance with a determination that the user input is detected at the first change workout metrics affordance, a user interface configured to change a workout metric is displayed.
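The two-way dispatch in this abstract — launch tracking when the tap lands on a workout affordance, show a metrics editor when it lands on that row's change-metrics affordance — can be sketched as below. The tuple encoding and the returned strings are stand-ins invented for the example.

```python
def handle_tap(target):
    """Dispatch a tap on a (kind, activity) affordance.

    kind == "workout"        -> launch the activity tracking function
    kind == "change_metrics" -> display the metric-change user interface
    """
    kind, activity = target
    if kind == "workout":
        return f"tracking:{activity}"
    if kind == "change_metrics":
        return f"metrics_editor:{activity}"
    raise ValueError(f"unknown affordance kind: {kind}")
```

The same scrollable row thus exposes two touch targets with distinct outcomes, which is the core of the claimed interaction.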
Abstract:
The present disclosure generally relates to playing and managing audio items. In some examples, an electronic device provides intuitive user interfaces for playing and managing audio items on the device. In some examples, an electronic device provides seamless transitioning from navigating a stack of items corresponding to groups of audio items to navigating a list of menus. In some examples, an electronic device provides for quick and easy access between different applications that are active on the device. In some examples, an electronic device enables automatic transmission of data associated with audio items to be stored locally on a linked external device.
Abstract:
An electronic device with a display and a touch-sensitive surface displays a user interface of an application. The device detects a first portion of an input including a contact on the touch-sensitive surface, and then detects a second portion of the input including movement of the contact across the touch-sensitive surface. The device displays, during the movement, application views including an application view that corresponds to the user interface of the application and another application view that corresponds to a different user interface of a different application. The device then detects a third portion of the input, including a liftoff of the contact from the touch-sensitive surface. In response, the device, upon determining that application-switcher-display criteria are met, displays an application-switcher user interface, and upon determining that home-display criteria are met, the device displays a home screen user interface that includes application launch icons.
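The post-liftoff decision above can be sketched as a pure function of gesture properties. The specific thresholds, and the choice of distance and velocity as the criteria, are invented for this example; the abstract only says that application-switcher-display criteria and home-display criteria are evaluated.

```python
def destination_after_liftoff(distance, velocity,
                              switcher_max_velocity=1.5,
                              home_min_distance=300):
    """Decide which UI to show after the contact lifts off.

    Hypothetical rule: a long or fast swipe satisfies the home-display
    criteria; a shorter, slower swipe satisfies the
    application-switcher-display criteria.
    """
    if distance >= home_min_distance or velocity > switcher_max_velocity:
        return "home_screen"
    return "app_switcher"
```

Keeping the criteria in one function makes the mutually exclusive outcomes explicit: every liftoff resolves to exactly one of the two user interfaces.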
Abstract:
The present disclosure generally relates to retrieving and displaying contextually-relevant media content. In some embodiments, a device receives a request to display contextually-relevant media and, in response, displays a representation of a collection of media items relevant to a context of the device. In some embodiments, a device displays a visual media item of a sequence of items and, in response to receiving a swipe gesture, displays a detail user interface comprising related content for the media item. In some embodiments, a device, while displaying a first detail user interface, displays an affordance corresponding to a plurality of individuals identified as having attended a first event, that, when selected, causes display of visual media corresponding to a plurality of events attended by the individuals. In some embodiments, a device, in response to user input, obtains an automatically-generated collection of visual media and displays a corresponding affordance.
Abstract:
A mobile computing device can be used to locate a vehicle parking location in weak location signal scenarios (e.g., when GPS or another location technology is weak, unreliable, or unavailable). In particular, the mobile device can determine when a vehicle in which the mobile device is located has entered a parked state. GPS or other primary location technology may be unavailable at the time the vehicle is parked (e.g., inside a parking structure). In that case, the location of the mobile device at the time the vehicle is identified as being parked can be determined using the primary location technology as supplemented with sensor data of the mobile device. Once determined, that location can be associated with an identifier for the current parking location.
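The fallback described above can be sketched as follows. The flat x/y coordinates, the function name, and the idea of integrating motion-sensor data into a single displacement vector are simplifying assumptions for illustration; the abstract does not specify how the sensor supplementation is performed.

```python
def parking_location(last_gps_fix, gps_available, sensor_displacement):
    """Return the (x, y) position to associate with the parking spot.

    If a current GPS fix is available at park time, use it directly.
    Otherwise, dead-reckon from the last good fix using displacement
    accumulated from motion sensors since that fix was taken.
    """
    if gps_available:
        return last_gps_fix
    dx, dy = sensor_displacement
    return (last_gps_fix[0] + dx, last_gps_fix[1] + dy)


# Vehicle parks inside a structure: no GPS, 12.5 m east / 3.0 m south
# of the last fix according to the motion sensors (made-up numbers).
spot = parking_location((100.0, 200.0), gps_available=False,
                        sensor_displacement=(12.5, -3.0))
parked = {"parking_spot": spot}   # stored under a parking identifier
```

Storing the result under an identifier mirrors the abstract's final step of associating the determined location with the current parking location.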