Abstract:
A device with a display and a touch-sensitive surface displays a user interface including a user interface object at a first location. While displaying the user interface, the device detects a portion of an input, including a contact at a location on the touch-sensitive surface corresponding to the user interface object. In response to detecting the portion of the input: upon determining that the portion of the input meets menu-display criteria, the device displays a plurality of selectable options that corresponds to the user interface object on the display; and, upon determining that the portion of the input meets object-move criteria, the device moves the user interface object or a representation thereof from the first location to a second location according to the movement of the contact.
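The two branches above amount to classifying one touch input as either a long press (menu) or a drag (move). A minimal sketch of that classification, with invented threshold names and values (`MENU_HOLD_SECONDS`, `MOVE_THRESHOLD_PX` are illustrative assumptions, not values from the abstract):

```python
# Hypothetical input classifier for the menu-display vs. object-move branches.
MENU_HOLD_SECONDS = 0.5   # assumed hold duration for menu-display criteria
MOVE_THRESHOLD_PX = 10    # assumed movement threshold for object-move criteria

def classify_input(duration_s, movement_px):
    """Return the device's response to a contact on a user interface object."""
    if movement_px >= MOVE_THRESHOLD_PX:
        return "move-object"      # object-move criteria met: drag the object
    if duration_s >= MENU_HOLD_SECONDS:
        return "display-menu"     # menu-display criteria met: show options
    return "none"                 # neither criterion met yet
```

Checking movement before duration means a contact that both lingers and moves is treated as a drag, one plausible way to keep the two criteria mutually exclusive.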
Abstract:
The present disclosure provides a method for mirrored control between devices performed at a first electronic device including one or more processors, memory, and a touch-sensitive display. The method includes: sending an item from a first instant messenger application running on the first electronic device to a second instant messenger application running on a second electronic device; displaying the item in the first instant messenger application, wherein the item is concurrently displayed in the second instant messenger application; receiving information corresponding to an interaction with the item; and in response to receiving information corresponding to the interaction, updating the item on the first electronic device, wherein the update to the item is mirrored on the second electronic device.
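The mirroring described above is, at its core, applying an interaction locally and propagating the same state change to each peer device. A simplified sketch under that assumption (the `MirroredItem` class and its methods are illustrative, not from the disclosure):

```python
# Hypothetical mirrored-item state shared between two messenger instances.
class MirroredItem:
    def __init__(self):
        self.state = {}
        self.peers = []          # remote devices displaying the same item

    def attach(self, peer):
        self.peers.append(peer)

    def interact(self, key, value):
        """Apply an interaction locally, then mirror the update to each peer."""
        self.state[key] = value
        for peer in self.peers:
            peer.state[key] = value   # update is mirrored on the other device
```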
Abstract:
A device displays a camera user interface including a live view from a camera. While displaying the live view from the camera: the device records media images that are captured by the camera, while continuing to display the live view from the camera; and the device further displays representations of a plurality of media images that were recorded while displaying the live view from the camera as frames scrolling across the display in a first direction.
Abstract:
An application can generate multiple user interfaces for display across multiple electronic devices. After the electronic devices establish communication, an application running on at least one of the devices can present a first set of information items on a touch-enabled display of one of the electronic devices. The electronic device can receive a user selection of one of the first set of information items. In response to receiving the user selection, the application can generate a second set of information items for display on the other electronic device. The second set of information items can represent an additional level of information related to the selected information item.
Abstract:
The present application is related to a computer for providing output to a user. The computer includes a processor and an input device in communication with the processor. The input device includes a feedback surface and at least one sensor in communication with the feedback surface, the at least one sensor configured to detect a user input to the feedback surface. The processor varies a down-stroke threshold based on a first factor and varies an up-stroke threshold based on a second factor. The down-stroke threshold determines a first output of the computer, the up-stroke threshold determines a second output of the computer, and at least one of the first factor or the second factor is determined based on the user input.
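Separate down-stroke and up-stroke thresholds form a hysteresis band: a press registers when force rises past one level and releases only when it falls past a lower one. A hedged sketch of that behavior, where the threshold values and the speed-based adjustment are assumptions for illustration:

```python
# Illustrative force button with adaptive down-stroke/up-stroke thresholds.
class ForceButton:
    def __init__(self, down=1.0, up=0.6):
        self.down_threshold = down   # force needed to register a press
        self.up_threshold = up       # force below which the press releases
        self.pressed = False

    def adapt(self, typing_speed):
        """Vary both thresholds based on a user-input factor (here, speed)."""
        scale = 1.0 / (1.0 + 0.1 * typing_speed)
        self.down_threshold = 1.0 * scale
        self.up_threshold = 0.6 * scale

    def update(self, force):
        """Return the output event, if any, for the current applied force."""
        if not self.pressed and force >= self.down_threshold:
            self.pressed = True
            return "down-click"      # first output: down-stroke crossed
        if self.pressed and force <= self.up_threshold:
            self.pressed = False
            return "up-click"        # second output: up-stroke crossed
        return None
```

Keeping the up-stroke threshold below the down-stroke threshold prevents chatter when the applied force hovers near a single cutoff.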
Abstract:
A computer system displays virtual objects overlaid on a view of a physical environment as a virtual effect. The computer system displays respective animated movements of the virtual objects over the view of the physical environment, wherein the respective animated movements are constrained in accordance with a direction of simulated gravity associated with the view of the physical environment. If current positions of virtual objects during the respective animated movement of the virtual objects correspond to different surfaces at different heights detected in the view of the physical environment, the computer system constrains the respective animated movements of the virtual objects in accordance with the different surfaces detected in the view of the physical environment.
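One way to read the constraint above: each object falls along the simulated-gravity direction until it reaches whichever detected surface lies beneath its current position, so objects over different surfaces settle at different heights. A simplified sketch under that reading (surfaces as flat spans and a fixed fall step are assumptions):

```python
# Hypothetical gravity-constrained settling over detected surfaces.
def settle(x, y, surfaces, fall_step=1.0):
    """Advance a virtual object one step of simulated gravity, constrained
    to rest on the surface beneath its horizontal position, if any.

    surfaces: list of (x_min, x_max, height) planes detected in the view.
    """
    floor = max((h for (x0, x1, h) in surfaces if x0 <= x <= x1), default=None)
    if floor is None:
        return y - fall_step          # nothing detected beneath: keep falling
    return max(y - fall_step, floor)  # clamp the movement to the surface
```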
Abstract:
A first electronic device establishes a wireless connection with a second electronic device that controls display of a user interface on a second display. The first device displays a first user interface that includes a first representation of a media item. While displaying the first user interface, the first electronic device detects a first user input. In response to detecting the first user input, the first electronic device transmits to the second electronic device instructions enabling display of at least a portion of the media item on substantially the entire second display controlled by the second electronic device. While the at least the portion of the media item is displayed on the second electronic device, the first electronic device displays additional information different from but related to the at least the portion of the media item that is displayed on the second electronic device.
Abstract:
An example method is performed at a device with a display and a biometric sensor. While the device is in a locked state, the method includes displaying a log-in user interface that is associated with logging in to first and second user accounts. While displaying the log-in user interface, the method includes receiving biometric information, and in response to receiving the biometric information: when the biometric information is consistent with biometric information for the first user account and the first user account does not have an active session on the device, displaying a prompt to input a log-in credential for the first user account; and when the biometric information is consistent with biometric information for the second user account and the second user account does not have an active session on the device, displaying a prompt to input a log-in credential for the second user account.
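The two conditional branches above apply the same rule per account: if the biometric sample matches an account that has no active session, prompt for that account's credential. A hedged sketch of that branching, with account records and matching reduced to simple stand-ins:

```python
# Illustrative per-account biometric branching for the log-in interface.
def handle_biometric(sample, accounts):
    """Return the prompt to display after receiving biometric information.

    accounts: list of dicts with 'name', 'biometric', and 'active_session'.
    """
    for account in accounts:
        if sample == account["biometric"] and not account["active_session"]:
            # Matches an account with no active session on the device:
            # prompt for that account's log-in credential.
            return "enter credential for " + account["name"]
    return "no prompt"
```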
Abstract:
A computer system displays, in a first viewing mode, a simulated environment that is oriented relative to a physical environment of the computer system. In response to detecting a first change in attitude, the computer system changes an appearance of a first virtual user interface object so as to maintain a fixed spatial relationship between the first virtual user interface object and the physical environment. The computer system detects a gesture. In response to detecting a second change in attitude, in accordance with a determination that the gesture met mode change criteria, the computer system transitions from displaying the simulated environment in the first viewing mode to displaying the simulated environment in a second viewing mode. Displaying the virtual model in the simulated environment in the second viewing mode includes forgoing changing the appearance of the first virtual user interface object to maintain the fixed spatial relationship.
Abstract:
An electronic device, while displaying a first user interface, detects an input by an input object, detects that first hover proximity criteria are met by the input object, and displays first visual feedback. While displaying the first visual feedback, the device detects a change in a current value of a hover proximity parameter of the input object and that second hover proximity criteria are met by the input object after the change. In response to detecting that the second hover proximity criteria are met, the device displays second visual feedback, distinct from the first visual feedback.
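The tiered feedback above can be read as mapping a hover proximity parameter (for instance, distance above the screen) to distinct feedback levels. A minimal sketch under that assumption, with invented threshold names and values:

```python
# Illustrative two-tier hover feedback; thresholds are assumptions.
FIRST_HOVER_MM = 20.0    # assumed first hover proximity criterion
SECOND_HOVER_MM = 8.0    # assumed second (closer) hover proximity criterion

def hover_feedback(distance_mm):
    """Map the input object's hover distance to the visual feedback shown."""
    if distance_mm <= SECOND_HOVER_MM:
        return "second visual feedback"   # closer hover: distinct feedback
    if distance_mm <= FIRST_HOVER_MM:
        return "first visual feedback"
    return "no feedback"
```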