Abstract:
An electronic device detects an alert event and, in response, delays provision of feedback indicative of the alert event until determining whether the electronic device is in a first use context or in a second use context. In accordance with a determination that the electronic device is in the first use context, the device provides first feedback indicative of the alert event. In accordance with a determination that the electronic device is in the second use context, which is distinct from the first use context, the device provides second feedback indicative of the alert event.
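A minimal sketch of the context-dependent feedback selection described above, in Python. The concrete contexts and feedback types (worn vs. stowed, haptic vs. audible) are illustrative assumptions; the abstract names only a "first" and a "second" use context and feedback.

```python
from enum import Enum, auto

class UseContext(Enum):
    ON_WRIST = auto()   # hypothetical "first use context"
    STOWED = auto()     # hypothetical "second use context"

def feedback_for_alert(context: UseContext) -> str:
    """Return the feedback type for an alert once the use context is known.

    Callers defer providing feedback until the context has been
    determined, mirroring the delay described in the abstract.
    """
    if context is UseContext.ON_WRIST:
        return "haptic"    # first feedback
    return "audible"       # second feedback, for the distinct second context
```

The dispatch is deliberately total: every determined context maps to exactly one feedback type.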
Abstract:
The present disclosure relates generally to implementing biometric authentication. In some examples, a device provides user interfaces for a biometric enrollment process tutorial. In some examples, a device provides user interfaces for aligning a biometric feature for enrollment. In some examples, a device provides user interfaces for enrolling a biometric feature. In some examples, a device provides user interfaces for providing hints during a biometric enrollment process. In some examples, a device provides user interfaces for application-based biometric authentication. In some examples, a device provides user interfaces for autofilling biometrically secured fields. In some examples, a device provides user interfaces for unlocking a device using biometric authentication. In some examples, a device provides user interfaces for retrying biometric authentication. In some examples, a device provides user interfaces for managing transfers using biometric authentication. In some examples, a device provides interstitial user interfaces during biometric authentication. In some examples, a device provides user interfaces for preventing retrying biometric authentication. In some examples, a device provides user interfaces for cached biometric authentication. In some examples, a device provides user interfaces for autofilling fillable fields based on visibility criteria. In some examples, a device provides user interfaces for automatic log-in using biometric authentication.
Abstract:
The present disclosure generally relates to using avatars and image data for enhanced user interactions. In some examples, user status dependent avatars are generated and displayed with a message associated with the user status. In some examples, a device captures image information to scan an object to create a 3D model of the object. The device determines an algorithm for the 3D model based on the captured image information and provides visual feedback on additional image data that is needed for the algorithm to build the 3D model. In some examples, an application's operation on a device is restricted based on whether an authorized user is identified as using the device based on captured image data. In some examples, depth data is used to combine two sets of image data.
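One way to sketch the restriction of an application's operation based on whether captured image data identifies an authorized user. The action names and the policy set are illustrative assumptions; the abstract says only that operation is restricted when no authorized user is identified.

```python
def operation_allowed(action: str, authorized_user_identified: bool) -> bool:
    """Gate sensitive actions on identification of an authorized user.

    The restricted-action set is a made-up example for the sketch.
    """
    restricted = {"view_private_data", "change_settings"}
    if action in restricted and not authorized_user_identified:
        return False
    return True
```

Non-restricted actions remain available regardless of the identification result.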
Abstract:
A computer system, while displaying a three-dimensional computer-generated environment, detects a first event that corresponds to a request to present first computer-generated content. In response: in accordance with a determination that the first event corresponds to a request to present the first computer-generated content with a first level of immersion, the computer system displays the first visual content and outputs the first audio content using a first audio output mode; and in accordance with a determination that the first event corresponds to a request to present the first computer-generated content with a second level of immersion different from the first level of immersion, the computer system displays the first visual content and outputs the first audio content using a second audio output mode different from the first audio output mode, which changes a level of immersion of the first audio content.
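The immersion-dependent audio selection above can be sketched as a simple mapping. The level keys and mode names ("stereo", "spatial") are assumptions for illustration; the abstract specifies only that a different immersion level selects a different audio output mode.

```python
def audio_output_mode(immersion_level: str) -> str:
    """Choose an audio output mode from the requested level of immersion.

    The point is that a second immersion level selects a second mode,
    changing the audio content's level of immersion; the particular
    names here are illustrative.
    """
    modes = {
        "first": "stereo",    # first audio output mode
        "second": "spatial",  # second audio output mode
    }
    return modes[immersion_level]
```

The visual content is displayed the same way in both branches; only the audio path differs.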
Abstract:
A computer system displays a representation of a field of view of one or more cameras that is updated with changes in the field of view. In response to a request to add an annotation, the representation of the field of view of the camera(s) is replaced with a still image of the field of view of the camera(s). An annotation is received on a portion of the still image that corresponds to a portion of a physical environment captured in the still image. The still image is replaced with the representation of the field of view of the camera(s). An indication of a current spatial relationship of the camera(s) relative to the portion of the physical environment is displayed or not displayed based on a determination of whether the portion of the physical environment captured in the still image is currently within the field of view of the camera(s).
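The visibility test that decides whether to display the spatial-relationship indicator can be sketched as a 2D overlap check. Representing the annotated portion of the physical environment and the current field of view as axis-aligned rectangles is an assumption for illustration.

```python
from typing import NamedTuple

class Rect(NamedTuple):
    x: float
    y: float
    w: float
    h: float

def show_spatial_indicator(annotated_portion: Rect, current_fov: Rect) -> bool:
    """Display the indicator only when the annotated portion of the
    physical environment overlaps the cameras' current field of view."""
    return (annotated_portion.x < current_fov.x + current_fov.w
            and current_fov.x < annotated_portion.x + annotated_portion.w
            and annotated_portion.y < current_fov.y + current_fov.h
            and current_fov.y < annotated_portion.y + annotated_portion.h)
```

When the check fails, the indicator is simply not displayed; no other state changes.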
Abstract:
A computer system concurrently displays, in an augmented reality environment, a representation of at least a portion of a field of view of one or more cameras that includes a respective physical object, which is updated as contents of the field of view change; and a respective virtual user interface object, at a respective location in the virtual user interface determined based on the location of the respective physical object in the field of view. While detecting an input at a location that corresponds to the displayed respective virtual user interface object, in response to detecting movement of the input relative to the respective physical object in the field of view of the one or more cameras, the system adjusts an appearance of the respective virtual user interface object in accordance with a magnitude of movement of the input relative to the respective physical object.
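A one-dimensional sketch of adjusting the virtual object's appearance in accordance with the magnitude of the input's movement relative to the physical object. Treating the adjusted property as a scalar "size" along one axis is an assumption; the abstract does not name the adjusted property.

```python
def adjusted_size(base_size: float,
                  input_start: float, input_now: float,
                  object_start: float, object_now: float) -> float:
    """Grow or shrink the virtual object by how far the input moved
    relative to the physical object (positions along one axis).

    If the input and the physical object move together, the relative
    movement is zero and the appearance is unchanged.
    """
    relative_movement = (input_now - object_now) - (input_start - object_start)
    return max(0.0, base_size + relative_movement)
```

Using the *relative* displacement means camera or object motion alone does not resize the virtual object.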
Abstract:
A computer system displays an annotation placement user interface that includes a representation of a field of view of one or more cameras, updated over time based on changes in the field of view, and a placement user interface element indicating a virtual annotation placement location. If the placement user interface element is over a representation of a physical feature in the physical environment that can be measured, the appearance of the placement user interface element changes in accordance with one or more aspects of the representation of the physical feature. In response to an input to perform one or more measurements of the physical feature: if the physical feature is a first type of feature, measurements of a first measurement type are made; and if the physical feature is a second, different type of feature, measurements of a second, different measurement type are made.
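The feature-type-to-measurement-type dispatch can be sketched as below. The feature kinds ("edge", "surface") and the measurement names are illustrative assumptions; the abstract speaks only of a first and a second type of feature with corresponding measurement types.

```python
def measurements_for(feature_kind: str) -> list[str]:
    """Return the measurements made for a detected physical feature.

    A first kind of feature yields one measurement type and a second,
    different kind yields a different one; the specific kinds and
    measurement names are made up for the sketch.
    """
    if feature_kind == "edge":      # first type of feature
        return ["length"]           # first measurement type
    if feature_kind == "surface":   # second, different type of feature
        return ["length", "width", "area"]  # second measurement type
    return []                       # feature cannot be measured
```

An unrecognized feature yields no measurements, matching the precondition that the feature must be measurable.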