Abstract:
A computer system displays virtual objects overlaid on a view of a physical environment as a virtual effect. The computer system displays respective animated movements of the virtual objects over the view of the physical environment, wherein the respective animated movements are constrained in accordance with a direction of simulated gravity associated with the view of the physical environment. If current positions of the virtual objects during the respective animated movements correspond to different surfaces at different heights detected in the view of the physical environment, the computer system constrains the respective animated movements of the virtual objects in accordance with the different surfaces detected in the view of the physical environment.
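A minimal Swift sketch of the constraint described above, using hypothetical VirtualObject and DetectedSurface types (none of these names come from the abstract): each object moves along the simulated-gravity direction and is stopped at the highest detected surface beneath its current horizontal position, so objects above a tabletop come to rest higher than objects above the floor.

```swift
struct VirtualObject {
    var position: (x: Double, y: Double, z: Double)  // y is "up" in world space
}

struct DetectedSurface {
    let minX: Double, maxX: Double, minZ: Double, maxZ: Double
    let height: Double  // y-coordinate of the detected surface plane
}

func surfaceHeight(below object: VirtualObject, surfaces: [DetectedSurface]) -> Double {
    // Choose the highest detected surface whose footprint contains the
    // object's horizontal position; fall back to a floor at y = 0.
    surfaces
        .filter { ($0.minX...$0.maxX ~= object.position.x) && ($0.minZ...$0.maxZ ~= object.position.z) }
        .map(\.height)
        .max() ?? 0
}

func step(_ object: inout VirtualObject, surfaces: [DetectedSurface], dt: Double) {
    let fallSpeed = -9.8                        // stand-in simulated-gravity velocity along -y
    let floor = surfaceHeight(below: object, surfaces: surfaces)
    let newY = object.position.y + fallSpeed * dt
    object.position.y = max(newY, floor)        // constrain the animated movement at the surface
}
```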
Abstract:
A computer system displays, in a first viewing mode, a simulated environment that is oriented relative to a physical environment of the computer system. In response to detecting a first change in attitude, the computer system changes an appearance of a first virtual user interface object so as to maintain a fixed spatial relationship between the first virtual user interface object and the physical environment. The computer system detects a gesture. In response to detecting a second change in attitude, in accordance with a determination that the gesture met mode change criteria, the computer system transitions from displaying the simulated environment in the first viewing mode to displaying the simulated environment in a second viewing mode. Displaying the simulated environment in the second viewing mode includes forgoing changing the appearance of the first virtual user interface object to maintain the fixed spatial relationship.
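A minimal Swift sketch of the two viewing modes, using hypothetical names (ViewingMode, VirtualUIObject, handleAttitudeChange are assumptions, not terms from the abstract): in the first mode the object's displayed orientation is counter-rotated to cancel attitude changes so it stays fixed relative to the physical environment; in the second mode that compensation is forgone.

```swift
enum ViewingMode { case firstViewingMode, secondViewingMode }

struct Attitude { var yaw: Double; var pitch: Double; var roll: Double }

struct VirtualUIObject {
    var displayedYaw = 0.0, displayedPitch = 0.0, displayedRoll = 0.0
}

func handleAttitudeChange(_ delta: Attitude,
                          mode: ViewingMode,
                          object: inout VirtualUIObject) {
    switch mode {
    case .firstViewingMode:
        // Counter-rotate by the attitude change to keep the object's
        // spatial relationship with the physical environment fixed.
        object.displayedYaw   -= delta.yaw
        object.displayedPitch -= delta.pitch
        object.displayedRoll  -= delta.roll
    case .secondViewingMode:
        // Second viewing mode: forgo the compensation, so the object no
        // longer tracks the physical environment.
        break
    }
}
```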
Abstract:
A computer system displays a representation of a field of view of one or more cameras that is updated with changes in the field of view. In response to a request to add an annotation, the representation of the field of view of the camera(s) is replaced with a still image of the field of view of the camera(s). An annotation is received on a portion of the still image that corresponds to a portion of a physical environment captured in the still image. The still image is replaced with the representation of the field of view of the camera(s). An indication of a current spatial relationship of the camera(s) relative to the portion of the physical environment is displayed or not displayed based on a determination of whether the portion of the physical environment captured in the still image is currently within the field of view of the camera(s).
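A minimal Swift sketch of the visibility check, with hypothetical WorldPoint and CameraFieldOfView types and a crude box test standing in for a real frustum test; it assumes one plausible reading of the abstract, namely that the indication is shown when the annotated portion is not currently in the field of view.

```swift
struct WorldPoint { var x: Double; var y: Double; var z: Double }

struct CameraFieldOfView {
    // Axis-aligned box used here as a simplified stand-in for the cameras'
    // actual viewing frustum.
    var min: WorldPoint
    var max: WorldPoint
    func contains(_ p: WorldPoint) -> Bool {
        (min.x...max.x ~= p.x) && (min.y...max.y ~= p.y) && (min.z...max.z ~= p.z)
    }
}

func shouldShowSpatialRelationshipIndicator(annotatedPoint: WorldPoint,
                                            currentFOV: CameraFieldOfView) -> Bool {
    // Hide the indicator when the annotated portion of the physical
    // environment is already within the live camera view.
    !currentFOV.contains(annotatedPoint)
}
```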
Abstract:
Techniques are disclosed for stabilizing a stream of spherical images captured by an image capture device to produce a stabilized spherical video sequence. The rotation of the image capture device during capture may be corrected in one or more desired axial directions in a way that is agnostic to the translation of the image capture device. The rotation of the image capture device may also be corrected in one or more desired axial directions in a way that is aware of the translation of the image capture device. For example, the assembled output spherical video sequence may be corrected to maintain the horizon of the scene at a constant location, regardless of the translation of the image capture device (i.e., a ‘translation-agnostic’ correction), while simultaneously being corrected to maintain the yaw of the scene in the direction of the image capture device's translation through three-dimensional space (i.e., a ‘translation-aware’ correction).
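A minimal Swift sketch of the two corrections, using hypothetical FramePose and CorrectionRotation types and Euler angles rather than full spherical-image math: pitch and roll are cancelled regardless of translation (translation-agnostic horizon leveling), while yaw is steered toward the camera's direction of travel derived from its translation (translation-aware correction).

```swift
import Foundation

struct FramePose {
    var yaw: Double, pitch: Double, roll: Double    // capture-device rotation (radians)
    var position: (x: Double, y: Double, z: Double) // capture-device translation
}

struct CorrectionRotation { var yaw: Double; var pitch: Double; var roll: Double }

func stabilizationCorrection(current: FramePose, previous: FramePose) -> CorrectionRotation {
    // Translation-agnostic: cancel pitch and roll entirely so the horizon
    // stays at a constant location in the output sphere.
    let pitchCorrection = -current.pitch
    let rollCorrection  = -current.roll

    // Translation-aware: derive the heading of the camera's motion through
    // three-dimensional space and rotate the output yaw toward that direction.
    let dx = current.position.x - previous.position.x
    let dz = current.position.z - previous.position.z
    let travelHeading = atan2(dx, dz)
    let yawCorrection = travelHeading - current.yaw

    return CorrectionRotation(yaw: yawCorrection, pitch: pitchCorrection, roll: rollCorrection)
}
```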
Abstract:
Systems and processes for operating an intelligent automated assistant are provided. In one example process, a speech input is received from a user. In response to determining that the speech input corresponds to a user intent of obtaining information associated with a user experience of the user, one or more parameters referencing a user experience of the user are identified. Metadata associated with the referenced user experience is obtained from an experiential data structure. Based on the metadata, one or more media items associated with the referenced user experience are retrieved. The one or more media items are then output together.
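A minimal Swift sketch of the retrieval step, with hypothetical ExperienceMetadata, MediaItem, and ExperienceStore types standing in for the experiential data structure: parameters extracted from the speech input select matching experiences, and their metadata's media identifiers resolve to the media items that are output together.

```swift
import Foundation

struct ExperienceMetadata {
    let title: String
    let date: Date
    let location: String
    let mediaItemIDs: [String]
}

struct MediaItem { let id: String; let url: URL }

struct ExperienceStore {
    var experiences: [ExperienceMetadata]
    var mediaLibrary: [String: MediaItem]

    func mediaItems(location: String?, date: Date?) -> [MediaItem] {
        // Match experiences against the parameters referenced in the speech
        // input, then resolve each media identifier in the metadata.
        experiences
            .filter { experience in
                (location == nil || experience.location == location) &&
                (date == nil || Calendar.current.isDate(experience.date, inSameDayAs: date!))
            }
            .flatMap(\.mediaItemIDs)
            .compactMap { mediaLibrary[$0] }
    }
}
```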
Abstract:
A computer system concurrently displays, in an augmented reality environment, a representation of at least a portion of a field of view of one or more cameras that includes a respective physical object, which is updated as contents of the field of view change; and a respective virtual user interface object, at a respective location in the virtual user interface determined based on the location of the respective physical object in the field of view. While detecting an input at a location that corresponds to the displayed respective virtual user interface object, in response to detecting movement of the input relative to the respective physical object in the field of view of the one or more cameras, the system adjusts an appearance of the respective virtual user interface object in accordance with a magnitude of movement of the input relative to the respective physical object.
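A minimal Swift sketch of the appearance adjustment, with hypothetical names (ARVirtualObject, updateAppearance) and scale chosen as the adjusted appearance property: the input's movement is measured relative to the physical object's on-screen location, and the magnitude of that relative movement drives the adjustment.

```swift
import Foundation

struct ScreenPoint { var x: Double; var y: Double }

struct ARVirtualObject {
    var scale = 1.0
}

func updateAppearance(of object: inout ARVirtualObject,
                      inputLocation: ScreenPoint,
                      previousInputLocation: ScreenPoint,
                      physicalObjectLocation: ScreenPoint) {
    // Measure the input's movement relative to the physical object, so the
    // adjustment stays anchored to that object as the field of view changes.
    let previousDistance = hypot(previousInputLocation.x - physicalObjectLocation.x,
                                 previousInputLocation.y - physicalObjectLocation.y)
    let currentDistance  = hypot(inputLocation.x - physicalObjectLocation.x,
                                 inputLocation.y - physicalObjectLocation.y)
    let magnitude = currentDistance - previousDistance

    // Adjust the virtual object's appearance in proportion to that magnitude.
    object.scale = max(0.1, object.scale + magnitude * 0.01)
}
```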
Abstract:
An electronic device includes a touch-sensitive surface, a display, and a camera sensor. The device displays a message region for displaying a message conversation and receives a request to add media to the message conversation. Responsive to receiving the request, the device displays a media selection interface concurrently with at least a portion of the message conversation. The media selection interface includes a plurality of affordances for selecting media for addition to the message conversation, wherein the plurality of affordances includes a live preview affordance, at least a subset of the plurality of affordances includes thumbnail representations of media available for adding to the message conversation, and the live preview affordance is associated with a live camera preview. Responsive to detecting selection of the live preview affordance, the device captures a new image based on the live camera preview and selects the new image for addition to the message conversation.
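A minimal Swift sketch of the media selection interface, with hypothetical MediaAffordance and MessageComposer types: thumbnail affordances add existing media, while selecting the live preview affordance captures a new image from the live camera preview and selects it for the conversation.

```swift
struct Media { let name: String }

enum MediaAffordance {
    case thumbnail(Media)     // existing media available for adding
    case livePreview          // affordance associated with the live camera preview
}

struct MessageComposer {
    var selectedMedia: [Media] = []
    var captureFromLivePreview: () -> Media   // stand-in for the camera capture

    mutating func didSelect(_ affordance: MediaAffordance) {
        switch affordance {
        case .thumbnail(let media):
            // Add the chosen existing media item to the conversation.
            selectedMedia.append(media)
        case .livePreview:
            // Capture a new image based on the live camera preview and
            // select it for addition to the message conversation.
            selectedMedia.append(captureFromLivePreview())
        }
    }
}
```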
Abstract:
An event can be detected by an input device. The event may be determined to be a triggering event by comparing the event to a group of triggering events. A first prediction model corresponding to the event is then selected. Contextual information about the computing device, specifying one or more properties of the device in a first context, is then received, and a set of one or more applications is identified. The set of one or more applications may have at least a threshold probability of being accessed by the user when the event occurs in the first context. Thereafter, a user interface is provided to the user for interacting with the set of one or more applications.
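A minimal Swift sketch of the suggestion flow, with hypothetical names (PredictionModel, appsToSuggest) and an arbitrary example threshold: the event is checked against the group of triggering events, the corresponding prediction model scores candidate applications for the current context, and only applications at or above the threshold probability are surfaced.

```swift
struct Context { let locationKind: String; let timeOfDay: String }

struct PredictionModel {
    // Stand-in for a learned model: probability of each app being accessed
    // when this event occurs in a given context.
    var probability: (String, Context) -> Double
}

func appsToSuggest(event: String,
                   triggeringEvents: Set<String>,
                   models: [String: PredictionModel],
                   context: Context,
                   candidateApps: [String],
                   threshold: Double = 0.25) -> [String] {
    // Proceed only when the event matches a known triggering event and a
    // prediction model corresponding to the event exists.
    guard triggeringEvents.contains(event), let model = models[event] else { return [] }

    // Keep the applications with at least the threshold probability of being
    // accessed when this event occurs in the current context.
    return candidateApps.filter { model.probability($0, context) >= threshold }
}
```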
Abstract:
A first device sends a request to a second device to initiate a shared annotation session. In response to receiving acceptance of the request, a first prompt to move the first device toward the second device is displayed. In accordance with a determination that connection criteria for the first device and the second device are met, a representation of a field of view of the camera(s) of the first device is displayed in the shared annotation session with the second device. During the shared annotation session, one or more first annotations are displayed via the display of the first device, and one or more second virtual annotations, corresponding to annotation input directed by the second device to a respective location in the physical environment, are displayed via the display of the first device, provided that the respective location is included in the field of view of the camera(s) of the first device.
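A minimal Swift sketch of the display condition for remote annotations, with hypothetical RemoteAnnotation and PhysicalLocation types: an annotation from the second device carries the physical-environment location it is directed to, and the first device displays it only when that location is inside its cameras' current field of view.

```swift
struct PhysicalLocation { var x: Double; var y: Double; var z: Double }

struct RemoteAnnotation {
    let author: String
    let location: PhysicalLocation
}

func annotationsToDisplay(received: [RemoteAnnotation],
                          isInFieldOfView: (PhysicalLocation) -> Bool) -> [RemoteAnnotation] {
    // Keep the second device's annotations whose physical locations are
    // included in the first device's current field of view.
    received.filter { isInFieldOfView($0.location) }
}
```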