Abstract:
Electronic devices with improved methods and interfaces for messaging are disclosed, including improved ways to: acknowledge messages; edit previously sent messages; express what a user is trying to communicate; display private messages; synchronize viewing of content between users; incorporate handwritten inputs; quickly locate content in a message transcript; integrate a camera; integrate search and sharing; integrate interactive applications; integrate stickers; make payments; interact with avatars; make suggestions; navigate among interactive applications; manage interactive applications; translate foreign language text; combine messages into a group; and flag messages.
Abstract:
Systems and techniques are disclosed for providing media content stored on a mobile device to a media client for presentation on a display device, and for controlling that presentation from the mobile device. Data can be provided from the mobile device to the media client identifying the location of the media content and a playback time. Based on the data, the media client can obtain a portion of the media content associated with the playback time. Playback of the media content on the display device can also be controlled by a user of the mobile device.
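The handoff described above can be illustrated with a minimal sketch. All names here (PlaybackRequest, MediaClient, the URL) are hypothetical illustrations, not the patented implementation: the mobile device sends the client the content's location and a playback time, and the client resumes from that position and accepts playback commands.

```python
from dataclasses import dataclass

@dataclass
class PlaybackRequest:
    content_url: str      # where the media client can fetch the content
    playback_time: float  # seconds into the content to resume from

class MediaClient:
    def __init__(self):
        self.current_url = None
        self.position = 0.0
        self.playing = False

    def handle(self, req: PlaybackRequest):
        # Obtain the portion of the media associated with the playback time.
        self.current_url = req.content_url
        self.position = req.playback_time
        self.playing = True

    def control(self, command: str):
        # Playback on the display device is controlled from the mobile device.
        if command == "pause":
            self.playing = False
        elif command == "play":
            self.playing = True

# The mobile device points the client at the content and a resume position,
# then issues a playback command.
client = MediaClient()
client.handle(PlaybackRequest("http://device.local/movie.mp4", 42.0))
client.control("pause")
```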
Abstract:
The present disclosure generally relates to generating and modifying virtual avatars. An electronic device having a camera and a display apparatus displays a virtual avatar that changes appearance in response to changes in a face in a field of view of the camera. In response to detecting changes in one or more physical features of the face in the field of view of the camera, the electronic device modifies one or more features of the virtual avatar.
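The face-to-avatar behavior above amounts to a mapping from detected physical features to corresponding avatar features. A minimal sketch, with entirely hypothetical feature names and a plain dictionary standing in for the avatar model:

```python
def modify_avatar(avatar: dict, detected_face: dict) -> dict:
    """Update avatar features to track detected changes in facial features."""
    updated = dict(avatar)
    for feature, value in detected_face.items():
        # Each detected physical-feature change drives the matching avatar feature.
        if feature in updated:
            updated[feature] = value
    return updated

# Avatar features before and after a detected change in the camera's view.
avatar = {"mouth_open": 0.0, "eyebrow_raise": 0.0, "head_tilt": 0.0}
detected = {"mouth_open": 0.8, "eyebrow_raise": 0.3}
avatar = modify_avatar(avatar, detected)
```

Features not present in the detected change (here, head_tilt) are left as-is.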
Abstract:
An electronic device displays a messaging user interface of a messaging application, including a message-input area and a conversation transcript of a messaging session between a user of the electronic device and at least one other user; the conversation transcript includes a plurality of messages and a plurality of message regions, each message region containing a respective message of the plurality of messages. In response to a first input corresponding to a first respective message in the conversation transcript, the device displays an indication that the first respective message has been selected. In response to one or more second inputs, including message composition inputs, the device displays, in a second message region in the conversation transcript, a second message corresponding to the message composition inputs, and displays a grouping indicia that connects the first respective message region with the second message region.
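The select-then-group flow above can be sketched with a simple transcript model. All names are hypothetical, and a shared group id stands in for the visual grouping indicia:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class MessageRegion:
    text: str
    selected: bool = False
    group_id: Optional[int] = None  # regions sharing an id are visually connected

class Transcript:
    def __init__(self, messages):
        self.regions = [MessageRegion(m) for m in messages]
        self._next_group = 0

    def select(self, index: int):
        # First input: indicate that the first respective message is selected.
        self.regions[index].selected = True

    def compose_grouped(self, index: int, text: str) -> MessageRegion:
        # Second inputs: add a new message region and a grouping indicia
        # connecting it to the previously selected region.
        gid = self._next_group
        self._next_group += 1
        self.regions[index].group_id = gid
        region = MessageRegion(text, group_id=gid)
        self.regions.append(region)
        return region

transcript = Transcript(["Hi!", "Want lunch tomorrow?"])
transcript.select(1)
transcript.compose_grouped(1, "Yes, noon works.")
```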
Abstract:
The disclosed embodiments provide a system that facilitates interaction between an electronic device and a remote display. The system includes a first application and an encoding apparatus on the electronic device, and a second application and a decoding apparatus on the remote display. The encoding apparatus obtains graphical output for a display of the electronic device and a first set of touch inputs associated with the graphical output from a first touch screen. Next, the encoding apparatus encodes the graphical output, and the first application transmits the graphical output and the first set of touch inputs to the remote display. Upon receiving the graphical output and the first set of touch inputs at the remote display, the decoding apparatus decodes the graphical output. The second application then uses the graphical output and a visual representation of the first set of touch inputs to drive the remote display.
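The encode, transmit, and decode round trip described above can be sketched as follows. This is a minimal illustration with hypothetical names, using base64 and JSON as stand-ins for the actual encoding apparatus and transport:

```python
import base64
import json

def encode_frame(pixels: bytes) -> str:
    # Encoding apparatus: encode the graphical output for transport.
    return base64.b64encode(pixels).decode("ascii")

def transmit(frame: str, touches: list) -> str:
    # First application: bundle the encoded output with the touch inputs.
    return json.dumps({"frame": frame, "touches": touches})

def receive_and_decode(payload: str):
    # Decoding apparatus on the remote display: recover the graphical output
    # and the touch inputs used to render a visual touch representation.
    msg = json.loads(payload)
    pixels = base64.b64decode(msg["frame"])
    return pixels, msg["touches"]

# The device sends a frame plus one touch point; the remote display decodes
# the frame and overlays the touch location.
pixels = b"\x00\x01\x02\x03"
payload = transmit(encode_frame(pixels), [{"x": 10, "y": 20}])
decoded_pixels, touches = receive_and_decode(payload)
```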