Abstract:
An electronic device displays a representative image. The representative image is one image in a sequence of images that includes images acquired after the representative image. While displaying the representative image, the device detects a contact with a first intensity. In response to detecting the contact, the device advances through the images acquired after the representative image at a rate based on the first intensity. When the device detects a decrease in intensity of the contact to a second intensity that is less than the first intensity, the device either continues to advance through the one or more images at a slower rate or reverses direction, depending at least in part on the second intensity relative to a threshold intensity.
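The intensity-to-playback mapping described in this abstract could be sketched, in simplified form, as the following hypothetical function. The threshold value, the function name, and the linear rate model are illustrative assumptions, not details taken from the patent:

```python
# Hypothetical sketch: map a contact intensity to a signed playback rate.
# THRESHOLD and the linear mapping are illustrative assumptions.
THRESHOLD = 0.5  # threshold intensity separating forward from reverse playback

def playback_rate(intensity, threshold=THRESHOLD):
    """Return a signed playback rate for the image sequence.

    At or above the threshold, the sequence advances forward, faster with
    higher intensity. Below the threshold, playback reverses, faster the
    further the intensity falls below the threshold. A decrease from a
    first intensity to a lower second intensity therefore either slows
    forward playback or reverses it, depending on where the second
    intensity sits relative to the threshold.
    """
    if intensity >= threshold:
        return intensity - threshold   # forward: rate grows with intensity
    return -(threshold - intensity)    # backward: rate grows as intensity drops
```

A decrease from, say, 0.8 to 0.6 would keep forward playback at a slower rate, while a decrease to 0.2 would reverse direction, matching the two behaviors the abstract distinguishes.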
Abstract:
An electronic device with a display, a touch-sensitive surface, and one or more sensors to detect intensity of contacts: displays an application launching user interface; detects a first touch input that includes detecting a first contact at a location on the touch-sensitive surface that corresponds to a first application icon for launching a first application that is associated with one or more corresponding quick actions; in response to detecting the first touch input, in accordance with a determination that the first touch input meets one or more application-launch criteria, launches the first application; and, in accordance with a determination that the first touch input meets one or more quick-action-display criteria, which include a criterion that is met when the characteristic intensity of the first contact increases above a respective intensity threshold, concurrently displays one or more quick action objects associated with the first application along with the first application icon.
Abstract:
Techniques and systems for centralized access to multimedia content stored on or available to a computing device are disclosed. The centralized access can be provided by a media control interface that receives user inputs and interacts with media programs resident on the computing device to produce graphical user interfaces that can be presented on a display device.
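The centralized media control interface described above might be modeled, very roughly, as a dispatcher that registers resident media programs and forwards user inputs to them. The class and method names here are hypothetical placeholders, not part of the disclosure:

```python
class MediaControlInterface:
    """Hypothetical sketch of a centralized media control interface.

    It receives user inputs and forwards them to media programs resident
    on the device; each program's response could then be used to build a
    graphical user interface for the display device.
    """

    def __init__(self):
        self.programs = {}  # program name -> callable handling commands

    def register(self, name, program):
        """Make a resident media program reachable through the interface."""
        self.programs[name] = program

    def handle_input(self, name, command):
        """Dispatch a user input to the named media program."""
        return self.programs[name](command)
```

For example, registering a music program and dispatching a "play" input would return that program's response for rendering in the GUI.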
Abstract:
A device can receive live video of a real-world, physical environment on a touch-sensitive surface. One or more objects can be identified in the live video. An information layer can be generated related to the objects. In some implementations, the information layer can include annotations made by a user through the touch-sensitive surface. The information layer and live video can be combined in a display of the device. Data can be received from one or more onboard sensors indicating that the device is in motion. The sensor data can be used to synchronize the live video and the information layer as the perspective of the video camera view changes due to the motion. The live video and information layer can be shared with other devices over a communication link.
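The synchronization step above, keeping the information layer registered with the live video as the camera moves, could be sketched with a deliberately simplified 2-D translation model. A real system would derive a full camera pose from the motion sensors; the function and parameter names below are illustrative assumptions:

```python
def reproject_annotations(annotations, dx, dy):
    """Shift annotation screen positions to compensate for camera motion.

    `annotations` is a list of (x, y) screen coordinates in the information
    layer; `dx` and `dy` are the apparent scene shift (in pixels) inferred
    from onboard motion-sensor data. Subtracting the shift keeps each
    annotation anchored to its object as the camera view changes. This is
    a pure-translation simplification of the synchronization the abstract
    describes.
    """
    return [(x - dx, y - dy) for (x, y) in annotations]
```

For instance, if sensor data indicates the view shifted 2 pixels right and 3 pixels down, an annotation at (10, 10) would be redrawn at (8, 7) to stay over the same real-world object.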
Abstract:
An application launching user interface that includes a plurality of application icons for launching corresponding applications is displayed. A first touch input is detected on a first application icon of the plurality of application icons. The first application icon is for launching a first application that is associated with one or more corresponding quick actions. If the first touch input meets one or more application-launch criteria which require that the first touch input has ended without having met a first input threshold, the first application is launched in response to the first touch input. If the first touch input meets one or more quick-action-display criteria which require that the first touch input meets the first input threshold, one or more quick action objects associated with the first application are concurrently displayed along with the first application icon without launching the first application, in response to the first touch input.
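The branching between the application-launch criteria and the quick-action-display criteria described above could be sketched as the following hypothetical decision function. The intensity threshold and all names are illustrative, not from the patent:

```python
def handle_icon_touch(ended_without_threshold, peak_intensity, threshold=1.0):
    """Decide the outcome of a touch input on an application icon.

    If the contact's characteristic intensity rises above the threshold,
    the quick-action-display criteria are met: quick action objects are
    shown alongside the icon without launching the application. If the
    input instead ends without ever meeting the threshold, the
    application-launch criteria are met and the application launches.
    Otherwise the input is still in progress.
    """
    if peak_intensity > threshold:
        return "show_quick_actions"  # display quick actions, do not launch
    if ended_without_threshold:
        return "launch_app"          # ordinary tap: launch the application
    return "pending"                 # input ongoing, below threshold
```

A light tap that lifts off maps to launching the application, while a press that exceeds the threshold maps to displaying the quick action objects, mirroring the two criteria in the abstract.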