Abstract:
A device with a display and, optionally, a touch-sensitive surface detects a first input corresponding to a request to share first content from a first application while displaying the first application on the display. In response to detecting the first input, the device displays a sharing interface that includes a plurality of options for sharing the first content. While displaying the sharing interface, the device detects selection of an affordance in the sharing interface. In accordance with a determination that the affordance is a respective user-first sharing option for a respective user, the device initiates a process for sharing the first content with the respective user. In accordance with a determination that the affordance is a protocol-first sharing option for a respective protocol, the device initiates a process for sharing the first content using the respective protocol.
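A minimal sketch of the two sharing flows described above, in plain Swift; the type and function names (SharingAffordance, handleSelection) are hypothetical, not taken from the patent.

    // Sketch: dispatching a sharing-interface selection to either a
    // user-first or protocol-first sharing flow. All names are hypothetical.
    enum SharingAffordance {
        case userFirst(user: String)        // e.g. a contact tile for a nearby user
        case protocolFirst(proto: String)   // e.g. "Mail", "Messages", "Bluetooth"
    }

    func handleSelection(_ affordance: SharingAffordance, content: String) {
        switch affordance {
        case .userFirst(let user):
            // Share directly with the chosen user; the device picks a protocol.
            print("Sharing \(content) with user \(user)")
        case .protocolFirst(let proto):
            // Share via the chosen protocol; the user picks a recipient next.
            print("Sharing \(content) using protocol \(proto)")
        }
    }

    handleSelection(.userFirst(user: "Alice"), content: "photo.jpg")
    handleSelection(.protocolFirst(proto: "Mail"), content: "photo.jpg")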
Abstract:
Some embodiments of the invention provide a novel prediction engine that (1) can formulate predictions about current or future destinations and/or routes to such destinations for a user, and (2) can relay information to the user about these predictions. In some embodiments, this engine includes a machine-learning engine that facilitates the formulation of predicted future destinations and/or future routes to destinations based on stored, user-specific data. The user-specific data is different in different embodiments. In some embodiments, the stored, user-specific data includes data about any combination of the following: (1) previous destinations traveled to by the user, (2) previous routes taken by the user, (3) locations of calendared events in the user's calendar, (4) locations of events for which the user has electronic tickets, and (5) addresses parsed from recent e-mails and/or messages sent to the user. In some embodiments, the prediction engine only relies on user-specific data stored on the device on which this engine executes. Alternatively, in other embodiments, it relies only on user-specific data stored outside of the device by external devices/servers. In still other embodiments, the prediction engine relies on user-specific data stored both by the device and by other devices/servers.
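A minimal sketch of how such a prediction engine might combine the listed user-specific data sources into a ranked destination list; the source names and weights are illustrative assumptions, not values from the patent.

    // Sketch: ranking candidate destinations from several user-specific
    // data sources. Weights are illustrative assumptions.
    let previousDestinations = ["Office": 12, "Gym": 5]   // visit counts
    let calendarLocations    = ["Cafe Centro"]            // upcoming calendared events
    let parsedAddresses      = ["123 Main St"]            // from recent e-mails/messages

    var candidates: [String: Double] = [:]
    for (place, visits) in previousDestinations {
        candidates[place, default: 0] += Double(visits) * 1.0  // habitual travel
    }
    for place in calendarLocations {
        candidates[place, default: 0] += 10.0                  // calendared events rank high
    }
    for place in parsedAddresses {
        candidates[place, default: 0] += 3.0                   // weaker signal
    }

    let ranked = candidates.sorted { $0.value > $1.value }
    print("Predicted destinations:", ranked.map { $0.key })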
Abstract:
A mobile device including a touchscreen display presents an image of a three-dimensional object. The display can concurrently present a user interface element that can be in the form of a virtual button. While the device's user touches and maintains fingertip contact with the virtual button via the touchscreen, the mobile device can operate in a special mode in which physical tilting of the mobile device about physical spatial axes causes the mobile device to adjust the presentation of the image of the three-dimensional object on the display, causing the object to be rendered from different viewpoints in the virtual space that the object virtually occupies. The mobile device can detect such physical tilting based on feedback from a gyroscope and accelerometer contained within the device.
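A minimal sketch of the tilt-to-viewpoint mapping, assuming pitch and roll angles already fused from the gyroscope and accelerometer (e.g., by a motion framework); the gain value and the button-gating logic are illustrative assumptions.

    // Sketch: mapping device tilt (pitch/roll in radians) to a virtual-camera
    // orbit around a 3-D object, active only while the virtual button is held.
    struct Viewpoint { var azimuth: Double; var elevation: Double }

    func updateViewpoint(_ vp: inout Viewpoint,
                         pitch: Double, roll: Double,
                         buttonHeld: Bool) {
        guard buttonHeld else { return }   // tilt mode requires maintained contact
        let gain = 1.5                     // how strongly tilt rotates the camera (assumed)
        vp.elevation = pitch * gain        // tilting forward/back orbits vertically
        vp.azimuth   = roll  * gain        // tilting left/right orbits horizontally
    }

    var vp = Viewpoint(azimuth: 0, elevation: 0)
    updateViewpoint(&vp, pitch: 0.2, roll: -0.1, buttonHeld: true)
    print("azimuth:", vp.azimuth, "elevation:", vp.elevation)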
Abstract:
A mobile device including a touchscreen display can detect multiple points of fingertip contact being made against the touchscreen concurrently. The device can distinguish this multi-touch gesture from other gestures based on the duration, immobility, and concurrency of the contacts. In response to detecting such a multi-touch gesture, the device can send a multi-touch event to an application executing on the device. The application can respond to the multi-touch event in a variety of ways. For example, the application can determine a distance of a path in between points on a map that a user has concurrently touched with his fingertips. The application can display this distance to the user.
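A minimal sketch of classifying such a gesture by the duration, immobility, and concurrency of the contacts, then measuring the distance between the touched points; the thresholds and the straight-line distance are illustrative assumptions (a map application could instead measure along a route).

    // Sketch: detecting a stationary multi-touch gesture, then computing the
    // distance between the touched points. Thresholds are assumed values.
    struct Touch {
        let x, y: Double           // contact position (points)
        let start, end: Double     // contact timestamps (seconds)
        let moved: Double          // total movement during the contact (points)
    }

    func isStationaryMultiTouch(_ touches: [Touch],
                                minDuration: Double = 0.5,
                                maxMovement: Double = 10.0) -> Bool {
        guard touches.count >= 2 else { return false }
        let overlapStart = touches.map { $0.start }.max()!
        let overlapEnd   = touches.map { $0.end }.min()!
        return touches.allSatisfy { $0.moved <= maxMovement }  // immobility
            && (overlapEnd - overlapStart) >= minDuration      // concurrent hold
    }

    // Straight-line distance between two concurrently touched points.
    func distance(_ a: Touch, _ b: Touch) -> Double {
        ((a.x - b.x) * (a.x - b.x) + (a.y - b.y) * (a.y - b.y)).squareRoot()
    }

    let t1 = Touch(x: 0, y: 0, start: 0.0, end: 1.0, moved: 2)
    let t2 = Touch(x: 30, y: 40, start: 0.1, end: 1.1, moved: 3)
    if isStationaryMultiTouch([t1, t2]) {
        print("multi-touch event, distance:", distance(t1, t2))  // 50.0
    }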
Abstract:
A multifunction device displays a navigation user interface that includes a navigation bar having a plurality of unit regions and a plurality of subunit regions. Each unit region represents a range of values, and each subunit region represents a subset of a respective range of values. The navigation user interface also includes a content area for displaying content associated with subunit regions. In response to detecting an input that selects a respective subunit region, the multifunction device updates the content area in accordance with the respective selected subunit region. In response to detecting an input that selects a respective unit region, the multifunction device updates the navigation bar to include subunit regions in accordance with the selected unit region and updates the content area in accordance with at least one of the subunit regions in the updated navigation bar.
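A minimal sketch of the unit/subunit navigation model, assuming years as units and months as subunits; the data model and function names are hypothetical.

    // Sketch: units (years) containing subunits (months); selecting a subunit
    // updates only the content area, while selecting a unit rebuilds both.
    struct Unit { let label: String; let subunits: [String] }

    let units = [
        Unit(label: "2009", subunits: (1...12).map { "2009-\($0)" }),
        Unit(label: "2010", subunits: (1...12).map { "2010-\($0)" }),
    ]

    var navigationBar = units[0].subunits   // subunit regions currently shown
    var contentArea   = navigationBar[0]    // content for the selected subunit

    // Selecting a subunit region only updates the content area.
    func selectSubunit(_ index: Int) { contentArea = navigationBar[index] }

    // Selecting a unit region rebuilds the subunit regions and the content area.
    func selectUnit(_ index: Int) {
        navigationBar = units[index].subunits
        contentArea   = navigationBar[0]
    }

    selectSubunit(3); print(contentArea)    // "2009-4"
    selectUnit(1);    print(contentArea)    // "2010-1"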
Abstract:
Among other things, techniques and systems are disclosed for implementing contextual voice commands. On a device, a data item in a first context is displayed. On the device, a physical input selecting the displayed data item in the first context is received. On the device, a voice input that relates the selected data item to an operation in a second context is received. The operation is performed on the selected data item in the second context.
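A minimal sketch of relating a physically selected item in one context to a voice-named operation in a second context; the command table and context names are illustrative assumptions, and speech recognition itself is out of scope.

    // Sketch: a contextual voice command applied to the current selection.
    let selectedItem = "IMG_0042.jpg"       // physically selected in a photos context

    // Voice input already parsed into (operation, target context); assumed form.
    let voiceCommand = (operation: "attach", context: "Mail")

    switch voiceCommand {
    case ("attach", "Mail"):
        print("Opening Mail and attaching \(selectedItem)")
    case ("set as wallpaper", "Settings"):
        print("Setting \(selectedItem) as wallpaper")
    default:
        print("Unrecognized contextual command")
    }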
Abstract:
A method is performed by an electronic device with a display and a touch-sensitive surface. The method includes: displaying a progress icon that indicates a current position within content provided by the electronic device; while providing the content: detecting a contact at a location that corresponds to the progress icon; detecting movement of the contact, wherein movement of the contact comprises a first component of movement on the touch-sensitive surface in a direction corresponding to movement on the display parallel to a first predefined direction and a second component of movement on the touch-sensitive surface in a direction corresponding to movement on the display perpendicular to the first predefined direction; and, while continuing to detect the contact on the touch-sensitive surface, moving the current position within the content at a scrubbing rate, wherein the scrubbing rate decreases as the second component of movement on the touch-sensitive surface increases.
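A minimal sketch of such variable-rate scrubbing, where horizontal drag distance is scaled by a rate that drops as the finger moves perpendicular to the scrubber; the rate tiers and distance thresholds are illustrative assumptions, not values from the patent.

    // Sketch: scrubbing rate as a decreasing function of the perpendicular
    // distance from the progress bar. Tier values are assumed.
    func scrubbingRate(perpendicularDistance: Double) -> Double {
        switch perpendicularDistance {
        case ..<50.0:  return 1.0    // full-speed scrubbing near the progress bar
        case ..<100.0: return 0.5    // half speed
        case ..<150.0: return 0.25   // quarter speed
        default:       return 0.1    // fine scrubbing far from the bar
        }
    }

    // Position change = horizontal movement scaled by the current rate.
    var position = 30.0                                           // seconds into the content
    position += 20.0 * scrubbingRate(perpendicularDistance: 120)  // 20 pt drag at quarter speed
    print(position)                                               // 35.0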
Abstract:
In some embodiments, a device displays content on a touch screen display and detects input by finger gestures. In response to the finger gestures, the device selects content, visually distinguishes the selected content, and/or updates the selected content based on detected input. In some embodiments, the device displays a command display area that includes one or more command icons; detects activation of a command icon in the command display area; and, in response to detecting activation of the command icon in the command display area, performs a corresponding action with respect to the selected content. Exemplary actions include cutting, copying, and pasting content.
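A minimal sketch of dispatching activation of a command icon to an action on the current selection; the names and the string-based document model are illustrative assumptions.

    // Sketch: cut/copy/paste actions driven by command-icon activation.
    import Foundation

    enum CommandIcon { case cut, copy, paste }

    var document  = "Hello world"
    var selection = "world"           // the visually distinguished selected content
    var clipboard = ""

    func activate(_ icon: CommandIcon) {
        switch icon {
        case .cut:
            clipboard = selection
            document = document.replacingOccurrences(of: selection, with: "")
        case .copy:
            clipboard = selection
        case .paste:
            document += clipboard
        }
    }

    activate(.copy)
    activate(.paste)
    print(document)   // "Hello worldworld"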
Abstract:
A portable electronic device (100) displays, on a touch screen display (112), a user interface (600B) for a phone application during a phone call. In response to detecting activation (618) of a menu icon (204) or menu button, the device replaces the UI (600B) for the phone application with a menu of application icons (700A, 700B) while maintaining the phone call. In response to detecting a finger gesture (702, 708) on a non-telephone service application icon (144, 149-2), the device displays a user interface for the non-telephone service application (3000R) while continuing to maintain the phone call; this UI includes a switch application icon (3078) that is not displayed when there is no ongoing phone call. In response to detecting a finger gesture on the switch application icon (3078), the device replaces display of the UI for the non-telephone service application (3000R) with the UI for the phone application (600B) while continuing to maintain the phone call.
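A minimal sketch of the call-preserving application switch, assuming a simple foreground-app state; the state names and the visibility rule for the switch icon are hypothetical.

    // Sketch: switching apps during an active call; the call persists and the
    // non-telephone app shows a switch-to-call affordance only while it does.
    struct DeviceState {
        var callActive: Bool
        var foregroundApp: String
    }

    var state = DeviceState(callActive: true, foregroundApp: "Phone")

    func openApp(_ name: String) {
        state.foregroundApp = name          // the call is maintained throughout
        let showSwitchIcon = state.callActive && name != "Phone"
        print("Showing \(name); switch-to-call icon visible: \(showSwitchIcon)")
    }

    openApp("Mail")     // menu -> non-telephone app, call continues
    openApp("Phone")    // tapping the switch icon returns to the phone UI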