Abstract:
A method includes generating contextual feedback in a neuromorphic model. The neuromorphic model includes one or more assets to be monitored during development of the neuromorphic model. The method further includes displaying an interactive context panel to show a representation based on the contextual feedback.
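The abstract above can be illustrated with a minimal sketch: one contextual-feedback entry is generated per monitored asset, and a text representation of the interactive context panel is built from those entries. All names here (`collect_contextual_feedback`, `render_context_panel`, the `status` field) are hypothetical illustrations, not terms from the disclosure.

```python
def collect_contextual_feedback(assets):
    """Generate one contextual-feedback entry per monitored asset.

    `assets` maps an asset name to a callable that reports its current
    state; both the mapping and the 'status' field are placeholders for
    whatever the neuromorphic model actually monitors.
    """
    return [{"asset": name, "status": check()} for name, check in assets.items()]


def render_context_panel(feedback):
    """Build a simple text representation of the interactive context panel."""
    return "\n".join(f"{entry['asset']}: {entry['status']}" for entry in feedback)


feedback = collect_contextual_feedback({"layer1": lambda: "ok"})
panel = render_context_panel(feedback)
```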
Abstract:
Systems and methods are provided for trimming content for projection within the bounds of a projection target. The systems and methods trim the content for projection based on one or more characteristics of the projection target, including a shape of, an outline of, and a distance to the projection target. Moreover, the systems and methods designate void areas where no content will be projected based on the one or more characteristics, and the void areas are generated or otherwise projected along with the content so that the content is projected onto the projection target and the void areas are projected outside of it, such that the projected content does not significantly spill onto surfaces or objects outside of the projection target.
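The trimming described above can be sketched as a per-pixel mask operation: content pixels that fall inside the projection target's outline are kept, and every pixel outside it is replaced with a void value so nothing spills past the target. The grid/mask representation and the function name are assumptions for illustration, not the disclosed implementation.

```python
def trim_for_projection(content, target_mask, void=0):
    """Trim content to a projection target.

    `content` and `target_mask` are equally sized 2-D grids; mask cells that
    are True lie inside the target's outline. Pixels outside the target are
    replaced with `void` (e.g. black), so the projector emits nothing there.
    """
    return [
        [pixel if inside else void
         for pixel, inside in zip(content_row, mask_row)]
        for content_row, mask_row in zip(content, target_mask)
    ]


content = [[1, 2], [3, 4]]
mask = [[True, False], [False, True]]
framed = trim_for_projection(content, mask)  # → [[1, 0], [0, 4]]
```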
Abstract:
A user interface transition between a camera view and a map view displayed on a mobile platform is provided so as to present a clear visual connection between the orientation in the camera view and the orientation in the map view. The user interface transition may be in response to a request to change from the camera view to the map view or vice-versa. Augmentation overlays for the camera view and map view may be produced based on, e.g., the line of sight of the camera or identifiable environmental characteristics visible in the camera view and the map view. One or more different augmentation overlays are also produced and displayed to provide the visual connection between the camera view and map view augmentation overlays. For example, a plurality of augmentation overlays may be displayed consecutively to clearly illustrate the changes between the camera view and map view augmentation overlays.
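The "plurality of augmentation overlays displayed consecutively" can be sketched as interpolating the overlay's heading between the camera-view orientation and the map-view orientation, taking the shortest rotation between the two. Representing an overlay by a single heading in degrees is an assumption for illustration.

```python
def transition_overlays(camera_heading, map_heading, steps):
    """Produce intermediate overlay headings (degrees, 0-360) that are shown
    consecutively to visually connect the camera-view and map-view overlays."""
    # Rotate the short way around, e.g. 350° -> 10° passes through 0°.
    delta = (map_heading - camera_heading + 180) % 360 - 180
    return [round((camera_heading + delta * i / steps) % 360, 1)
            for i in range(steps + 1)]


headings = transition_overlays(350, 10, 4)  # → [350.0, 355.0, 0.0, 5.0, 10.0]
```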
Abstract:
Disclosed is a method and apparatus for biometric-based media data sharing. The method may include initiating, in a first device, biometric data capture of a user based, at least in part, on playback of media data by the first device. The method may also include determining that the captured biometric data of the user does not correspond with biometric data associated with an authorized user of the first device. Furthermore, the method may include, in response to the first device's failure to match the captured biometric data, establishing that the user is an authorized user of a second device based, at least in part, on the captured biometric data. The method may also include sharing the media data with the second device.
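The decision flow above can be sketched as: try to match the captured biometric against the first device's authorized users, and on failure fall back to the second device's users before sharing. The exact-equality comparison below is a stand-in for a real biometric matcher, and all function and outcome names are hypothetical.

```python
def resolve_sharing(captured, first_users, second_users):
    """Decide where media may be shared based on a captured biometric.

    `first_users` / `second_users` map user names to enrolled biometric
    templates. Equality comparison stands in for a real biometric matcher.
    """
    def matches(users):
        return any(captured == template for template in users.values())

    if matches(first_users):
        return "play_on_first"        # user is already authorized locally
    if matches(second_users):
        return "share_with_second"    # fall back: share media with device 2
    return None                       # no authorized identity found


outcome = resolve_sharing("face-1", {"alice": "face-0"}, {"bob": "face-1"})
# → "share_with_second"
```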
Abstract:
Methods, apparatuses, systems, and computer-readable media are provided for taking great pictures at an event or an occasion. The techniques described in embodiments of the invention are particularly useful for tracking an object, such as a person dancing or a soccer ball in a soccer game, and automatically taking pictures of the object during the event. The user may switch the device to an Event Mode that allows the user to delegate some of the picture-taking responsibilities to the device during an event. In the Event Mode, the device identifies objects of interest for the event. Also, the user may select the objects of interest from the view displayed by the display unit. The device may also have pre-programmed objects, including objects that the device detects. In addition, the device may also detect people from the user's social networks by retrieving images from social networks like Facebook® and LinkedIn®.
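The Event Mode described above draws objects of interest from three sources (user selections, pre-programmed objects, and social-network contacts) and triggers a picture when a tracked object appears in frame. A minimal sketch of that bookkeeping, with hypothetical function names and string labels standing in for detected objects:

```python
def objects_of_interest(user_selected, preprogrammed, social_contacts):
    """Merge the three sources of tracked objects for Event Mode,
    de-duplicating while preserving first-seen order."""
    seen, merged = set(), []
    for obj in [*user_selected, *preprogrammed, *social_contacts]:
        if obj not in seen:
            seen.add(obj)
            merged.append(obj)
    return merged


def should_capture(frame_objects, tracked):
    """Trigger an automatic picture when any tracked object is in the frame."""
    return any(obj in tracked for obj in frame_objects)


tracked = objects_of_interest(["ball"], ["ball", "dancer"], ["alice"])
# → ["ball", "dancer", "alice"]
capture = should_capture(["tree", "ball"], set(tracked))  # → True
```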