Abstract:
A unique storytelling system is disclosed herein in which a plug-and-play hardware platform with an integrated augmented reality (AR) environment brings stories to life. The storytelling system includes an electronics toolkit and a structure toolkit which enable the user to prototype interactive physical devices for storytelling. The interactive physical devices crafted by the user are easily programmed using a simple visual programming environment of the storytelling system. Additionally, a story event planning tool of the storytelling system enables the user to create customized interactions between the interactive physical devices and virtual AR objects, such as virtual AR avatars or the like. Finally, an AR storytelling application of the storytelling system utilizes an AR device, such as a smartphone, to bring the interactive physical devices to life and enable the user to tell stories using the custom interactions that he or she created.
Abstract:
A method for hand pose identification in an automated system includes providing depth map data of a hand of a user to a first neural network trained to classify features corresponding to a joint angle of a wrist in the hand to generate a first plurality of activation features and performing a first search in a predetermined plurality of activation features stored in a database in a memory to identify a first plurality of hand pose parameters for the wrist associated with predetermined activation features in the database that are nearest neighbors to the first plurality of activation features. The method further includes generating a hand pose model corresponding to the hand of the user based on the first plurality of hand pose parameters and performing an operation in the automated system in response to input from the user based on the hand pose model.
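The retrieval step described above — matching the network's activation features against predetermined activation features in a database to look up associated hand pose parameters — can be sketched as a nearest-neighbor search. This is a minimal illustration only: the feature dimensionality, database contents, neighbor count, and averaging of retrieved parameters are all assumptions, not the patented implementation.

```python
import numpy as np

# Hypothetical database of predetermined activation features paired with
# wrist pose parameters (randomly generated here purely for illustration).
rng = np.random.default_rng(0)
db_features = rng.normal(size=(1000, 128))   # predetermined activation features
db_poses = rng.normal(size=(1000, 6))        # hand pose parameters per entry

def nearest_pose_params(query_features, k=3):
    """Return pose parameters blended from the k nearest database entries (L2 distance)."""
    dists = np.linalg.norm(db_features - query_features, axis=1)
    idx = np.argsort(dists)[:k]              # indices of the k nearest neighbors
    return db_poses[idx].mean(axis=0)        # simple average of their parameters

# Activation features produced by the trained network for one depth map.
query = rng.normal(size=128)
params = nearest_pose_params(query)          # six illustrative wrist parameters
```

The retrieved parameters would then seed the hand pose model generation described in the abstract; a real system would likely use an approximate nearest-neighbor index rather than a brute-force scan.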
Abstract:
A virtual reality system, comprising an electronic 2D interface having a depth sensor, the depth sensor allowing a user to provide input to the system to instruct the system to create a virtual 3D object in a real-world environment. The virtual 3D object is created with reference to at least one external physical object in the real-world environment, with the external physical object concurrently displayed with the virtual 3D object by the interface. The virtual 3D object is based on physical artifacts of the external physical object.
Abstract:
A collaborative 3D modeling system, comprising a computer processing unit, a digital memory, and an electronic display, the computer processing unit and the digital memory configured to provide, using the electronic display, 3D model representations of a first plurality of versions of an object component for a first user, the versions being selectable along a first axis, and to provide a plurality of user identifications which are selectable along a second axis, wherein selecting a subsequent user causes a second plurality of said versions of said object component to be displayed on the electronic display.
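The two-axis selection model described above — versions along one axis, users along the other — can be sketched with a simple per-user version store. All names, users, and version labels below are hypothetical placeholders chosen for illustration.

```python
# Hypothetical in-memory store: each user's versions of one object component.
versions_by_user = {
    "alice": ["wheel_v1", "wheel_v2", "wheel_v3"],  # first axis: versions
    "bob":   ["wheel_v1b", "wheel_v2b"],
}
users = list(versions_by_user)                      # second axis: user identifications

def select(user_index, version_index):
    """Pick a user along the second axis, then one of that user's versions."""
    user = users[user_index]
    return user, versions_by_user[user][version_index]

# Selecting a subsequent user swaps in that user's plurality of versions.
first_pick = select(0, 1)    # ("alice", "wheel_v2")
second_pick = select(1, 0)   # ("bob", "wheel_v1b")
```

In a real system each version label would reference a full 3D model representation rendered on the electronic display rather than a string.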
Abstract:
A method of manipulating a three-dimensional image file including a virtual object includes obtaining image information in a processing device of a non-instrumented physical object manipulated by a user, such image information including movement information; and causing virtual movement of the virtual object based on the movement information. A method of shaping a virtual object includes obtaining image information including movement information; and determining a shape of the virtual object based on the movement information. A method of modifying a virtual object includes obtaining image information including movement information; and altering a virtual surface appearance of at least a part of the virtual object based on the movement information. Systems and computer-readable media are also described.
Abstract:
An augmented reality (AR) interaction authoring system is described. The AR interaction authoring system is configured to support the real-time creation of AR applications for AR-enhanced toys. The design of the AR interaction authoring system enables bidirectional interactions between the physical-virtual space of toys and AR. The AR interaction authoring system allows intuitive authoring of AR animations and toy actuations through programming by demonstration, while referring to the physical toy as a contextual reference. Using a visual programming interface, users can create bidirectional interactions in which user input on the toys triggers AR animations and vice versa. A plug-and-play IoT toolkit is also disclosed that includes hardware to actuate common toys. In this way, users can effortlessly integrate toys into the virtual world in an impromptu design process, without lengthy electronic prototyping.
Abstract:
The disclosed system and method enable hand-object interaction with a virtual object in augmented reality or virtual reality. The system and method advantageously suggest particular physical objects in the environment to be used as physical proxies for virtual objects to be interacted with. The system and method maintain physical and mental consistency in the user experience by recommending physical proxies in a manner that takes into consideration the interaction constraints. Finally, the system and method advantageously incorporate a mapping process that takes into consideration the object, the hand gesture, and the contact points on both the physical and virtual object, thereby providing consistent visualization of the virtual hand-object interactions to the users.
Abstract:
A digital instrument tutorial system is introduced herein, which enables the authoring and provision of augmented reality (AR) tutorials for operating digital instruments. The digital instrument tutorial system provides an automated authoring workflow for users (e.g., an expert user and/or author) to create sequential AR tutorials for digital instruments by intuitive embodied demonstration. The digital instrument tutorial system advantageously utilizes a multimodal approach that combines finger pressure and gesture tracking to translate the author's operations into AR visualizations. Aside from recording a tutorial for a task, the digital instrument tutorial system also provides an access mode, in which the AR tutorial is provided to a novice user.
Abstract:
An approach for pose estimation is disclosed that can mitigate the effect of occlusions. A POse Relation Transformer (PORT) module reconstructs occluded joints from the visible joints by exploiting joint correlations that capture implicit joint occlusions. The PORT module captures the global context of the pose using self-attention and the local context by aggregating adjacent joint features. To train the PORT module to learn joint correlations, joints are randomly masked and the module learns to reconstruct them, a procedure referred to as Masked Joint Modeling (MJM). Notably, the PORT module is a model-agnostic plug-in for pose refinement under occlusion that can be attached to any existing or future keypoint detector at substantially low computational cost.
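The masking-and-reconstruction objective (MJM) can be sketched with a single self-attention layer over per-joint feature tokens: randomly chosen joints are zeroed out, attention mixes information from the visible joints, and training would minimize reconstruction error on the masked joints only. The joint count, feature dimension, mask count, and single-head attention below are simplifying assumptions, not the actual PORT architecture.

```python
import numpy as np

rng = np.random.default_rng(1)

def self_attention(x, wq, wk, wv):
    """Single-head self-attention over joint tokens (rows of x)."""
    q, k, v = x @ wq, x @ wk, x @ wv
    scores = q @ k.T / np.sqrt(k.shape[1])
    weights = np.exp(scores - scores.max(axis=1, keepdims=True))
    weights /= weights.sum(axis=1, keepdims=True)   # row-wise softmax
    return weights @ v

n_joints, dim = 17, 32
joints = rng.normal(size=(n_joints, dim))           # per-joint feature tokens

# Randomly mask a fixed number of joints, mimicking occlusion.
mask = np.zeros(n_joints, dtype=bool)
mask[rng.choice(n_joints, size=5, replace=False)] = True
masked = joints.copy()
masked[mask] = 0.0                                  # masked joints replaced by a zero token

wq, wk, wv = (rng.normal(size=(dim, dim)) * 0.1 for _ in range(3))
reconstructed = self_attention(masked, wq, wk, wv)

# MJM-style loss: reconstruction error computed on the masked joints only.
loss = np.mean((reconstructed[mask] - joints[mask]) ** 2)
```

Because the refinement operates on joint tokens rather than images, a module like this can sit downstream of any keypoint detector, which is what makes the plug-in claim plausible.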