Abstract:
In a system for dynamic switching and merging of head, gesture, and touch input in virtual reality, a virtual object may be selected by a user in response to a first input implementing one of a number of different input modes. Once selected, with focus established on the first object by the first input, the first object may be manipulated in the virtual world in response to a second input implementing another of the different input modes. In response to a third input, a second object may be selected, and focus may be shifted from the first object to the second object if, for example, a priority value of the third input is higher than a priority value of the first input that established focus on the first object. If the priority value of the third input is less than the priority value of the first input, focus may remain on the first object. In response to certain trigger inputs, a display of virtual objects may be shifted between a far field display and a near field display to accommodate a particular mode of interaction with and manipulation of the virtual objects.
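The priority-based focus arbitration described above can be sketched as follows. This is a minimal illustration: the class names, input-mode names, and priority values are assumptions for the sketch, not details taken from the abstract.

```python
from dataclasses import dataclass

@dataclass
class Input:
    mode: str    # e.g. "gaze", "gesture", or "touch" (mode names assumed)
    target: str  # identifier of the virtual object the input selects

# Assumed priority ordering; higher values win focus.
PRIORITY = {"gaze": 1, "gesture": 2, "touch": 3}

class FocusManager:
    def __init__(self):
        self.focused_object = None
        self.focus_priority = 0

    def handle(self, inp):
        """Shift focus to inp.target only if the new input's priority
        exceeds the priority of the input that established focus."""
        p = PRIORITY[inp.mode]
        if self.focused_object is None or p > self.focus_priority:
            self.focused_object = inp.target
            self.focus_priority = p
        return self.focused_object
```

With this rule, a lower-priority input (e.g. gaze) cannot steal focus that a higher-priority input (e.g. gesture) established, matching the abstract's description of focus remaining on the first object.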
Abstract:
In one general aspect, a system and method are described to generate a virtual environment for a user. The virtual environment may be generated with a first electronic device that is communicably coupled to a second electronic device. The method may include tracking movement of the second electronic device in an ambient environment, determining, using one or more sensors, a range of motion associated with the movement in the ambient environment, correlating the range of motion associated with the ambient environment to a range of motion associated with the virtual environment, determining, for a plurality of virtual objects, a virtual configuration adapted to the range of motion associated with the virtual environment, and triggering rendering of the plurality of virtual objects according to the virtual configuration.
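One simple way to read the correlation and configuration steps above is as a linear mapping from the tracked physical extent to the virtual extent, with objects laid out inside the reachable virtual range. The function names and the even-spacing rule are assumptions for illustration:

```python
def correlate_range(physical_extent, virtual_extent, physical_position):
    """Map a position within the user's tracked physical range of motion
    onto the corresponding position in the virtual range of motion."""
    return physical_position * (virtual_extent / physical_extent)

def virtual_configuration(num_objects, virtual_extent):
    """Space virtual objects evenly across the reachable virtual range;
    a simple stand-in for the adapted configuration in the abstract."""
    step = virtual_extent / (num_objects + 1)
    return [step * (i + 1) for i in range(num_objects)]
```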
Abstract:
In one general aspect, a system can generate, for a virtual environment, a plurality of non-contact targets, the plurality of non-contact targets each including interactive functionality associated with a virtual object. The system can detect a first non-contact input at a location, determine whether the first non-contact input satisfies a predefined threshold associated with at least one non-contact target, and, upon determining that the first non-contact input satisfies the predefined threshold, provide for display, in a head mounted display, the at least one non-contact target at the location. In response to detecting a second non-contact input at the location, the system can execute, in the virtual environment, the interactive functionality associated with the at least one non-contact target.
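The two-step interaction can be sketched as below. The abstract does not say what the predefined threshold measures, so gaze dwell time is assumed here purely for illustration, as are the class name and threshold value:

```python
class NonContactTarget:
    """A target that appears after a first non-contact input satisfies a
    threshold, and executes its functionality on a second input."""

    def __init__(self, action):
        self.action = action   # interactive functionality for the object
        self.visible = False

    def on_first_input(self, dwell_seconds, threshold=0.5):
        # Threshold kind (dwell time) and value are assumptions; the
        # abstract only requires that the first input satisfy a threshold.
        if dwell_seconds >= threshold:
            self.visible = True
        return self.visible

    def on_second_input(self):
        # Execute the associated functionality only once displayed.
        return self.action() if self.visible else None
```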
Abstract:
Computer-implemented systems and methods are described for configuring a plurality of privacy properties for a plurality of virtual objects associated with a first user and a virtual environment being accessed using a device associated with the first user, triggering for display, in the virtual environment, the plurality of virtual objects to the first user accessing the virtual environment, and determining whether at least one virtual object is associated with a privacy setting corresponding to the first user. In response to determining that a second user is attempting to access the at least one virtual object, a visual modification may be applied to the at least one virtual object based on the privacy setting. The method may also include triggering for display, to the second user, the visual modification of the at least one virtual object while continuing to trigger display of the at least one virtual object without the visual modification to the first user.
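The per-viewer rendering rule can be sketched as follows. The object representation, field names, and the blurred placeholder are illustrative assumptions; the abstract only specifies that the second user sees a visually modified object while the first user sees the unmodified one:

```python
def render_for_viewer(obj, viewer):
    """Return the representation of a virtual object as seen by a viewer.
    The owning user always sees the unmodified object; other users see a
    visually modified version when the object carries a privacy setting."""
    if viewer == obj["owner"] or not obj.get("private", False):
        return obj["content"]
    return "<blurred>"  # illustrative visual modification
```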
Abstract:
In a system for intelligent placement and sizing of virtual objects in a three dimensional virtual model of an ambient environment, the system may collect image information and feature information of the ambient environment, and may process the collected information to render the three dimensional virtual model. From the collected information, the system may define a plurality of drop target areas in the virtual model, each of the drop target areas having associated dimensional, textural, and orientation parameters. When placing a virtual object in the virtual model, or placing a virtual window for launching an application in the virtual model, the system may select a placement for the virtual object or virtual window, and set a sizing for the virtual object or virtual window, based on the parameters associated with the plurality of drop targets.
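The placement-and-sizing decision can be sketched as a selection over the drop target areas' parameters. Only a single width dimension is modeled here for brevity, and the fitting rule (smallest area that fits at natural size, else shrink to the largest area) is an assumption for illustration:

```python
def choose_drop_target(object_width, targets):
    """Choose a drop target area and a size for placing a virtual object
    or virtual window. Prefer the smallest area that fits the object at
    its natural size; otherwise use the largest area and shrink to fit."""
    fitting = [t for t in targets if t["width"] >= object_width]
    if fitting:
        target = min(fitting, key=lambda t: t["width"])
        return target["id"], object_width
    target = max(targets, key=lambda t: t["width"])
    return target["id"], target["width"]
```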
Abstract:
Systems and methods are described that are configured to obtain tracking data corresponding to a plurality of users accessing a virtual reality environment. The tracking data may include information associated with a plurality of movements performed by a first user in a physical environment. The systems and methods may be configured to modify display data associated with the plurality of movements, in response to determining that the information is private, and provide, in the virtual environment, the modified display data to a second user in the plurality of users, while displaying unmodified display data to the first user.
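The per-viewer handling of private movement data can be sketched as below; the placeholder form of the modified display data is an assumption, since the abstract only states that the data shown to the second user is modified:

```python
def display_movements(movements, viewer, owner, is_private):
    """Return the movement display data shown to a given viewer: the
    tracked user sees the unmodified data, while other users see a
    generic placeholder when the movement is determined to be private."""
    if viewer == owner or not is_private:
        return movements
    return ["idle"] * len(movements)  # modified display data (assumed form)
```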
Abstract:
In at least one general aspect, a method can include determining a physics parameter based at least in part on a scale of a user relative to an object in a virtual reality environment, applying a physics rule to an interaction between the user and the object in the virtual reality environment based on the physics parameter, and modifying the physics parameter based at least in part on a relative change in scale between the user and the object.
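Taking gravity as the physics parameter, the scale-dependent rule can be sketched as follows. The linear scaling law is an assumption for illustration; the abstract only requires that the parameter change with relative scale:

```python
BASE_GRAVITY = 9.8  # m/s^2 at 1:1 relative scale

def gravity_for_scale(user_scale):
    """Determine the gravity parameter from the user's scale relative
    to the object, and re-determine it when the relative scale changes."""
    return BASE_GRAVITY * user_scale

def fall_distance(user_scale, seconds):
    """Apply the physics rule: distance fallen under scaled gravity."""
    return 0.5 * gravity_for_scale(user_scale) * seconds ** 2
```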
Abstract:
In one aspect, a method and system are described for receiving input for a virtual user in a virtual environment. The input may be based on a plurality of movements performed by a user accessing the virtual environment. Based on the plurality of movements, the method and system can include detecting that at least one portion of the virtual user is within a threshold distance of a collision zone, the collision zone being associated with at least one virtual object. The method and system can also include selecting a collision mode for the virtual user based on the at least one portion and the at least one virtual object and dynamically modifying the virtual user based on the selected collision mode.
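The collision-mode selection can be sketched as a rule over the body portion and its distance to the collision zone. The mode names, the per-part rule, and the threshold value are illustrative assumptions:

```python
def select_collision_mode(body_part, distance, threshold=0.1):
    """Select a collision mode once a portion of the virtual user comes
    within a threshold distance of a collision zone; the virtual user
    can then be modified (e.g. rendered differently) per the mode."""
    if distance > threshold:
        return None  # not yet within the threshold of the collision zone
    return "fine" if body_part == "finger" else "coarse"
```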
Abstract:
In one general aspect, a method can include executing, by a computing device, a virtual reality (VR) application, providing, by the computing device, content for display on a screen of a VR headset in a VR space, the content including at least one object being associated with an action, detecting a first movement of a user immersed in the VR space towards the at least one object included in the VR space, and performing the associated action in the VR space based on detecting the first movement.
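The movement-triggered action can be sketched as a proximity check over the objects in the VR space. The one-dimensional positions, reach value, and callable actions are assumptions for the sketch:

```python
class VRObject:
    def __init__(self, position, action):
        self.position = position  # 1-D position for brevity
        self.action = action      # action associated with the object

def on_user_move(user_position, objects, reach=0.3):
    """Perform each object's associated action when the detected user
    movement brings the user within reach of that object."""
    return [obj.action() for obj in objects
            if abs(user_position - obj.position) <= reach]
```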
Abstract:
A system and method of operating an audio visual system generating a virtual immersive experience may include an electronic user device in communication with a tracking device that may track a user's physical movement in a real world space and translate the tracked physical movement into corresponding movement in the virtual world generated by the user device. The system may detect when a user and the user device are approaching a boundary of a tracking area and automatically initiate a transition out of the virtual world and into the real world. A smooth, or graceful, transition between the virtual world and the real world as the user encounters this boundary may avoid the disorientation that can occur when the user continues to move in the real world while motion in the virtual world appears to have stopped at the tracking boundary.
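One way to realize such a graceful transition is a distance-based blend between the virtual rendering and a real-world view near the boundary. The linear fade and the fade-start distance are assumptions for the sketch:

```python
def blend_factor(distance_to_boundary, fade_start=1.0):
    """Blend between the virtual world (0.0) and the real world (1.0)
    as the user nears the tracking boundary; a linear fade begins
    fade_start meters from the boundary (value assumed)."""
    if distance_to_boundary >= fade_start:
        return 0.0
    return 1.0 - max(distance_to_boundary, 0.0) / fade_start
```

Well inside the tracking area the user remains fully immersed; the real-world view fades in gradually as the boundary approaches, rather than motion simply stopping.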