Patent classifications
A63F13/5255
DEVICE AND METHOD FOR RENDERING VR CONTENT THAT PROVIDE BIRD'S EYE VIEW PERSPECTIVES
The present disclosure relates to a device and method for rendering VR content that provide bird's-eye view perspectives. The device and method render an avatar of at least one user, together with game content selected by the at least one user, in a VR space, and provide a VR image as if the avatar were playing the game content. This gives the at least one user the effect of playing a game while viewing the game content from various points of view, including a bird's-eye view.
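The abstract does not disclose source code; as a minimal sketch, assuming a simple look-at camera model, the following Python places a bird's-eye camera above and behind a rendered avatar. All names and parameters (look_at, birds_eye_camera, height, back) are illustrative assumptions, not taken from the patent.

```python
# Hypothetical sketch: positioning a bird's-eye camera over an avatar.
import numpy as np

def look_at(eye: np.ndarray, target: np.ndarray,
            up=np.array([0.0, 1.0, 0.0])) -> np.ndarray:
    """Build a right-handed view matrix looking from `eye` toward `target`."""
    forward = target - eye
    forward /= np.linalg.norm(forward)
    right = np.cross(forward, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, forward)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -forward
    view[:3, 3] = -view[:3, :3] @ eye  # translate world into camera space
    return view

def birds_eye_camera(avatar_pos: np.ndarray,
                     height: float = 10.0, back: float = 4.0) -> np.ndarray:
    """Place the camera above and slightly behind the avatar, looking down."""
    eye = avatar_pos + np.array([0.0, height, back])
    return look_at(eye, avatar_pos)

view_matrix = birds_eye_camera(np.array([0.0, 0.0, 0.0]))
```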
Virtual, augmented and extended reality system
Systems, methods and apparatus for multi-realm, computer-generated reality systems are disclosed. A method for managing a multi-realm, computer-generated reality includes determining one or more variances between each activity of a first participant and a corresponding baseline activity for each of a plurality of activities associated with traversal of a managed reality system during a session, and quantifying the one or more variances to obtain a performance metric. The method includes combining at least one performance metric for each activity of the first participant to obtain a session performance measurement for the first participant.
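As a minimal sketch of the quantify-and-combine steps described above, assuming a normalized absolute-difference variance measure and an unweighted mean for combining per-activity metrics (the abstract specifies neither formula):

```python
# Hypothetical sketch of quantifying variances and combining them into a
# session performance measurement. Formulas and names are assumptions.
from dataclasses import dataclass

@dataclass
class Activity:
    name: str
    observed: float   # measured value, e.g. completion time or accuracy
    baseline: float   # corresponding baseline value for this activity

def performance_metric(activity: Activity) -> float:
    """Quantify the variance from baseline as a score in (0, 1]."""
    variance = abs(activity.observed - activity.baseline)
    return 1.0 / (1.0 + variance / max(abs(activity.baseline), 1e-9))

def session_performance(activities: list[Activity]) -> float:
    """Combine per-activity metrics into one session measurement (mean)."""
    metrics = [performance_metric(a) for a in activities]
    return sum(metrics) / len(metrics)

session = [Activity("puzzle", observed=42.0, baseline=30.0),
           Activity("navigation", observed=12.0, baseline=15.0)]
print(session_performance(session))
```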
Dynamic Mixed Reality Content in Virtual Reality
In one embodiment, a method includes using one or more cameras of a mobile computing device to capture one or more images of a first user wearing a VR display device in a real-world environment. The mobile computing device transmits a pose of the mobile computing device with respect to the VR display device to a VR system. The mobile computing device receives from the VR system a VR rendering of a VR environment. The VR rendering is from the perspective of the mobile computing device with respect to the VR display device. The method includes segmenting the first user from the one or more images and generating, in real time responsive to capturing the one or more images, an MR rendering of the first user in the VR environment. The MR rendering of the first user is based on a compositing of the segmented one or more images of the first user and the VR rendering.
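A minimal sketch of the final compositing step, assuming a per-pixel segmentation mask is already available from any segmenter and that both frames share the phone camera's viewpoint; composite_mr and its arguments are illustrative names:

```python
# Hypothetical sketch: blend the segmented user over the VR rendering.
import numpy as np

def composite_mr(user_frame: np.ndarray, vr_render: np.ndarray,
                 mask: np.ndarray) -> np.ndarray:
    """Alpha-blend the user's pixels (mask==1) onto the VR background.

    user_frame, vr_render: HxWx3 uint8 images from the same viewpoint,
    i.e. the VR render was produced at the phone camera's pose.
    mask: HxW float in [0, 1], 1 where the user was segmented.
    """
    alpha = mask[..., None]  # broadcast mask over the color channels
    blended = (alpha * user_frame.astype(np.float32)
               + (1.0 - alpha) * vr_render.astype(np.float32))
    return blended.astype(np.uint8)
```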
Hybrid lens for head mount display
A lens assembly, related methods and constituent optical elements are described. The assembly may be used to direct and focus light for various applications. In one instance, the lens assembly is used in conjunction with one or more sources of light such as projected images or video as part of a virtual reality system. The lens assembly includes two or more optical elements arranged to receive light or direct light through different spatial regions of the assembly at different focal powers corresponding to a first user viewing zone and a second user viewing zone. In one instance, the first user viewing zone is a peripheral viewing zone and the second viewing zone is a primary or non-peripheral viewing zone (or vice versa).
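As an illustrative sketch only, assuming a sharp angular boundary between the two viewing zones and arbitrary diopter values (the patent specifies neither), the per-zone focal power could be modeled as:

```python
# Hypothetical model of zone-dependent focal power. The boundary angle and
# diopter values are invented for illustration, not taken from the patent.
PRIMARY_POWER_D = 20.0     # focal power (diopters) of the non-peripheral zone
PERIPHERAL_POWER_D = 22.0  # focal power (diopters) of the peripheral zone
ZONE_BOUNDARY_DEG = 30.0   # eccentricity angle separating the two zones

def focal_power_for_ray(eccentricity_deg: float) -> float:
    """Return the focal power (D) seen by a ray at the given eccentricity."""
    if abs(eccentricity_deg) <= ZONE_BOUNDARY_DEG:
        return PRIMARY_POWER_D
    return PERIPHERAL_POWER_D

def focal_length_mm(power_diopters: float) -> float:
    """Focal length f = 1/P, converted from meters to millimeters."""
    return 1000.0 / power_diopters

print(focal_length_mm(focal_power_for_ray(10.0)))  # primary zone: 50 mm
```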
Environment model with surfaces and per-surface volumes
In one embodiment, a method includes: receiving sensor data of a scene captured using one or more sensors; generating, based on the sensor data, (1) a number of virtual surfaces representing detected planar surfaces in the scene and (2) a point cloud representing detected features of objects in the scene; assigning each point in the point cloud to one or more of the virtual surfaces; generating occupancy volumes for each of the virtual surfaces based on the points assigned to that surface; generating a datastore including the virtual surfaces, the occupancy volumes of each virtual surface, and a spatial relationship between the virtual surfaces; receiving a query; and sending a response to the query, the response including an identified subset of the virtual surfaces in the datastore that satisfy the query.
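A minimal sketch of the assignment, occupancy-volume, and query steps, assuming axis-aligned bounding boxes as occupancy volumes and a fixed point-to-plane distance threshold (both are assumptions; the abstract does not specify them):

```python
# Hypothetical sketch of per-surface point assignment and a queryable
# datastore. Plane detection is assumed already done upstream.
import numpy as np

class Surface:
    def __init__(self, sid, normal, offset):
        self.sid = sid
        self.normal = np.asarray(normal, dtype=float)  # unit plane normal
        self.offset = float(offset)                    # plane: n . x = offset
        self.points = []                               # assigned cloud points

    def occupancy_volume(self):
        """Axis-aligned bounding box of the points assigned to this surface."""
        pts = np.array(self.points)
        return (pts.min(axis=0), pts.max(axis=0)) if len(pts) else None

def assign_points(surfaces, point_cloud, max_dist=0.05):
    """Assign each point to every surface within max_dist of its plane."""
    for p in point_cloud:
        for s in surfaces:
            if abs(s.normal @ p - s.offset) <= max_dist:
                s.points.append(p)

def query_datastore(surfaces, predicate):
    """Return the subset of surfaces satisfying a query predicate."""
    return [s for s in surfaces if predicate(s)]

planes = [Surface("floor", [0, 1, 0], 0.0), Surface("wall", [1, 0, 0], 2.0)]
cloud = np.array([[0.3, 0.02, 0.5], [2.01, 1.0, 0.2], [0.8, 0.01, 1.1]])
assign_points(planes, cloud)
horizontal = query_datastore(planes, lambda s: abs(s.normal[1]) > 0.9)
```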
VIRTUAL RESOURCE DISPLAY METHOD AND RELATED APPARATUS
A virtual resource display method is provided. In the method, at least one target virtual object is displayed in a virtual scene in response to at least one target virtual item of the virtual scene reaching a target state through user interaction. The at least one target virtual object is controlled to move in the virtual scene in response to a shooting operation on the at least one target virtual object. A target virtual resource is displayed in the virtual scene in response to a location of the at least one target virtual object meeting a first target condition.
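The described interaction flow can be sketched as a small state machine; the state names, the hit-region check, and the example coordinates below are assumptions, not taken from the abstract:

```python
# Hypothetical state machine for the item -> object -> resource flow.
from enum import Enum, auto

class Stage(Enum):
    WAITING = auto()    # target virtual item not yet in target state
    SPAWNED = auto()    # target virtual object displayed and moving
    REWARDED = auto()   # target virtual resource displayed

class VirtualResourceFlow:
    def __init__(self, reward_region):
        self.stage = Stage.WAITING
        self.reward_region = reward_region  # ((x0, y0), (x1, y1)) trigger zone

    def on_item_reaches_target_state(self):
        if self.stage is Stage.WAITING:
            self.stage = Stage.SPAWNED      # display the target virtual object

    def on_shoot(self, object_pos):
        """Shooting moves the object; check the first target condition."""
        if self.stage is not Stage.SPAWNED:
            return
        (x0, y0), (x1, y1) = self.reward_region
        if x0 <= object_pos[0] <= x1 and y0 <= object_pos[1] <= y1:
            self.stage = Stage.REWARDED     # display the target virtual resource

flow = VirtualResourceFlow(reward_region=((0, 0), (5, 5)))
flow.on_item_reaches_target_state()
flow.on_shoot((2.5, 3.0))
assert flow.stage is Stage.REWARDED
```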
MIXED-REALITY DEVICE POSITIONING BASED ON SHARED LOCATION
Techniques and systems are provided for positioning mixed-reality devices within mixed-reality environments. The devices, which are configured to perform inside-out tracking, transition between position-tracking states and utilize positional information from other inside-out tracking devices that share the same mixed-reality environment. This allows a device that becomes disoriented within the environment to identify or update its position without an extensive or full scan that compares the feature points it can detect against the mapped feature points of the environment's map. Such techniques can conserve the processing and power consumption that a full scan and feature-point comparison would require, and can enhance the accuracy and speed of positioning mixed-reality devices.
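A minimal sketch of re-seeding a disoriented device's pose from a peer that still tracks, assuming the peer's world pose and a peer-to-device relative transform (e.g., from a shared anchor) are available; the transform composition shown is an assumption:

```python
# Hypothetical sketch: when a device loses tracking, estimate its pose from a
# tracking peer instead of rescanning and matching the full feature map.
import numpy as np

def reseed_pose(peer_pose_world: np.ndarray,
                peer_to_self: np.ndarray) -> np.ndarray:
    """Estimate this device's world pose from a tracking peer.

    peer_pose_world: 4x4 pose of the peer in the shared map frame.
    peer_to_self: 4x4 relative transform from the peer to this device.
    """
    return peer_pose_world @ peer_to_self

def update_tracking_state(self_lost: bool, peer_pose_world, peer_to_self):
    """Use the peer's position only while this device is disoriented."""
    if self_lost:
        return reseed_pose(peer_pose_world, peer_to_self)
    return None  # keep using this device's own inside-out tracking

peer_pose = np.eye(4)
rel = np.eye(4)
rel[:3, 3] = [1.0, 0.0, 0.5]  # peer observes us 1 m right, 0.5 m ahead
pose = update_tracking_state(True, peer_pose, rel)
```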
Systems and methods for assisting virtual gestures based on viewing frustum
An endpoint system including one or more computing devices presents an object in a virtual environment (e.g., a shared virtual environment); receives gaze input corresponding to a gaze of a user of the endpoint system; calculates a gaze vector based on the gaze input; receives motion input corresponding to an action of the user; determines a path adjustment (e.g., by changing motion parameters such as trajectory and velocity) for the object based at least in part on the gaze vector and the motion input; and simulates motion of the object within the virtual environment based at least in part on the path adjustment. The object may be presented as being thrown by an avatar, with a flight path based on the path adjustment. The gaze vector may be based on head orientation information, eye tracking information, or some combination of these or other gaze information.
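A minimal sketch of the path-adjustment step, assuming a linear blend of the motion direction toward the gaze vector with a fixed assist weight (the abstract gives no formula; all names are illustrative):

```python
# Hypothetical sketch: bias a thrown object's direction toward the gaze.
import numpy as np

def adjust_throw(motion_dir: np.ndarray, speed: float,
                 gaze_vector: np.ndarray, assist: float = 0.4):
    """Return an assisted (direction, speed) for the thrown object.

    assist in [0, 1]: 0 keeps the raw motion input, 1 throws along the gaze.
    """
    motion_dir = motion_dir / np.linalg.norm(motion_dir)
    gaze_dir = gaze_vector / np.linalg.norm(gaze_vector)
    blended = (1.0 - assist) * motion_dir + assist * gaze_dir
    blended /= np.linalg.norm(blended)
    return blended, speed  # trajectory adjusted; speed kept unchanged here

direction, speed = adjust_throw(np.array([0.0, 0.5, -1.0]), 6.0,
                                np.array([0.2, 0.3, -1.0]))
```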