Patent classifications
A63F13/26
Ocular optical system
An ocular optical system configured to allow imaging rays from a display image to enter an observer's eye through the ocular optical system so as to form an image is provided. A direction toward the eye is the eye side, and a direction toward the display image is the display side. The ocular optical system sequentially includes a first lens element and a second lens element, each having refracting power, arranged from the eye side to the display side along an optical axis. Each lens element includes an eye-side surface and a display-side surface. An optical axis region of the eye-side surface of the first lens element is concave, and an optical axis region of the eye-side surface of the second lens element is concave.
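As an illustrative sketch only (not the patented design), the described two-element arrangement and its concavity constraints can be modeled in code. The class names, the sign convention (a negative radius of curvature on an eye-side surface taken as "concave" in the optical-axis region), and the numeric values below are all assumptions:

```python
# Minimal sketch of the two-element ocular system described above.
# Sign convention and values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class LensElement:
    eye_side_radius: float      # radius of curvature of the eye-side surface (mm)
    display_side_radius: float  # radius of curvature of the display-side surface (mm)
    refracting_power: float     # nonzero for an element "having refracting power"

def eye_side_concave(element: LensElement) -> bool:
    # Under the assumed convention, a negative eye-side radius means the
    # optical-axis region of the eye-side surface is concave.
    return element.eye_side_radius < 0

# Elements ordered from the eye side to the display side along the optical axis.
first = LensElement(eye_side_radius=-12.5, display_side_radius=18.0, refracting_power=0.04)
second = LensElement(eye_side_radius=-30.0, display_side_radius=22.0, refracting_power=0.02)

assert eye_side_concave(first) and eye_side_concave(second)
```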
Multipoint SLAM capture
“Feature points” in “point clouds” that are visible to multiple respective cameras (i.e., aspects of objects imaged by the cameras) are reported via wired and/or wireless communication paths to a compositing processor, which can determine whether a particular feature point “moved” a certain amount relative to another image. In this way, the compositing processor can determine, e.g., using triangulation and recognition of common features, how much movement occurred and where any particular camera was positioned when a later image from that camera was captured. Thus, “overlap” of feature points in multiple images is used so that the system can close the loop to generate a SLAM map. The compositing processor, which may be implemented by a server or other device, generates the SLAM map by merging feature point data from multiple imaging devices.
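A hedged sketch of the compositing step follows. The map representation, matching feature points by identifier, and the movement threshold are illustrative assumptions, not the patent's stated implementation:

```python
# Sketch: merge feature points reported by one camera into a shared SLAM map
# and flag points that "moved" relative to their previously mapped position.
from typing import Dict, Tuple

Point3 = Tuple[float, float, float]

def merge_feature_points(slam_map: Dict[str, Point3],
                         reported: Dict[str, Point3],
                         moved_threshold: float = 0.05) -> Dict[str, str]:
    """Return, per feature id, 'new', 'stable', or 'moved'."""
    status = {}
    for fid, pos in reported.items():
        if fid not in slam_map:
            slam_map[fid] = pos
            status[fid] = "new"
            continue
        old = slam_map[fid]
        dist = sum((a - b) ** 2 for a, b in zip(old, pos)) ** 0.5
        status[fid] = "moved" if dist > moved_threshold else "stable"
        slam_map[fid] = pos  # keep the latest observation
    return status

# Overlapping ids across cameras are the loop-closure candidates;
# here a second camera re-observes point "p1".
shared_map: Dict[str, Point3] = {}
merge_feature_points(shared_map, {"p1": (0.0, 0.0, 1.0), "p2": (1.0, 0.0, 1.0)})
print(merge_feature_points(shared_map, {"p1": (0.0, 0.1, 1.0)}))  # {'p1': 'moved'}
```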
TERMINAL DISPLAY CONTROL METHOD, TERMINAL DISPLAY SYSTEM AND SERVER APPARATUS
According to one embodiment, a terminal display control method for using, as a game controller, a terminal device that includes a touch panel integral with a display includes: selecting a controller image to be used in the game; setting, for an operation element in the controller image, at least one of the presence or absence of a function, a number, a size, a shape, and a disposition position; and varying, in accordance with the content of the setting, the state of the controller image displayed on the display and the operation standard of the operation element corresponding to an input on the touch panel.
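The per-element settings the method enumerates can be sketched as a small data structure. The field names and the apply step below are assumptions for readability, not the patent's terminology:

```python
# Sketch: a configurable on-screen controller whose element settings change
# both the displayed image and the touch region mapped to each element.
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class OperationElement:
    name: str
    enabled: bool = True                # presence/absence of the function
    size: Tuple[int, int] = (64, 64)    # width, height in pixels
    shape: str = "circle"               # e.g. "circle", "square"
    position: Tuple[int, int] = (0, 0)  # disposition on the controller image

@dataclass
class ControllerImage:
    game_id: str
    elements: List[OperationElement]

def apply_settings(image: ControllerImage, name: str, **settings) -> None:
    # Changing a setting alters both the element's displayed state and the
    # touch-panel region ("operation standard") that maps to it.
    for element in image.elements:
        if element.name == name:
            for key, value in settings.items():
                setattr(element, key, value)

pad = ControllerImage("racing", [OperationElement("accelerate"), OperationElement("brake")])
apply_settings(pad, "brake", size=(96, 96), position=(40, 300))
```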
Display Screen Front Panel of HMD for Viewing by Users Viewing the HMD Player
A method for providing an image of an HMD user to a non-HMD user includes receiving a first image of a user, including the user's facial features, captured by an external camera when the user is not wearing a head mounted display (HMD). A second image, capturing a portion of the user's facial features while the user is wearing the HMD, is received. Image overlay data is generated by mapping contours of facial features captured in the second image to contours of the corresponding facial features captured in the first image. The image overlay data is forwarded to the HMD for rendering on a second display screen mounted on a front face of the HMD.
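One way such contour mapping could work, sketched under assumptions: 2D facial-landmark contours are available for both images as corresponding point sets, and a least-squares similarity alignment (an illustrative choice, not the patent's stated method) registers the with-HMD contours onto the without-HMD face:

```python
# Sketch: align facial-feature contour points from the with-HMD image (src)
# onto the corresponding points from the without-HMD image (dst).
import numpy as np

def align_contours(src: np.ndarray, dst: np.ndarray) -> np.ndarray:
    """Least-squares similarity transform (Umeyama-style, no reflection
    handling) mapping (N, 2) src landmarks onto (N, 2) dst landmarks.
    Returns a 2x3 matrix [sR | t] with dst ~= src @ (sR).T + t."""
    src_c, dst_c = src - src.mean(0), dst - dst.mean(0)
    u, s, vt = np.linalg.svd(src_c.T @ dst_c)
    r = (u @ vt).T                       # rotation from SVD
    scale = s.sum() / (src_c ** 2).sum() # uniform scale
    t = dst.mean(0) - scale * src.mean(0) @ r.T
    return np.hstack([scale * r, t[:, None]])

# Synthetic check: dst is src scaled by 2 and shifted by (3, 4).
src = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
dst = 2.0 * src + np.array([3.0, 4.0])
m = align_contours(src, dst)
mapped = src @ m[:, :2].T + m[:, 2]   # ~= dst
```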
Dynamic Entering and Leaving of Virtual-Reality Environments Navigated by Different HMD Users
Systems and methods for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes are provided. A computer-implemented method includes providing a first perspective of a VR scene to a first HMD of a first user and receiving an indication that a second user is requesting to join the VR scene provided to the first HMD. The method further includes obtaining real-world position and orientation data of the second user's HMD relative to the first HMD and then providing, based on that data, a second perspective of the VR scene to the second HMD. The method also provides that the first and second perspectives are each controlled by the respective user's position and orientation changes while viewing the VR scene.
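A minimal sketch of placing the joining user's viewpoint, assuming the first HMD's VR pose anchors the scene and the relative real-world pose is available as a translation plus yaw (the pose representation and axis convention are simplifying assumptions):

```python
# Sketch: compose the first user's VR pose with the second HMD's
# real-world pose relative to the first HMD to get the second perspective.
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    z: float
    yaw: float  # radians, rotation about the vertical (y) axis

def second_perspective(first_vr: Pose, rel_real: Pose) -> Pose:
    """Rotate the relative offset into the first user's VR frame,
    then translate; yaw angles simply add (illustrative convention)."""
    c, s = math.cos(first_vr.yaw), math.sin(first_vr.yaw)
    return Pose(
        x=first_vr.x + c * rel_real.x - s * rel_real.z,
        y=first_vr.y + rel_real.y,
        z=first_vr.z + s * rel_real.x + c * rel_real.z,
        yaw=first_vr.yaw + rel_real.yaw,
    )

anchor = Pose(10.0, 1.7, 5.0, math.pi / 2)
print(second_perspective(anchor, Pose(0.5, 0.0, -1.0, 0.0)))
```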
METHOD AND APPARATUS FOR DELEGATING RESOURCES BETWEEN DEVICES
A system at a first communication device performs operations including detecting a request to present a game application; receiving a resources identifier from a second communication device; determining from the resources identifier that the second communication device has a computing resource, a presentation resource, or both; selecting a configuration from a plurality of configurations according to an identity of the game application and the resources identifier; selecting, according to the configuration, at least one resource from among the computing resource, the presentation resource, or both of the second communication device; and delegating, from the first communication device, processing of a portion of the game application according to the at least one resource of the second communication device.
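An illustrative sketch of the configuration-selection and delegation logic follows. The resource flags, the configuration table, and the names of the delegated portions are assumptions, not the patent's defined values:

```python
# Sketch: pick a configuration keyed by game identity and the second
# device's advertised resources, then return the portion to delegate.
from dataclasses import dataclass
from typing import Dict, Tuple

@dataclass
class ResourcesIdentifier:
    device_id: str
    has_computing: bool
    has_presentation: bool

# (game_id, has_computing, has_presentation) -> portion to delegate
CONFIGURATIONS: Dict[Tuple[str, bool, bool], str] = {
    ("racer", True, True): "render_and_audio",
    ("racer", False, True): "display_only",
    ("racer", True, False): "physics_offload",
}

def delegate(game_id: str, rid: ResourcesIdentifier) -> str:
    """Select a configuration for this game and resources identifier;
    fall back to local processing when no configuration matches."""
    key = (game_id, rid.has_computing, rid.has_presentation)
    return CONFIGURATIONS.get(key, "no_delegation")

print(delegate("racer", ResourcesIdentifier("tv-01", False, True)))  # display_only
```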