Patent classifications
A63F13/5255
Game perspective control method and apparatus
A game perspective control method and apparatus are provided. The method includes: in response to a character selection operation acting on a second display area of a graphical user interface, a target game character is selected in a game scene; a perspective switching control is provided on the second display area, the perspective switching control being configured to switch between a game scene area where a current position of the target game character is located and a game scene area where a target position of the target game character is located; and, in response to a perspective switching operation acting on the perspective switching control, a first display area is controlled to display a game scene corresponding to the perspective switching operation.
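The flow described in this abstract (select a character via one display area, then toggle the main viewport between the character's current location and its target location) can be sketched as a small piece of client-side state. This is a minimal illustrative sketch, not the patent's implementation; the names GameView, select_character, and toggle_perspective are assumptions.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class Character:
    name: str
    current_pos: Tuple[float, float]  # where the character currently is
    target_pos: Tuple[float, float]   # where the character has been ordered to move

class GameView:
    """Tracks what the first display area (main viewport) should show."""

    def __init__(self) -> None:
        self.selected: Optional[Character] = None
        self.camera_focus: Optional[Tuple[float, float]] = None

    def select_character(self, character: Character) -> None:
        # Character selection operation acting on the second display area.
        self.selected = character
        self.camera_focus = character.current_pos

    def toggle_perspective(self) -> None:
        # Perspective switching operation: flip the main viewport between the
        # area around the character's current position and its target position.
        if self.selected is None:
            return
        if self.camera_focus == self.selected.current_pos:
            self.camera_focus = self.selected.target_pos
        else:
            self.camera_focus = self.selected.current_pos

view = GameView()
hero = Character("hero", current_pos=(10.0, 4.0), target_pos=(52.0, 31.0))
view.select_character(hero)   # selected via the second display area
view.toggle_perspective()     # first display area now shows the target area
print(view.camera_focus)      # (52.0, 31.0)
```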
Realistic virtual/augmented/mixed reality viewing and interactions
The present invention discloses systems and methods for both viewing and interacting with a virtual reality (VR), an augmented reality (AR) or a mixed reality (MR). More specifically, the systems and methods allow the user to interact with aspects of such realities, including virtual items presented in such realities or within such environments, by manipulating a control device that has an inside-out camera mounted on-board. The apparatus or system uses two distinct representations, including a reduced representation, in determining the pose of the control device, and uses these representations to compute an interactive pose portion of the control device to be used for interacting with the virtual item. The reduced representation is consonant with a constrained motion of the control device.
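One way to read the "reduced representation" idea: if the control device's motion is constrained (for example, pivoting about a roughly fixed wrist point), its pose can be estimated in a reduced parameter space (orientation only) rather than full 6-DOF, and the interactive pose portion (tip position and pointing direction) follows from that. The sketch below assumes such a pivot constraint; the constraint choice, the PIVOT point, and the function names are illustrative assumptions, not the patent's method.

```python
import numpy as np

PIVOT = np.array([0.0, 1.2, 0.3])  # assumed fixed pivot point in world space

def reduced_pose(yaw, pitch, roll):
    """Rotation matrix from the reduced (orientation-only) representation."""
    cy, sy = np.cos(yaw), np.sin(yaw)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cr, sr = np.cos(roll), np.sin(roll)
    rz = np.array([[cy, -sy, 0], [sy, cy, 0], [0, 0, 1]])
    ry = np.array([[cp, 0, sp], [0, 1, 0], [-sp, 0, cp]])
    rx = np.array([[1, 0, 0], [0, cr, -sr], [0, sr, cr]])
    return rz @ ry @ rx

def interactive_pose(yaw, pitch, roll, device_length=0.15):
    """Interactive pose portion: tip position and pointing direction used to
    interact with a virtual item, under the pivot constraint."""
    rot = reduced_pose(yaw, pitch, roll)
    forward = rot @ np.array([0.0, 0.0, -1.0])   # device's forward axis
    tip = PIVOT + forward * device_length        # tip constrained to a sphere about PIVOT
    return tip, forward

tip, direction = interactive_pose(yaw=0.2, pitch=-0.1, roll=0.0)
print(tip, direction)
```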
MOTION BLUR COMPENSATION THROUGH EYE TRACKING
A user's eyes, and if desired the user's head, are tracked as the user's gaze follows a moving object on a display. Motion blur of the moving object is keyed to the eye/head tracking. Motion blur of other objects in the frame may also be keyed to the eye/head tracking.
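The keying idea can be sketched as blur proportional to an object's velocity relative to the tracked gaze: an object the gaze is following in smooth pursuit stays sharp, while objects moving relative to the gaze receive more blur. This is a minimal sketch under that assumption; the function name, scale factor, and units are illustrative, not taken from the patent.

```python
def blur_amount(object_velocity, gaze_velocity, scale=0.02, max_blur=8.0):
    """Blur kernel length (in pixels) from velocity relative to the gaze.
    Velocities are (vx, vy) in pixels per second."""
    rel_vx = object_velocity[0] - gaze_velocity[0]
    rel_vy = object_velocity[1] - gaze_velocity[1]
    rel_speed = (rel_vx ** 2 + rel_vy ** 2) ** 0.5
    return min(max_blur, scale * rel_speed)

# The pursued object moves with the gaze -> nearly zero blur.
print(blur_amount(object_velocity=(300, 0), gaze_velocity=(295, 5)))
# A background object static on screen while the gaze sweeps -> heavy blur.
print(blur_amount(object_velocity=(0, 0), gaze_velocity=(300, 0)))
```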
Thermopile array fusion tracking
A simultaneous location and mapping (SLAM)-enabled video game system, a user device of the video game system, and a computer-readable storage medium of the user device are disclosed. Generally, the video game system includes a video game console, a plurality of thermal beacons, and a user device communicatively coupled with the video game console. The user device includes a thermopile array, a processor, and a memory. The user device may receive thermal data from the thermopile array, the thermal data corresponding to a thermal signal emitted from a thermal beacon of the plurality of thermal beacons and detected by the thermopile array. The user device may determine, based on the thermal data, its location in 3D space, and then transmit that location to the video game system.
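A hedged sketch of one way a device might locate itself from thermopile-detected beacons at known positions: if the device's heading is known (for example, from an IMU), each detected beacon yields a world-frame bearing angle, and the location can be solved by intersecting the bearing lines in a least-squares sense. The 2D simplification, the known-heading assumption, and the function names are illustrative, not the patent's procedure.

```python
import numpy as np

def locate(beacon_positions, bearings_rad):
    """beacon_positions: list of known (x, y) beacon coordinates.
    bearings_rad: world-frame angles from the device toward each beacon."""
    a_rows, c_vals = [], []
    for (bx, by), theta in zip(beacon_positions, bearings_rad):
        dx, dy = np.cos(theta), np.sin(theta)   # unit direction device -> beacon
        a_rows.append([dy, -dx])                # constraint: (beacon - p) parallel to direction
        c_vals.append(bx * dy - by * dx)
    p, *_ = np.linalg.lstsq(np.array(a_rows), np.array(c_vals), rcond=None)
    return p  # estimated (x, y) of the device

beacons = [(0.0, 0.0), (4.0, 0.0), (0.0, 3.0)]
# Bearings consistent with the device sitting at (1.0, 1.0):
angles = [np.arctan2(by - 1.0, bx - 1.0) for bx, by in beacons]
print(locate(beacons, angles))   # ~ [1.0, 1.0]
```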
Multipoint SLAM capture
“Feature points” in “point clouds” that are visible to multiple respective cameras (i.e., aspects of objects imaged by the cameras) are reported via wired and/or wireless communication paths to a compositing processor which can determine whether a particular feature point “moved” a certain amount relative to another image. In this way, the compositing processor can determine, e.g., using triangulation and recognition of common features, how much movement occurred and where any particular camera was positioned when a later image from that camera was captured. Thus, “overlap” of feature points in multiple images is used so that the system can close the loop to generate a SLAM map. The compositing processor, which may be implemented by a server or other device, generates the SLAM map by merging feature point data from multiple imaging devices.
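A minimal sketch of the merging step: if "overlap" is approximated by feature points that carry the same ID in two cameras' point clouds, the second camera's pose relative to the first can be recovered with a Kabsch-style rigid alignment over the shared points, and the remaining points can then be brought into one composite map. This is an illustrative reading of the abstract, not the patent's exact procedure; the feature-ID matching and the function names are assumptions.

```python
import numpy as np

def align(points_a, points_b):
    """Rigid transform (R, t) mapping points_b onto points_a (both (N, 3))."""
    ca, cb = points_a.mean(axis=0), points_b.mean(axis=0)
    h = (points_b - cb).T @ (points_a - ca)
    u, _, vt = np.linalg.svd(h)
    d = np.sign(np.linalg.det(vt.T @ u.T))        # guard against reflections
    r = vt.T @ np.diag([1, 1, d]) @ u.T
    t = ca - r @ cb
    return r, t

def merge_maps(cloud_a, cloud_b):
    """cloud_a, cloud_b: dicts of feature_id -> 3D position from each camera."""
    shared = sorted(set(cloud_a) & set(cloud_b))  # overlapping feature points
    r, t = align(np.array([cloud_a[i] for i in shared], float),
                 np.array([cloud_b[i] for i in shared], float))
    merged = dict(cloud_a)
    for fid, p in cloud_b.items():                # bring camera B's points into A's frame
        merged.setdefault(fid, r @ np.asarray(p, float) + t)
    return merged, (r, t)                         # composite map plus camera B's relative pose

cam_a = {1: (0, 0, 0), 2: (1, 0, 0), 3: (0, 1, 0), 4: (0, 0, 1)}
cam_b = {1: (2, 0, 0), 2: (2, 1, 0), 3: (1, 0, 0), 4: (2, 0, 1), 9: (5, 5, 0)}
merged, pose_of_b = merge_maps(cam_a, cam_b)
print(sorted(merged))   # feature 9, seen only by camera B, is now in camera A's frame
```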
CONTROLLER FOR COMPUTER ENTERTAINMENT SYSTEM
A controller for a computer entertainment system including a control box (104) for executing and running a computer game and causing game footage to be displayed on a screen (510—FIG. 3), said control box (104) being configured to receive control signals representative of user manipulation of said game and dynamically adjust said game footage accordingly; said controller comprising a hand-held gamepad (102) and an inertial sensor (106), separate from, and configured for data communication with, said gamepad (102); said hand-held gamepad (102) comprising at least one control input device and including a processor (114—FIG. 5) for generating control signals for manipulating a game running on said control box (104) in response to user manipulation of said at least one control input device and causing said control signals to be transmitted to said control box (104); said inertial sensor (106) being configured to be mounted in or on a user-worn garment such that user movement causes corresponding movement of said inertial sensor (106); said processor (114—FIG. 5) being further configured to receive signals from said inertial sensor (106) representative of user movement, convert said signals into control signals for manipulating said computer game, said control signals being of the same format as control signals generated in response to user manipulation of a respective control input device, and cause said control signals to be transmitted to said control box (104).
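A hedged sketch of the signal-path idea: movement reported by the garment-mounted inertial sensor is converted into control signals of the same format as those produced by a gamepad input device (here, an analog-stick message), so the control box need not distinguish the two sources. The message layout, the lean-to-stick mapping, and the names are illustrative assumptions, not the patent's protocol.

```python
from dataclasses import dataclass

@dataclass
class ControlSignal:
    source: str      # which control input the control box believes produced it
    axis_x: float    # normalized -1.0 .. 1.0
    axis_y: float

def from_stick(x, y):
    """Control signal generated by user manipulation of the left analog stick."""
    return ControlSignal(source="left_stick", axis_x=x, axis_y=y)

def from_inertial(roll_deg, pitch_deg, max_tilt=30.0):
    """Convert body lean (roll/pitch from the inertial sensor) into the same format."""
    clamp = lambda v: max(-1.0, min(1.0, v / max_tilt))
    return ControlSignal(source="left_stick", axis_x=clamp(roll_deg), axis_y=clamp(pitch_deg))

def transmit(signal: ControlSignal):
    """Stand-in for transmitting the control signal to the control box."""
    print(f"{signal.source}: ({signal.axis_x:+.2f}, {signal.axis_y:+.2f})")

transmit(from_stick(0.4, -0.1))       # ordinary gamepad input
transmit(from_inertial(12.0, -3.0))   # user lean, identical in format
```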
Display Screen Front Panel of HMD for Viewing by Users Viewing the HMD Player
A method for providing an image of an HMD user to a non-HMD user includes receiving a first image of the user, including the user's facial features, captured by an external camera when the user is not wearing a head mounted display (HMD). A second image capturing a portion of the facial features of the user when the user is wearing the HMD is received. Image overlay data is generated by mapping contours of facial features captured in the second image to contours of corresponding facial features captured in the first image. The image overlay data is forwarded to the HMD for rendering on a second display screen that is mounted on a front face of the HMD.
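A minimal sketch of the mapping step: landmarks of facial features visible in the second image (user wearing the HMD) are matched to the corresponding landmarks in the first image (user without the HMD), and a 2D affine transform fitted between them serves as the overlay data used to place the reference face region on the HMD's front display. The landmark choice and the affine model are illustrative assumptions, not the patent's method.

```python
import numpy as np

def fit_affine(src_pts, dst_pts):
    """Least-squares 2D affine transform mapping src_pts onto dst_pts (N >= 3)."""
    src = np.asarray(src_pts, dtype=float)
    dst = np.asarray(dst_pts, dtype=float)
    a = np.hstack([src, np.ones((len(src), 1))])   # rows of [x, y, 1]
    params, *_ = np.linalg.lstsq(a, dst, rcond=None)
    return params.T                                 # 2x3 affine matrix

def apply_affine(m, pts):
    pts = np.asarray(pts, dtype=float)
    return (m[:, :2] @ pts.T).T + m[:, 2]

# Landmarks (e.g., eye corners, nose bridge) in the no-HMD reference image...
first_image_pts = [(120, 80), (180, 82), (150, 130)]
# ...and the same features as captured while the HMD is worn.
second_image_pts = [(60, 40), (90, 41), (75, 65)]

overlay_transform = fit_affine(first_image_pts, second_image_pts)
print(apply_affine(overlay_transform, [(150, 95)]))  # where that feature lands on the front screen
```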
Dynamic Entering and Leaving of Virtual-Reality Environments Navigated by Different HMD Users
Systems and methods for processing operations for head mounted display (HMD) users to join virtual reality (VR) scenes are provided. A computer-implemented method includes providing a first perspective of a VR scene to a first HMD of a first user and receiving an indication that a second user is requesting to join the VR scene provided to the first HMD. The method further includes obtaining real-world position and orientation data of the second HMD relative to the first HMD and then providing, based on said data, a second perspective of the VR scene. The method also provides that the first and second perspectives are each controlled by respective position and orientation changes while viewing the VR scene.
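A hedged sketch of anchoring the joining user's perspective: the second HMD's virtual pose can be taken as the first user's virtual pose composed with the measured real-world offset between the two HMDs, so both perspectives remain consistent as each user moves. The 2D pose (x, y, heading) and the function names below are simplifying assumptions, not the patent's formulation.

```python
import math
from dataclasses import dataclass

@dataclass
class Pose:
    x: float
    y: float
    heading: float  # radians

def compose(base: Pose, offset: Pose) -> Pose:
    """Apply a relative offset (expressed in base's frame) to a base pose."""
    cos_h, sin_h = math.cos(base.heading), math.sin(base.heading)
    return Pose(
        x=base.x + cos_h * offset.x - sin_h * offset.y,
        y=base.y + sin_h * offset.x + cos_h * offset.y,
        heading=base.heading + offset.heading,
    )

# First user's perspective in the VR scene.
first_vr_pose = Pose(x=5.0, y=2.0, heading=math.pi / 2)
# Real-world position/orientation of the second HMD relative to the first.
relative_real = Pose(x=1.5, y=0.0, heading=-math.pi / 4)
# Second user's perspective is anchored off the first.
second_vr_pose = compose(first_vr_pose, relative_real)
print(second_vr_pose)
```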