Multi-human tracking system and method with single Kinect for supporting mobile virtual reality application

11009942 · 2021-05-18

Abstract

The invention discloses a multi-human tracking system and method with a single Kinect for supporting mobile virtual reality applications. The system can complete the real-time tracking of users occluded to different degrees with a single Kinect capture device, ensuring a smooth and immersive experience for players. The method utilizes the principle that, under certain lighting conditions, a user's shadow is not occluded even when the user is, and converts the calculation of the occluded user's motion into the problem of solving the movement of the user's shadow. It can accurately detect the position of each user, rather than just predicting the user's position, thereby actually realizing tracking.

Claims

1. A multi-user tracking method for supporting virtual reality applications in which a plurality of users move in a physical space that corresponds to one or more virtual spaces in the virtual reality applications, the multi-user tracking method comprising: (1) recording a background image of the physical space with a single Kinect depth camera; (2) assuming that a number of users participating in an initialization scene is N.sub.k, a current time k=1, and a state of occluding is Occ=0, and acquiring an initial position and an initial orientation of viewpoints of the one or more users in the one or more virtual spaces in the virtual reality applications; (3) if k=1, then jumping to step (8), otherwise jumping to step (4); (4) calculating the number of users N.sub.k that the single depth camera identifies in a current image, and recording a rotation angle of each user via a terminal gyroscope worn by each user; (5) judging a current state of the number of users to determine whether the number of users is unchanged from a previous state of the number of users; if N.sub.k=N.sub.k−1, jumping to step (6), otherwise jumping to step (7); (6) according to a state of occluding Occ, judging a state of a physical space, if Occ=0, then the physical space is in a non-occluding state, calling a non-occluding method to calculate a user's physical position, then jumping to step (8); otherwise, the physical space is in a state of continuous occluding, then calling a continuous occluding state method to calculate a physical position of the user, and jumping to step (8); (7) if N.sub.k<N.sub.k−1, then the physical space is in an occluding appearing state, calling an occluding-appearing state method to calculate the physical position of the user and setting Occ=1, jumping to step (8); otherwise the physical space is in an occluding disappearing state, calling an occluding-disappearing state method to calculate the physical position of the user and setting Occ=0, jumping to step (8); (8) mapping the calculated physical position of the user to the one or more virtual space coordinates for coordinating a spatial consistency between a physical movement of the one or more users and a virtual image viewed by the one or more users; (9) according to the user's physical position and a rotation angle of the terminal gyroscope, determining the user's field of view and viewpoint; (10) changing the initial position and the initial orientation of the one or more users in the one or more virtual spaces in the virtual reality applications based on the determined position and the determined field of view and viewpoint; (11) adding one to k; (12) determining whether a game in a respective virtual reality application is over; if so, ending, otherwise jumping to step (3).

2. The multi-user tracking method according to claim 1, wherein when the physical space is determined as being in a non-occluding state in step (6), the user position calculation method comprises the following steps: (6-1) recording the user's physical position information as P.sub.k={p.sub.k.sup.i|i=1, 2, . . . Nu}, and a corresponding user number identification (ID) UID={uID.sup.i|i=1, 2, . . . Nu}, where p.sub.k.sup.i represents physical position information of the i-th user at k moment, uID.sup.i represents the ID of the user i, and Nu represents the number of users at a current moment; (6-2) updating in real time the corresponding user's physical position according to the user number ID of each user, such that if the ID of the user i is currently u and the position is pos, and
u=uID.sup.j (j=1, 2, . . . Nu), then p.sub.k.sup.j=pos.

3. The multi-user tracking method according to claim 1, wherein when the physical space is determined as being in an occluding-continuing state in step (6), the user position calculation method comprises the following steps: determining an occluded user's position by tracking a movement of a shadow of an occluded user based on skeleton information that is obtained through the single depth camera, and gyroscope sensor data.

4. The multi-user tracking method according to claim 3, further comprising the steps of: assuming that the occluded user is p.sup.j at time k, and the user blocking p.sup.j is p.sup.i, determining a search rectangle area of a shadow of the user p.sup.j based on the physical position of the user p.sup.i obtained by the single depth camera, the position of a light source, and the physical relationship between p.sup.i and p.sup.j, determining a difference between the foot position of the user and the starting position of the shadow search box, subtracting the real-time color image captured by the single depth camera from the acquired background image to obtain a silhouette of the shadow of the user, calculating a center position of the shadow based on the obtained silhouette of the shadow of the user, determining whether a movement direction of the occluded user p.sup.j has changed based on the gyroscope sensor data, determining whether a physical position of the occluded user p.sup.j has moved based on a change in the shadow position of the adjacent frame, and determining the occluded user's physical position based on the determined movement direction and the determined physical position.

5. The multi-user tracking method according to claim 4, further comprising the steps of: changing a size of a search rectangular box in real time based on the physical position of the user p.sup.j and the light source, determining a difference between a color image captured in real time and a background image obtained by initialization in a shadow search rectangular box, if the difference is greater than a preset threshold, the generated image is identified as a foreground image, and if the difference is not greater than the preset threshold, there is no user shadow in the area of the search rectangular box.

6. The multi-user tracking method according to claim 1, wherein when the physical space is determined as being in the occluding-appearing state in step (7), the user position calculation method comprises the following steps: assuming that the user's ID information detected at time k is curID, searching the user number information set UID detected at time k−1 for the occluded user in the physical space, determining the movement direction of the occluded user p.sup.j based on the gyro sensor data, and calculating the physical position of the occluded user at time k based on the determined movement direction and a motion amplitude.

7. The multi-user tracking method according to claim 1, wherein when the physical space is determined as being in the occluding-disappearing state in step (7), the user position calculation method comprises performing the following steps: (7-2-1) selecting a position calculation method according to the user's occluding mark, and if: (i) the user is not occluded, jumping to step (7-2-2), or (ii) the user appears again after the occluding, jumping to step (7-2-3); (7-2-2) updating in real time the corresponding user's position based on the ID information of each user; (7-2-3) updating the ID and position information after the user reappears.

8. The multi-user tracking method according to claim 1, wherein the mapping of the calculated user's position to virtual space coordinates in step (8), includes the following steps: (8-1) after positioning the single depth camera, marking a tracking area of the virtual space and measuring four corner positions of the tracking area; (8-2) based on a position of each of the four corner points in the virtual space, calculating a transformation matrix M of a space coordinate system relative to a virtual scene coordinate system; (8-3) at current time k, determining that the position of user j is (posx.sub.k.sup.j, posz.sub.k.sup.j), and that the corresponding position of the user in the virtual scene is (vposx.sub.k.sup.j, vposz.sub.k.sup.j)=(posx.sub.k.sup.j, posz.sub.k.sup.j)*M.

9. The multi-user tracking method according to claim 1, wherein in step (9), a mobile phone is provided in the virtual reality headset, and the virtual reality headset is configured to display a virtual 3D scene, the mobile phone is configured to obtain data of a user's head rotation and data of the user's physical position, and the determination of the user's field of view and viewpoint is made based on the obtained data of the user's head rotation and the obtained data of the user's physical position.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) The accompanying drawings that form a part of the present application are intended to provide a further understanding of the present application. The illustrative embodiments and illustrations of the present application are intended to be illustrative of the present application and do not constitute an improper limitation of the present application.

(2) FIG. 1 is a hardware structural diagram of the present invention;

(3) FIG. 2 is a system architecture diagram of the present invention;

(4) FIG. 3 is a flow chart of the method of the present invention;

(5) FIG. 4(a) and FIG. 4(b) are examples of the present invention applied to a two-person virtual reality maze game.

DETAILED DESCRIPTION OF THE EMBODIMENTS

(6) The invention will now be further described with reference to the accompanying drawings and embodiments.

(7) It should be noted that the following detailed description is illustrative and is intended to provide further description of the present application, and unless otherwise indicated, all the technical and scientific terms used herein have the same meaning as commonly understood by those of ordinary skill in the corresponding field.

(8) It should be noted that the terminology used herein is for the purpose of describing the specific implementation only and is not intended to limit the exemplary embodiments according to the present application. As used herein, unless the context clearly dictates otherwise, the singular forms are intended to include the plural. In addition, it should be understood that, when terms “comprise” and/or “include” are used in the description, they indicate the existence of features, steps, operations, equipment, components and/or the combination thereof.

(9) As described in the background art, most existing methods use a user detection model to detect the user and then track the user. However, for severely or even fully occluded users, effective tracking cannot be realized. In order to solve the above technical problem, the present application proposes a single Kinect-based multi-user tracking system which supports mobile virtual reality application, and designs an occluding-handling method which integrates multiple sensing data clues to process the occluding and to track in real time the positions of users occluded to different degrees. The method can be divided into four levels:

(10) The first level is the information acquisition layer, which acquires the number of users, user skeletal data, gyroscope data and user shadow information from the input information.

(11) The second level is the information analysis layer, which judges the current state of the system according to the information obtained. The present invention divides the system state into four categories: the non-occluding state, the occluding-appearing state, the occluding-continuing state, and the occluding-disappearing state.

(12) The third level is the decision-making selection layer, which designs different tracking methods for the above-mentioned four system states and invokes the appropriate tracking method according to the current state of the system.

(13) The fourth level is the application layer. Based on the multi-clue decision fusion method, the positions of users occluded to different degrees are calculated, and the obtained physical coordinates of each user are mapped to virtual space coordinates to ensure consistency between the user's real feeling of movement and the image seen in the virtual device.

(14) The method incorporates various types of information, such as mobile phone gyroscope data, color image information, and depth image information, to compensate for the loss of data caused by occluding. For the occluding-continuing state, the present invention proposes a new detection model based on the user's shadow. Under certain lighting conditions, when the user is occluded, the user's shadow is not occluded. Based on this, the problem of solving the occluded user's movement is converted into the problem of calculating the movement of the user's shadow. The position information of the occluded user is calculated by establishing a motion model and a direction model of the occluded user's shadow, so that the position of a severely occluded user can be detected and calculated, rather than merely estimated as in previous methods.

(15) A multi-human tracking system and method with a single Kinect for supporting mobile virtual reality applications is proposed, wherein the system includes a user tracking subsystem and a user experience subsystem. The user tracking subsystem includes a user image capture module, a terminal sensor information capture module, a system current state judgment module, a positioning realizing module and a virtual-or-real position mapping module; the user experience subsystem includes a three-dimensional display module and an interactive module for interacting with the virtual space.

(16) The user image capturing module acquires the user's color image information and the identified user skeletal data through the Kinect somatosensory camera, and provides input data for the system current state judgment module;

(17) The terminal sensor information capturing module obtains the rotation information of the mobile phone gyroscope so as to obtain the user's orientation information, and provides input data for the positioning realizing module and the three-dimensional display module;

(18) The system current state judgment module judges the status of the current system by using the information provided by the user image capturing module, determining the state of the current system by referring to the number of users identified at two adjacent moments: the non-occluding state, the occluding-appearing state, the occluding-continuing state, or the occluding-disappearing state, to provide the basis for the positioning realizing module;

(19) The positioning realizing module realizes the calculation of the user's position by selecting different tracking algorithms according to the current state of the system;

(20) The virtual-or-real position mapping module is used to map the physical coordinates of the user calculated by the positioning realizing module to the virtual space coordinates to ensure the spatial consistency between the real feeling of the user's movement and the image seen in the virtual device;

(21) Through the three-dimensional display module, the user sees a scene with a three-dimensional sense through the headset virtual reality glasses. Then, according to the head rotation captured by the terminal sensor information capture module and the position of the user obtained by the virtual-or-real position mapping module, the user's field of view (FOV) and viewpoint are tracked, to determine the targets in the current FOV and the position and orientation of the viewpoint.

(22) The interactive module realizes the interaction between the user and the virtual objects: it determines the instruction issued to a virtual object, interprets it, gives the corresponding feedback result, and presents the virtual world scene to the user through the virtual reality glasses.

(23) A single-Kinect-based multi-user tracking method that supports mobile virtual reality games includes the following steps (a consolidated sketch of this loop is given after the list):

(24) (1) opening the Kinect capture device, recording the background image information, and connecting the terminal with the capture device;

(25) (2) assuming that the number of users participating in the initialization scene is N.sub.k, the current time k=1, and the tag indicating whether occluding happens in the system Occ=0;

(26) (3) if k=1, then jumping to step (8), otherwise jumping to step (4);

(27) (4) calculating the number of users N.sub.k that Kinect can identify currently, and recording the rotation angle of the terminal gyroscope;

(28) (5) judging the current state of the system according to the number of users identified at adjacent times: if N.sub.k=N.sub.k−1, jumping to step (6), otherwise jumping to step (7);

(29) (6) according to the occluding tag Occ, judging the state of the system: if Occ=0, then the system is in a non-occluding state, calling the non-occluding method to calculate the user's position, then jumping to step (8); otherwise, the system is in the state of continuous occluding, then calling the continuous occluding state method to calculate the position of the user, and jumping to step (8);

(30) (7) if N.sub.k<N.sub.k−1, then the system is in the occluding appearing state, calling an occluding-appearing state method to calculate the position of the user and setting Occ=1, jumping to step (8); otherwise the system is in the occluding disappearing state, calling the occluding-disappearing state method to calculate the position of the user and setting Occ=0, jumping to step (8);

(31) (8) mapping the calculated user's position to the virtual space coordinates to ensure the spatial consistency between the real feeling of the user in movement and the image seen in the virtual device;

(32) (9) according to the user's position obtained in step (8) and the rotation angle of the terminal gyroscope, rendering the user's field of view and viewpoint, and realizing an immersive experience through virtual reality glasses;

(33) (10) adding one to k;

(34) (11) determining whether the game is over, if so, finishing the game, otherwise jumping to step (3).
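The four-state dispatch described in steps (1)-(11) can be summarized as a short control loop. The following is a minimal Python sketch, not the patent's own code: the callables passed in (count_users, read_gyro, the four state handlers, map_to_virtual, render, game_over) are hypothetical stand-ins for the modules described above.

```python
def tracking_loop(frames, count_users, read_gyro,
                  non_occ, occ_cont, occ_appear, occ_disappear,
                  map_to_virtual, render, game_over):
    occ = 0              # occluding tag Occ, step (2)
    prev_n = None        # number of users N_{k-1}
    positions = {}
    for k, frame in enumerate(frames, start=1):
        n = count_users(frame)        # step (4): users Kinect identifies now
        angle = read_gyro()           # terminal gyroscope rotation angle
        if k > 1:                     # step (3): k = 1 goes straight to (8)
            if n == prev_n:           # step (5): user count unchanged
                positions = (non_occ(frame) if occ == 0        # step (6)
                             else occ_cont(frame, angle, positions))
            elif n < prev_n:          # step (7): occluding appears
                positions, occ = occ_appear(frame, angle, positions), 1
            else:                     # occluding disappears
                positions, occ = occ_disappear(frame, positions), 0
        prev_n = n
        render(map_to_virtual(positions), angle)   # steps (8)-(9)
        if game_over():               # step (11); k advances with the loop
            break
```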

(35) The user position calculation method used in step (6) when the system is in a non-occluding state includes the following steps:

(36) (6-1-1) Initialization phase. According to the skeleton information provided by the Kinect SDK, recording the user position information P.sub.k={p.sub.k.sup.i|i=1, 2, . . . , Nu}, and the corresponding user number ID UID={uID.sup.i|i=1, 2, . . . Nu}, where p.sub.k.sup.i represents the position information of the i-th user at k moment, uID.sup.i represents the ID of the user i, and Nu represents the number of users at the current moment.

(37) (6-1-2) According to the ID of each user, updating in real time the corresponding user's position. Assuming that the ID of the user i is u currently, the position is pos, if u=uID.sup.j(j=1, 2, . . . Nu), then p.sub.k.sup.j=pos.
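As an illustration of this ID-keyed update, a minimal Python sketch follows; the data layout (a dict from uID to position, and skeleton entries carrying .id and .position fields) is an assumption of the sketch, not part of the patent.

```python
# Sketch of steps (6-1-1)/(6-1-2): positions are kept in a dict keyed by the
# Kinect user ID, so each frame's skeletons update the matching entry.
def update_positions(p_k, skeletons):
    """p_k maps uID^j -> p_k^j; skeletons are the users Kinect sees now."""
    for s in skeletons:
        u, pos = s.id, s.position     # current ID u and position pos
        if u in p_k:                  # u == uID^j for some tracked user j
            p_k[u] = pos              # p_k^j = pos
    return p_k
```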

(38) In step (6), when the system is in an occluding-continuing state, a tracking method is designed from the perspective of using the user's shadow: the movement of the shadow is used in place of the movement of the occluded user, and the skeleton information obtained by Kinect, the user shadow information in the color image, and the sensor data (gyroscope data) are integrated to calculate the occluded user's position. The method specifically includes the following steps:

(39) Assuming that the occluded user is p.sup.j at time k, and the user occluding p.sup.j is p.sup.i.

(40) (6-2-1) According to the position of the user p.sup.i obtained by Kinect, the position of the light source, and the physical relationship between p.sup.i and p.sup.j, the search rectangle area of the user p.sup.j's shadow is determined; the rectangular area's length is h and the width is w.

(41) We assign the point A (posfx.sup.i, posfz.sup.i) as the foot position of the user p.sup.i, and the point B (possx, possz) represents the beginning point of the shadow-searching area. Then we have:
possx=posfx.sup.i+disx
possz=posfz.sup.i+disz

(42) Wherein (disx, disz) represents the relative positional relationship between points A and B, and disz=0,

(43) $$disx = \begin{cases} 120, & posx^{i} < \delta \\ 100, & posx^{i} \geq \delta \end{cases}$$

(44) That is, disx takes a different value according to the position of the user p.sup.i relative to the light source, with δ=0.0. Wherein (posx.sup.i, posz.sup.i) represents the position information of the user p.sup.i.

(45) In addition, depending on the position of the user p.sup.i relative to the light source, the size of the search rectangular box is also changed in real time:

(46) $$\begin{cases} h = 400,\ w = 450, & posx^{i} < \delta \\ h = 400,\ w = 320, & posx^{i} \geq \delta \end{cases}$$
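A small sketch of the search-box computation in step (6-2-1), under the piecewise rules above; the foot coordinates come from the unoccluded user p^i, and the function and parameter names are ours, not the patent's.

```python
DELTA = 0.0  # threshold δ from the text

def shadow_search_box(posfx_i, posfz_i, posx_i):
    """Return (possx, possz, h, w): the start point B and the size of the
    shadow search rectangle, per the piecewise rules for disx, h, and w."""
    disx = 120 if posx_i < DELTA else 100   # offset from foot point A to B
    disz = 0
    h = 400
    w = 450 if posx_i < DELTA else 320      # box width also depends on p^i
    return posfx_i + disx, posfz_i + disz, h, w
```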

(47) (6-2-2) The background image is subtracted from the color images captured by Kinect in real time to get the shadow of the occluded user. If the difference is greater than the preset threshold, the generated image is considered to be a foreground image and marked as black. If the difference is not greater than the preset threshold, then there is no user shadow in the search area. The center position cPos(cposx, cposz) of the shadow is calculated from the resulting shadow silhouette:

(48) $$cposx = \frac{\sum_{c \in C} x_{c}}{|C|}, \qquad cposz = \frac{\sum_{c \in C} z_{c}}{|C|}$$

(49) Wherein C represents the set of points that belong to the shadow.
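Step (6-2-2) amounts to thresholded background subtraction inside the search box followed by a centroid. A NumPy sketch follows; the image layout (rows indexed by z, color channels summed in the difference) and the threshold value are assumptions of the sketch.

```python
import numpy as np

def shadow_centroid(color_frame, background, box, threshold=40):
    """Difference the live color frame against the stored background inside
    the search box; pixels whose difference exceeds the threshold form the
    shadow point set C, whose centroid is (cposx, cposz)."""
    x0, z0, h, w = box
    live = color_frame[z0:z0 + h, x0:x0 + w].astype(np.int16)
    back = background[z0:z0 + h, x0:x0 + w].astype(np.int16)
    mask = np.abs(live - back).sum(axis=-1) > threshold  # foreground pixels
    if not mask.any():
        return None                     # no user shadow in the search area
    zs, xs = np.nonzero(mask)           # coordinates of the points in C
    return xs.mean() + x0, zs.mean() + z0   # centroid in image coordinates
```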

(50) (6-2-3) Determine the movement direction of the occluded user p.sup.j according to the mobile phone gyro sensor data:

(51) $$a = \begin{cases} 1, & O^{j} \in (t_{1}, t_{2}) \\ -1, & O^{j} \in (t_{3}, t_{4}) \\ 0, & \text{otherwise} \end{cases} \qquad b = \begin{cases} 1, & O^{j} \in (t_{5}, t_{6}) \cup (t_{7}, t_{8}) \\ -1, & O^{j} \in (t_{9}, t_{10}) \\ 0, & \text{otherwise} \end{cases}$$

(52) a and b are the movement marks of the user from front to rear and from left to right, respectively, and O.sup.j is the gyroscope rotation angle of the user p.sup.j. The parameters t1, t2, t3, t4, t5, t6, t7, t8, t9, t10 are the reference values we set for the mobile phone gyroscope rotation directions: t1=70, t2=110, t3=70, t4=110, t5=0, t6=20, t7=320, t8=360, t9=160, t10=200. The user's movement direction is divided into front, rear, left and right; when the user's gyroscope data falls within one of these ranges, the user is considered to be moving in the corresponding direction.

(53) (6-2-4) Determine whether the occluded user is moving according to the shadow position changes between adjacent frames:

(54) $$fmx_{k} = \begin{cases} 1, & |cposx_{k} - cposx_{k-k_{0}}| > \theta_{1} \\ 0, & \text{otherwise} \end{cases} \qquad fmz_{k} = \begin{cases} 1, & |cposz_{k} - cposz_{k-k_{0}}| > \theta_{2} \\ 0, & \text{otherwise} \end{cases}$$

(55) fmx.sub.k and fmz.sub.k indicate the motion marks of the user's shadow, wherein (cposx.sub.k, cposz.sub.k) and (cposx.sub.k−k0, cposz.sub.k−k0) represent the position information of the occluded user's shadow at time k and time k−k.sub.0, respectively; k.sub.0=10, θ.sub.1=3, θ.sub.2=6.

(56) (6-2-5) Calculate the position of the occluded user:
posx.sub.k.sup.j=posx.sub.k−1.sup.j+fmx.sub.k*a*S
posz.sub.k.sup.j=posz.sub.k−1.sup.j+fmz.sub.k*b*S
(posx.sub.k.sup.j, posz.sub.k.sup.j) is the position of the occluded user at time k, and S is the user's movement stride, S=0.01.
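Steps (6-2-3) through (6-2-5) combine into one update per frame. The sketch below mirrors the formulas with the constants quoted in the text (note the text gives the same interval (70, 110) for both t1..t2 and t3..t4, so the −1 branch of a is unreachable as quoted); the open-interval test and the function shape are our assumptions.

```python
T = dict(t1=70, t2=110, t3=70, t4=110, t5=0, t6=20,
         t7=320, t8=360, t9=160, t10=200)       # gyroscope reference values
K0, THETA1, THETA2, S = 10, 3, 6, 0.01          # k0, θ1, θ2, stride S

def occluded_user_step(o_j, c_k, c_k0, prev):
    """o_j: gyroscope angle O^j; c_k, c_k0: shadow centroids at times k and
    k-k0; prev: (posx, posz) of the occluded user at time k-1."""
    # direction marks a (front/rear) and b (left/right), per (6-2-3)
    a = 1 if T['t1'] < o_j < T['t2'] else (-1 if T['t3'] < o_j < T['t4'] else 0)
    b = (1 if T['t5'] < o_j < T['t6'] or T['t7'] < o_j < T['t8']
         else (-1 if T['t9'] < o_j < T['t10'] else 0))
    # motion marks of the shadow, per (6-2-4)
    fmx = 1 if abs(c_k[0] - c_k0[0]) > THETA1 else 0
    fmz = 1 if abs(c_k[1] - c_k0[1]) > THETA2 else 0
    # position update, per (6-2-5)
    return prev[0] + fmx * a * S, prev[1] + fmz * b * S
```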

(57) The user position calculation method used in step (7) when the system is in the occluding-appearing state includes the following steps:

(58) (7-1-1) Assume that the user ID information detected at time k is curID; the user ID information set UID at k−1 is searched for the occluded user. If uID.sup.j∈UID and uID.sup.j∉curID (j=1, 2, . . . Nu), then the occluded user is p.sup.j, his ID is uID.sup.j, and the occluding mark f.sup.j=1.
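The ID bookkeeping in step (7-1-1) is a set difference between the IDs tracked at k−1 and those detected at k. A brief sketch, with the container types assumed:

```python
def find_occluded(uid_prev, cur_ids):
    """uid_prev: user IDs tracked at time k-1; cur_ids: IDs detected at k.
    Returns the missing IDs and the occluding marks f^j."""
    missing = set(uid_prev) - set(cur_ids)          # uID^j ∈ UID, ∉ curID
    flags = {u: 1 if u in missing else 0 for u in uid_prev}
    return missing, flags
```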

(59) (7-1-2) Determine the movement direction of the user p.sup.j according to the mobile phone gyro sensor data:

(60) $$a = \begin{cases} 1, & O^{j} \in (t_{1}, t_{2}) \\ -1, & O^{j} \in (t_{3}, t_{4}) \\ 0, & \text{otherwise} \end{cases} \qquad b = \begin{cases} 1, & O^{j} \in (t_{5}, t_{6}) \cup (t_{7}, t_{8}) \\ -1, & O^{j} \in (t_{9}, t_{10}) \\ 0, & \text{otherwise} \end{cases}$$

(61) a and b are the movement marks of the user from front to rear and from left to right, respectively. The parameters t1, t2, t3, t4, t5, t6, t7, t8, t9, t10 are the reference values we set for the mobile phone gyroscope rotation directions: t1=70, t2=110, t3=70, t4=110, t5=0, t6=20, t7=320, t8=360, t9=160, t10=200. The user's movement direction is divided into front, rear, left and right; when the user's gyroscope data falls within one of these ranges, the user is considered to be moving in the corresponding direction.

(62) (7-1-3) Calculate the position (posx.sub.k.sup.j, posz.sub.k.sup.j) of the occluded user at time k:
posx.sub.k.sup.j=posx.sub.k−1.sup.j+fmx.sub.k*a*S
posz.sub.k.sup.j=posz.sub.k−1.sup.j+fmz.sub.k*b*S

(63) Then set the occluding occurrence marker Occ=1. S represents the user's movement stride, S=0.01.

(64) The user position calculation method used in step (7) when the system is in the occluding-disappearing state includes the following steps:

(65) (7-2-1) Different position calculation methods are selected according to the user's occluding mark f.sup.i. If f.sup.i=0, the user has not been occluded, then jump to step (7-2-2); if f.sup.i=1, the user appears again after the occluding, then jump to step (7-2-3);

(66) (7-2-2) According to the ID information of each user, update in real time the corresponding user's position. Assume that the ID of the current user i is u and the position is pos; if u=uID.sup.j (j=1, 2, . . . Nu), then p.sub.k.sup.j=pos.

(67) (7-2-3) Update the ID and position information after the user reappears. Assume that the ID information after the user reappears is v and the position is apos; then uID.sup.i=v, p.sub.k.sup.i=apos. At the same time, set f.sup.i=0 and Occ=0 to mark that the occluding has disappeared.
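For completeness, a sketch of the reappearance bookkeeping in step (7-2-3); the mutable dicts are our own representation, not the patent's:

```python
def on_reappear(uid, p_k, f, i, v, apos):
    """User i reappears with detected ID v at position apos."""
    uid[i] = v       # uID^i = v
    p_k[i] = apos    # p_k^i = apos
    f[i] = 0         # clear the occluding mark f^i
    return 0         # new value of Occ: occluding has disappeared
```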

(68) Step (8) maps the calculated user position to the virtual space coordinates, and includes the following steps:

(69) (8-1) After Kinect is arranged, mark the tracking area and measure the four corner positions of the tracking area.

(70) (8-2) According to the positions of the four corner points in the virtual scene space, the transformation matrix M of the Kinect space coordinate system relative to the virtual scene coordinate system is calculated.

(71) (8-3) Assume that at the current time k the position of user j is (posx.sub.k.sup.j, posz.sub.k.sup.j); the corresponding position of the user in the virtual scene is (vposx.sub.k.sup.j, vposz.sub.k.sup.j)=(posx.sub.k.sup.j, posz.sub.k.sup.j)*M.
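A sketch of steps (8-1) through (8-3) follows. The patent only states that M is computed from the four measured corner correspondences; here M is estimated by least squares on homogeneous 2D (x, z) coordinates, which is one plausible realization under that assumption, not necessarily the patented one.

```python
import numpy as np

def fit_transform(phys_corners, virt_corners):
    """phys_corners, virt_corners: 4x2 arrays of matching (x, z) corners of
    the tracking area in Kinect space and in the virtual scene."""
    P = np.hstack([np.asarray(phys_corners, float), np.ones((4, 1))])  # 4x3
    M, *_ = np.linalg.lstsq(P, np.asarray(virt_corners, float), rcond=None)
    return M                            # 3x2 matrix: rotation/scale + offset

def to_virtual(posx, posz, M):
    """(vposx, vposz) = [posx, posz, 1] * M."""
    return tuple(np.array([posx, posz, 1.0]) @ M)
```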

(72) In step (9), the mobile phone is placed in the virtual reality glasses, and the user can see the three-dimensional scene through the virtual reality glasses. According to the user's head rotation captured by the mobile phone sensor information capturing module and the user position obtained by the virtual-or-real position mapping module, the user's field of view and viewpoint are tracked to determine the targets in the current field of view, and the position and orientation of the viewpoint.

(73) In a typical embodiment of the present application, as FIG. 1 shows, the key equipment required for the present invention includes mobile virtual reality glasses, Kinect sensors, and auxiliary light sources. Users wear the virtual reality glasses, whose screen generation and rendering are handled by the connected mobile phone. Kinect maps the user's position and gestures into the virtual world to achieve an immersive experience.

(74) As shown in FIG. 2, the present application proposes a single Kinect-based multi-user tracking system which supports mobile virtual reality application, and designs a decision fusion method based on multiple clues to track the position of users occluded to different degrees. The method can be divided into four levels:

(75) The first level is the information acquisition layer, which acquires the number of users, user skeletal data, gyroscope data and user shadow information from the input information.

(76) The second level is the information analysis layer, which judges the current state of the system according to the information obtained. The present invention divides the system state into four categories: the non-occluding state, the occluding-appearing state, the occluding-continuing state, and the occluding-disappearing state.

(77) The third level is the decision-making selection layer, which designs different tracking methods for the above-mentioned four system states and invokes the appropriate tracking method according to the current state of the system.

(78) The fourth level is the application layer. Based on the multi-clue decision fusion method, the positions of users occluded to different degrees are calculated, and the obtained physical coordinates of each user are mapped to virtual space coordinates to ensure consistency between the user's real feeling of movement and the image seen in the virtual device.

(79) The method incorporates various types of information, such as mobile phone gyroscope data, color image information, and depth image information, to compensate for the loss of data caused by occluding.

(80) As FIG. 3 shows, the operation of the present invention includes the following steps:

(81) (1) opening the Kinect capture device, recording the background image information, and connecting the terminal with the capture device;

(82) (2) assuming that the number of users participating in the initialization scene is N.sub.k, the current time k=1, and the tag indicating whether occluding happens in the system Occ=0;

(83) (3) if k=1, then jumping to step (8), otherwise jumping to step (4);

(84) (4) calculating the number of users N.sub.k that Kinect can identify currently, and recording the rotation angle of the terminal gyroscope;

(85) (5) judging the current state of the system according to the number of users identified at adjacent times: if N.sub.k=N.sub.k−1, jumping to step (6), otherwise jumping to step (7);

(86) (6) according to the occluding tag Occ, judging the state of the system: if Occ=0, then the system is in a non-occluding state, calling the non-occluding method to calculate the user's position, then jumping to step (8); otherwise, the system is in the state of continuous occluding, then calling the continuous occluding state method to calculate the position of the user, and jumping to step (8);

(87) (7) if N.sub.k<N.sub.k−1, then the system is in the occluding appearing state, calling an occluding-appearing state method to calculate the position of the user and setting Occ=1, jumping to step (8); otherwise the system is in the occluding disappearing state, calling the occluding-disappearing state method to calculate the position of the user and setting Occ=0, jumping to step (8);

(88) (8) mapping the calculated user's position to the virtual space coordinates to ensure the spatial consistency between the real feeling of the user in movement and the image seen in the virtual device;

(89) (9) according to the user's position obtained in step (8) and the rotation angle of the terminal gyroscope, rendering the user's field of view and viewpoint, and realizing an immersive experience through virtual reality glasses;

(90) (10) k=k+1;

(91) (11) determining whether the game is over, if so, finishing the game, otherwise jumping to step (3).

(92) FIG. 4(a) and FIG. 4(b) show an example of the present invention applied to a two-person virtual reality maze game. When occluding happens, by adopting the present invention, the position of the occluded player can be continuously tracked to ensure the smoothness and immersive feeling of the player experience. FIG. 4(a) shows the shadow movement of the occluded user, and FIG. 4(b) shows the movement of the user in the virtual scene as seen through the virtual reality glasses.

(93) The description above is merely a preferred embodiment of the present application and is not intended to limit the present application. It will be apparent to those skilled in the art that various changes and modifications can be made herein. Any modifications, equivalent substitutions, improvements, and the like made within the spirit and principle of the present application are intended to be included within the protection scope of the present application.

(94) Although the specific embodiments of the present invention have been described with reference to the accompanying drawings, it is not intended to limit the protection scope of the invention. It would be understood by those skilled in the art that various modifications or variations made without any creative effort on the basis of the technical solution of the present invention are within the protection scope of the present invention.