METHOD, DEVICE AND SYSTEM FOR IMMERSING A USER IN A VIRTUAL REALITY APPLICATION
20230236662 · 2023-07-27
Inventors
- Richard Guignon (CHÂTILLON CEDEX, FR)
- Sébastien Poivre (CHÂTILLON CEDEX, FR)
- Gildas Belay (CHÂTILLON CEDEX, FR)
CPC classification
- G06F3/011 (PHYSICS)
- G02B27/0179 (PHYSICS)
- G02B2027/0187 (PHYSICS)
- G02B27/0093 (PHYSICS)
Abstract
A method which makes it possible to immerse a user in a virtual reality application is described. The method includes controlling the movement of an avatar of a user in a synthetic scene and controlling the movements of a video acquisition system depending on the detected movements of the head of the user. The method also includes playing back content on a screen of a virtual reality helmet of the user, the content being a video stream acquired by the video acquisition system if the position of the eyes of the avatar is in a synthetic object whose volume can be included in a volume in which the video acquisition system is likely to move.
Claims
1. A method for immersing a user in a virtual reality application, the method including: monitoring a displacement of an avatar of a user in a synthetic scene; monitoring displacements of a video acquisition system as a function of detected displacements of a head of the user, the orientation of the avatar being monitored as a function of the orientation of the head of the user; and upon a determination that a position of the eyes of the avatar is in a synthetic object whose volume is able to be comprised in a volume in which said video acquisition system is likely to move, rendering a content on a screen of a virtual reality helmet of the user, said content being a video stream acquired by said video acquisition system.
2. The method of claim 1, further comprising, upon a determination that the position of the eyes of the avatar is outside the synthetic object, rendering the synthetic scene on said screen.
3. The method of claim 1, further comprising, upon a determination that the position of the eyes of the avatar is close to the limit of the synthetic object, rendering on said screen a transition content between the video stream acquired by said video acquisition system and the synthetic scene.
4. The method of claim 3, wherein the transition content is a synthetic image of the color of said synthetic object.
5. The method of claim 3, wherein the transition content is a fade between the video stream acquired by said video acquisition system and the synthetic scene.
6. A device for immersing a user in a virtual reality application, the device including: at least one processor; and at least one non-transitory computer readable medium comprising instructions stored thereon which when executed by the at least one processor configure the device to implement a method comprising: monitoring a displacement of an avatar of a user in a synthetic scene; monitoring displacements of a video acquisition system as a function of detected displacements of a head of said user, the orientation of the avatar being monitored as a function of the orientation of the head of the user; and rendering a content on a screen of a virtual reality helmet of the user, said content being a video stream acquired by said video acquisition system if a position of the eyes of the avatar is in a synthetic object whose volume is able to be comprised in a volume in which said video acquisition system is likely to move.
7. A system including: sensors for detecting displacements of a head of a user; a video acquisition system; a virtual reality helmet; and the immersion device of claim 6 configured to: monitor displacements of said video acquisition system and render a content on a screen of said virtual reality helmet, said content being a video stream acquired by said video acquisition system if the position of the eyes of an avatar of the user monitored by said device is in a synthetic object whose volume can be comprised in a volume in which said video acquisition system is likely to move.
8. The system of claim 7, wherein said video acquisition system (CAM) includes two cameras.
9. A non-transitory computer readable medium having stored thereon instructions which, when executed by a processor, cause the processor to implement the method of claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0036] Other characteristics and advantages of the present invention will become apparent from the description given below, with reference to the appended drawings, which illustrate one exemplary embodiment without any limitation.
DETAILED DESCRIPTION OF THE PARTICULAR EMBODIMENTS
[0066] A reference frame linked to the video acquisition system is denoted REP.sub.CAM.
[0067] In one embodiment represented in the drawings, the video acquisition system CAM includes two cameras CAM.sub.1 and CAM.sub.2 mounted on a robotic articulated arm.
[0068] This system CAM can be remotely piloted via a network NET to track the movements of the head of a user USR. This network is, for example, a high-performance network of the 5G type, with very low latency and very high speed, allowing the video streams and the movement commands of the articulated arm to be transmitted in real time.
[0069] For this purpose, and as represented in the drawings, a network of sensors CAP detects the position POS.sub.HD and the orientation OR.sub.HD of the head of the user USR, who wears a virtual reality helmet HLM equipped with a screen SCR.
[0070] If the virtual reality helmet HLM is able to track the user’s gaze, the focus point of each camera CAM.sub.1, CAM.sub.2 can be varied.
[0071] The virtual reality application includes a synthetic scene SS, represented in the drawings.
[0072] An avatar AVT of the user USR can move around in the synthetic scene SS.
[0073] In the embodiment described here with reference to the drawings, the displacement volume VOL.sub.CAM of the video acquisition system and the virtual volume VV of the synthetic object are both parallelepipedic.
[0074] These volumes could both have another shape, for example a spherical or an ovoid shape.
[0075] These volumes can also be of different shapes. For example, the displacement volume VOL.sub.CAM of the video acquisition system can be parallelepipedic and the virtual volume VV can be spherical.
[0076] The volume of the synthetic object VV can be comprised in the volume VOL.sub.CAM in which said video acquisition system CAM is likely to move.
[0077] The dimensions of the virtual volume VV are slightly smaller than those of the volume VOL.sub.CAM in which the video acquisition system CAM can move. The virtual volume VV can be part of the volume VOL.sub.CAM. This characteristic also prevents the robotic arm from reaching its end stops when the eyes of the avatar AVT reach the limit of the virtual volume VV.
[0078] The position and the orientation of the eyes of the avatar AVT in a reference frame REP.sub.EVV linked to the virtual volume are denoted POS.sub.AVT and OR.sub.AVT, respectively.
[0079] The reference frame REP.sub.CAM linked to the video acquisition system and the reference frame REP.sub.EVV linked to the virtual volume VV are matched so that any point of the virtual volume VV corresponds to a point of the displacement volume VOL.sub.CAM of the acquisition system CAM.
[0080] In this example, the faces of the volume VOL.sub.CAM are denoted F.sub.i and the corresponding faces of the volume VV are denoted f.sub.i.
[0081] In the same way, the reference frame REP.sub.EVV linked to the virtual volume VV and the reference frame REP.sub.CAP linked to the sensor network CAP are matched.
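By way of purely illustrative example, such a matching of frames can be sketched in a few lines of code. The Python sketch below assumes axis-aligned parallelepipedic volumes with hypothetical dimensions; the names Box, vv_to_cam and MARGIN are illustrative and do not come from the description.

```python
from dataclasses import dataclass

MARGIN = 0.05  # metres; illustrative safety margin, not from the description

@dataclass
class Box:
    """Axis-aligned volume given by its minimum and maximum corners."""
    lo: tuple
    hi: tuple

# Hypothetical dimensions: the displacement volume VOL_CAM is a 2 m cube;
# the virtual volume VV has slightly smaller dimensions, so the robotic arm
# never reaches its end stops when the avatar's eyes reach the limit of VV.
VOL_CAM = Box(lo=(0.0, 0.0, 0.0), hi=(2.0, 2.0, 2.0))
VV = Box(lo=(0.0, 0.0, 0.0), hi=tuple(h - 2 * MARGIN for h in VOL_CAM.hi))

def vv_to_cam(p_evv: tuple) -> tuple:
    """Map a point of VV (frame REP_EVV) to VOL_CAM (frame REP_CAM).

    The frames are matched 1:1 with VV inset by MARGIN on every side, so
    any point of VV corresponds to a reachable point of VOL_CAM.
    """
    return tuple(x + MARGIN for x in p_evv)

# The eyes of the avatar at the limit of VV map strictly inside VOL_CAM.
pos_cam = vv_to_cam(VV.hi)  # (1.95, 1.95, 1.95): the faces F_i are never reached
```

A 1:1 mapping with an inset preserves the metric scale of the user's displacements while guaranteeing that the command sent to the arm never reaches the faces F.sub.i of the volume VOL.sub.CAM.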
[0082] Thus, the synthetic object VV materializes the volume covered by the displacement of the acquisition system and allows transmission start and end steps to be associated with it. This makes the transmission more natural and more comfortable for the user, who leaves the synthetic object when the acquisition system reaches the limit of its displacement volume, thus ending the transmission of the video provided by the acquisition system.
[0083] The main steps of an immersion method P.sub.IMM according to one particular embodiment will now be described.
[0084] It will be assumed that the user turns on his virtual reality helmet during a step E05.
[0085] The synthetic scene SS of the virtual reality application is rendered on the screen SCR of the virtual reality helmet HLM.
[0086] The video acquisition system CAM is turned off; its position can be predefined or arbitrary. It does not transmit any video stream.
[0087] In the embodiment described here, the immersion method includes a process P.sub.CAP to obtain (step E10) the current position POS.sub.HD and the current orientation OR.sub.HD of the head of the user USR in the reference frame REP.sub.CAP linked to the sensor network.
[0088] In the embodiment described here, the immersion method P.sub.IMM includes a process P.sub.AVT to monitor the position POS.sub.AVT and the orientation OR.sub.AVT of the eyes of the avatar AVT in the reference frame REP.sub.EVV linked to the virtual volume VV. This process P.sub.AVT is called the avatar monitoring process.
[0089] In the embodiment described here, when the virtual reality helmet is turned on (step E05), the avatar monitoring process P.sub.AVT positions and orients the eyes of the avatar AVT at a predetermined original position POS.sub.AVT,ORIG and according to a predetermined original orientation OR.sub.AVT,ORIG (step E15), outside the virtual volume VV.
[0090] Then, during a general step E20, the avatar monitoring process P.sub.AVT monitors the position POS.sub.AVT and the orientation OR.sub.AVT of the eyes of the avatar AVT in the reference frame REP.sub.EVV as a function of the position POS.sub.HD and the orientation OR.sub.HD of the head of the user USR in the reference frame REP.sub.CAP.
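Continuing the sketch above, this general step E20 can be illustrated as follows; head_pose and cap_to_evv are hypothetical helpers standing in for the sensor reading of step E10 and for the matching of the frames REP.sub.CAP and REP.sub.EVV.

```python
def cap_to_evv(p_cap: tuple) -> tuple:
    """Hypothetical matching of the frames REP_CAP -> REP_EVV;
    the two frames are assumed here to coincide."""
    return p_cap

def p_avt_step(head_pose) -> tuple:
    """General step E20: slave the eyes of the avatar to the head of the user.

    `head_pose` is a callable returning (POS_HD, OR_HD) in the frame
    REP_CAP, i.e. the result of step E10 of the process P_CAP.
    """
    pos_hd, or_hd = head_pose()
    pos_avt = cap_to_evv(pos_hd)  # the position follows the head
    or_avt = or_hd                # the orientation follows the head
    return pos_avt, or_avt
```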
[0091] In the embodiment described here, the immersion method P.sub.IMM includes a process P.sub.VID for monitoring the video stream of the video acquisition system CAM. This process P.sub.VID is called the video monitoring process.
[0092] In the embodiment described here, the video monitoring process P.sub.VID includes a step to determine whether the current position POS.sub.AVT of the eyes of the avatar is in the virtual volume VV. If this is the case, the video monitoring process P.sub.VID sends (step E40) a monitoring message POS to the video acquisition system CAM so that it sends to the virtual reality helmet the video stream acquired by the camera CAM.
[0093] In the embodiment described here, the video monitoring process P.sub.VID sends (step E45) a monitoring message POS to the video acquisition system CAM so that it stops sending the video stream acquired by the camera CAM to the virtual reality helmet when the eyes of the avatar leave the virtual volume VV and move away from its limit by a determined distance.
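A minimal sketch of this start/stop logic of the process P.sub.VID, reusing the Box and VV definitions above; inside, distance_to, send_pos_message and the value of STOP_DISTANCE (which stands in for the "determined distance") are illustrative assumptions.

```python
STOP_DISTANCE = 0.3  # metres; stands in for the "determined distance"

def inside(box: Box, p: tuple) -> bool:
    """True if the point p lies inside the axis-aligned box."""
    return all(lo <= x <= hi for x, lo, hi in zip(p, box.lo, box.hi))

def distance_to(box: Box, p: tuple) -> float:
    """Euclidean distance from p to the box (0.0 if p is inside)."""
    d2 = sum(max(lo - x, 0.0, x - hi) ** 2
             for x, lo, hi in zip(p, box.lo, box.hi))
    return d2 ** 0.5

def send_pos_message(start: bool) -> None:
    """Illustrative stub for the monitoring message POS sent to CAM."""
    print("POS: start stream" if start else "POS: stop stream")

def p_vid_step(streaming: bool, pos_avt: tuple) -> bool:
    """One iteration of the video monitoring process P_VID.

    Starts the stream (step E40) when the eyes of the avatar enter VV,
    and stops it (step E45) only once they have moved STOP_DISTANCE away
    from the limit of VV, which avoids flicker at the boundary.
    """
    if not streaming and inside(VV, pos_avt):
        send_pos_message(start=True)   # step E40
        return True
    if streaming and distance_to(VV, pos_avt) > STOP_DISTANCE:
        send_pos_message(start=False)  # step E45
        return False
    return streaming
```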
[0094] In the embodiment described here, the immersion method P.sub.IMM includes a process P.sub.CAM for monitoring the position of the video acquisition system CAM. This process P.sub.CAM is called the camera monitoring process.
[0095] In the embodiment described here, when the virtual reality helmet is turned on (step E05), the camera monitoring process P.sub.CAM sends (step E25) a monitoring message POS to the video acquisition system CAM so that the robotic arm positions and orients this system CAM according to a predetermined original position POS.sub.CAM,ORIG and a predetermined original orientation OR.sub.CAM,ORIG in the reference frame REP.sub.CAM linked to the video acquisition system.
[0096] In the embodiment described here, the camera monitoring process P.sub.CAM includes a step to determine whether the current position POS.sub.AVT of the eyes of the avatar is outside and close to the virtual volume VV. If this is the case, the camera monitoring process P.sub.CAM sends (step E30) a monitoring message POS to the video acquisition system CAM to turn it on, and so that the robotic arm places it at the limit of the volume VOL.sub.CAM at a position and an orientation corresponding to those of the eyes of the avatar AVT. For example, if the avatar AVT is outside and in the vicinity of the northern limit of the virtual volume VV, oriented along a south-west direction, the video acquisition system is positioned at the northern limit of the volume VOL.sub.CAM and oriented along the south-west direction.
[0097] As a variant, the camera monitoring process P.sub.CAM sends (step E30) a monitoring message POS to the video acquisition system CAM so that it places itself at the limit of the volume VOL.sub.CAM at a predetermined position and orientation.
[0098] In the embodiment described here, the camera monitoring process P.sub.CAM includes a step to determine whether the current position POS.sub.AVT of the eyes of the avatar is in the virtual volume VV. If this is the case, the camera monitoring process P.sub.CAM sends (step E35) a monitoring message to the video acquisition system CAM so that the robotic arm modifies its position POS.sub.CAM and its orientation OR.sub.CAM in the reference frame REP.sub.CAM as a function of the position POS.sub.AVT and of the orientation OR.sub.AVT of the eyes of the avatar in the reference frame REP.sub.EVV.
[0099] As long as the eyes of the avatar AVT are in the virtual volume VV, the robotic arm adapts the position and the orientation of the video acquisition system CAM based on the received information.
[0100] If the eyes of the avatar AVT leave the virtual volume VV and move away from it by a determined distance, the camera monitoring process P.sub.CAM turns off the video acquisition system CAM and ends the communication established with this system CAM.
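The behaviour of the process P.sub.CAM described in the preceding paragraphs can likewise be sketched as a small state machine, reusing the helpers above; move_arm and power_off_camera are hypothetical stand-ins for the monitoring messages sent to the robotic arm and to the acquisition system.

```python
def clamp_to_limit(box: Box, p: tuple) -> tuple:
    """Nearest point of the box to p (on its surface when p is outside)."""
    return tuple(min(max(x, lo), hi) for x, lo, hi in zip(p, box.lo, box.hi))

def move_arm(pos_cam: tuple, or_cam: tuple) -> None:
    """Illustrative stub for the message positioning the robotic arm."""
    print(f"arm -> {pos_cam}, {or_cam}")

def power_off_camera() -> None:
    """Illustrative stub turning the acquisition system off."""
    print("CAM off")

def p_cam_step(pos_avt: tuple, or_avt: tuple) -> str:
    """One iteration of the camera monitoring process P_CAM.

    Returns an illustrative state name: "tracking" inside VV (step E35),
    "at_limit" when outside but close (step E30), "off" otherwise.
    """
    if inside(VV, pos_avt):
        # Step E35: the arm tracks the eyes of the avatar in REP_CAM.
        move_arm(vv_to_cam(pos_avt), or_avt)
        return "tracking"
    if distance_to(VV, pos_avt) <= STOP_DISTANCE:
        # Step E30: power on and wait at the limit of VOL_CAM, at a
        # position and orientation corresponding to the eyes of the avatar.
        move_arm(vv_to_cam(clamp_to_limit(VV, pos_avt)), or_avt)
        return "at_limit"
    # Eyes far from VV: turn the system off and end the communication.
    power_off_camera()
    return "off"
```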
[0101] In the embodiment described here, the immersion method P.sub.IMM includes a process P.sub.AFF for rendering a content on the screen SCR of the virtual reality helmet. This process P.sub.AFF is called the rendering process.
[0102] In the embodiment described here, the rendering process P.sub.AFF includes a step to determine whether the current position of the eyes of the avatar POS.sub.AVT is far from the virtual volume VV. If this is the case, the rendering process P.sub.AFF displays (step E50) the synthetic scene SS.
[0103] In the embodiment described here, the rendering process P.sub.AFF includes a step to determine whether the current position of the eyes of the avatar POS.sub.AVT is in the virtual volume VV and relatively far from the limit of this volume. If this is the case, the rendering process P.sub.AFF displays (step E55) the video stream received from the video acquisition system.
[0104] In a known manner, if the video acquisition system consists of two cameras, each eye receives the stream from the corresponding camera.
[0105] In the embodiment described here, if the current position of the eyes of the avatar POS.sub.AVT is outside or inside the virtual volume VV and close to the limit of this volume, then the rendering process P.sub.AFF displays (step E60) a transition content between the video stream received from the video acquisition system and the representation of the virtual volume VV, for example over the entire screen SCR.
[0106] This transition content can for example be a standby synthetic image of the color of the virtual volume VV, or a fade between the synthetic scene SS and the video stream received from the acquisition system CAM.
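The selection made by the rendering process P.sub.AFF over steps E50, E55 and E60 can be summarized by the following sketch, again reusing the helpers above; blend and boundary_distance are illustrative, and the cross-fade is only one of the transition contents mentioned.

```python
def boundary_distance(box: Box, p: tuple) -> float:
    """Distance from an interior point p to the nearest face of the box."""
    return min(min(x - lo, hi - x) for x, lo, hi in zip(p, box.lo, box.hi))

def blend(scene, video, t: float):
    """Illustrative cross-fade; t in [0, 1] weights scene versus video."""
    return ("fade", scene, video, t)

def p_aff_step(pos_avt: tuple, scene, left_stream, right_stream, t: float):
    """One iteration of the rendering process P_AFF."""
    if inside(VV, pos_avt):
        if boundary_distance(VV, pos_avt) <= STOP_DISTANCE:
            # Step E60: transition content near the limit, inside VV.
            return blend(scene, (left_stream, right_stream), t)
        # Step E55: one stream per eye when two cameras are used.
        return left_stream, right_stream
    if distance_to(VV, pos_avt) <= STOP_DISTANCE:
        # Step E60: transition content near the limit, outside VV.
        return blend(scene, (left_stream, right_stream), t)
    # Step E50: far from VV, display the synthetic scene SS.
    return scene
```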
[0120] In the embodiment described here, and as represented in the drawings, the immersion method is implemented by a device DIMM for immersing a user in a virtual reality application.
[0123] This device includes:
[0124] a module MCA for monitoring the displacement of an avatar AVT of a user in a synthetic scene;
[0125] a module MCV for monitoring the displacements of a video acquisition system as a function of the detected displacements of the head of said user; and
[0126] a module MRC for rendering a content on a screen SCR of a virtual reality helmet of the user USR, said content being a video stream acquired by said video acquisition system if the position of the eyes of the avatar is in a synthetic object whose volume can be comprised in a volume in which said video acquisition system is likely to move.
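Structurally, the cooperation of these three modules can be sketched as follows; only the module names MCA, MCV and MRC come from the description, the method names being illustrative.

```python
class ImmersionDevice:
    """Minimal structural sketch of the device DIMM."""

    def __init__(self, mca, mcv, mrc):
        self.mca = mca  # monitors the displacement of the avatar
        self.mcv = mcv  # monitors the displacements of the acquisition system
        self.mrc = mrc  # renders the content on the screen SCR

    def on_head_pose(self, pos_hd, or_hd):
        # Detected head displacements drive the avatar, which in turn
        # drives the camera and the choice of the rendered content.
        pos_avt, or_avt = self.mca.update(pos_hd, or_hd)
        self.mcv.update(pos_avt, or_avt)
        return self.mrc.render(pos_avt)
```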
[0127] This device DIMM can have the hardware architecture of a computer ORD, as represented in the drawings.
[0128] This computer ORD includes in particular a processor 10, a random access memory (RAM) 11, a read-only memory (ROM) 12, a rewritable non-volatile memory 14 of the Flash type, and communication means 13.
[0129] The read-only memory 12 constitutes a medium in accordance with one particular embodiment of the invention. This memory includes a computer program PG in accordance with one particular embodiment of the invention which, when executed by the processor 10, implements an immersion method in accordance with the invention as described above.