METHODS AND DEVICES FOR IMMERSING A USER IN AN IMMERSIVE SCENE AND FOR PROCESSING 3D OBJECTS

20240046559 · 2024-02-08

    Abstract

    A device for immersing a user in an immersive scene is described. The device sends, to a device for processing 3D objects, a request comprising a position and an orientation of an eye of an avatar of the user in the immersive scene and display parameters for displaying a 3D object in the scene. The device for processing 3D objects determines a two-dimensional image representing a point of view of the eye of the avatar on the 3D object and a position of this image in the scene. This two-dimensional image is displayed in the immersive scene in order to represent the object.

    Claims

    1. A method for immersing a user in an immersive scene, the method implemented by an immersion device, the method comprising: sending, to a device for processing three-dimensional objects, a request including a position and an orientation of an eye of an avatar of the user in the immersive scene and display parameters for an object modeled in three dimensions in said scene, the position and orientation of the eye of the avatar being those of an eye of the user or those of a fictitious point placed between two eyes of the user, the display parameters for said object including an identifier, a position, an orientation and a size of said object in a reference frame of the immersive scene; receiving a two-dimensional image and a position of said image in said scene calculated by said device for processing three-dimensional objects, said image representing a point of view of the eye of the avatar on said object; and displaying said two-dimensional image in the immersive scene at said position and taking into account said orientation of the eye of the avatar.

    2. The method of claim 1, said method further comprising: receiving, for at least one pixel of said image, a first distance; determining a second distance corresponding to a sum of said first distance and a distance between said position of the eye of the avatar and said pixel in the immersive scene; and searching for a pixel of another synthetic object of the immersive scene located on a line defined by said position of the eye of the avatar and said pixel and at a distance from said position of the eye of the avatar smaller than said second distance; wherein displaying the two-dimensional image in the immersive scene comprises displaying said pixel of said image or displaying the pixel of said other synthetic object, according to the result of said search.

    3. A method for processing at least one object modeled in three dimensions, the method comprising: receiving, from a device for immersing a user in an immersive scene, a request including a position and an orientation of an eye of an avatar of the user in said immersive scene and display parameters for said object in said scene, the position and orientation of the eye of the avatar being those of an eye of the user or those of a fictitious point placed between two eyes of the user, the display parameters for said object including an identifier, a position, an orientation and a size of said object in a reference frame of the immersive scene; determining a two-dimensional image and a position of said image in said scene, said image representing a point of view of the eye of the avatar on said object; and sending said two-dimensional image and said position of said image in the immersive scene to said immersion device.

    4. The method of claim 3, wherein said image includes a projection of said object on a plane perpendicular to the orientation of the eye of the avatar and centered on the position of the eye of the avatar.

    5. The method of claim 4, wherein said image is a rectangle including said projection and transparent pixels.

    6. The method of claim 3, wherein said position of the image is located: at a predetermined distance from said position of the eye of the avatar, if this distance is smaller than a distance separating the position of the eye of the avatar from that of said object; or between the position of the eye of the avatar and said position of the object otherwise.

    7. The method of claim 3, said method further comprising: calculating, for at least one pixel of said projection, a distance between said pixel and said object, along a direction defined by said position of the eye of the avatar and said pixel; and sending said distance to said immersion device.

    8. A device for immersing a user in an immersive scene, the device comprising: a module for sending, to a device for processing three-dimensional objects, a request including a position and an orientation of an eye of an avatar of the user in the immersive scene and display parameters for an object modeled in three dimensions in said scene, the position and orientation of the eye of the avatar being those of an eye of the user or those of a fictitious point placed between two eyes of the user, the display parameters for said object including an identifier, a position, an orientation and a size of said object in a reference frame of the immersive scene; a module for receiving a two-dimensional image and a position of said image in said scene, calculated by said device for processing three-dimensional objects, said image representing a point of view of the eye of the avatar on said object; and a module for displaying said two-dimensional image in the immersive scene at said position and taking into account said orientation of the eye of the avatar.

    9. A device for processing objects, the device configured to process at least one object modeled in three dimensions, the device comprising: a module for receiving, from a device for immersing a user in an immersive scene, a request including a position and an orientation of an eye of an avatar of the user in said immersive scene and display parameters for said object in said scene, the position and orientation of the eye of the avatar being those of an eye of the user or those of a fictitious point placed between two eyes of the user, the display parameters for said object including an identifier, a position, an orientation and a size of said object in a reference frame of the immersive scene; a module for determining a two-dimensional image and a position of said image in said scene, said image representing a point of view of the eye of the avatar on said object; and a module for sending said two-dimensional image and said position of said image in the immersive scene to said immersion device.

    10. A virtual reality system comprising: the device for immersing a user in an immersive scene of claim 8; and a device for processing objects modeled in three dimensions, the device for immersing a user in an immersive scene and the device for processing objects modeled in three dimensions being interconnected by a communications network, the device for processing objects modeled in three dimensions comprising: a module for receiving, from the device for immersing a user in an immersive scene, the request including the position and the orientation of the eye of the avatar of the user in said immersive scene and the display parameters for said object in said scene, the position and orientation of the eye of the avatar being those of an eye of the user or those of a fictitious point placed between two eyes of the user, the display parameters for said object including the identifier, the position, the orientation and the size of said object in the reference frame of the immersive scene; a module for determining the two-dimensional image and the position of said image in said scene, said image representing the point of view of the eye of the avatar on said object; and a module for sending said two-dimensional image and said position of said image in the immersive scene to said immersion device.

    11. A non-transitory computer-readable medium having stored thereon instructions, which, when executed by a processor, cause the processor to implement the method of claim 1.

    12. A non-transitory computer-readable medium having stored thereon instructions, which, when executed by a processor, cause the processor to implement the method of claim 3.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0066] Other features and advantages of the present invention will become apparent from the description given below, with reference to the appended drawings, which illustrate a non-limiting exemplary embodiment. In the figures:

    [0067] FIG. 1 schematically represents a virtual reality system in accordance with the invention;

    [0068] FIG. 2 schematically represents an example of hardware architecture of an immersion device in accordance with one particular embodiment of the invention;

    [0069] FIG. 3 represents an example of an immersive scene;

    [0070] FIG. 4 represents, in the form of a flowchart, the main steps of an immersion method and the main steps of a method for processing objects in accordance with one particular example of implementation of the invention;

    [0071] FIG. 5 schematically represents an example of hardware architecture of a device for processing objects in accordance with one particular embodiment of the invention;

    [0072] FIG. 6 represents an example of obtaining a two-dimensional image of an object according to one particular embodiment of the invention;

    [0073] FIG. 7 represents this two-dimensional image;

    [0074] FIG. 8 illustrates the distances of a depth matrix according to one particular embodiment of the invention;

    [0075] FIG. 9 represents the objects of an immersive scene in accordance with the invention with an avatar;

    [0076] FIG. 10 represents the objects of an immersive scene in accordance with the invention with two avatars;

    [0077] FIG. 11 represents an example of occlusion in an immersive scene in accordance with the invention;

    [0078] FIG. 12 represents the functional architecture of an immersion device in accordance with the invention; and

    [0079] FIG. 13 represents the functional architecture of a device for processing objects in accordance with the invention.

    DESCRIPTION OF EMBODIMENTS OF THE INVENTION

    [0080] FIG. 1 schematically represents a virtual reality system SYS in accordance with the invention.

    [0081] This system SYS includes a device HLM for immersing a user USR in an immersive scene SI and a device DTOC for processing objects, the two devices being interconnected by a communication network NET.

    [0082] In the embodiment described here, the immersion device HLM is a virtual reality headset.

    [0083] In the embodiment described here, the device DTOC for processing objects is comprised in a server SRV.

    [0084] FIG. 2 represents the hardware architecture of the immersion device HLM in one embodiment of the invention. This immersion device includes a screen SCR, a processor 11, a random access memory 12, a read only memory 13, a non-volatile memory 14 and a communication module 15 interconnected by a bus.

    [0085] The read only memory 13 constitutes a recording medium in accordance with the invention, readable by the processor 11 and on which a computer program PROG.sub.HLM in accordance with the invention is recorded, including instructions for executing the steps of the immersion method P.sub.IMM according to the invention, the main steps of which will be described with reference to FIG. 4.

    [0086] This computer program PROG.sub.HLM includes an RV3D module in accordance with the state of the art for displaying, moving and manipulating, in the immersive scene SI, synthetic objects modeled in three dimensions (3D objects).

    [0087] In the example of FIG. 1, the user USR is equipped with the virtual reality headset HLM and a glove GLV. The user moves in a real environment EVR in which one or more sensors CAM are configured to determine the position and orientation of the virtual reality headset HLM and of the glove GLV.

    [0088] FIG. 3 represents an example of an immersive scene SI in which an avatar AVT of the user USR can move.

    [0089] In this example, the immersive scene SI includes three synthetic objects, namely an object TB representing a table, an object representing part of the body of the avatar AVT and an object representing a hand of the avatar AVT.

    [0090] These synthetic objects are simple within the meaning of the invention because the calculations necessary to display, move and manipulate them can be performed by the hardware elements of the immersion device HLM, in particular the processor 11 and the random access memory 12, and by the RV3D module.

    [0091] The immersive scene SI is projected on the screen SCR of the immersion device HLM.

    [0092] The images projected on the screen SCR are calculated in real time to adapt to the movements of the head, and therefore to the point of view, of the user USR.

    [0093] Specifically, in the embodiment described here, the images are calculated at least 72 times per second to prevent the user USR from feeling the effect of kinetosis (or motion sickness).

    [0094] In the embodiment described here, the immersion method P.sub.IMM includes a process P.sub.CAP to obtain (step E10): [0095] the current position POS.sub.HLM and the current orientation OR.sub.HLM of the immersion device HLM of the user USR in the reference frame REP.sub.CAP linked to the network of sensors; and [0096] the current position POS.sub.GLV and the current orientation OR.sub.GLV of the glove GLV of the user USR in the reference frame REP.sub.CAP.

    [0097] In the embodiment described here, the immersion method P.sub.IMM includes a process P.sub.AVT to control: [0098] the position POS.sub.YAVT and the orientation OR.sub.YAVT of an eye of the avatar AVT; and [0099] the position POS.sub.HAVT and the orientation OR.sub.HAVT of the hand of the avatar AVT in a reference frame REP.sub.SI of the synthetic scene.

    [0100] In a manner known to those skilled in the art, the position and orientation of the eye of the avatar can correspond to those of an eye of the user. The invention can be implemented for one eye or independently for each of the two eyes of the user.

    [0101] As a variant, the eye of the avatar can correspond to a fictitious point placed between the two eyes of the user.

    [0102] In the description given below, it will be considered that the position of the eye of the avatar represents, in the immersive scene SI, a point located between the two eyes of the user USR in the real world.

    [0103] In the embodiment described here, when the immersion device HLM is turned on (step E05), the avatar control process P.sub.AVT positions and orients the eye and the hand of the avatar AVT at predetermined original positions and orientations (step E15).

    [0104] Then, during a general step E20, the avatar control process P.sub.AVT controls: [0105] the position POS.sub.YAVT and the orientation OR.sub.YAVT of the eye of the avatar AVT in the reference frame REP.sub.SI according to the position POS.sub.HLM and to the orientation OR.sub.HLM of the immersion device HLM in the reference frame REP.sub.CAP; and [0106] the position POS.sub.HAVT and the orientation OR.sub.HAVT of the hand of the avatar AVT in the reference frame REP.sub.SI according to the position POS.sub.GLV and to the orientation OR.sub.GLV of the glove GLV in the reference frame REP.sub.CAP.
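
    Step E20 amounts to re-expressing poses captured in the sensor reference frame REP.CAP in the scene reference frame REP.SI. The sketch below assumes each pose is a position vector plus a 3x3 rotation matrix and that the rigid transform between the two frames is a known calibration input; all names and numeric values are illustrative assumptions, not taken from the document.

```python
import numpy as np

# Assumed calibration: rigid transform from the sensor frame REP.CAP to the
# scene frame REP.SI (identity rotation and a vertical offset, for example).
R_CAP_TO_SI = np.eye(3)
T_CAP_TO_SI = np.array([0.0, 1.6, 0.0])

def to_scene_frame(pos_cap, orient_cap):
    """Re-express a pose (position, 3x3 rotation) from REP.CAP in REP.SI."""
    return R_CAP_TO_SI @ pos_cap + T_CAP_TO_SI, R_CAP_TO_SI @ orient_cap

# Step E20: the eye of the avatar follows the headset HLM, its hand the glove GLV.
pos_hlm, or_hlm = np.array([0.1, 0.0, 0.2]), np.eye(3)   # reported by the sensors CAM
pos_glv, or_glv = np.array([0.3, -0.4, 0.1]), np.eye(3)
pos_yavt, or_yavt = to_scene_frame(pos_hlm, or_hlm)
pos_havt, or_havt = to_scene_frame(pos_glv, or_glv)
```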

    [0107] The immersion method P.sub.IMM has a process P.sub.AFOS to display 3D synthetic objects in the immersive scene SI.

    [0108] In accordance with the invention, the immersion device HLM operates a different processing depending on whether the 3D object to be displayed is a simple synthetic object or a complex synthetic object.

    [0109] As indicated previously, simple 3D synthetic objects can be processed locally by the immersion device HLM, more precisely by its RV3D module.

    [0110] On the contrary, a complex 3D synthetic object OC is too complex to be processed by the hardware and software elements of the immersion device HLM, in particular the processor 11, the random access memory 12 and the RV3D module.

    [0111] In the embodiment described here: [0112] the models of the simple 3D synthetic objects are stored in the read only memory 13 of the immersion device HLM; and [0113] the models of the complex 3D synthetic objects are stored in a database BDOC of the server SRV.

    [0114] To simplify the description and the figures, a complex synthetic object is represented here by a cube. Those skilled in the art will understand that in practice a cube is a simple synthetic object within the meaning of the invention. A complex synthetic object may be, for example, a synthetic object representing a car.

    [0115] FIG. 5 represents the hardware architecture of the device DTOC for processing objects in one embodiment of the invention. This device includes a processor 21, a random access memory 22, a read only memory 23, a non-volatile memory 24 and a communication module 25 interconnected by a bus.

    [0116] The read only memory 23 constitutes a recording medium in accordance with the invention, readable by the processor 21 and on which a computer program PROG.sub.TOC in accordance with the invention is recorded, including instructions for executing the steps of the method P.sub.TOC for processing synthetic objects according to the invention, the main steps of which will be described with reference to FIG. 4.

    [0117] The communication module 15 of the immersion device HLM and the communication module 25 of the device DTOC are configured to allow the immersion device HLM and the device DTOC to communicate with each other via the network NET.

    [0118] It is assumed that during a step E22, the user USR wishes to display a 3D synthetic object OS in the immersive scene.

    [0119] During a step E25, the immersion device HLM determines whether this synthetic object OS is simple or complex.

    [0120] If the synthetic object is simple, the result of the test E25 is negative and the 3D synthetic object is displayed by the RV3D module for displaying objects in three dimensions in accordance with the state of the art.

    [0121] If the synthetic object OS is complex, the result of the test E25 is positive.
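
    As a sketch of the test E25, an object can be treated as simple when its model is available in the headset's local storage (cf. paragraph [0111]) and as complex otherwise. LOCAL_MODELS and the two handler functions are hypothetical names introduced only for illustration.

```python
# Identifiers of objects whose models sit in the read only memory 13
# (illustrative content; a real device would query its local model store).
LOCAL_MODELS = {"table_TB", "avatar_body", "avatar_hand"}

def render_locally(object_id, pos, orient, size):
    print(f"RV3D module renders {object_id} locally")     # state-of-the-art path

def request_image_from_dtoc(object_id, pos, orient, size):
    print(f"step E30: send request RQ for {object_id}")   # remote processing path

def display_object(object_id, pos, orient, size):
    """Test E25: route the object to the local or remote display path."""
    if object_id in LOCAL_MODELS:        # negative result: simple object
        render_locally(object_id, pos, orient, size)
    else:                                # positive result: complex object OC
        request_image_from_dtoc(object_id, pos, orient, size)
```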

    [0122] The processing of a complex synthetic object OC will now be described in one particular embodiment of the invention. This processing is implemented when the complex object OC must be displayed for the first time or recalculated, typically 72 times per second, to take into account the movements of the head of the user USR, or the manipulation of the synthetic objects by the user.

    [0123] It is assumed that the complex object OC must be displayed: [0124] at a position POS.sub.OC and according to an orientation OR.sub.OC in the immersive scene SI; [0125] with a size SZ.sub.OC.

    [0126] In the embodiment described here, each complex object is modeled at scale 1, and the display size SZ.sub.OC is a number defining a scale factor.

    [0127] During a step E30, the immersion device HLM sends a request RQ to the device DTOC for processing objects, this request including the position POS.sub.YAVT and the orientation OR.sub.YAVT of the eye of the avatar AVT in the reference frame REP.sub.SI of the immersive scene SI and display parameters for the complex object OC in the immersive scene.

    [0128] In the embodiment described here, the display parameters for the complex object OC include the identifier ID.sub.OC of the complex object OC, its position POS.sub.OC and its orientation OR.sub.OC in the reference frame REP.sub.SI of the immersive scene SI and the size SZ.sub.OC of the complex object.
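
    By way of illustration, the request RQ could be serialized as follows. The field names and the JSON encoding are assumptions; only the content (the eye pose plus the four display parameters of paragraph [0128]) comes from the description.

```python
import json

rq = {
    "eye": {
        "pos":    [0.0, 1.6, 0.0],        # POS.YAVT in the reference frame REP.SI
        "orient": [0.0, 0.0, -1.0],       # OR.YAVT, here encoded as a view direction
    },
    "object": {
        "id":     "car_42",               # identifier ID.OC of the complex object
        "pos":    [0.0, 1.0, -5.0],       # POS.OC in REP.SI
        "orient": [0.0, 0.0, 0.0, 1.0],   # OR.OC, here encoded as a quaternion
        "size":   1.0,                    # SZ.OC, a scale factor (paragraph [0126])
    },
}
payload = json.dumps(rq)   # body of the request RQ sent over the network NET
```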

    [0129] This request is received by the device DTOC during a step F30.

    [0130] As represented with reference to FIG. 6, during a step F40, the device DTOC for processing objects defines a plane P perpendicular to the orientation OR.sub.YAVT and positioned between the eye of the avatar AVT and the complex object OC.

    [0131] In the embodiment described here: [0132] the plane P is located at a predetermined distance d.sub.AP from the position POS.sub.YAVT of the eye of the avatar, if this distance is smaller than the distance separating the eye of the avatar AVT from the complex object OC; or [0133] halfway between the eye of the avatar AVT and the complex object OC otherwise.
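
    This placement rule reduces to a comparison between the predetermined distance d.sub.AP and the eye-to-object distance; a minimal sketch, with the value of d.sub.AP assumed arbitrarily:

```python
import numpy as np

def plane_distance(pos_yavt, pos_oc, d_ap=1.0):
    """Distance from the eye of the avatar at which the plane P is placed."""
    d_eye_obj = np.linalg.norm(pos_oc - pos_yavt)
    # predetermined distance d_AP if it places P in front of the object,
    # halfway between the eye and the object otherwise
    return d_ap if d_ap < d_eye_obj else d_eye_obj / 2.0
```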

    [0134] During this step F40, the device DTOC for processing objects performs a projection P.sub.OC of the complex object on the plane P, centered on the position POS.sub.YAVT of the eye of the avatar AVT.

    [0135] During this step F40, the device DTOC for processing objects obtains a two-dimensional image IMG whose contour corresponds to the smallest rectangle of the plane P in which the projection P.sub.OC can be inscribed.
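
    A sketch of the geometry of step F40 under a pinhole model: each point of a (here, hypothetical) model is projected through the eye of the avatar onto the plane P, and the image contour is taken as the axis-aligned extent of the projected points. A real implementation would rasterize the full 3D model; this only illustrates the projection and the enclosing rectangle.

```python
import numpy as np

def project_onto_plane(points, pos_yavt, or_yavt, d_plane):
    """Central projection, centered on the eye of the avatar, of 3D points onto
    the plane P perpendicular to or_yavt at distance d_plane (step F40)."""
    axis = or_yavt / np.linalg.norm(or_yavt)
    projected = []
    for pt in points:
        ray = pt - pos_yavt
        depth = ray @ axis                       # distance along the view axis
        projected.append(pos_yavt + ray * (d_plane / depth))
    projected = np.array(projected)
    # smallest rectangle of P in which the projection P_OC can be inscribed
    return projected, (projected.min(axis=0), projected.max(axis=0))

# Example: a unit cube 5 m in front of the eye, projected on a plane at 1 m.
cube = np.array([[x, y, z] for x in (-.5, .5) for y in (-.5, .5) for z in (4.5, 5.5)])
pts, bounds = project_onto_plane(cube, np.zeros(3), np.array([0., 0., 1.]), 1.0)
```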

    [0136] In the embodiment described here, the pixels of the two-dimensional image IMG which are not part of the projection P.sub.OC of the complex object, in other words the pixels of the background of the image, are transparent.

    [0137] This two-dimensional image IMG is represented in FIG. 7.

    [0138] During a step F50, and as represented in FIG. 8, the device DTOC for processing objects calculates, for each pixel p of the projection P.sub.OC, the distance d.sub.p between this pixel and the complex object OC, along the direction defined by the position POS.sub.YAVT of the eye of the avatar AVT and this pixel p. For the pixels p of the image IMG which are not part of the projection P.sub.OC of the complex object, this distance d.sub.p is considered infinite.

    [0139] The device DTOC for processing objects assembles these distances d.sub.p, one for each pixel of the image IMG, into a matrix called the depth matrix MP.
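
    A sketch of step F50 and of the construction of MP, assuming a per-pixel ray cast. The helpers pixel_position and raycast_distance are hypothetical: the first would return the 3D position of pixel (x, y) on the plane P, the second the distance from that point to the first intersection with the complex object OC along the given direction, or None for a miss.

```python
import numpy as np

def depth_matrix(width, height, pos_yavt, pixel_position, raycast_distance):
    """Build the depth matrix MP: for each pixel p of the image IMG, the
    distance d_p between p and the complex object OC along the direction
    defined by the eye of the avatar and p; infinite for background pixels."""
    mp = np.full((height, width), np.inf)
    for y in range(height):
        for x in range(width):
            p = pixel_position(x, y)             # 3D position of pixel p on P
            d = raycast_distance(p, p - pos_yavt)
            if d is not None:                    # p belongs to the projection P_OC
                mp[y, x] = d
    return mp
```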

    [0140] During a step F60, the device DTOC for processing objects sends, in response to the request RQ, to the immersion device HLM: [0141] the two-dimensional image IMG and the position POS.sub.IMG of the center of the image in the reference frame REP.sub.SI; and [0142] the depth matrix MP.

    [0143] It will be noted that the immersion device HLM knows the orientation of the two-dimensional image IMG, since the image is located in the plane P perpendicular to the orientation OR.sub.YAVT of the eye of the avatar AVT.
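
    Since the image IMG lies in a plane perpendicular to OR.sub.YAVT, the immersion device can reconstruct the quad on which to texture it from the received center position alone; a sketch, assuming the image's world-space width and height are also known and that the view direction is not vertical:

```python
import numpy as np

def image_quad(center, or_yavt, width, height, up=np.array([0., 1., 0.])):
    """Corners of the quad carrying IMG, perpendicular to the eye orientation.
    Assumes or_yavt is not parallel to the up vector."""
    n = or_yavt / np.linalg.norm(or_yavt)        # quad normal = view direction
    right = np.cross(up, n); right /= np.linalg.norm(right)
    v = np.cross(n, right)                       # in-plane vertical axis
    hw, hh = width / 2.0, height / 2.0
    return [center - right * hw - v * hh, center + right * hw - v * hh,
            center + right * hw + v * hh, center - right * hw + v * hh]
```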

    [0144] The immersion device HLM receives this response during a step E60.

    [0145] FIG. 9 represents the objects of the immersive scene SI as present in the non-volatile memory 14 of the immersion device HLM.

    [0146] FIG. 10 represents the objects of the immersive scene with two avatars. It is understood that this scene comprises two two-dimensional images, each corresponding to the point of view of a user on the complex object OC.

    [0147] During a step E70, the immersion device HLM displays the image IMG in the immersive scene SI.

    [0148] In the embodiment described here, this display step takes into account the depth matrix MP.

    [0149] Thus, for each pixel p of the image IMG, the immersion device HLM determines a distance d.sub.YO corresponding to the sum of the distance d.sub.Yp between the eye of the avatar Y.sub.AVT and the pixel p and the distance d.sub.p contained in the depth matrix MP for this pixel. This distance corresponds to the distance that would separate the eye of the avatar AVT from the complex object OC if the latter were displayed in 3D in the immersive scene.

    [0150] The immersion device HLM then determines whether a pixel of another synthetic object OO of the immersive scene SI is located on the line defined by the eye of the avatar Y.sub.AVT and the pixel p, at a distance from the eye Y.sub.AVT smaller than the distance d.sub.YO. If this is the case, this means that this other synthetic object (called the occluding object) would lie between the eye of the avatar and the complex object OC if the latter were displayed in 3D in the immersive scene.

    [0151] If such a pixel exists, the immersion device displays the pixel of the occluding object OO; otherwise, it displays the pixel p of the image IMG.
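
    A sketch of this per-pixel occlusion test of step E70. The helper nearest_occluder is hypothetical: it would return the distance to, and the color of, the closest pixel of any other synthetic object on the (eye, p) line, or None if there is none.

```python
import numpy as np

def composite_pixel(pos_yavt, p_pos, d_p, img_color, nearest_occluder):
    """Choose the displayed value for pixel p of the image IMG ([0149]-[0151])."""
    d_yp = np.linalg.norm(p_pos - pos_yavt)   # distance d_Yp from the eye to p
    d_yo = d_yp + d_p                         # d_YO: virtual eye-to-object distance
    occ = nearest_occluder(pos_yavt, p_pos)   # hypothetical scene query
    if occ is not None and occ[0] < d_yo:     # (distance, color) of occluder OO
        return occ[1]                         # display the occluding object's pixel
    return img_color                          # display the pixel of the image IMG
```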

    [0152] FIG. 11 represents the immersive scene SI, the thumb of the hand HAVT of the avatar constituting an occluding object OO within the meaning of the invention.

    [0153] FIG. 12 represents the functional architecture of an immersion device in accordance with one particular embodiment of the invention. It includes: [0154] a module ME30 configured to send, to a device for processing three-dimensional objects, a request including a position and an orientation of an eye of an avatar of a user in an immersive scene and display parameters for an object modeled in three dimensions in this scene; [0155] a module ME60 configured to receive a two-dimensional image and a position of this image in the scene, calculated by said device for processing three-dimensional objects, this image representing a point of view of the eye of the avatar AVT on the object; and [0156] a module ME70 for displaying this two-dimensional image in the immersive scene at the aforementioned position and taking into account the orientation of the eye of the avatar.

    [0157] FIG. 13 represents the functional architecture of a device for processing objects configured to process at least one object, in accordance with one particular embodiment of the invention. It includes: [0158] a module MF30 for receiving, from a device for immersing a user in an immersive scene, a request including a position and an orientation of an eye of an avatar of the user in the immersive scene and display parameters for the 3D object; [0159] a module MF40 for determining a two-dimensional image and a position of this image in the scene, this image representing a point of view of the eye of the avatar on the object; and [0160] a module MF60 for sending this two-dimensional image and said position of this image in the immersive scene to the immersion device.

    [0161] In the embodiment described here, the two-dimensional image determined by the module MF40 includes a projection of the object on a plane perpendicular to the orientation of the eye of the avatar, and the device for processing objects includes a module MF50 for calculating, for at least one pixel of this projection, a distance between this pixel and the object, along a direction defined by the position of the eye of the avatar and this pixel.