Method for texturing a 3D model

11302073 · 2022-04-12

Abstract

Method for texturing a 3D model of at least one scene (5), comprising: a) meshing, with surface elements (50; 55), a point cloud (45) representing the scene, so as to generate the 3D model, each surface element representing an area of the scene, b) unfolding the 3D model to obtain a 2D model formed of a plane mesh (60a; 60b) formed of polygons (65), each surface element corresponding to a single polygon, and vice versa, and c) for at least one, preferably all, of the surface elements, i) identifying, from an image bank (40a; 40b), the images representing the area of the scene and which have been acquired by a camera the image plane (72a-b) of which has a normal direction, in the corresponding acquisition position, forming an angle (θ_a-b) of less than 10°, preferably less than 5°, better still less than 3°, with a direction (70) normal to the face of the surface element, ii) selecting an image (40a-b) from the identified images, and iii) associating a texture property with a corresponding polygon (65), from a piece of information of a pixel (80; 85) of the selected image which is superimposed on the surface element (55), so as to produce a textured 2D model, and d) producing the textured 3D model by matching the 3D model and the textured 2D model.

Claims

1. A method for texturing a 3D model of at least one scene, the method comprising: a) meshing, with surface elements, a point cloud representing the scene to generate the 3D model, each surface element representing an area of the scene, b) unfolding the 3D model to obtain a 2D model formed of a plane mesh formed of polygons, each surface element corresponding to a single polygon, and vice versa, and c) for at least one of the surface elements, i) identifying, from an image bank, the images representing the area of the scene and which have been acquired by a camera the image plane of which has a normal direction, in the corresponding acquisition position, forming an angle (θ_a-b) of less than 10° with a direction normal to the face of the surface element, ii) selecting an image from the identified images, and iii) associating a texture property with a corresponding polygon, from a piece of information of a pixel of the selected image which is superimposed on the surface element, to produce a textured 2D model, and d) producing the textured 3D model by matching the 3D model and the textured 2D model, wherein the method further comprises: a step i′) between steps i) and ii), wherein sharp images are identified from among the images determined in step i), and a step i″), which is intermediate between steps i′) and ii), wherein the image taken by the camera the image plane of which is the closest to the surface element is identified.

2. The method according to claim 1, comprising a step prior to step a), wherein the scene is discretized in the form of the point cloud from an image bank identical to the image bank used in step i), the image bank comprising images each representing a portion of the scene seen by at least one camera, and wherein at least two images acquired at different positions represent portions of the scene that overlap.

3. The method according to claim 2, wherein the rate of overlap between said two images acquired at different positions is greater than 70%.

4. The method according to claim 1, wherein, in step i), the image is selected whose camera image plane has a normal direction, in the corresponding acquisition position, making the smallest angle with a direction normal to the face of the surface element.

5. The method according to claim 1, wherein prior to step a), the image bank implemented in step i) is generated by moving, in a plurality of acquisition locations, a scanning device comprising at least one camera, and, at each acquisition location, a portion of the scene seen by the scanning device is acquired by taking at least one image by way of the camera, the acquisition locations being selected such that the portions of the scene seen by the scanning device in two respective consecutive acquisition locations at least partially overlap.

6. The method according to claim 1, wherein more than 50 pixels per mm² are at least partially superimposed inside at least one polygon, on the basis of the area of the polygon.

7. The method according to claim 1, wherein a visual representation of the textured 3D digital model of a scene is displayed on a screen of a computer or an augmented reality headset.

8. The method according to claim 1, wherein the images of the image bank comprise metadata comprising, for each pixel of an image, at least one property of the material constituting the portion of the scene imaged by the pixel.

9. The method according to claim 8, wherein the property of the material is a thermal property and/or an electrical property and/or a mechanical property and/or a magnetic property and/or a radiographic property.

10. The method according to claim 1, wherein prior to step a), the discretization of the scene is obtained by photogrammetric processing of images in the image bank.

11. The method according to claim 1, wherein the mesh formed in step a) comprises quadrangular surface elements and/or triangular surface elements.

12. The method according to claim 1, wherein each polygon of the 2D model has a shape and dimensions identical to those of the surface element corresponding thereto in the 3D model.

13. The method according to claim 1, wherein the texture property is the color of the pixel or a piece of metadata associated with the pixel.

14. The method according to claim 1, wherein the angle (θ_a-b) formed is less than 5°.

15. The method according to claim 1, wherein the angle (θ_a-b) formed is less than 3°.

16. The method according to claim 1, wherein step c) is performed for all of the surface elements.

17. A computer for texturing a 3D model of at least one scene, the computer comprising: a. at least one storage which records an image bank and associated metadata, and instructions for texturing the 3D model of the at least one scene, and b. a processor for executing the instructions by being configured to: a) mesh, with surface elements, a point cloud representing the scene to generate the 3D model, each surface element representing an area of the scene, b) unfold the 3D model to obtain a 2D model formed of a plane mesh formed of polygons, each surface element corresponding to a single polygon, and vice versa, and c) for at least one of the surface elements, i) identify, from the image bank, the images representing the area of the scene and which have been acquired by a camera the image plane of which has a normal direction, in the corresponding acquisition position, forming an angle (θ_a-b) of less than 10° with a direction normal to the face of the surface element, ii) select an image from the identified images, and iii) associate a texture property with a corresponding polygon, from a piece of information of a pixel of the selected image which is superimposed on the surface element, to produce a textured 2D model, and d) produce the textured 3D model by matching the 3D model and the textured 2D model, wherein the processor is further configured to: between i) and ii), i′) identify sharp images from among the images determined in i), and, intermediate between i′) and ii), i″) identify the image taken by the camera the image plane of which is the closest to the surface element.

18. A non-transitory computer readable medium having stored thereon a program that when executed by a computer causes the computer to implement a method for texturing a 3D model of at least one scene, the method comprising: a) meshing, with surface elements, a point cloud representing the scene to generate the 3D model, each surface element representing an area of the scene, b) unfolding the 3D model to obtain a 2D model formed of a plane mesh formed of polygons, each surface element corresponding to a single polygon, and vice versa, and c) for at least one of the surface elements, i) identifying, from an image bank, the images representing the area of the scene and which have been acquired by a camera the image plane of which has a normal direction, in the corresponding acquisition position, forming an angle (θ_a-b) of less than 10° with a direction normal to the face of the surface element, ii) selecting an image from the identified images, and iii) associating a texture property with a corresponding polygon, from a piece of information of a pixel of the selected image which is superimposed on the surface element, to produce a textured 2D model, and d) producing the textured 3D model by matching the 3D model and the textured 2D model, wherein the method further comprises: a step i′) between steps i) and ii), wherein sharp images are identified from among the images determined in step i), and a step i″), which is intermediate between steps i′) and ii), wherein the image taken by the camera the image plane of which is the closest to the surface element is identified.

Description

(1) The invention may be better understood on reading the following detailed description of non-restrictive implementations thereof, and on examining the appended drawing, in which:

(2) FIG. 1 represents a scene of an environment to be modeled,

(3) FIG. 2 illustrates an acquisition of the scene,

(4) FIG. 3 represents a point cloud representing an object in the scene,

(5) FIG. 4 represents the meshed object of the modeled scene,

(6) FIGS. 5a and 5b represent a 2D model obtained by a semi-automatic unfolding and by an automatic unfolding by means of the Blender software, respectively,

(7) FIG. 6 illustrates the steps of identifying images and selecting an image for a surface element of the meshed object,

(8) FIG. 7 illustrates step iii) of the association of a texture property with a polygon of the 2D mesh from the pixel information of the selected image,

(9) FIG. 8 represents the textured 2D model obtained from the 2D model of FIG. 5a, and

(10) FIG. 9 is an image representing the display of the textured 3D model produced from the textured 2D model of FIG. 8.

(11) FIG. 1 represents a scene 5 of an environment 10. To illustrate the implementation of the method, the description hereafter focuses more specifically on an object 15 included in the scene. In this case, the object is a vessel having a curved outer envelope 20.

(12) An image acquisition device 25 comprising a camera 30 is arranged in the environment at a plurality of acquisition locations 35a-c. The camera acquires at least one image 40a-c of the scene at each acquisition location. The acquisition locations are selected so that the images acquired at at least two of the locations overlap.

(13) An image bank is thus formed, comprising the set of images of the scene acquired by the image acquisition device.

(14) A point cloud 45 is then generated by photogrammetric processing of the images by means of the Photoscan software, and is then meshed with surface elements 50 of triangular shape. A meshed 3D model of the scene is thus obtained, as illustrated in FIG. 4.
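The meshing of a point cloud with triangular surface elements can be sketched minimally as follows, assuming for simplicity a structured grid of cloud points (a simplification: the example above applies the Photoscan software to an unstructured cloud; the split of each cell into two triangles is one common convention among several):

```python
import numpy as np

def grid_mesh(nx, ny):
    """Toy stand-in for the meshing step: mesh a structured nx-by-ny grid of
    cloud points with triangular surface elements by splitting each grid cell
    into two triangles. Returns an array of vertex-index triples, one row per
    surface element."""
    triangles = []
    for j in range(ny - 1):
        for i in range(nx - 1):
            a = j * nx + i               # top-left corner of the cell
            b, c, d = a + 1, a + nx, a + nx + 1
            triangles.append((a, b, c))  # first triangle of the cell
            triangles.append((b, d, c))  # second triangle of the cell
    return np.array(triangles)
```

For a 3-by-3 grid this yields four cells and hence eight triangular surface elements.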

(15) Thereafter, the texturing of the scene is implemented.

(16) The 3D model is then unfolded to obtain a 2D model 60a formed of a plane mesh comprising two connected 2D meshed surfaces 62a and 62b, as illustrated in FIG. 5a. The surface 62b corresponds to the lateral face of the vessel in the plane and the surface 62a corresponds to the bottom of the vessel unfolded in the plane. The semi-automatic unfolding illustrated in FIG. 5a is achieved by minimizing the number of 2D surfaces.
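The per-element property of the unfolding (each polygon of the 2D model keeping the shape and dimensions of its corresponding surface element) can be sketched as an isometric flattening of one triangle; laying all elements out consistently so as to preserve connectivity, as the semi-automatic unfolding does, is a harder problem not addressed by this sketch:

```python
import numpy as np

def flatten_triangle(p0, p1, p2):
    """Place a 3D triangular surface element in the plane while preserving
    its shape and dimensions (edge lengths). The first vertex is placed at
    the origin and the first edge along the x-axis; returns three 2D points."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    e1 = p1 - p0
    a = np.linalg.norm(e1)          # length of edge p0-p1
    u = e1 / a                      # in-plane x direction
    e2 = p2 - p0
    x = e2 @ u                      # coordinate of p2 along the first edge
    y = np.linalg.norm(e2 - x * u)  # perpendicular offset of p2
    return np.array([0.0, 0.0]), np.array([a, 0.0]), np.array([x, y])
```

A 3-4-5 right triangle in space, for example, flattens to a congruent 3-4-5 triangle in the plane.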

(17) As is clearly apparent, the semi-automatic unfolding illustrated in FIG. 5a not only minimizes the number of 2D surfaces but also better preserves the connectivity of the 3D mesh than the automatic unfolding illustrated in FIG. 5b. FIG. 5b illustrates a 2D model 60b obtained by automatic unfolding by means of the Blender software. The automatic unfolding results in a greater number of surfaces, notably non-connected surfaces. It is also more difficult for the operator to visually associate, in the 2D model 60b, the polygon corresponding to a given surface element of the 3D model.

(18) A texture is then associated with the 2D model.

(19) For illustrative purposes, the focus here is on one of the surface elements modeling the object, referenced 55. The processing detailed below is implemented for each surface element discretizing the scene.

(20) The normal 70 of the surface element, directed outward from the object, is calculated from the positions of the vertices of the element. The images of the bank that represent the area of the scene delimited in the 3D model by the surface element are then identified; in the example illustrated in FIG. 6, these are the images 40a and 40b.
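As a sketch of this normal computation, assuming triangular surface elements whose vertices are wound consistently (the outward orientation then follows from the winding order):

```python
import numpy as np

def surface_element_normal(p0, p1, p2):
    """Unit normal of a triangular surface element, computed from the
    positions of its vertices as the normalized cross product of two edges.
    Whether it points outward or inward depends on the vertex winding."""
    p0, p1, p2 = (np.asarray(p, dtype=float) for p in (p0, p1, p2))
    n = np.cross(p1 - p0, p2 - p0)
    return n / np.linalg.norm(n)
```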

(21) From among the images 40a and 40b, the image or images are identified that have been acquired by a camera the image plane 72a-b of which has a normal direction 74a-b, in the corresponding acquisition position, forming an angle θ_a-b of less than 10° with a direction normal to the face of the surface element.
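A minimal sketch of this identification step, assuming each image of the bank carries the normal direction of its camera's image plane (the `cameras` list and its layout are hypothetical; since normal orientation conventions vary between tools, the unsigned angle between the two directions is used):

```python
import numpy as np

def identify_images(element_normal, cameras, max_angle_deg=10.0):
    """Keep the images whose camera image-plane normal forms an angle of
    less than max_angle_deg with the surface-element normal.
    `cameras` is a list of (image_id, plane_normal) pairs."""
    n = np.asarray(element_normal, dtype=float)
    n = n / np.linalg.norm(n)
    kept = []
    for image_id, plane_normal in cameras:
        m = np.asarray(plane_normal, dtype=float)
        m = m / np.linalg.norm(m)
        # Unsigned angle between the two normal directions, in degrees.
        angle = np.degrees(np.arccos(np.clip(abs(n @ m), 0.0, 1.0)))
        if angle < max_angle_deg:
            kept.append(image_id)
    return kept
```

A camera tilted 5° from the element normal would thus be kept, while one viewing the element edge-on (90°) would be discarded.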

(22) In the example illustrated in FIG. 6, the two images 40a and 40b satisfy the condition mentioned in the preceding paragraph.

(23) Each of these images may be selected and superimposed on the surface element in step ii) of the method. To improve the quality of the texturing, since the images 40a and 40b are both sharp, the image whose image plane is the closest to the surface element is identified. In this case, in FIG. 6, it is the image referenced 40b, which contains the most detail of the area of the scene delimited in the 3D model by the surface element.
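This closest-image selection can be sketched as a point-to-plane distance test, assuming each candidate image (already filtered for sharpness) carries a point of its image plane and the plane normal; the `candidates` layout is hypothetical:

```python
import numpy as np

def select_closest_image(element_centroid, candidates):
    """Among the identified sharp images, select the one whose image plane
    is closest to the surface element. `candidates` is a list of
    (image_id, point_on_plane, plane_normal) tuples; the point-to-plane
    distance from the element centroid serves as the closeness measure."""
    c = np.asarray(element_centroid, dtype=float)

    def plane_distance(entry):
        _, point, normal = entry
        m = np.asarray(normal, dtype=float)
        m = m / np.linalg.norm(m)
        # Absolute distance from the centroid to the image plane.
        return abs((c - np.asarray(point, dtype=float)) @ m)

    return min(candidates, key=plane_distance)[0]
```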

(24) The image 40b is therefore selected for texturing the surface element 55. To this end, the texturing is first performed on the corresponding polygon 65 of the 2D model. To do this, the image 40b is projected onto the 2D model so as to assign to the polygon a texture property derived from the information of the pixels of the image 40b superimposed on the corresponding surface element.

(25) When a pixel 80 of the image is fully superimposed on the polygon, the texture property is defined by associating the color of the pixel with the area having the dimensions of the pixel.

(26) When a pixel of the image is only partially superimposed on the surface element, as is the case, for example, of the pixel referenced 85 in FIG. 7, the texture property of the pixel is assigned to the area having the form of the portion of the pixel covered by the polygon of the 2D model.
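The pixel-to-polygon association can be sketched with a barycentric correspondence between a triangular polygon of the 2D model and its projection in the selected image; here a nearest-pixel lookup stands in for the exact partial-coverage handling described above, so coverage is resolved only at pixel resolution:

```python
import numpy as np

def barycentric(p, a, b, c):
    """Barycentric coordinates of 2D point p in triangle (a, b, c)."""
    v0, v1, v2 = b - a, c - a, p - a
    d00, d01, d11 = v0 @ v0, v0 @ v1, v1 @ v1
    d20, d21 = v2 @ v0, v2 @ v1
    denom = d00 * d11 - d01 * d01
    v = (d11 * d20 - d01 * d21) / denom
    w = (d00 * d21 - d01 * d20) / denom
    return 1.0 - v - w, v, w

def sample_texture(point_2d, poly_2d, tri_image, image):
    """Texture property for a point of the 2D polygon: map the point through
    the barycentric correspondence into the triangle's projection in the
    selected image, then read the superimposed pixel (nearest-pixel lookup)."""
    a, b, c = (np.asarray(p, dtype=float) for p in poly_2d)
    u, v, w = barycentric(np.asarray(point_2d, dtype=float), a, b, c)
    ai, bi, ci = (np.asarray(p, dtype=float) for p in tri_image)
    x, y = u * ai + v * bi + w * ci      # position in image coordinates
    return image[int(round(y)), int(round(x))]
```

Repeating this lookup over the texels of the polygon fills the polygon with the texture of the selected image.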

(27) Thus, each polygon has a texture faithfully reproducing the details of the area of the scene that it models.

(28) The sequence of steps i) to iii) described above is preferably performed for all polygons in the 2D model corresponding to the different surface elements of the 3D model. Thus, a 2D textured mesh is obtained.

(29) The 3D model may then be textured by unambiguously assigning, to each surface element, the texture or textures associated with the corresponding polygon, as illustrated in FIG. 9.

(30) In particular, the storage of the texture properties of each surface element is facilitated, since the positions of the polygons are known. It is therefore not necessary to store the textures in a 3D format in the memory of a computer implementing the method. This limits the need for powerful processing means, such as a graphics card of a computer, for displaying the textured model.

(31) Of course, the invention is not limited to the implementations described and to the example given above, described for illustrative purposes.