Method of generating volume hologram using point cloud and mesh
11733649 · 2023-08-22
Assignee
Inventors
CPC classification
G03H1/0841
PHYSICS
G06T17/20
PHYSICS
G03H1/0808
PHYSICS
G03H1/0866
PHYSICS
H04N13/117
ELECTRICITY
G03H1/2294
PHYSICS
G03H2001/0088
PHYSICS
H04N13/282
ELECTRICITY
G06T19/00
PHYSICS
H04N13/271
ELECTRICITY
International classification
G03H1/08
PHYSICS
G03H1/02
PHYSICS
G06T17/20
PHYSICS
Abstract
Disclosed is a method of generating a volume hologram using a point cloud and a mesh, in which a weight is given to a brightness of a light source according to a direction of a light in order to record a hologram of better quality. The method includes: (a) acquiring multi-view depth and color images; (b) generating point cloud data of a three-dimensional object from the acquired multi-view depth and color images; (c) generating mesh data of the three-dimensional object from the point cloud data of the three-dimensional object; (d) calculating a normal vector of each mesh from the mesh data of the three-dimensional object; (e) extracting three-dimensional data at a user viewpoint from the mesh data of the three-dimensional object by using the normal vector of the mesh; and (f) generating hologram data from three-dimensional data at the user viewpoint.
Claims
1. A method of generating a volume hologram using a point cloud and a mesh, the method comprising: (a) acquiring multi-view depth and color images; (b) generating point cloud data of a three-dimensional object from the acquired multi-view depth and color images; (c) generating mesh data of the three-dimensional object from the point cloud data of the three-dimensional object; (d) calculating a normal vector of each mesh from the mesh data of the three-dimensional object; (e) extracting three-dimensional data at a user viewpoint from the mesh data of the three-dimensional object by using the normal vector of the mesh; and (f) generating hologram data from the three-dimensional data at the user viewpoint, wherein, in the step (f), the hologram data is generated by adjusting a brightness of each point of the object, and wherein the brightness of each point is adjusted by giving a weight to a brightness of a light source such that the weight is given in proportion to an absolute value of an inner product of a direction vector at the user viewpoint and the normal vector of the mesh of the point.
2. The method of claim 1, wherein, in the step (a), the multi-view depth and color images are images taken from all directions through an omnidirectional RGB-D camera system.
3. The method of claim 1, wherein, in the step (b), a matching process of unifying data of the multi-view depth and color images into one coordinate system is performed to generate the point cloud data of the three-dimensional object in one coordinate system.
4. The method of claim 1, wherein, in the step (c), the point cloud data of the three-dimensional object is sampled and converted into the mesh data of the three-dimensional object by using a Delaunay triangulation scheme.
5. A method of generating a volume hologram using a point cloud and a mesh, the method comprising: (a) acquiring multi-view depth and color images; (b) generating point cloud data of a three-dimensional object from the acquired multi-view depth and color images; (c) generating mesh data of the three-dimensional object from the point cloud data of the three-dimensional object; (d) calculating a normal vector of each mesh from the mesh data of the three-dimensional object; (e) extracting three-dimensional data at a user viewpoint from the mesh data of the three-dimensional object by using the normal vector of the mesh; and (f) generating hologram data from the three-dimensional data at the user viewpoint, wherein, in the step (e), a direction vector at the user viewpoint is acquired, an inner product of the direction vector at the user viewpoint and the normal vector of the mesh is calculated to extract only a mesh where a result value of the inner product is a non-negative number, and the extracted mesh serves as the three-dimensional data at the user viewpoint, and wherein the direction vector at the user viewpoint is acquired by setting a Z-axis direction as a basic direction vector, and rotating the basic direction vector by using a yaw, a pitch, and a roll at a rotated user viewpoint.
6. The method of claim 5, wherein the direction vector at the user viewpoint is calculated by the following formula 1:
Z⃗′ = R(θ_z)R(θ_x)R(θ_y)Z⃗, wherein θ_z, θ_x, and θ_y are a yaw, a pitch, and a roll at a user viewpoint, respectively, and
7. A method of generating a volume hologram using a point cloud and a mesh, the method comprising: (a) acquiring multi-view depth and color images; (b) generating point cloud data of a three-dimensional object from the acquired multi-view depth and color images; (c) generating mesh data of the three-dimensional object from the point cloud data of the three-dimensional object; (d) calculating a normal vector of each mesh from the mesh data of the three-dimensional object; (e) extracting three-dimensional data at a user viewpoint from the mesh data of the three-dimensional object by using the normal vector of the mesh; and (f) generating hologram data from the three-dimensional data at the user viewpoint, wherein, in the step (e), a direction vector at the user viewpoint is acquired, an inner product of the direction vector at the user viewpoint and the normal vector of the mesh is calculated to extract only a mesh where a result value of the inner product is a non-negative number, and the extracted mesh serves as the three-dimensional data at the user viewpoint, wherein, in the step (f), with respect to the extracted three-dimensional data at the user viewpoint, a density of the three-dimensional data is adjusted, and the hologram data is generated from the three-dimensional data at the user viewpoint in which the density is adjusted, wherein, with respect to mesh data corresponding to the extracted three-dimensional data at the user viewpoint, a size of each mesh is measured to determine whether the measured mesh size is less than a threshold value, the mesh is divided by adding a point to the mesh when the measured mesh size is greater than the threshold value, and the division process is repeatedly performed such that all meshes are divided to have a size less than the threshold value, thereby adjusting the density, and wherein, with respect to a mesh having a size greater than or equal to the threshold value, a location of the point to be generated is calculated 
by using a circumcenter of a triangle, color information of the generated point has an average value of each point, and a normal vector is set as a normal vector of the mesh, thereby adding the point.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
(9) Hereinafter, embodiments for implementing the present invention will be described in detail with reference to the drawings.
(10) In addition, in describing the present invention, like elements are denoted by like reference numerals, and redundant descriptions thereof will be omitted.
(11) First, examples of the configuration of an entire system for implementing the present invention will be described with reference to
(12) As shown in
(13) Meanwhile, as another embodiment, the method of generating the volume hologram may be implemented not only as a program operating on a general-purpose computer but also as a single electronic circuit such as an application-specific integrated circuit (ASIC). Alternatively, a dedicated computer terminal 30 for exclusively processing only the process of generating a volume hologram from multi-view depth and color images may be developed. This will be referred to as a volume hologram generation system 40. Other forms may also be implemented.
(14) Meanwhile, the RGB-D camera system 20 may include a plurality of RGB-D cameras 21 for photographing an object 10 at mutually different viewpoints.
(15) In addition, each of the RGB-D cameras 21 may include a depth camera and a color camera (or an RGB camera). The depth camera refers to a camera for measuring a depth of the object 10, and the depth camera may output a depth video or image 61 by measuring depth information. The color camera refers to a conventional RGB camera, and the color camera may acquire a color video or image 62 of the object 10.
(16) A multi-view depth image 61 and a multi-view color image 62 photographed by the multi-view RGB-D camera 20 may be directly input to and stored in the computer terminal 30, and processed by the volume hologram generation system 40. Alternatively, the multi-view depth image 61 and the multi-view color image 62 may be pre-stored in a storage medium of the computer terminal 30, and the stored images may be read and processed by the volume hologram generation system 40.
(17) A video includes temporally consecutive frames. For example, when a frame at a current time t is set as a current frame, a frame at an immediately preceding time t−1 will be referred to as a previous frame, and a frame at t+1 will be referred to as a next frame. Meanwhile, each of the frames may have a color video (or a color image) and a depth video (or depth information).
(18) In particular, the object 10 may be photographed at mutually different viewpoints by the multi-view RGB-D cameras 20, and at a specific time t, as many multi-view depth and color images 61 and 62 as there are cameras may be acquired.
(19) Meanwhile, the depth video 61 and the color video 62 may include temporally consecutive frames. One frame may have one image. In addition, each of the videos 61 and 62 may have one frame (or one image). In other words, each of the videos 61 and 62 may be configured as one image.
(20) Although generating the hologram from the multi-view depth and color videos means processing each depth/color frame (or image), in the following the terms "video" and "image" will be used interchangeably unless a distinction is especially necessary.
(21) Next, the method of generating the volume hologram using the point cloud and the mesh according to one embodiment of the present invention will be described with reference to
(22) A computer-generated hologram (CGH) may be obtained by acquiring fringe patterns for all light sources input from all coordinates of a hologram plane, and accumulating the patterns. Therefore, only a light source input to the hologram plane, that is, a specific viewpoint may be recorded in the hologram. The present invention proposes a method in which a user may recognize all viewpoints of an object through a hologram by using omnidirectional three-dimensional data based on an actual image.
(23) As shown in
(24) Next, in order to generate a hologram at a user viewpoint by using photographed mesh data, a viewing direction of a user may be found, and a normal vector of the mesh may be calculated (S40). Point cloud data at the user viewpoint may be extracted by selecting only a mesh corresponding to the user viewpoint by using the normal vector of the mesh (S50). Then, a density of all portions of a point cloud input to the CGH may be set to be uniform (S60). In addition, a hologram having the same quality at all viewpoints may be generated (S70). In this case, a complete complex hologram may be generated by adjusting a brightness of a light source by using a normal vector of each point.
(25) Hereinafter, each step will be described in detail.
(26) First, the desired object may be photographed from all directions through the omnidirectional RGB-D camera system to acquire the multi-view depth and color images (S10). Preferably, in order to generate a three-dimensional model based on an actual image, eight RGB-D cameras may be installed vertically in all directions by using a stand-type camera system. The depth and RGB images photographed by a plurality of RGB-D cameras installed as described above may be acquired.
(27) Next, point cloud data of a three-dimensional object may be generated by matching point cloud data from the photographed depth and color images (S20).
(28) As described in the previous example, the point cloud may be generated by using the depth and color images photographed by the eight cameras. In this case, a matching process of unifying data output from each of the cameras into one coordinate system may be performed. Through the above process, point cloud data of one three-dimensional object may be generated. In other words, the point cloud data of one three-dimensional object may be generated by matching the point cloud data of each viewpoint. In this case, the point cloud data of the three-dimensional object refers to data including three-dimensional location coordinates (x, y, z) and color values at corresponding coordinates.
(29) A point cloud refers to a set of points constituting a surface of a three-dimensional object. The data output from eight RGB-D cameras may be output as point clouds having mutually different coordinate systems (according to coordinate systems of the cameras). When the eight data are output into one configuration, it is difficult to recognize the configuration as one object. However, when the point cloud matching process is performed, the eight point clouds may be integrated into one coordinate system, and when the coordinate system is integrated, one object may be formed. In other words, one object may be formed through the process of matching the coordinate systems of the eight point clouds.
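The matching (registration) process described above can be sketched as follows. This is a minimal illustration assuming each camera's camera-to-world extrinsic transform is already known (e.g., from prior calibration of the rig); `merge_point_clouds` and its arguments are hypothetical names, not taken from the patent.

```python
import numpy as np

def merge_point_clouds(clouds, colors, extrinsics):
    """Unify per-camera point clouds into one world coordinate system.

    clouds:     list of (N_i, 3) arrays, each in its camera's coordinate system
    colors:     list of (N_i, 3) arrays of color values per point
    extrinsics: list of (4, 4) camera-to-world transforms (assumed known
                from calibration of the multi-camera rig)
    """
    merged_pts, merged_rgb = [], []
    for pts, rgb, T in zip(clouds, colors, extrinsics):
        # Apply the homogeneous transform to move points into world coordinates
        homo = np.hstack([pts, np.ones((len(pts), 1))])
        merged_pts.append((homo @ T.T)[:, :3])
        merged_rgb.append(rgb)
    # Stack all cameras' points into one point cloud with one coordinate system
    return np.vstack(merged_pts), np.vstack(merged_rgb)
```

Once all clouds share one coordinate system, the merged set forms the single three-dimensional object described in paragraph (29).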
(30) Next, a three-dimensional model such as mesh data may be generated from the point cloud data of the three-dimensional object (S30).
(31) Before generating the three-dimensional model, the point cloud data of the three-dimensional object may be sampled. When the point cloud data is converted into a mesh, three point clouds may be connected to form one mesh. In this case, instead of connecting all points to form a mesh, the number of points may be reduced (sampled) so long as there is no lack of details. Thereafter, the points may be connected to generate the mesh.
(32) Preferably, the point cloud data of the three-dimensional object may be converted into the mesh data by applying a scheme such as Delaunay triangulation. Delaunay triangulation refers to a division scheme in which when a space is divided by connecting points in the space in triangles, a minimum value of an inner angle of the triangles becomes the maximum. In particular, the Delaunay triangulation scheme may be configured to perform the division such that any circumscribed circle of a triangle does not include any point other than three vertices of the triangle.
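The empty-circumcircle property described above can be checked directly. The sketch below, with illustrative helper names, computes a triangle's circumcircle and tests whether any other point lies inside it, which is the defining condition of a Delaunay triangle.

```python
import numpy as np

def circumcircle(a, b, c):
    """Circumcenter and circumradius of a triangle given by 2-D points a, b, c."""
    ax, ay = a; bx, by = b; cx, cy = c
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    center = np.array([ux, uy])
    return center, np.linalg.norm(center - np.asarray(a))

def is_delaunay(tri, others):
    """Delaunay condition: no other point lies strictly inside the circumcircle."""
    center, r = circumcircle(*tri)
    return all(np.linalg.norm(center - np.asarray(p)) >= r for p in others)
```

A triangulation in which every triangle satisfies this test is a Delaunay triangulation, which also maximizes the minimum inner angle as stated above.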
(33) Through the previous process, an omnidirectional three-dimensional model based on an actual image may be generated.
(34) Next, a normal vector of a three-dimensional mesh may be calculated (S40), and the point cloud data at the user viewpoint may be extracted by using the calculated normal vector of the three-dimensional mesh (S50).
(35) The user viewpoint may be determined in advance, or selected by the user or the like. In this case, before generating a digital hologram, a process of selecting data corresponding to the user viewpoint from omnidirectional three-dimensional data is required. To this end, the point cloud, the normal vector of the mesh, and information on a yaw, a pitch, and a roll corresponding to the user viewpoint are required.
(36) The user viewpoint is given by the information on the yaw, the pitch, and the roll.
(37) First, a normal vector calculation step S40 will be described.
(38) The normal vector of the mesh refers to a vector perpendicular to a surface of the mesh. The normal vector may be calculated through an outer product of two line segment vectors constituting the mesh. FIGS. 4A-4B are views showing a normal vector of a mesh.
(39) Referring to
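The outer-product computation of a mesh normal described above may be sketched as follows; `mesh_normal` is an illustrative name.

```python
import numpy as np

def mesh_normal(p0, p1, p2):
    """Unit normal of a triangular mesh: outer (cross) product of the two
    line segment vectors p0->p1 and p0->p2, normalized to unit length."""
    n = np.cross(np.asarray(p1) - np.asarray(p0),
                 np.asarray(p2) - np.asarray(p0))
    return n / np.linalg.norm(n)
```

Note that the sign of the normal depends on the vertex winding order, so a consistent winding across the mesh is assumed.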
(40) Next, a step S50 of extracting data at the user viewpoint will be described.
(41) When the object acquired by using the RGB-D camera has not been rotated arbitrarily, the object may be placed in the user front direction Z⃗ = (0, 0, 1). When the user viewpoint varies, the Z-direction vector may be rotated by using a yaw, a pitch, and a roll at the rotated viewpoint. The value of the rotated Z-direction vector is the direction in which the user views the object. The yaw, the pitch, and the roll may be denoted by θ_z, θ_x, and θ_y, respectively, and the rotation matrices R(θ_z), R(θ_x), and R(θ_y) for the rotated viewpoint may be expressed as follows.
(42)
R(θ_z) = [[cos θ_z, −sin θ_z, 0], [sin θ_z, cos θ_z, 0], [0, 0, 1]],
R(θ_x) = [[1, 0, 0], [0, cos θ_x, −sin θ_x], [0, sin θ_x, cos θ_x]],
R(θ_y) = [[cos θ_y, 0, sin θ_y], [0, 1, 0], [−sin θ_y, 0, cos θ_y]] [Mathematical formula 1]
(43) Therefore, a direction vector Z′, which is a viewing direction of the user, may be expressed as follows.
Z⃗′ = R(θ_z)R(θ_x)R(θ_y)Z⃗ [Mathematical formula 2]
(44) In this case, Z⃗ denotes the (0, 0, 1) vector (or the z-direction vector), and Z⃗′ denotes the direction vector in which the user views the object (hereinafter referred to as "direction vector Z′").
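The rotation of the basic direction vector by the yaw, pitch, and roll (Mathematical formulas 1 and 2) can be sketched as follows, assuming the standard axis-rotation matrices about z, x, and y; `view_direction` is an illustrative name.

```python
import numpy as np

def view_direction(yaw, pitch, roll):
    """Rotate the basic direction vector Z = (0, 0, 1) by the yaw (about z),
    pitch (about x), and roll (about y) at the user viewpoint, i.e.
    Z' = R(yaw) R(pitch) R(roll) Z."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cx, sx = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    return Rz @ Rx @ Ry @ np.array([0.0, 0.0, 1.0])
```

With all three angles zero the result is the unrotated front direction (0, 0, 1).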
(45) A scheme of acquiring data in a direction viewed by the user may be selected through an inner product of the direction vector Z′ (the direction vector at the user viewpoint) and the normal vector.
(46) When a result value of the inner product is a negative number, it means that there is an angle difference of 90 degrees or more between the direction vector and the mesh normal, so the corresponding mesh is a portion that is not viewed by the user and is filtered out.
(47) The meshes that pass the filtering may be extracted as the three-dimensional data at the user viewpoint. In other words, the point cloud data of the selected meshes may be the point cloud data at the user viewpoint. Although the above process extracts mesh data at the user viewpoint, the points constituting those meshes are the point cloud data. In this context, the three-dimensional data at the user viewpoint refers to both the mesh data and the point cloud data. In other words, the three-dimensional data at the user viewpoint, the mesh data at the user viewpoint, and the point cloud data at the user viewpoint will be used interchangeably.
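The inner-product filtering of meshes by viewpoint may be sketched as follows, using the non-negative convention stated in the claims; `select_visible_meshes` is an illustrative name.

```python
import numpy as np

def select_visible_meshes(normals, view_dir):
    """Return indices of meshes kept as data at the user viewpoint: those
    whose normal has a non-negative inner product with the direction
    vector Z' at the user viewpoint."""
    dots = np.asarray(normals, dtype=float) @ np.asarray(view_dir, dtype=float)
    return np.where(dots >= 0)[0]
```

The selected indices identify the meshes (and their points) that serve as the three-dimensional data at the user viewpoint.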
(48) Next, a density of the extracted point cloud data at the user viewpoint may be adjusted (S60).
(49) Portions of the point cloud acquired through the camera system may have mutually different densities. When CGH is acquired by using a portion having a low density due to a hole or the like, the portion may have quality lower than quality of a portion having a high density. Therefore, in order to obtain a hologram having uniform quality for all viewpoints, the density of the point cloud may be adjusted by using the mesh.
(51) As shown in
(52) In this case, the mesh may be used to calculate the density of the point cloud. Since the mesh is generated by using three closest points, when the size of the mesh is small, it means that point clouds in a corresponding portion are close to each other. In other words, it may be determined that a point cloud density of the portion is high. On the contrary, when the size of the mesh is large, it may be determined that the point cloud density of a corresponding portion is low.
(53) Therefore, a mesh having a size greater than or equal to the threshold value may be subject to a process of adding a point cloud. A scheme of adding a point may use a Voronoi diagram.
(55) Red points represent points that were previously acquired, and a black point represents a point added through the Voronoi diagram. This scheme may calculate the location of the point to be generated by using the circumcenter of a triangle. Color information of the generated point may be set to an average of the color values of the three existing points, and a normal vector of the generated point may be set to the normal vector of the mesh.
(56) When the point is added as described above, the point may be connected with existing points to divide the mesh into three meshes. When a size of the mesh obtained by the division is also greater than or equal to the threshold value, the above process may be repeated.
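A single division step of the density-adjustment process above can be sketched as follows (the patent repeats this step until every mesh falls below the threshold). The helper names are illustrative, and the 3-D circumcenter formula is the standard vector form.

```python
import numpy as np

def circumcenter3d(a, b, c):
    """Circumcenter of a triangle with 3-D vertices a, b, c."""
    ab, ac = b - a, c - a
    n = np.cross(ab, ac)
    # Standard vector formula for the circumcenter of a triangle in 3-D
    to_c = np.cross(n, ab) * ac.dot(ac) + np.cross(ac, n) * ab.dot(ab)
    return a + to_c / (2.0 * n.dot(n))

def split_mesh(verts, cols):
    """One division step: add a point at the circumcenter, give it the
    average color of the three existing points, and connect it with the
    existing points to divide the mesh into three meshes."""
    a, b, c = verts
    p = circumcenter3d(a, b, c)
    pc = np.mean(cols, axis=0)
    return p, pc, [(a, b, p), (b, c, p), (c, a, p)]
```

The new point's normal vector would be set to the normal vector of the divided mesh, as described above.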
(57) Through the above process, the density of the point cloud used for the CGH may be uniformly maintained.
(58) Next, the hologram may be generated from the density-adjusted point cloud data by adjusting a brightness of each point of the object (S70).
(59) The following mathematical formula may express a CGH formula of a real part for generating a complete complex hologram.
(60)
I(u, v) = Σ_{i=1}^{N} W_i(x, y, z) A_i(x, y, z) cos( (π / (λz)) ((pu − x)² + (pv − y)²) ) [Mathematical formula 3]
(61) In this case, I(u, v) denotes a brightness at coordinates (u, v) of the hologram plane, and the hologram plane may be the location of the user. In addition, A_i(x, y, z) denotes a brightness of a light source at coordinates (x, y, z).
(62) In this case, as the (x, y, z) coordinates become closer to the hologram plane, the brightness becomes higher. However, since the direction in which the light is directed also affects the brightness, a weight has to be given to the brightness of the light source by using the angle between the hologram plane and the direction in which the light is directed at the (x, y, z) coordinates. The weight is denoted by W_i(x, y, z) in Mathematical formula 3.
(63) Since the hologram is generated by selecting the point cloud or the mesh at the user viewpoint, a normal direction of the hologram plane will be the same direction as the user viewpoint. In addition, the direction in which the light is directed at the (x, y, z) coordinates may be the same as the normal direction of the coordinates. Therefore, the weight may be applied to the brightness by using the direction a⃗ in which the user views the object and the normal vector b⃗ of the mesh or the point cloud obtained in the above process. The weight may be an absolute value of an inner product of the vectors a⃗ and b⃗. Preferably, the weight may be set in proportion to the absolute value of the inner product.
(64) The direction a⃗ in which the user views the object may be the direction vector Z′ at the user viewpoint.
(65)
W_i(x, y, z) = |a⃗ · b⃗|
(66) In Mathematical formula 3, N denotes the number of light sources, λ denotes the wavelength of the reference wave used to generate the hologram, and p denotes the pixel size of the hologram and of the light source, which are considered the same value for convenience.
(67) In the present invention, a process of Mathematical formula 3 may be separated as in the following mathematical formulas 4 and 5. In other words, a fringe pattern may be generated for each object point by using Mathematical formula 4. The generated fringe pattern may be multiplied by a brightness value, and cumulative addition may be performed to generate a final hologram as in Mathematical formula 5.
(68)
T_i(u, v) = cos( (π / (λz)) ((pu − x)² + (pv − y)²) ) [Mathematical formula 4]
I(u, v) = Σ_{i=1}^{N} W_i(x, y, z) A_i(x, y, z) T_i(u, v) [Mathematical formula 5]
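The per-point fringe generation and weighted cumulative addition described above can be sketched as follows. This is a minimal sketch, not the patent's exact implementation: the Fresnel point-source fringe used below is an assumed standard form, and `generate_cgh` and its parameters are illustrative names.

```python
import numpy as np

def generate_cgh(points, amps, weights, lam, p, H, W):
    """Accumulate a real-part CGH: generate a fringe pattern per object
    point, multiply by the weighted brightness, and cumulatively add.

    points:  (N, 3) object-point coordinates (x, y, z)
    amps:    (N,) light-source brightness A_i
    weights: (N,) direction weights W_i, e.g. |a . b| per the text above
    lam:     reference-wave wavelength; p: pixel size; H, W: plane size
    """
    u, v = np.meshgrid(np.arange(W), np.arange(H))
    I = np.zeros((H, W))
    for (x, y, z), A, Wt in zip(points, amps, weights):
        # Assumed Fresnel fringe of one point light source (formula 4 style)
        fringe = np.cos(np.pi / (lam * z) * ((p * u - x) ** 2 + (p * v - y) ** 2))
        # Weighted brightness times fringe, cumulatively added (formula 5 style)
        I += Wt * A * fringe
    return I
```

Because each fringe is bounded by ±1, the accumulated plane is bounded by the sum of the weighted brightness values, which is why the direction weight directly controls each point's contribution.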
(69) Although the present invention has been described in detail with reference to the above embodiments, the present invention is not limited to the embodiments, and various modifications are possible without departing from the gist of the present invention.