A METHOD FOR COMPUTING A HOLOGRAPHIC INTERFERENCE PATTERN
20230266709 · 2023-08-24
Inventors
CPC classification
G03H2210/441
PHYSICS
G03H1/0808
PHYSICS
International classification
Abstract
The present disclosure relates to a method for computing a holographic interference pattern for a holographic plane including pixels of an illuminated three-dimensional, 3D, scene having object points representing one or more 3D objects. The method involves: determining, for a respective object point, a total light component contributed by one or more light sources in the 3D scene; and calculating, for a respective pixel, a complex-valued amplitude based on the total light component of non-occluded object points within a viewing cone of the pixel, thereby deriving the holographic interference pattern. The present disclosure further relates to a computer program product implementing the method, a computer-readable storage medium comprising the computer program product and a data processing system for carrying out the method.
Claims
1.-15. (canceled)
16. A method for computing a holographic interference pattern for a holographic plane comprising pixels of an illuminated three-dimensional, 3D, scene comprising object points representing one or more 3D objects, the method comprising: determining, for a respective object point, a total light component contributed by one or more light sources in the 3D scene; calculating, for a respective pixel, a complex-valued amplitude based on the total light component of non-occluded object points within a viewing cone of the pixel, thereby deriving the holographic interference pattern.
17. The method according to claim 16, wherein the determining comprises calculating an angle-dependent light component based on tracing direct rays from the object point towards the one or more light sources in the 3D scene.
18. The method according to claim 17, wherein the calculating the angle-dependent light component is further based on tracing indirect rays from the object point towards the one or more light sources in the 3D scene.
19. The method according to claim 17, wherein the tracing is performed within an acceptance cone with a point of origin at the object point and oriented towards the 3D scene.
20. The method according to claim 19, wherein the acceptance cone has a normal coinciding with a normal of a reflected copy of a viewing cone with a point of origin at the object point and oriented towards the holographic plane.
21. The method according to claim 20, wherein the calculating the angle-dependent light component is further based on tracing rays from the object point towards the holographic plane within the viewing cone.
22. The method according to claim 19, wherein the size of the acceptance cone is defined based on the size of the viewing cone of the pixel.
23. The method according to claim 19, wherein the viewing cone of the pixel is defined by the hologram wavelength and the spacing of the pixels in the holographic plane.
24. The method according to claim 16, wherein the determining further comprises calculating an angle-independent light component based on tracing direct rays from the object point towards the one or more light sources in the 3D scene.
25. The method according to claim 24, wherein the calculating the angle-independent light component is further based on tracing indirect rays from the object point towards one or more light sources in the 3D scene.
26. The method according to claim 16, wherein the one or more light sources comprises at least one area light source and/or at least one volumetric light source.
27. The method according to claim 16, wherein the object points are distributed over the surfaces of the one or more 3D objects and the number of the object points representing a respective surface is a function of the area of the surface, its orientation, its distance to the hologram plane and/or its material properties.
28. A computer program product comprising computer-executable instructions for performing the method according to claim 16 when the program is run on a computer.
29. A computer-readable storage medium comprising a computer program product according to claim 28.
30. A data processing system for carrying out the method according to claim 16.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0031] Some example embodiments will now be described with reference to the accompanying drawings.
DETAILED DESCRIPTION OF EMBODIMENT(S)
[0037] The present disclosure relates to a method for generating photo-realistic Computer-Generated Holography, CGH, content. Computer-Generated Holography is the method of digitally computing a holographic image, i.e. a holographic interference pattern, and printing it onto a mask or film for subsequent illumination by a suitable coherent light source. The holographic image can then be brought to life by, for example, a holographic 3D display, i.e. a display that operates on the basis of interference of coherent light.
[0038] The holographic interference pattern may be derived based on the point-source concept, according to which the objects within the scene are broken down into self-luminous object points. An elementary hologram is then calculated for every self-luminous object point, and the final hologram is derived by superimposing all the elementary holograms. Point-source computer-generated holograms, or point-source based holographic interference patterns, may be derived by employing the ray tracing method. Ray tracing essentially treats each object point as an individual light source or as a reflecting element illuminated by the light beams or rays. Depending on the type of light sources illuminating the scene and the properties of the objects, different light components, such as an angle-dependent and an angle-independent light component, are observed at the respective object points. The total light component at a respective object point is thus the sum of the angle-dependent and angle-independent light components. The angle-dependent light component comprises the light component contributed by specular lighting, while the angle-independent light component comprises the light components contributed by diffuse and/or ambient lighting.
[0039] Specular lighting creates bright spots on objects based on the intensity of the specular lighting and the specular reflection constant of the object surface. The specular reflection light component thus consists of light reflected in a range of directions whose centre direction coincides with the reflected light. The specular reflection light component gives objects shine and highlights.
[0040] Diffuse lighting is the direct illumination of an object by an even amount of light interacting with its surface. After light strikes an object, it is reflected as a function of the surface properties of the object as well as the angle of the incoming light. The diffuse reflection light component thus consists of light scattered in all directions with a light intensity defined by the angle of incidence of the light. The diffuse reflection light component is the primary contributor to the object's brightness and forms the basis for its colour.
[0041] Ambient light is directionless; it interacts uniformly across all object surfaces, with an intensity determined by the strength of the ambient light sources and the properties of the objects' surfaces, i.e. their materials. The ambient reflection light component consists of the sum of the light reflections from surrounding objects in the scene. Because ambient light consists of rays travelling in various directions, its reflection is independent of direction.
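The three light components described above (specular, diffuse, ambient) can be illustrated with a minimal Phong-style evaluation at a single object point. This is an illustrative sketch only, not the claimed method; the constants `k_d`, `k_s`, `k_a`, the shininess `alpha` and the example vectors are hypothetical values chosen for the demonstration.

```python
import numpy as np

def normalize(v):
    """Return v scaled to unit length."""
    return v / np.linalg.norm(v)

def phong_total_light(n, l, v, intensity, ambient,
                      k_d=0.6, k_s=0.3, k_a=0.1, alpha=16):
    """Total light at one object point as the sum of the
    angle-independent (diffuse + ambient) and angle-dependent
    (specular) components. n: surface normal, l: direction towards
    the light, v: direction towards the viewer/hologram plane."""
    n, l, v = normalize(n), normalize(l), normalize(v)
    diffuse = k_d * intensity * max(0.0, float(np.dot(n, l)))
    r = 2.0 * np.dot(n, l) * n - l          # mirror reflection of l about n
    specular = k_s * intensity * max(0.0, float(np.dot(r, v))) ** alpha
    return diffuse + specular + k_a * ambient

# A light shining straight down onto an upward-facing surface point:
total = phong_total_light(n=np.array([0.0, 0.0, 1.0]),
                          l=np.array([0.0, 0.0, 1.0]),
                          v=np.array([0.0, 0.0, 1.0]),
                          intensity=1.0, ambient=0.5)
```

With the light, viewer and normal aligned, the diffuse, specular and ambient terms are all at their maxima, which matches the qualitative description of bright specular highlights above.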
[0042] The method for computing the holographic interference pattern according to the present disclosure is based on the point-source concept in which the total light component is derived based on ray tracing. The method will now be described with reference to the
[0044] The computation of the hologram image is based on the laws of diffraction given by the Huygens-Fresnel principle, which expresses how to calculate the complex-valued amplitude of any point p on a holographic plane H, given a collection of surfaces S, integrating over all x∈S. Herein, a generalization of the Huygens-Fresnel principle is applied, defined as

H(p) = ∫∫_S A(x,p)·exp(ik‖p−x‖)·(n·(p−x)/‖p−x‖) dx   (1)

where n is the surface normal of S at object point x; k = 2π/λ is the wavenumber, with λ being the wavelength of the light and i the imaginary unit; ‖·‖ is the Euclidean norm; and v/‖v‖ denotes the normalization of a vector v to unit length.
[0045] To numerically evaluate this integral, the expression in Equation (1) is discretized. To do so, the objects within the 3D scene, i.e. the original image, and the holographic plane are respectively subdivided, i.e. quantized, into points and pixels.
[0046] For this purpose, in a first step 301 of the method, the holographic plane 200 is sampled on a regular grid to obtain pixels p, equispaced by a distance ρ called the pixel pitch, representing the hologram pixels. In a second step 302, object points x representing the objects in the 3D scene are defined by sampling the surfaces S of the objects 10, 20, and 30. A discrete set of object points containing #
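The quantization in steps 301 and 302 can be sketched for the holographic plane: pixels are laid out on a regular grid, equispaced by the pixel pitch. The grid dimensions and the 8 µm pitch below are hypothetical example values, and the plane is assumed to lie at z = 0.

```python
import numpy as np

def sample_holographic_plane(width, height, pitch):
    """Sample the holographic plane on a regular grid of pixels,
    equispaced by the pixel pitch (step 301). Returns an (N, 3)
    array of pixel positions lying in the z = 0 plane."""
    xs = np.arange(width) * pitch
    ys = np.arange(height) * pitch
    gx, gy = np.meshgrid(xs, ys, indexing="ij")
    return np.stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)], axis=1)

# A tiny 4 x 3 pixel plane with an 8 micrometre pitch:
pixels = sample_holographic_plane(width=4, height=3, pitch=8e-6)
```

Sampling the object surfaces (step 302) would proceed analogously, with the point density chosen per surface as described in claim 27.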
[0047] After quantization of the objects and the holographic plane, the method proceeds to the computation of the holographic image, or holographic interference pattern. The goal is to compute the Point Spread Function, PSF, modulation function A, which may be defined as

A(x,p) = B(x,p)·Φ(x)   (3)

where Φ: ℝ³→𝕋 is a random phase function so that Φ(x) = exp(iφ(x)) and φ(x) ∈ 𝒰(0,2π), i.e. the uniform distribution between 0 and 2π. Here, B: (ℝ³)²→ℝ defines the amount of light emitted from an object point x to a pixel p in the holographic plane, which is equivalent to the Bidirectional Reflectance Distribution Function, BRDF, definition. Note that B may also be taken as a complex-valued function (ℝ³)²→ℂ, and more complex phase distributions for Φ can be chosen to model even more kinds of light interactions and phenomena.
[0048] Conventionally, the BRDF function B for all pairs of points {x, p} is directly computed. According to the present disclosure, however, the calculation of the BRDF function and therefore the calculation of the holographic image is performed in two phases. In the first phase, for a respective object point, the total light component contributed by the light sources in the scene is computed, and in the second phase, the complex-valued amplitude, for a respective holographic pixel, based on the total light component is computed.
[0049] In other words, in the first phase, i.e. step 300, a simplified representation of B is computed for every object point x.
[0050] By performing the computation in two phases, complex effects, such as occlusions and aliasing considerations, may be taken into account. For example, this can be achieved by setting the value of the BRDF function for a pair of a holographic pixel and an object point to zero, e.g. B(x_0, p_0) = 0 for a pair x_0, p_0 whose associated rays are occluded, or for pixels p_0 that lie outside of the viewing cone originating from x_0.
[0051] To calculate the total light component in step 300, the respective object points x are characterized by a material. The material may be described by a number of parameters according to a material characterization model. An example material characterization model is the Phong model, according to which a respective object point is characterized by a diffuse reflection constant K_d(x), a specular reflection constant K_s(x), an ambient reflection constant K_a(x) and a shininess constant α(x). In other words, according to the Phong model, a respective object point is represented by the total light component being the sum of the diffuse, the specular, and the ambient light components.
[0052] According to an example embodiment, instead of employing a classic Phong model, a modified Phong is employed according to which the ambient light component is replaced by global illumination. Global illumination models how light bounces off of surfaces onto other surfaces, i.e. the indirect light illumination, rather than being limited to just the light that hits a surface directly from a light source, i.e. the direct light illumination.
[0053] To account for occlusion culling, only object surfaces whose normals n face the holographic plane within the maximum diffraction angle are taken into consideration for sampling the object surface points. Here, u is the hologram normal 40 pointing to the scene 100, and θ_max is the maximum diffraction angle 41 determined by the Nyquist rate ν_max, which depends on the pixel pitch ρ. In other words, surfaces not visible from the holographic plane are not taken into account.
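The maximum diffraction angle mentioned above follows from the grating equation at the Nyquist limit; a commonly used relation is sin(θ_max) = λ/(2ρ). The following sketch assumes that relation and uses hypothetical example values for the wavelength and the pixel pitch.

```python
import math

def max_diffraction_angle(wavelength, pixel_pitch):
    """Maximum diffraction angle theta_max supported by a hologram
    sampled at the given pixel pitch, using the common Nyquist-limit
    relation sin(theta_max) = wavelength / (2 * pitch)."""
    return math.asin(wavelength / (2.0 * pixel_pitch))

# Example: 532 nm green light on an 8 micrometre pitch plane:
theta = max_diffraction_angle(532e-9, 8e-6)
angle_deg = math.degrees(theta)
```

This illustrates why a finer pixel pitch widens the viewing cone: halving ρ roughly doubles sin(θ_max).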
[0054] Next, the angle-dependent and angle-independent light components for the respective object points are derived. This is performed in method steps 310 and 320, which may be performed sequentially or in parallel. The two light components are computed for the respective object points by taking into account both direct and indirect rays as follows.
[0055] For a respective object point, two sets of light rays, i.e. one set of direct rays and another set of indirect rays, are traced to obtain the BRDF B for that object point. In this example, a strict definition for direct and indirect lighting is employed according to which direct illumination is the light going straight from the light source to the respective object points, while the light from the rest of the scene is considered indirect. Indirect lighting will thus also include light rays reflected once from another object surface in the scene.
[0056] The first set of rays, i.e. the set of direct rays, is traced from the respective object point towards the one or more light sources. For the area light sources 111 and 112, multiple light ray samples are taken per object point. This can be done by subdividing the light source area into equal segments and tracing one ray to a random position within each segment. As shown in the example of
[0057] The second set of rays, i.e. the set of indirect rays, will uniformly and randomly sample the hemisphere on the object surface S to obtain information on the global illumination. As shown in the example of
[0058] The set of all traced direct and indirect light rays per object point x may be denoted as L(x). Further, L(x) may be defined to have a constant predetermined size #L(x) = n_L. Alternatively, the number of direct and/or indirect rays of the respective sets may be different for respective object points. The number of rays within the respective sets depends on the scene complexity and the desired quality of the holographic image.
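The construction of the two ray sets described in paragraphs [0056]-[0058] can be sketched as follows: direct rays sample an area light by subdividing it into equal segments and drawing one random position per segment, while indirect rays uniformly sample the hemisphere around the surface normal. The rectangular light geometry and the sample counts are hypothetical example values.

```python
import numpy as np

rng = np.random.default_rng(0)

def direct_rays(point, light_min, light_max, segments):
    """Direct set: subdivide an axis-aligned rectangular area light
    into equal segments and trace one ray to a random position
    within each segment."""
    rays = []
    edges = np.linspace(light_min, light_max, segments + 1)
    for lo, hi in zip(edges[:-1], edges[1:]):
        target = rng.uniform(lo, hi)    # random position within the segment
        rays.append(target - point)     # un-normalized ray towards the light
    return np.array(rays)

def indirect_rays(normal, count):
    """Indirect set: uniformly sample ray directions on the hemisphere
    above the surface, for the global-illumination contribution."""
    dirs = rng.normal(size=(count, 3))
    dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)
    dirs[np.dot(dirs, normal) < 0] *= -1    # flip into the upper hemisphere
    return dirs

x = np.array([0.0, 0.0, 0.0])
L_direct = direct_rays(x, np.array([-0.5, -0.5, 1.0]),
                       np.array([0.5, 0.5, 1.0]), segments=4)
L_indirect = indirect_rays(np.array([0.0, 0.0, 1.0]), count=16)
```

Together the two arrays form a concrete stand-in for the set L(x) of the text, here with #L(x) = 20.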
[0059] The BRDF B(x, p) for a respective object point is thus derived based on tracing these two sets of rays. The BRDF B(x, p) is defined by the sum of two light components, i.e. an angle-independent light component B_d(x) representing diffuse lighting and an angle-dependent light component B_s(x, p) representing specular lighting.

[0060] The angle-independent light component B_d(x) is a constant term representing the diffuse light emission strength in all directions and is calculated 320 as follows.

[0061] For every traced ray ℓ, whose norm ‖ℓ‖ is proportional to the light intensity, the diffuse term is accumulated as follows:

B_d(x) = Σ_{ℓ∈L(x)} max(0, K_d(x)·(ℓ·n))   (5)
[0062] In other words, all rays whether direct or indirect will be considered in the computation of the angle-independent light component as expressed in Equation (5).
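Equation (5) can be sketched directly: every traced ray, direct or indirect, contributes a clamped Lambert term scaled by the diffuse constant, with the ray's norm encoding the light intensity. The ray values and K_d below are hypothetical.

```python
import numpy as np

def diffuse_component(rays, normal, k_d):
    """Angle-independent component B_d(x) per Equation (5):
    the sum over all traced rays l of max(0, K_d(x) * (l . n)),
    where the norm of l encodes the light intensity."""
    total = 0.0
    for l in rays:
        total += max(0.0, k_d * float(np.dot(l, normal)))
    return total

n = np.array([0.0, 0.0, 1.0])
rays = np.array([[0.0, 0.0, 2.0],    # intensity-2 ray from straight above
                 [1.0, 0.0, -1.0]])  # ray from below the surface: clamped to 0
b_d = diffuse_component(rays, n, k_d=0.5)
```

The clamp discards back-facing rays, matching the statement that all rays are considered but only those actually illuminating the surface contribute.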
[0063] After the computation of the angle-independent component, or in parallel with it, the angle-dependent light component B_s(x, p) is calculated 310. The angle-dependent component is computed based on a subset bundle of light vectors L′(x) ⊆ L(x). Generally L′(x) ⊊ L(x), because in practice many of the traced rays will have no noticeable effect on B_s for various reasons: their incidence angle, a low light intensity, a small specular reflection constant K_s or shininess constant α. For this reason, these rays may be omitted from the bundle to save calculation time. Thus, according to an embodiment, a ray is only added to the bundle L′(x) if its maximum effect on the holographic image surpasses a certain threshold T, which can be chosen depending on the desired quality of the holographic image. This gives rise to the concept of an acceptance cone, as rays outside of that cone will have a contribution smaller than the threshold T for a given light intensity ‖ℓ‖. The threshold thus defines a maximum angle, which in a three-dimensional view is represented by the acceptance cone, at which the contribution of the rays satisfies the above requirement. Given the halfway vector h = (ℓ/‖ℓ‖ − u)/‖ℓ/‖ℓ‖ − u‖, the cosine c of the angle corresponding to the maximum specular light strength visible from the holographic plane can be determined.

[0064] Herein, to simplify the notation, the notion of a halfway vector defined by the Blinn-Phong model is used.

[0065] Thus, a light vector ℓ is only added to the bundle L′(x) if the following inequality is satisfied:

K_s(x)·‖ℓ‖·c^α(x) > T   (7)
[0066] In other words, considering the object point 100_1, light rays will only be added if they fall within the acceptance cone 110 with its point of origin at the object point, as shown by the light rays illustrated with bold solid lines in
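The bundle construction can be sketched as a threshold filter per Equation (7). Here c is evaluated as the Blinn-Phong term h·n with the halfway vector formed from the ray direction and the direction towards the holographic plane; the function name, constants and example rays are hypothetical.

```python
import numpy as np

def build_specular_bundle(rays, normal, u, k_s, alpha, threshold):
    """Keep only rays whose maximum possible specular contribution
    K_s * ||l|| * c**alpha exceeds the threshold T (Equation (7)).
    c is the Blinn-Phong term h . n, with halfway vector h between
    the normalized ray and the direction -u towards the plane."""
    bundle = []
    for l in rays:
        intensity = np.linalg.norm(l)
        h = l / intensity - u
        h = h / np.linalg.norm(h)                     # halfway vector
        c = max(0.0, float(np.dot(h, normal)))
        if k_s * intensity * c ** alpha > threshold:  # Equation (7)
            bundle.append(l)
    return bundle

n = np.array([0.0, 0.0, 1.0])
u = np.array([0.0, 0.0, -1.0])       # hologram normal pointing into the scene
rays = [np.array([0.0, 0.0, 1.0]),   # mirror-aligned ray: strong specular term
        np.array([1.0, 0.0, 0.01])]  # grazing ray: contribution below T
kept = build_specular_bundle(rays, n, u, k_s=0.5, alpha=32, threshold=0.1)
```

Only the mirror-aligned ray survives, which is exactly the acceptance-cone behaviour described above: rays outside the cone contribute less than T and are dropped.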
[0067] Once all points x with their associated B_d and bundles L′(x) have been calculated, the method proceeds to step 350 to compute the complex-valued amplitudes of the respective pixels in the holographic plane 200. The computation of the complex-valued amplitudes is done based on rays satisfying the following two conditions. Firstly, rays from an object point x will only be traced to a hologram pixel p if the incidence angle does not surpass the maximum diffraction angle θ_max. This is represented by a viewing cone with its point of origin at the respective holographic pixel and an angle of inclination being the maximum diffraction angle θ_max. This viewing cone is referred to as the viewing cone of the holographic pixel. Thus, rays from respective object points x will only be traced to a holographic pixel if these rays fall within the viewing cone of the pixel, i.e. their incidence angle does not exceed θ_max. This assures that the light components of the object points seen from a respective pixel are taken into account in the computation of the complex-valued amplitude of that pixel. This relation may be expressed the other way around; rays from an object point x will only be traced to respective holographic pixels if these rays fall within the viewing cone of the object point, i.e. a viewing cone 210 with its point of origin at the respective object point and oriented towards the holographic plane, as shown for example in
[0068] And, secondly, for every pixel p, a visibility ray is traced to x to see whether it is occluded. If the object point is not occluded, the complex-valued amplitude of the hologram pixel H(p) is incremented based on the angle-dependent light component

B_s(x,p) = B(x,p) − B_d(x) = Σ_{ℓ∈L′(x)} K_s(x)·‖ℓ‖·(h(p)·n)^α(x)   (8)
[0069] This way only the total light components of the respective non-occluded object points are considered in the computation of the complex-valued amplitude of the respective pixels. The expression in Equation (8) is then used combined with Equation (1) and (3) to compute the complex-valued amplitude of the respective pixel.
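The two gating conditions of step 350 — the viewing-cone test against θ_max and the visibility test — can be sketched as a single predicate. The occlusion test is passed in as a hypothetical callable, since the actual scene intersection logic depends on the scene representation.

```python
import math
import numpy as np

def contributes(x, p, u, theta_max, is_occluded):
    """A ray from object point x is traced to hologram pixel p only if
    (1) its incidence angle on the plane does not exceed the maximum
    diffraction angle theta_max (viewing-cone test), and
    (2) a visibility ray from p to x reports no occluder."""
    d = x - p
    d = d / np.linalg.norm(d)
    incidence = math.acos(float(np.clip(np.dot(d, u), -1.0, 1.0)))
    return incidence <= theta_max and not is_occluded(p, x)

u = np.array([0.0, 0.0, 1.0])         # hologram normal towards the scene
no_occluder = lambda p, x: False       # hypothetical visibility test: empty scene
inside = contributes(np.array([0.0, 0.0, 1.0]), np.zeros(3), u,
                     math.radians(5.0), no_occluder)
outside = contributes(np.array([1.0, 0.0, 1.0]), np.zeros(3), u,
                      math.radians(5.0), no_occluder)
```

A point straight ahead of the pixel passes the cone test; a point at a 45° incidence angle fails it, so its light component never reaches that pixel's amplitude.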
[0070] The B_s(x, p) and B_d(x) terms are combined to derive the total B(x, p) term, which is then used to calculate the PSF for every holographic pixel. Using Equations (1) and (3), we get the expression

B(x_0,p)·Φ(x_0)·exp(ik‖p−x_0‖)·(n·(p−x_0)/‖p−x_0‖)   (9)

evaluating the complex-valued amplitude of a single PSF for a single object point x_0. This process is repeated and summed over all object points x to obtain the final computer-generated hologram.
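The final accumulation over all object points can be sketched as a discretized Huygens-Fresnel sum: each point contributes one PSF per pixel, modulated by a BRDF term, a random phase and the obliquity factor. The toy scene (one point, eight pixels, a scalar BRDF) is hypothetical, and occlusion and cone tests are omitted for brevity.

```python
import numpy as np

rng = np.random.default_rng(1)

def compute_hologram(pixels, points, normals, brdf, wavelength):
    """Discretized sum over Equations (1), (3) and (9): each object
    point x adds B(x,p) * Phi(x) * exp(ik||p - x||) * (n.(p - x)/||p - x||)
    to every pixel p of the holographic plane."""
    k = 2.0 * np.pi / wavelength                    # wavenumber k = 2*pi/lambda
    H = np.zeros(len(pixels), dtype=complex)
    phases = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, size=len(points)))
    for x, n, phi in zip(points, normals, phases):  # phi plays the role of Phi(x)
        d = pixels - x                              # vectors p - x
        r = np.linalg.norm(d, axis=1)
        obliquity = (d @ n) / r                     # n . (p - x) / ||p - x||
        H += brdf * phi * np.exp(1j * k * r) * obliquity
    return H

pixels = np.stack([np.linspace(-1e-4, 1e-4, 8), np.zeros(8), np.zeros(8)], axis=1)
points = np.array([[0.0, 0.0, 1e-3]])              # one object point, 1 mm away
normals = np.array([[0.0, 0.0, -1.0]])             # facing the holographic plane
H = compute_hologram(pixels, points, normals, brdf=1.0, wavelength=532e-9)
```

Repeating the inner loop over a full point set and replacing the scalar `brdf` with the per-pair B(x, p) of Equations (5) and (8) would yield the complete interference pattern described in the text.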
[0072] As used in this application, the term “circuitry” may refer to one or more or all of the following:
[0073] (a) hardware-only circuit implementations such as implementations in only analog and/or digital circuitry and
[0074] (b) combinations of hardware circuits and software, such as (as applicable): [0075] (i) a combination of analog and/or digital hardware circuit(s) with software/firmware and [0076] (ii) any portions of hardware processor(s) with software (including digital signal processor(s)), software, and memory(ies) that work together to cause an apparatus, such as a mobile phone or server, to perform various functions) and
[0077] (c) hardware circuit(s) and/or processor(s), such as microprocessor(s) or a portion of a microprocessor(s), that requires software (e.g. firmware) for operation, but the software may not be present when it is not needed for operation.
[0078] This definition of circuitry applies to all uses of this term in this application, including in any claims. As a further example, as used in this application, the term circuitry also covers an implementation of merely a hardware circuit or processor (or multiple processors) or portion of a hardware circuit or processor and its (or their) accompanying software and/or firmware. The term circuitry also covers, for example and if applicable to the particular claim element, a baseband integrated circuit or processor integrated circuit for a mobile device or a similar integrated circuit in a server, a cellular network device, or other computing or network device.
[0079] Although the present invention has been illustrated by reference to specific embodiments, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied with various changes and modifications without departing from the scope thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the scope of the claims are therefore intended to be embraced therein.
[0080] It will furthermore be understood by the reader of this patent application that the words “comprising” or “comprise” do not exclude other elements or steps, that the words “a” or “an” do not exclude a plurality, and that a single element, such as a computer system, a processor, or another integrated unit may fulfil the functions of several means recited in the claims. Any reference signs in the claims shall not be construed as limiting the respective claims concerned. The terms “first”, “second”, “third”, “a”, “b”, “c”, and the like, when used in the description or in the claims are introduced to distinguish between similar elements or steps and are not necessarily describing a sequential or chronological order. Similarly, the terms “top”, “bottom”, “over”, “under”, and the like are introduced for descriptive purposes and not necessarily to denote relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and embodiments of the invention are capable of operating according to the present invention in other sequences, or in orientations different from the one(s) described or illustrated above.