Method and apparatus for reconstructing three-dimensional image by using diffraction grating
11480917 · 2022-10-25
CPC classification: G03H1/0406; G03H2001/0428 (PHYSICS)
Abstract
A method of reconstructing a three-dimensional (3-D) image on the basis of a diffraction grating includes extracting parallax images from a raw image of an object photographed by using a diffraction grating and reconstructing a 3-D image from the extracted parallax image array by using a virtual pinhole model.
Claims
1. A method comprising: capturing, by using an imaging lens, parallax images diffracted by a diffraction grating; picking up the parallax images, captured by the imaging lens, onto a pickup plane; rearranging the parallax images, picked up onto the pickup plane, in a virtual image plane; and back-projecting the parallax images, rearranged in the virtual image plane, onto a reconstruction plane to reconstruct a 3-D image of a real object.
2. The method of claim 1, wherein the diffracted parallax images comprise a first parallax image including the real object and a second parallax image including a virtual object corresponding to the real object, and the second parallax image comprises an image obtained through diffraction which is performed with respect to the first parallax image, based on a wavelength of a ray emanated from the real object and a diffraction angle determined based on an aperture width of the diffraction grating.
3. The method of claim 1, wherein the rearranging of the parallax images comprises segmenting the parallax images picked up onto the pickup plane based on a geometrical relationship between the virtual image plane and a virtual pinhole defined in the same plane as the imaging lens and mapping the segmented parallax images to the virtual image plane based on a mapping factor Δx.sub.mapping, and the virtual pinhole is a virtual camera defined in the same plane as the imaging lens.
4. The method of claim 3, wherein, when the diffracted parallax images include a first parallax image including the real object and a second parallax image including the virtual object corresponding to the real object, the imaging lens is placed in an xy plane where a z-coordinate is 0 in a 3-D space capable of being expressed on an xyz axis, and the pickup plane is placed in an xy plane where a z-coordinate is z.sub.I, the mapping factor Δx.sub.mapping is an amount of shift of an imaging point in an x-axis direction in the virtual image plane in a process of expressing the second parallax image as the imaging point in the pickup plane.
5. The method of claim 3, wherein, when the diffracted parallax images include a first parallax image including the real object and a second parallax image including the virtual object corresponding to the real object, the imaging lens is placed in an xy plane where a z-coordinate is 0 in a 3-D space capable of being expressed on an xyz axis, and the pickup plane is placed in an xy plane where a z-coordinate is z.sub.I, the mapping factor Δx.sub.mapping is expressed as the Equation below,
6. The method of claim 1, wherein the rearranging of the parallax images comprises respectively mapping the picked-up parallax images to minimum image areas defined in the virtual image plane, wherein each of the minimum image areas is determined based on a field of view of each of virtual pinholes defined in the same plane as the imaging lens, and the virtual pinholes are virtual cameras defined in the same plane as the imaging lens.
7. The method of claim 6, wherein the picked-up parallax images are mapped to edges in the minimum image areas.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF EMBODIMENTS
(12) Hereinafter, embodiments of the present invention will be described in detail with reference to the accompanying drawings. The advantages, features and aspects of the present invention will become apparent from the following description of the embodiments with reference to the accompanying drawings, which is set forth hereinafter. The present invention may, however, be embodied in different forms and should not be construed as limited to the embodiments set forth herein. Rather, these embodiments are provided so that this disclosure will be thorough and complete, and will fully convey the scope of the present invention to those skilled in the art.
(13) The terms used herein are for the purpose of describing particular embodiments only and are not intended to be limiting of example embodiments. As used herein, the singular forms “a,” “an” and “the” are intended to include the plural forms as well, unless the context clearly indicates otherwise. It will be further understood that the terms “comprises” and/or “comprising,” when used in this specification, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
(14) The present invention may provide a computational reconstruction method using a raw image obtained from diffraction grating imaging.
(15) The computational reconstruction method according to the present invention may include an optical analysis method of localizing each parallax image in a raw image and a back-projection method of reconstructing a volume from an extracted parallax image array.
(17) An optical structure using a diffraction grating according to the present invention, as illustrated in
(18) As illustrated in
(19) As illustrated in
(20) As illustrated in
(21) In order to overcome such problems, the present invention may propose a 3-D computational reconstruction method based on the theoretical analysis of geometrical information about parallax images in diffraction grating imaging.
(23) In
(24) Referring to
(25) The diffraction grating imaging system may include a diffraction grating 41 and an imaging lens 43 for performing the pickup process and may further include a computing device which performs a computing operation of reconstructing parallax images picked up onto a pickup plane through the diffraction grating 41 and the imaging lens 43 and includes the picked-up parallax images.
(26) The computing device may include a processor, a memory, a storage medium, a communication unit, and a system bus connecting the elements, and may execute an algorithm associated with the pickup process and the reconstruction process. Among the elements, the element that performs various operations on the basis of the executed algorithm may be the processor.
(27) The processor may include one or more general-purpose microprocessors, digital signal processors (DSPs), hardware cores, application specific integrated circuits (ASICs), field programmable gate arrays (FPGAs), graphics processing units (GPUs), or an arbitrary combination thereof.
(28) The storage medium may temporarily or permanently store an algorithm executed by a processor and intermediate data (an intermediate value) and/or result data (a result value) obtained from various arithmetic operations performed based on the execution of the algorithm, or may store various known algorithms for performing the pickup process and the reconstruction process, or may provide an execution space for an algorithm.
(29) The storage medium may be referred to as a computer-readable medium, and the computer-readable medium may include random access memory (RAM) such as synchronous dynamic random access memory (SDRAM), read-only memory (ROM), non-volatile random access memory (NVRAM), electrically erasable programmable read-only memory (EEPROM), flash memory, and hard disk.
(30) In analysis of the pickup process performed on 3-D objects using a diffraction grating, system parameters such as a wavelength of a light source, a spatial resolution of the diffraction grating 41, and positions of the diffraction grating 41 and the imaging lens 43 may be considered.
(31) The reconstruction process may be back-projection. Here, virtual pinholes may be defined for projecting parallax images onto a 3-D reconstruction space. A virtual pinhole VP may denote a virtual camera.
(32) In the diffraction grating imaging system according to an embodiment of the present invention, a maximum area within which 3-D objects in a picked-up image do not overlap may be defined as an effective object area EOA, and an operation of detecting the effective object area EOA from obtained images may be needed for reconstruction in diffraction grating imaging.
(33) Moreover, in the reconstruction process, a minimum image area MIA, which is a minimum field of view of each virtual pinhole, may be defined.
(34) Moreover, mapping between the effective object area EOA and a minimum image area MIA may be needed for computational reconstruction, and Equations relevant thereto will be described below.
(35) Pickup Process of 3-D Objects in Diffraction Grating Imaging
(36) The diffraction grating 41 may diffract a light ray emanated from a 3-D object. An operation of observing an object through the diffraction grating 41 by using the imaging lens 43 may be a basic concept of a pickup process in diffraction grating imaging.
(37) Light rays diffracted by the diffraction grating 41 may be observed as perspective images of the 3-D object. In a case where an object space is observed through the diffraction grating 41, it may be considered that virtual images of an object are generated on a rear surface of the diffraction grating 41.
(38) The virtual images may have parallaxes of the 3-D object and may be stored as a parallax image array PIA by a capturing device such as the imaging lens 43. Here, an imaging depth and size of a virtual object may be the same as those of an original 3-D object.
(40) Referring to
(41) As illustrated in
(42) A diffraction angle (θ) between the +1.sup.st-order parallax image PI (x.sub.1st, y.sub.O, z.sub.O) and the 0.sup.th-order parallax image PI (x.sub.O, y.sub.O, z.sub.O) may be calculated as θ=sin.sup.−1(λ/a). Here, λ may denote a wavelength of a light source, and a may denote an aperture width.
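As a quick numerical check of this relation (a sketch; the 532 nm wavelength and 10 μm aperture width are illustrative values, not taken from the disclosure):

```python
import math

def diffraction_angle(wavelength_m: float, aperture_width_m: float) -> float:
    """First-order diffraction angle theta = asin(lambda / a), in radians."""
    ratio = wavelength_m / aperture_width_m
    if not -1.0 <= ratio <= 1.0:
        raise ValueError("lambda/a must lie in [-1, 1] for a real diffraction angle")
    return math.asin(ratio)

# Example: green light (assumed 532 nm) through an assumed 10-micrometer aperture.
theta = diffraction_angle(532e-9, 10e-6)
print(math.degrees(theta))  # ~3.05 degrees
```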
(43) Considering a diffraction order and a position of a point object, an x-coordinate of a parallax image may be expressed as the following Equation 1.
(44) x.sub.mth=x.sub.O+m(z.sub.O−d)tan θ  [Equation 1]
(45) Here, m may be −1, 0, and 1. A y-coordinate (y.sub.O) may be obtained by replacing x.sub.O with y.sub.O in Equation 1. An imaging point I (x.sub.mth, y.sub.nth, z.sub.O) in a pickup plane 16 may be expressed as the following Equation 2.
(46) I(x.sub.mth, y.sub.nth, z.sub.O)=((z.sub.I/z.sub.O)x.sub.mth, (z.sub.I/z.sub.O)y.sub.nth, z.sub.I)  [Equation 2]
(47) Here, m and n may be −1, 0, and 1, and z.sub.I may denote a z-coordinate of the pickup plane 16.
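The geometry of Equations 1 and 2 can be sketched as follows. The lateral-shift model m(z.sub.O−d)tan θ at the object depth and the central-projection model of the imaging lens are assumptions consistent with the surrounding text, and all numeric parameters are illustrative:

```python
import math

def parallax_image(x_o, y_o, z_o, d, m, n, theta):
    """(m, n)-th order virtual (parallax) image of a point object: assumed to
    lie at the object depth z_o, laterally shifted by m*(z_o - d)*tan(theta)."""
    shift = (z_o - d) * math.tan(theta)
    return (x_o + m * shift, y_o + n * shift, z_o)

def imaging_point(x_mth, y_nth, z_o, z_i):
    """Central projection of the virtual image through the lens center at the
    origin onto the pickup plane z = z_i (pinhole approximation of the lens)."""
    s = z_i / z_o
    return (s * x_mth, s * y_nth, z_i)

theta = math.asin(532e-9 / 10e-6)   # assumed wavelength and aperture width
pi0 = parallax_image(0.001, 0.0, z_o=0.10, d=0.05, m=0, n=0, theta=theta)
print(imaging_point(pi0[0], pi0[1], pi0[2], z_i=-0.05))  # 0th order, magnification -0.5
```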
(48) The diffraction grating 41 may generate a parallax image in a space where a z-coordinate of an object and a parallax image are the same.
(49) It may be seen that a ray reaching I (x.sub.1st, y.sub.O, z.sub.O) in the pickup plane comes from PI (x.sub.1st, y.sub.O, z.sub.O), but only a ray emanated from a point object is real.
(50) Therefore, a viewpoint of each parallax image may be described based on a relationship between a real ray 23 from a point object P1 and virtual rays 21 and 22 from virtual point objects P2 and P3 passing through an optical center 20 of the imaging lens 43.
(52) Referring to
(53) In a point G(x.sub.mth, y.sub.nth, z.sub.O), the diffraction grating 41 may change a path of a real ray 23 from a point object P1 to the optical center 20 of the imaging lens 43. The point G(x.sub.mth, y.sub.nth, z.sub.O) may be expressed as the following Equation 3.
(54) G(x.sub.mth, y.sub.nth, z.sub.O)=((d/z.sub.O)x.sub.mth, (d/z.sub.O)y.sub.nth, d)  [Equation 3]
(55) Here, m and n may be −1, 0, and 1.
(56) Hereinafter, virtual pinholes for back-projection in diffraction grating imaging corresponding to a crucial part of a reconstruction process according to an embodiment of the present invention will be described.
(57) Virtual Pinholes and Mapping Position of Parallax Image
(58) Virtual pinholes may be regarded as a camera array. In order to implement a method of reconstructing a 3-D image, the present invention may provide formulas for the positions of the virtual pinholes corresponding to their parallax images and the mapping between parallax images and virtual pinholes.
(59) As shown in
(60) The imaging point I (x.sub.1st, y.sub.O, z.sub.O) is considered to be a parallax image having an angle Ø with respect to a ray emitted from a point object P1. When another point having a depth z.sub.I in a pickup plane is assumed, a line which has the same parallax as an image I (x.sub.1st, y.sub.O, z.sub.O) and passes through a point object and a point G (x.sub.1st, y.sub.O, z.sub.O) may meet a pickup plane.
(62) As shown in
(63) VP(m, n)=(m·d·tan θ, n·d·tan θ, 0)  [Equation 4]
(64) Here, m and n may be −1, 0, and 1.
(65) Equation 4 may denote that, when orders of corresponding parallax images are the same, a position of each virtual pinhole is a unique value regardless of a position of an object.
(66) Equation 4 may also denote that a position of each virtual pinhole increases in an x direction and a y direction as the diffraction order increases, and the imaging lens 43 may be regarded as a 0.sup.th virtual pinhole.
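Under the geometry above, one consistent closed form places the (m, n)-th virtual pinhole at (m·d·tan θ, n·d·tan θ, 0) in the lens plane; the following sketch builds the 3×3 pinhole array under that assumption, with illustrative parameters:

```python
import math

def virtual_pinholes(d, theta):
    """3x3 array of virtual pinhole positions in the lens plane (z = 0),
    assuming the (m, n)-th pinhole sits at (m*d*tan(theta), n*d*tan(theta), 0)."""
    step = d * math.tan(theta)
    return {(m, n): (m * step, n * step, 0.0)
            for m in (-1, 0, 1) for n in (-1, 0, 1)}

vps = virtual_pinholes(d=0.05, theta=math.asin(532e-9 / 10e-6))
print(vps[(0, 0)])  # the 0th pinhole coincides with the imaging lens center
```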
(67) Hereinafter, a virtual image (VI) plane where a virtual image is provided will be described.
(68) Virtual images may be images where an image I (x.sub.mth, y.sub.nth, z.sub.O) of a pickup (PI) plane is rearranged in a virtual image plane. A position function of a virtual image may be regarded as VI (x.sub.mth, y.sub.nth, z.sub.O)=(x, y, z.sub.I). Here, m and n may be −1, 0, and 1.
(69)
(70) In Equation 5, by replacing x.sub.O and x.sub.mth with y.sub.O and y.sub.mth, a y-coordinate of a virtual image may be y.sub.O.
(71) A virtual image VI (x.sub.mth, y.sub.nth, z.sub.O) may be used for a computational reconstruction method according to the present invention.
(72) Such images may correspond to a shift version of a picked-up parallax image I (x.sub.mth, y.sub.nth, z.sub.O) having a mapping factor Δx.sub.mapping.
(73) By using Equations 2 and 5, the mapping factor Δx.sub.mapping representing the amount of shift between the parallax image I (x.sub.mth, y.sub.nth, z.sub.O) and a virtual image VI (x.sub.mth, y.sub.nth, z.sub.O) may be expressed as the following Equation 6.
(74)
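Once the mapping factor Δx.sub.mapping of Equation 6 is known and converted to a whole number of pixels, rearranging a picked-up parallax image into the virtual image plane amounts to a lateral array shift. A minimal sketch (the zero-filled shift implementation is an assumption, not the disclosure's exact procedure):

```python
import numpy as np

def map_to_virtual_image_plane(parallax_img: np.ndarray, dx_pixels: int) -> np.ndarray:
    """Rearrange a picked-up parallax image into the virtual image plane by
    shifting it dx_pixels along x (columns); vacated pixels are zero-filled."""
    out = np.zeros_like(parallax_img)
    h, w = parallax_img.shape[:2]
    if dx_pixels >= 0:
        out[:, dx_pixels:] = parallax_img[:, :w - dx_pixels]
    else:
        out[:, :w + dx_pixels] = parallax_img[:, -dx_pixels:]
    return out

img = np.arange(12.0).reshape(3, 4)
shifted = map_to_virtual_image_plane(img, 1)
print(shifted[:, 0])  # zero-filled column after a +1 pixel shift
```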
(75) Parallax Image Segmentation and Reconstruction Method
(76) In order to segment a parallax image, a field of view of each virtual pinhole may be determined based on an image area.
(77) Unlike a raw image picked up by a lens array or a camera array, due to a transparent diffraction grating, it may be difficult to segment individual parallax images from a raw image captured by the diffraction grating 41.
(78) Moreover, in diffraction grating imaging, when a size of an object is greater than a specific limit, parallax images may overlap one another.
(79) A problem where parallax images overlap one another may be described based on a size of an object and a distance between the object and a diffraction grating.
(80) In order to solve the problem, an effective object area (EOA), which is a maximum size of an object, may be defined.
(81) For convenience, it may be assumed that a center of an effective object area (EOA) is aligned on an optical axis of the imaging lens 43.
(83) First, referring to
(84) In
(85) A boundary between picked-up PI regions may be determined based on Equation 2. Therefore, it may be possible to segment parallax images in a picked-up plane.
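The segmentation step can be sketched as follows; splitting the raw image into equal 3×3 tiles is a simplification, since the disclosure derives the actual region boundaries from the imaging-point positions of Equation 2:

```python
import numpy as np

def segment_parallax_images(raw: np.ndarray):
    """Split a raw pickup-plane image into a 3x3 grid of parallax image
    regions, indexed by diffraction order (m, n). Equal-sized tiles are an
    assumed simplification of the Equation-2-derived boundaries."""
    h, w = raw.shape[:2]
    tiles = {}
    for i, m in enumerate((-1, 0, 1)):
        for j, n in enumerate((-1, 0, 1)):
            tiles[(m, n)] = raw[i * h // 3:(i + 1) * h // 3,
                                j * w // 3:(j + 1) * w // 3]
    return tiles

raw = np.zeros((9, 9))
tiles = segment_parallax_images(raw)
print(tiles[(0, 0)].shape)  # (3, 3)
```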
(86) By using Equation 6, a region of a picked-up parallax image of a pickup plane may be mapped to a minimum image area (MIA) of a virtual image (VI) plane, and then, half of the minimum image area may be obtained from a distance from an optical axis 60 of a virtual pinhole VP to an edge of the picked-up parallax image region mapped to the virtual image plane.
(87) When a point object is on the optical axis, a size of the effective object area (EOA) may be the same as a size of a ±1.sup.st PI region. A size Δx may depend on a point x.sub.OC on the optical axis. As the size Δx changes, a size of the 1.sup.st-order PI region may be changed with respect to a center (x.sub.OC1st) of a 1.sup.st-order PI. A maximum size Δx (i.e., the effective object area (EOA)) may be expressed as the following Equation 7.
(88) Δx=|z.sub.O−d| tan(sin.sup.−1(λ/a))  [Equation 7]
(89) Here, λ may denote a wavelength of a light source, and a may denote an aperture width.
(90) It may be seen in Equation 7 that the effective object area (EOA) (or a size of an EOA) is proportional to a distance (|z.sub.0−d|) between an object and a diffraction grating. Half of the minimum image area (MIA) Δr may be expressed as the following Equation 8 based on Equations 4, 5, and 7.
(91)
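The dependence of the effective object area on the object-to-grating distance can be sketched numerically. The tan(sin.sup.−1(λ/a)) factor is an assumed form consistent with Equation 7's stated proportionality to |z.sub.O−d|, and all parameters are illustrative:

```python
import math

def effective_object_area(z_o, d, wavelength, aperture_width):
    """Maximum non-overlapping object extent (EOA): proportional to the
    object-to-grating distance |z_o - d| times the tangent of the first-order
    diffraction angle (assumed form)."""
    theta = math.asin(wavelength / aperture_width)
    return abs(z_o - d) * math.tan(theta)

# EOA grows linearly as the object moves away from the grating.
near = effective_object_area(0.08, 0.05, 532e-9, 10e-6)
far = effective_object_area(0.15, 0.05, 532e-9, 10e-6)
print(near < far)  # True
```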
(92) In
(93) When an effective object area (EOA) is obtained based on Equation 7 described above in association with a circle object disposed in a left region of
(94) As shown in a right region of
(96) Referring to
(97) Moreover, in a minimum image area MIA, overlapping may occur, as emphasized in blue and orange in
(98) In a PIA generated by the computational reconstruction method according to the present invention, a position of a virtual pinhole and position information and size information about each element image may be well defined, and thus, a 3-D image may be reconstructed (restored) by a computational integral imaging reconstruction (CIIR) method known to those skilled in the art.
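A minimal shift-and-average back-projection in the spirit of CIIR can be sketched as follows; the per-order pixel-disparity parameterization and uniform averaging are assumptions, not the disclosure's exact procedure:

```python
import numpy as np

def ciir_backproject(elemental, depth_shift):
    """Minimal CIIR-style back-projection: shift each elemental image by its
    diffraction order times the per-order pixel disparity at the chosen
    reconstruction plane, then average. `elemental` maps (m, n) -> 2-D array."""
    acc = np.zeros_like(next(iter(elemental.values())), dtype=float)
    for (m, n), img in elemental.items():
        acc += np.roll(np.roll(img, m * depth_shift, axis=0),
                       n * depth_shift, axis=1)
    return acc / len(elemental)

# At depth_shift = 0 the nine orders simply average.
elemental = {(m, n): np.full((4, 4), float(m + n))
             for m in (-1, 0, 1) for n in (-1, 0, 1)}
rec = ciir_backproject(elemental, depth_shift=0)
print(rec[0, 0])  # contributions m+n over the 3x3 orders cancel to 0.0
```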
(100) Referring to
(101) Here, the diffracted parallax images may include a first parallax image PI (x.sub.0, y.sub.0, z.sub.0) including a real object P1 and a second parallax image (PI (x.sub.1st, y.sub.0, z.sub.0) and PI (x.sub.−1st, y.sub.0, z.sub.0)) including virtual objects P2 and P3 corresponding to the real object P1, and the second parallax image may be an image obtained through diffraction which is performed with respect to the first parallax image, based on a wavelength of a ray emanated from the real object and a diffraction angle determined based on an aperture width of the diffraction grating.
(102) Subsequently, in step S620, the parallax images captured by the imaging lens may be picked up onto a pickup plane.
(103) Subsequently, in step S630, the parallax images picked up onto the pickup plane may be rearranged on (mapped to) a virtual image plane.
(104) Here, step S630 may be a process of segmenting the parallax images picked up onto the pickup plane on the basis of a geometrical relationship between the virtual image plane and a virtual pinhole defined in the same plane as the imaging lens and mapping the segmented parallax images to the virtual image plane on the basis of a mapping factor Δx.sub.mapping.
(105) When the diffracted parallax images include a first parallax image including a real object and a second parallax image including the virtual object corresponding to the real object, the imaging lens is placed in an xy plane where a z-coordinate is 0 in a 3-D space capable of being expressed on an xyz axis, and the pickup plane is placed in an xy plane where a z-coordinate is z.sub.I, the mapping factor may denote the amount of shift of the imaging point in an x-axis direction in the virtual image plane in a process of expressing the second parallax image as an imaging point in the pickup plane.
(106) The mapping factor may be calculated as expressed in Equation 6. In Equation 6, x.sub.0 may denote an x-coordinate of the imaging lens, x.sub.mth may denote an x-coordinate of a diffracted parallax image, z.sub.o may denote a z-coordinate of the real object, and d may denote a distance from the imaging lens to the diffraction grating.
(107) Moreover, step S630 may include a process of respectively mapping the picked-up parallax images to minimum image areas defined in the virtual image plane, and each of the minimum image areas may be determined based on a field of view of each of virtual pinholes defined in the same plane as the imaging lens. In this case, the picked-up parallax images may be mapped to edges in the minimum image areas.
(108) Subsequently, in step S640, parallax images rearranged in the virtual image plane may be back-projected onto a reconstruction plane, and thus, a 3-D image of the real object may be reconstructed.
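Steps S610 to S640 can be strung together in a toy end-to-end sketch; the function name, the equal-tile segmentation, and the identity rearrangement into the virtual image plane are illustrative assumptions rather than the disclosure's exact procedure:

```python
import numpy as np

def reconstruct_3d(raw: np.ndarray, depth_shift: int) -> np.ndarray:
    """Toy pipeline: segment the pickup-plane image (S620/S630, equal tiles
    assumed), then back-project by shift-and-average onto one reconstruction
    plane (S640)."""
    h, w = raw.shape
    tiles = {(m, n): raw[(m + 1) * h // 3:(m + 2) * h // 3,
                         (n + 1) * w // 3:(n + 2) * w // 3]
             for m in (-1, 0, 1) for n in (-1, 0, 1)}
    acc = np.zeros_like(tiles[(0, 0)], dtype=float)
    for (m, n), t in tiles.items():
        acc += np.roll(np.roll(t, m * depth_shift, axis=0),
                       n * depth_shift, axis=1)
    return acc / 9.0

raw = np.ones((9, 9))
vol_slice = reconstruct_3d(raw, depth_shift=1)
print(vol_slice.mean())  # 1.0 for a uniform input
```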
(109) According to the embodiments of the present invention, a parallax image array may be obtained based on a diffraction grating, and thus, compared with a method using a camera array or a lens array, a parallax image array obtaining system which is low in cost and low in complexity may be implemented.
(110) Moreover, in a parallax image array generated according to the present invention, a position of a virtual pinhole and size information and position information about each element image may be well defined, and thus, a 3-D image may be reconstructed by using the CIIR method known to those skilled in the art.
(111) Moreover, a geometrical relationship between an effective object area (EOA), a picked-up PI region, a virtual pinhole, and a minimum image area (MIA) may be defined, thereby implementing a method of reconstructing a 3-D image by using a diffraction grating.
(112) A number of exemplary embodiments have been described above. Nevertheless, it will be understood that various modifications may be made. For example, suitable results may be achieved if the described techniques are performed in a different order and/or if components in a described system, architecture, device, or circuit are combined in a different manner and/or replaced or supplemented by other components or their equivalents. Accordingly, other implementations are within the scope of the following claims.