METHOD FOR HOLE FILLING IN 3D MODEL, AND RECORDING MEDIUM AND APPARATUS FOR PERFORMING THE SAME
20170263057 · 2017-09-14
Inventors
CPC classification
H04N5/2226
ELECTRICITY
G06T19/20
PHYSICS
International classification
G06T19/20
PHYSICS
G06T19/00
PHYSICS
Abstract
Disclosed is a method for hole-filling in 3D models. The method includes extracting static background information from a current frame of an input image and extracting virtual static background information using the static background information, warping a color image and a depth map of the current frame to acquire a virtual image and a virtual depth map, and labeling a hole area formed in the virtual depth map to extract local background information, performing a first hole-filling onto the virtual image and the virtual depth map using a similarity between the virtual static background information and the local background information, and performing a second hole-filling with respect to remaining holes after the first hole-filling in a manner of an exemplar-based in-painting method to which a priority function including a depth term is applied.
Claims
1. A method for hole-filling in three-dimensional (3D) models, the method comprising: extracting static background information from a current frame of an input image and extracting virtual static background information using the static background information; warping a color image and a depth map of the current frame to acquire a virtual image and a virtual depth map, and labeling a hole area formed in the virtual depth map to extract local background information; performing a first hole-filling onto the virtual image and the virtual depth map using a similarity between the virtual static background information and the local background information; and performing a second hole-filling with respect to remaining holes after the first hole-filling in a manner of an exemplar-based in-painting method to which a priority function including a depth term is applied.
2. The method of claim 1, wherein the extracting of the static background information comprises: generating a codebook with respect to each frame of the input image using the color image and the depth map information corresponding to the color image; classifying codewords of the codebook into a background codeword and a foreground codeword; and extracting a background codebook for a background area of the input image.
3. The method of claim 2, wherein the extracting of the static background information comprises: extracting a background image from the current frame of the input image and a background depth map corresponding to the background image as temporary background information using the background codebook; and extracting the static background information including a static background image and a static depth map of the current frame by updating the temporary background information using background information of preceding frames.
4. The method of claim 3, wherein the extracting of the static background information comprises: extracting the virtual static background information including a virtual static background image and a virtual static background depth map corresponding to the virtual static background image by warping the static background image and the static depth map.
5. The method of claim 1, wherein the labeling of a hole area formed in the virtual depth map to extract local background information comprises: extracting the local background information by labeling a rectangular area including the hole area formed in the virtual depth map, obtaining a local background depth value by identifying a background pixel and a foreground pixel according to depth information of the labeled rectangular area, and filling an area identified as the foreground pixel of the rectangular area with the local background depth value.
6. The method of claim 1, wherein the performing of the first hole-filling comprises: updating the virtual image and the virtual depth map using the virtual static background information corresponding to a pixel positioned within a predetermined range from a pixel corresponding to the local background information.
7. The method of claim 6, wherein the performing of the second hole-filling comprises: labeling the remaining hole pixels in the updated virtual image and virtual depth map and performing the second hole-filling in the manner of the exemplar-based in-painting method while moving from a smallest hole area to a largest hole area according to a result of the labeling.
8. A non-transitory computer-readable recording medium having recorded thereon a computer program for performing a method for hole-filling in 3D models, the method comprising: extracting static background information from a current frame of an input image and extracting virtual static background information using the static background information; warping a color image and a depth map of the current frame to acquire a virtual image and a virtual depth map, and labeling a hole area formed in the virtual depth map to extract local background information; performing a first hole-filling onto the virtual image and the virtual depth map using a similarity between the virtual static background information and the local background information; and performing a second hole-filling with respect to remaining holes after the first hole-filling in a manner of an exemplar-based in-painting method to which a priority function including a depth term is applied.
9. The non-transitory computer-readable recording medium of claim 8, wherein the extracting of the static background information comprises: generating a codebook with respect to each frame of the input image using the color image and the depth map information corresponding to the color image; classifying codewords of the codebook into a background codeword and a foreground codeword; and extracting a background codebook for a background area of the input image.
10. The non-transitory computer-readable recording medium of claim 9, wherein the extracting of the static background information comprises: extracting a background image from the current frame of the input image and a background depth map corresponding to the background image as temporary background information using the background codebook; and extracting the static background information including a static background image and a static depth map of the current frame by updating the temporary background information using background information of preceding frames.
11. The non-transitory computer-readable recording medium of claim 10, wherein the extracting of the static background information comprises: extracting the virtual static background information including a virtual static background image and a virtual static background depth map corresponding to the virtual static background image by warping the static background image and the static depth map.
12. The non-transitory computer-readable recording medium of claim 8, wherein the labeling of a hole area formed in the virtual depth map to extract local background information comprises: extracting the local background information by labeling a rectangular area including the hole area formed in the virtual depth map, obtaining a local background depth value by identifying a background pixel and a foreground pixel according to depth information of the labeled rectangular area, and filling an area identified as the foreground pixel of the rectangular area with the local background depth value.
13. The non-transitory computer-readable recording medium of claim 8, wherein the performing of the first hole-filling comprises: updating the virtual image and the virtual depth map using the virtual static background information corresponding to a pixel positioned within a predetermined range from a pixel corresponding to the local background information.
14. An apparatus for performing hole-filling in 3D models, the apparatus for hole-filling in 3D models comprising: a static background information extraction unit extracting static background information from a current frame of an input image and extracting virtual static background information using the static background information; a local background information extraction unit warping a color image and a depth map of the current frame to acquire a virtual image and a virtual depth map and labeling a hole area formed in the virtual depth map to extract local background information; and a hole-filling unit performing a first hole-filling onto the virtual image and the virtual depth map using a similarity between the virtual static background information and the local background information and performing a second hole-filling with respect to remaining holes after the first hole-filling in a manner of an exemplar-based in-painting method to which a priority function including a depth term is applied.
15. The apparatus for hole-filling in 3D models of claim 14, wherein the static background information extraction unit generates a codebook with respect to each frame of the input image using the color image and the depth map information corresponding to the color image, classifies codewords of the codebook into a background codeword and a foreground codeword, and extracts a background codebook for a background area of the input image.
16. The apparatus for hole-filling in 3D models of claim 15, wherein the static background information extraction unit extracts a background image from the current frame of the input image and a background depth map corresponding to the background image as temporary background information using the background codebook, and extracts the static background information including a static background image and a static depth map of the current frame by updating the temporary background information using background information of preceding frames.
17. The apparatus for hole-filling in 3D models of claim 16, wherein the static background information extraction unit extracts the virtual static background information including a virtual static background image and a virtual static background depth map corresponding to the virtual static background image by warping the static background image and the static depth map.
18. The apparatus for hole-filling in 3D models of claim 14, wherein the local background information extraction unit extracts the local background information by labeling a rectangular area including the hole area formed in the virtual depth map, obtaining a local background depth value by identifying a background pixel and a foreground pixel according to depth information of the labeled rectangular area, and filling an area identified as the foreground pixel of the rectangular area with the local background depth value.
19. The apparatus for hole-filling in 3D models of claim 14, wherein the hole-filling unit updates the virtual image and the virtual depth map using the virtual static background information corresponding to a pixel positioned within a predetermined range from a pixel corresponding to the local background information.
20. The apparatus for hole-filling in 3D models of claim 19, wherein the hole-filling unit labels the remaining hole pixels in the updated virtual image and virtual depth map and performs the second hole-filling in the manner of the exemplar-based in-painting method while moving from a smallest hole area to a largest hole area according to a result of the labeling.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0025] The above and other objects, features and advantages of the present disclosure will become more apparent to those of ordinary skill in the art by describing exemplary embodiments thereof in detail with reference to the accompanying drawings.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
[0037] In the following detailed description, reference is made to the accompanying drawings, which show specific embodiments in which the disclosure may be practiced. These embodiments will be described in sufficient detail to enable those skilled in the art to practice the present disclosure. It should be understood that the various embodiments of the present disclosure, although different, are not necessarily mutually exclusive. For example, a particular feature, structure or characteristic described herein in connection with one embodiment may be implemented within other embodiments without departing from the spirit and scope of the present disclosure. In addition, it should be understood that the location or arrangement of individual elements within each disclosed embodiment may be modified without departing from the spirit and scope of the present disclosure. The following detailed description is, therefore, not to be taken in a limiting sense, and the scope of the present disclosure is defined only by the appended claims, appropriately interpreted, along with the full range of equivalents to which the claims are entitled. In the drawings, like numerals refer to the same or similar elements throughout several views.
[0038] The words and phrases used herein should be understood and interpreted to have a meaning consistent with the understanding of those words and phrases by those skilled in the relevant art. No special definition of a term or phrase, i.e., a definition that is different from the ordinary and customary meaning as understood by those skilled in the art, is intended to be implied by consistent usage of the term or phrase herein. For example, the following discussion contains a non-exhaustive list of definitions of several specific terms used in this disclosure (other terms may be defined or clarified in a definitional manner elsewhere herein). These definitions are intended to clarify the meanings of the terms used herein.
[0039] At least: As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements). The phrases “at least one”, “one or more”, and “and/or” are open-ended expressions that are both conjunctive and disjunctive in operation. For example, each of the expressions “at least one of A, B and C”, “at least one of A, B, or C”, “one or more of A, B, and C”, “one or more of A, B, or C” and “A, B, and/or C” means A alone, B alone, C alone, A and B together, A and C together, B and C together, or A, B and C together.
[0040] Based on: “Based on” does not mean “based only on”, unless expressly specified otherwise. In other words, the phrase “based on” describes “based only on,” “based at least on,” and “based at least in part on.”
[0041] Comprising: In the claims, as well as in the specification, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to.
[0042] Hereinafter, preferred embodiments of the present disclosure will be described in more detail with reference to the accompanying drawings.
[0044] Referring to
[0045] The static background information extraction unit 10 may extract static background information on a specific frame of an input image and extract virtual static background information using the extracted static background information. This will be described with reference to
[0047] Referring to
[0048] Specifically, in order to generate the codebook C.sup.t, the static background information extraction unit 10 may form a training set X having N pixels of the preceding frame with respect to the color image F.sup.t and the depth map M.sup.t of the current frame as expressed in Equation 1 below.
X={x.sub.1,x.sub.2,x.sub.3, . . . ,x.sub.N}={(p.sub.1,d.sub.1), . . . ,(p.sub.N,d.sub.N)}, [Equation 1]
where p.sub.n={R.sub.n, G.sub.n, B.sub.n} denotes an RGB vector, and d.sub.n denotes a depth value of a corresponding pixel.
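The pairing of RGB vectors and depth values in Equation 1 can be sketched in a few lines of NumPy. The array shapes and the helper name are illustrative assumptions, not part of the patent:

```python
import numpy as np

def build_training_set(color, depth):
    """Pair each pixel's RGB vector p_n with its depth value d_n,
    producing the training set X of Equation 1 (assumed input shapes:
    color H x W x 3, depth H x W)."""
    rgb = color.reshape(-1, 3).astype(float)   # p_n = (R_n, G_n, B_n)
    d = depth.reshape(-1, 1).astype(float)     # d_n
    return np.hstack([rgb, d])                 # x_n = (p_n, d_n)

# one row per pixel, four columns: R, G, B, depth
X = build_training_set(np.zeros((4, 4, 3)), np.ones((4, 4)))
```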
[0049] The static background information extraction unit 10 may generate the codebook C.sup.t on the basis of the training set X. When the codebook C.sup.t has L codewords and is expressed as C.sup.t={c.sub.1, c.sub.2, c.sub.3, . . . , c.sub.L}, the l-th codeword may be composed of a vector
[0050] The static background information extraction unit 10 may generate the codebook using the algorithm of
[0051] When the codebook is generated, the static background information extraction unit 10 may extract the background codebook C.sup.t.sub.BG for a background area of the input image as expressed in Equation 5 by classifying the codewords into a background codeword and a foreground codeword using k-means clustering. In this case, when there is one list of codewords for an arbitrary pixel, the static background information extraction unit 10 may classify the codeword as the background codeword.
C.sub.BG.sup.t={c.sub.m∈BG|1≤m≤M},
C.sup.t=(C.sub.BG.sup.t∪C.sub.FG.sup.t),(C.sub.BG.sup.t∩C.sub.FG.sup.t)=∅. [Equation 5]
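The two-way split of Equation 5 might be sketched with a simple two-means pass. The patent does not state which codeword attribute is clustered, so clustering on codeword depth values, with the low-depth cluster taken as background (matching the low-depth local-background convention used later in the description), is an assumption:

```python
import numpy as np

def split_codewords(depths, iters=20):
    """Two-means (k=2) split of scalar codeword features into a low and a
    high cluster; returns a boolean mask marking the low cluster, taken
    here as the background codewords (an assumption -- the patent does not
    fix the clustering feature or the background convention)."""
    depths = np.asarray(depths, dtype=float)
    lo, hi = depths.min(), depths.max()        # initial centroids
    low = np.ones(depths.shape, dtype=bool)
    for _ in range(iters):
        low = np.abs(depths - lo) <= np.abs(depths - hi)
        if low.all() or (~low).all():          # degenerate split: stop
            break
        lo, hi = depths[low].mean(), depths[~low].mean()
    return low
```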
[0052] When the background codebook C.sup.t.sub.BG is extracted, the static background information extraction unit 10 may make each pixel in the current frame correspond to the background codebook C.sup.t.sub.BG to extract background information including a background image of the current frame and a depth map corresponding to the background image. In this case, pixels corresponding to the foreground area may be defined as a hole area according to the background information of the current frame, which is extracted by the static background information extraction unit 10. Accordingly, the static background information extraction unit 10 may compensate for information on a background area corresponding to the foreground area using background information of preceding frames. For this, the static background information extraction unit 10 may accumulate and store background area information for each frame of the input image and define background information extracted from the current frame using the algorithm of
[0054] Lastly, the static background information extraction unit 10 may acquire a virtual static background image VF.sup.t.sub.BG and a virtual static background depth map VM.sup.t.sub.BG corresponding to the virtual static background image VF.sup.t.sub.BG by warping the extracted static background image F.sup.t.sub.BG and static background depth map M.sup.t.sub.BG.
[0055] The local background information extraction unit 20 may warp the color image F.sup.t in the current frame and the corresponding depth map M.sup.t to acquire a virtual image VF.sup.t and a virtual depth map VM.sup.t. In this case, the local background information extraction unit 20 may label a hidden area formed in the virtual depth map VM.sup.t, that is, a hole area, to extract local background information. This will be described with reference to
[0057] Generally, hole-filling performance is sensitive to a hole-filling order and a hole-filling window size. Thus, the local background information extraction unit 20 may label hole pixels as shown in
Ω.sub.k⊂R.sub.k. [Equation 6]
[0058] As described above, the local background information extraction unit 20 may label a rectangular area including a hole area, which is depicted as a rectangle in
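The labeling of hole areas Ω.sub.k and their enclosing rectangles R.sub.k (Equation 6) can be sketched with a standard connected-component pass; the scan order and the choice of 4-connectivity are assumptions:

```python
from collections import deque
import numpy as np

def label_hole_rectangles(hole_mask):
    """4-connected labeling of hole pixels; returns a label map and the
    bounding rectangle R_k (r0, c0, r1, c1) of each hole area Omega_k."""
    H, W = hole_mask.shape
    labels = np.zeros((H, W), dtype=int)
    rects, next_label = [], 1
    for i in range(H):
        for j in range(W):
            if hole_mask[i, j] and labels[i, j] == 0:
                q = deque([(i, j)])
                labels[i, j] = next_label
                r0 = r1 = i
                c0 = c1 = j
                while q:                      # flood-fill one component
                    y, x = q.popleft()
                    r0, r1 = min(r0, y), max(r1, y)
                    c0, c1 = min(c0, x), max(c1, x)
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < H and 0 <= nx < W
                                and hole_mask[ny, nx]
                                and labels[ny, nx] == 0):
                            labels[ny, nx] = next_label
                            q.append((ny, nx))
                rects.append((r0, c0, r1, c1))
                next_label += 1
    return labels, rects
```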
[0059] In detail, first, the local background information extraction unit 20 may divide the labeled rectangular area into the sub-rectangular areas in the virtual depth map VM.sup.t. For example, the sub-rectangular areas of a k-th labeled rectangular area R.sub.k with a size of M×N may be defined as in Equation 7 below.
R.sub.k=CR.sub.k,1∪ . . . ∪CR.sub.k,z,
CR.sub.k,1∩ . . . ∩CR.sub.k,z=∅. [Equation 7]
[0060] In Equation 7, the size of each of sub-rectangular areas CR.sub.k,1, . . . , CR.sub.k,z may be determined as M×(N/z).
[0061] The local background information extraction unit 20 may acquire two clusters as expressed in Equation 8 below by applying a k-means clustering algorithm (k=2) to pixels other than hole pixels among the sub-rectangular areas.
CR.sub.k,l=CR.sub.k,l,low∪CR.sub.k,l,high,
CR.sub.k,l,low∩CR.sub.k,l,high=∅. [Equation 8]
[0062] In Equation 8, CR.sub.k,l,low and CR.sub.k,l,high denote a low depth cluster and a high depth cluster in each of the sub-rectangular areas CR.sub.k,1, . . . , CR.sub.k,z. In this case, the maximum value of the low depth cluster CR.sub.k,l,low may be defined as in Equation 9, and may be used as a depth value of a local background.
d.sub.k,l,max=max{d.sub.m|d.sub.m∈CR.sub.k,l,low}. [Equation 9]
[0063] Lastly, the local background information extraction unit 20 may extract local background information VM.sup.t.sub.filled by filling pixels of the depth map VM.sup.t that satisfy conditions of Equation 10 below with the maximum value of the low depth cluster acquired through Equation 9.
[0064] As described above, the local background information extraction unit 20 may extract the local background information by labeling a rectangular area including the hole area included in the virtual depth map VM.sup.t, identifying background pixels and foreground pixels according to depth information of the rectangular area to acquire a local background depth value, and filling an area identified as the foreground pixels of the rectangular area with the local background depth value.
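A compact sketch of this local-background step (Equations 7 to 10) for one labeled rectangle follows. Column-wise strips, the default z=4, the simple two-means loop, and the hole-mask argument are assumptions; the patent fixes none of these details:

```python
import numpy as np

def fill_local_background(depth_rect, hole_mask_rect, z=4):
    """Split the labeled rectangle into z column strips (Equation 7),
    2-means the non-hole depths of each strip into low/high clusters
    (Equation 8), and write the low cluster's maximum d_max (Equation 9)
    into hole pixels and pixels assigned to the high (foreground)
    cluster."""
    out = depth_rect.astype(float).copy()
    # np.array_split returns views, so writing into a strip updates `out`
    for strip, h in zip(np.array_split(out, z, axis=1),
                        np.array_split(hole_mask_rect, z, axis=1)):
        vals = strip[~h]
        if vals.size == 0:
            continue
        lo_c, hi_c = vals.min(), vals.max()
        for _ in range(20):                  # simple 1-D two-means
            low = np.abs(vals - lo_c) <= np.abs(vals - hi_c)
            if low.all() or (~low).all():
                break
            lo_c, hi_c = vals[low].mean(), vals[~low].mean()
        d_max = vals[low].max()              # Equation 9
        # fill holes and foreground (high-cluster) pixels with d_max
        strip[h | (np.abs(strip - lo_c) > np.abs(strip - hi_c))] = d_max
    return out
```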
[0066] The hole-filling unit 30 may perform primary hole-filling on the virtual image VF.sup.t using similarity between the virtual static background depth map VM.sup.t.sub.BG extracted by the static background information extraction unit 10 and the local background information VM.sup.t.sub.filled extracted by the local background information extraction unit 20, and perform secondary hole-filling in an exemplar-based in-painting method in which a priority function including a depth term is applied to the remaining holes after the primary hole-filling.
[0067] In detail, the hole-filling unit 30 may update the virtual image VF.sup.t using the virtual static background image VF.sup.t.sub.BG according to the similarity between the virtual static background depth map VM.sup.t.sub.BG and the local background information VM.sup.t.sub.filled as expressed in Equation 11 below, and may also update the virtual depth map VM.sup.t using the virtual static background depth map VM.sup.t.sub.BG as expressed in Equation 12 below.
[0068] In Equations 11 and 12, A and B may be defined as VM.sup.t.sub.filled (i, j)−T and VM.sup.t.sub.filled (i, j)+T, respectively, and T denotes a positive threshold.
[0069] As described above, the hole-filling unit 30 may perform hole-filling on the hole area included in the virtual depth map VM.sup.t by updating background pixels of the virtual image VF.sup.t using pixels of the virtual static background image VF.sup.t.sub.BG corresponding to pixels of the virtual static background depth map VM.sup.t.sub.BG positioned within a certain range from the local background information VM.sup.t.sub.filled. In this case, according to Equations 11 and 12, the number of pixels of the virtual static background image VF.sup.t.sub.BG that are not updated to the background pixels of the virtual image VF.sup.t increases as T decreases.
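The threshold test of Equations 11 and 12 amounts to a masked copy from the virtual static background into hole pixels whose static background depth lies within ±T of the local background depth. The explicit hole-mask argument and the default value of T are assumptions:

```python
import numpy as np

def first_hole_filling(vf, vm, vf_bg, vm_bg, vm_filled, holes, T=3.0):
    """Sketch of Equations 11-12: for hole pixels where
    A <= VM_BG(i,j) <= B, with A = VM_filled(i,j) - T and
    B = VM_filled(i,j) + T, copy the virtual static background color
    and depth into the virtual image and virtual depth map."""
    vf_u, vm_u = vf.copy(), vm.copy()
    close = np.abs(vm_bg - vm_filled) <= T   # equivalent to A <= VM_BG <= B
    fill = holes & close
    vf_u[fill] = vf_bg[fill]
    vm_u[fill] = vm_bg[fill]
    return vf_u, vm_u
```

As T decreases, fewer static-background pixels pass the test, matching the observation in the paragraph above.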
[0070] Though the background pixels of the virtual image VF.sup.t are updated using the pixels of the virtual static background image VF.sup.t.sub.BG, a plurality of hole pixels remain in the virtual image VF.sup.t. Accordingly, the hole-filling unit 30 may perform hole-filling on the hole pixels remaining in the updated virtual image VF.sup.t.sub.update in the exemplar-based in-painting method. Also, the hole-filling unit 30 may enhance a visual quality of a virtual image by initializing the hole pixels remaining in the updated virtual image VF.sup.t.sub.update using a median predictor before applying the exemplar-based in-painting method. However, since a foreground area includes a plurality of dynamic areas, values estimated by the median predictor may be applied only to a background pixel area. Accordingly, the hole-filling unit 30 may perform a process of identifying a boundary between the background area and the foreground area on the hole pixels remaining in the updated virtual image VF.sup.t.sub.update and then perform hole-filling on the remaining hole pixels in the exemplar-based in-painting method. This will be described with reference to
[0072] In detail, in order to identify a boundary between the background area and the foreground area of each of the hole pixels remaining in the updated virtual image VF.sup.t.sub.update, the hole-filling unit 30 may set a window Ψ.sub.d (which is displayed in a rectangular area) having a size of U×V and having VM.sup.t.sub.update(m, n), which is a hole pixel remaining in the updated virtual image VF.sup.t.sub.update as shown in
Thr(Ψ.sub.d)=max{VM.sub.filled.sup.t(m,n)|(m,n)∈Ψ.sub.d}. [Equation 13]
[0073] Equation 13 may be applied when VM.sup.t.sub.update(m, n)=0. According to Equation 13, hole pixels smaller than a value of Thr(Ψ.sub.d) among hole pixels in the window Ψ.sub.d of the local background information VM.sup.t.sub.filled may be regarded as the background pixels.
[0074] The hole-filling unit 30 may apply the median predictor in consideration of the window Ψ.sub.p with the size of U×V, which has VM.sup.t.sub.update(m, n) as a central pixel. In this case, in order to calculate a useful median prediction value, the hole-filling unit 30 may apply only pixels that are not holes in the window Ψ.sub.p of the updated virtual image VF.sup.t.sub.update as elements of a median filter.
T.sub.6=(VM.sub.filled.sup.t(m,n)≠0)∧(VM.sub.filled.sup.t(m,n)≤Thr(Ψ.sub.d)), [Equation 14]
[0075] In Equation 14, when T.sub.6=True, an initial background value for hole pixels present in an estimated background area of the window Ψ.sub.p of the updated virtual image VF.sup.t.sub.update may be determined as in Equation 15 below.
BG.sub.Ψ.sub.p=median{VF.sub.update.sup.t(m,n)|(m,n)∈Ψ.sub.p, T.sub.6=True}. [Equation 15]
[0076] In Equation 15, median is a median function that chooses a median of an arrangement.
[0077] In addition, the hole-filling unit 30 may determine an initial virtual image for applying the exemplar-based in-painting method as expressed in Equation 16 below, on the basis of Equation 15.
[0078] In Equation 16, VF.sub.init.sup.t(m,n)∈Ψ.sub.p.
[0079] As described above, the hole-filling unit 30 may extract valid background information from adjacent pixels that are not holes in the updated virtual image VF.sup.t.sub.update according to Equations 14 to 16.
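The median-predictor initialization of Equations 13 to 15 might be sketched per remaining hole pixel as below. A single-channel image, a hole sentinel value of 0, and the window half-size are assumptions made for the sketch:

```python
import numpy as np

def init_hole_pixel(vf_update, vm_filled, m, n, half=2):
    """Sketch of Equations 13-15 for the hole pixel at (m, n): inside the
    U x V window, take Thr as the maximum local-background depth
    (Equation 13), treat non-hole pixels whose depth does not exceed Thr
    as background (Equation 14), and return the median of their values
    (Equation 15)."""
    y0, y1 = max(m - half, 0), m + half + 1
    x0, x1 = max(n - half, 0), n + half + 1
    win_d = vm_filled[y0:y1, x0:x1]
    win_f = vf_update[y0:y1, x0:x1]
    thr = win_d.max()                                   # Equation 13
    bg = (win_d != 0) & (win_d <= thr) & (win_f != 0)   # T_6-style test
    if not bg.any():
        return vf_update[m, n]                          # nothing usable
    return float(np.median(win_f[bg]))                  # Equation 15
```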
[0080] The hole-filling unit 30 may label the hole pixels of the updated virtual image VF.sup.t.sub.update according to the algorithm of
[0081] In Equation 17, |Ψ.sub.p| denotes an area of the window Ψ.sub.p of the updated virtual area VF.sup.t.sub.update, and |Φ| denotes an area of pixels that are not holes in the window area. According to Equation 17, the confidence term C(p) increases as the number of pixels that are not holes in the area of the window Ψ.sub.p increases.
[0082] In Equation 18, λ denotes a normalization factor and may be set as λ=(2.sup.n−1) when one pixel is typically represented by n bits. In addition, n.sub.p denotes a unit normal vector perpendicular to δΩ, and ∇F.sub.p.sup.⊥ denotes the isophote (a contour of equal brightness) at the center of the window Ψ.sub.p of the updated virtual image VF.sup.t.sub.update. According to Equation 18, the data term D(p) has the largest value when ∇F.sub.p.sup.⊥ and n.sub.p have the same direction.
[0083] In this case, the hole-filling unit 30 may start the hole-filling process from the background pixels and minimize geometric distortion by performing an exemplar-based hole-filling process on the basis of the priority function Pri(p) including the depth term. In addition, according to
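The confidence term of Equation 17 and one possible combination of the terms into Pri(p) can be sketched as follows. The multiplicative form and the separate depth-term argument are assumptions; the patent states only that the priority function includes a depth term alongside the confidence and data terms:

```python
import numpy as np

def confidence_term(hole_mask_window):
    """C(p) = |non-hole pixels in window Psi_p| / |Psi_p| (Equation 17):
    grows as more of the window is already filled."""
    return float((~hole_mask_window).sum()) / hole_mask_window.size

def priority(conf_term, data_term, depth_term):
    """One plausible Pri(p): product of the confidence term C(p), the data
    term D(p) of Equation 18, and an added depth term favoring background
    patches (the product form is an assumption)."""
    return conf_term * data_term * depth_term
```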
[0084] A method for hole-filling in 3D models according to an embodiment of the present disclosure will be described below with reference to
[0085] The method for hole-filling in 3D models according to an embodiment of the present disclosure may be performed using substantially the same configuration as the apparatus 1 for hole-filling in 3D models shown in
[0086] Referring to
[0087] Then, the static background information extraction unit 10 may extract static background information for the current frame of the input image and virtual static background information (S110).
[0088] In detail, the static background information extraction unit 10 may generate a codebook C.sup.t using a color image F.sup.t for each frame of the input image and depth map information M.sup.t corresponding to the color image F.sup.t. In this case, the apparatus 1 for hole-filling in 3D models according to an embodiment of the present disclosure may receive an image including the color image F.sup.t for each frame and the depth map information M.sup.t corresponding to the color image F.sup.t as the input image. In addition, the static background information extraction unit 10 may extract a background codebook C.sup.t.sub.BG from the generated codebook C.sup.t. Then, the static background information extraction unit 10 may extract background information including a background image and a background depth map for the current frame using the background codebook C.sup.t.sub.BG as temporary background information TF.sup.t.sub.BG, TM.sup.t.sub.BG. Lastly, the static background information extraction unit 10 may extract static background information F.sup.t.sub.BG, M.sup.t.sub.BG for the current frame by updating the temporary background information TF.sup.t.sub.BG, TM.sup.t.sub.BG using background information of a preceding frame.
[0089] The local background information extraction unit 20 may acquire a virtual image and a virtual depth map for the current frame of the input image and label a hole area formed in the virtual depth map to extract local background information (S120).
[0090] In detail, the local background information extraction unit 20 may extract the local background information by labeling a rectangular area including the hole area included in the virtual depth map VM.sup.t, identifying background pixels and foreground pixels according to depth information of the rectangular area to acquire a local background depth value, and filling an area identified as the foreground pixels of the rectangular area with the local background depth value.
[0091] In addition, the hole-filling unit 30 may perform primary hole-filling on the virtual image and the virtual depth map using the virtual static background information and the local background information (S130). Then, the hole-filling unit 30 may perform secondary hole-filling on the remaining pixels after the primary hole-filling (S140).
[0092] In detail, the hole-filling unit 30 may perform the primary hole-filling on the virtual image VF.sup.t using similarity between the virtual static background depth map VM.sup.t.sub.BG extracted by the static background information extraction unit 10 and the local background information VM.sup.t.sub.filled extracted by the local background information extraction unit 20, and perform the secondary hole-filling in an exemplar-based in-painting method in which a priority function including a depth term is applied to the remaining holes after the primary hole-filling.
[0093] According to one aspect of the present disclosure, it is possible to efficiently fill hole pixels formed in a virtual-viewpoint image by estimating reliable static background information and local background information and performing a hole-filling process on the basis of the estimation, and it is also possible to compose a virtual-viewpoint image in which visual distortion is minimized irrespective of properties or hole types of an input image by applying an exemplar-based in-painting method according to a priority function including a depth term to perform a hole-filling process on the remaining holes.
[0094] As described above, the method for hole-filling in 3D models may be implemented as an application or implemented in the form of program instructions that may be executed through various computer components and recorded on a computer-readable recording medium. The computer-readable recording medium may also include program instructions, data files, data structures, or combinations thereof.
[0095] The program instructions recorded on the computer-readable recording medium may be specially designed for the present disclosure or may be well known to those skilled in the art of software.
[0096] Examples of the computer-readable recording medium include a magnetic medium, such as a hard disk, a floppy disk, and a magnetic tape, an optical medium, such as a compact disc read-only memory (CD-ROM), a digital versatile disc (DVD), etc., a magneto-optical medium, such as a floptical disk, and a hardware device specially configured to store and perform program instructions, for example, a ROM, a random access memory (RAM), flash memory, etc.
[0097] Examples of the program instructions include machine codes made by, for example, a compiler, as well as high-level language codes executable by a computer, using an interpreter. The above exemplary hardware device may be configured to operate as one or more software modules in order to perform processing according to the present disclosure, and vice versa.
[0098] While the example embodiments of the present disclosure and their advantages have been described in detail, it should be understood that various changes, substitutions and alterations may be made herein without departing from the scope of the disclosure.