TRANSMISSIVE DISPLAY APPARATUS AND IMAGE COMBINING METHOD THEREIN
20230230561 · 2023-07-20
Abstract
Provided is a transmissive display apparatus capable of naturally combining a projection image with a real space scene. A beam splitter splits a light incident from a real space scene into first and second portions. An image sensor generates a real space image according to the second portion of the incident light. A light modulator passes at least a part of the first portion of the incident light according to a masking image. A control and projection image generation unit selects a target object and generates the masking image based on a target object area to cause the light modulator to block a foreground of the real space scene and pass a background of the real space scene. A projection unit generates a projection image light corresponding to the projection image and combines the projection image light with a part of the first portion of the incident light.
Claims
1. A display apparatus, comprising: a beam splitter configured to split a light incident from a real space scene in front of the display apparatus to transmit a first portion of the incident light from the real space scene and reflect a second portion of the incident light from the real space scene; an image sensor configured to generate a real space image according to the second portion of the incident light; a light modulator having a transmittance that is variable for each pixel according to a masking image and configured to pass at least a part of the first portion of the incident light according to the transmittance; a control and projection image generation unit configured to generate the masking image and a projection image according to the real space image; and a projection unit configured to receive the projection image, generate a projection image light corresponding to the projection image, and combine the projection image light with the at least a part of the first portion of the incident light transmitting the light modulator, wherein the control and projection image generation unit selects a target object to be deleted from the real space scene, and generates the masking image based on a target object area associated with the target object so as to cause the light modulator to block a foreground of the real space scene represented by the first portion of the incident light and pass a background of the real space scene.
2. The display apparatus of claim 1, wherein the projection unit comprises: a display configured to receive the projection image to generate the projection image light corresponding to the projection image; and a beam combiner configured to combine the projection image light with the at least a part of the first portion of the incident light transmitting the light modulator.
3. The display apparatus of claim 1, wherein the control and projection image generation unit comprises: a memory storing program instructions; and a processor coupled to the memory and executing the program instructions stored in the memory, wherein the program instructions, when executed by the processor, cause the processor to: receive the real space image; select a target object; detect a target object area associated with the target object in the real space image; generate the masking image based on the target object area so as to cause the light modulator to block the foreground of the real space scene and pass the background of the real space scene; and generate the projection image based on the real space image to provide the projection image to the projection unit.
4. The display apparatus of claim 1, wherein the projection image is an inpainting image generated by removing the target object area from the real space image and filling pixels in the target object area with different pixel values using information on the area around the target object area.
5. The display apparatus of claim 4, wherein the program instructions causing the processor to generate the projection image comprise program instructions causing the processor to: correct the inpainting image to reduce a distortion in a combined image.
6. The display apparatus of claim 5, wherein the program instructions causing the processor to correct the inpainting image comprise program instructions causing the processor to: calculate the inpainting image; apply an inverse function of nonlinear color response characteristics of the image sensor to the target object area of the inpainting image; apply an attenuation experienced by the incident light from the real space scene in an optical system of the display apparatus; and apply an inverse function of a distortion in the projection unit to the incident light from the real space scene with the attenuation applied to obtain a corrected inpainting image.
7. The display apparatus of claim 3, wherein the program instructions causing the processor to select the target object cause the processor to automatically select the target object.
8. The display apparatus of claim 3, wherein the program instructions causing the processor to select the target object cause the processor to select the target object in response to a user input.
9. A method of combining a light incident from a real space scene in front of a display apparatus with a projection image light in the display apparatus, the method comprising: splitting the incident light from the real space scene to transmit a first portion of the incident light from the real space scene and generate a real space image according to a second portion of the incident light from the real space scene; selecting a target object to be deleted from the real space scene and detecting a target object area associated with the target object; generating a projection image according to the real space image; generating a masking image based on the target object area and blocking a foreground of the real space scene represented by the first portion of the incident light in a path of the incident light from the real space scene according to the masking image while allowing a background of the real space scene to pass; and generating a projection image light corresponding to the projection image and combining the projection image light with a part of the first portion of the incident light in which the foreground is removed.
10. The method of claim 9, wherein the projection image is an inpainting image generated by removing the target object area from the real space image and filling pixels in the target object area with different pixel values using information on the area around the target object area.
11. The method of claim 10, wherein generating the projection image comprises: correcting the inpainting image to reduce a distortion in a combined image.
12. The method of claim 11, wherein correcting the inpainting image comprises: calculating the inpainting image; applying an inverse function of nonlinear color response characteristics of the image sensor to the target object area of the inpainting image; applying an attenuation experienced by the incident light from the real space scene in an optical system of the display apparatus; and applying an inverse function of a distortion in the projection unit to the incident light from the real space scene with the attenuation applied to obtain a corrected inpainting image.
13. The method of claim 9, wherein the target object is automatically selected by program instructions executed in the display apparatus.
14. The method of claim 9, wherein the target object is selected in response to a user input.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0024] In order that the disclosure may be well understood, there will now be described various forms thereof, given by way of example, reference being made to the accompanying drawings, in which:
[0033] The drawings described herein are for illustration purposes only and are not intended to limit the scope of the present disclosure in any way.
DETAILED DESCRIPTION
[0034] For a clearer understanding of the features and advantages of the present disclosure, exemplary embodiments of the present disclosure will be described in detail with reference to the accompanying drawings. However, it should be understood that the present disclosure is not limited to particular embodiments disclosed herein but includes all modifications, equivalents, and alternatives falling within the spirit and scope of the present disclosure. In the drawings, similar or corresponding components may be designated by the same or similar reference numerals.
[0035] The terminologies including ordinals such as “first” and “second” designated for explaining various components in this specification are used to discriminate a component from the other ones but are not intended to be limiting to a specific component. For example, a second component may be referred to as a first component and, similarly, a first component may also be referred to as a second component without departing from the scope of the present disclosure. As used herein, the term “and/or” may include a presence of one or more of the associated listed items and any and all combinations of the listed items.
[0036] In the description of exemplary embodiments of the present disclosure, “at least one of A and B” may mean “at least one of A or B” or “at least one of combinations of one or more of A and B”. In addition, in the description of exemplary embodiments of the present disclosure, “one or more of A and B” may mean “one or more of A or B” or “one or more of combinations of one or more of A and B”.
[0037] When a component is referred to as being “connected” or “coupled” to another component, the component may be directly connected or coupled logically or physically to the other component or indirectly through an object therebetween. Contrarily, when a component is referred to as being “directly connected” or “directly coupled” to another component, it is to be understood that there is no intervening object between the components. Other words used to describe the relationship between elements should be interpreted in a similar fashion.
[0038] The terminologies are used herein for the purpose of describing particular exemplary embodiments only and are not intended to limit the present disclosure. The singular forms include plural referents as well unless the context clearly dictates otherwise. Also, the expressions “comprises,” “includes,” “constructed,” and “configured” are used to refer to a presence of a combination of stated features, numbers, processing steps, operations, elements, or components, but are not intended to preclude a presence or addition of another feature, number, processing step, operation, element, or component.
[0039] Unless defined otherwise, all terms used herein, including technical or scientific terms, have the same meaning as commonly understood by those of ordinary skill in the art to which the present disclosure pertains. Terms such as those defined in a commonly used dictionary should be interpreted as having meanings consistent with their meanings in the context of the related literature and will not be interpreted as having ideal or excessively formal meanings unless explicitly defined in the present application.
[0040] Exemplary embodiments of the present disclosure will now be described in detail with reference to the accompanying drawings. In order to facilitate general understanding in describing the present disclosure, the same components in the drawings are denoted with the same reference signs, and repeated description thereof will be omitted.
[0042] The display apparatus may include a beam splitter 10, an image sensor 20, a light modulator 30, a control and projection image generation unit 40, and a projection unit 70. In an exemplary embodiment, the display apparatus may be configured so that a user can recognize the light with both eyes, and two configurations each identical to that of
[0043] The beam splitter 10 splits the light incident from a front real space (hereinafter referred to as ‘light from the real space scene’ and denoted by ‘L.sub.scene’) to transmit a major portion of the light from the real space scene L.sub.scene backwards, i.e., toward the eye of the user, and reflect a remaining portion of the light from the real space scene L.sub.scene toward the image sensor 20. A flat plate-type splitter may be used for the beam splitter 10. However, the shape of the splitter is not limited thereto, and a cube-shaped splitter or another type of splitter may be used as well.
[0044] The image sensor 20 may convert the portion of the light from the real space scene L.sub.scene reflected by the beam splitter 10 into an electrical signal to generate a real space image signal I.sub.scene.
[0045] The light modulator 30 has a transmittance that changes for each pixel according to a masking image signal I.sub.mask, so as to transmit the portion of the light from the real space scene L.sub.scene transmitted by the beam splitter 10 according to the per-pixel transmittance. The masking image signal I.sub.mask may indicate an individual transmittance for each of the pixels in the light modulator 30. The light modulator 30 may be implemented using, for example, a liquid crystal display (LCD).
[0046] The control and projection image generation unit 40 may receive the real space image signal I.sub.scene and generate the masking image signal I.sub.mask. In addition, the control and projection image generation unit 40 may generate a projection image signal I.sub.proj to provide to the projection unit 70.
[0047] The projection unit 70 may receive the projection image signal I.sub.proj and generate a projection image light L.sub.proj corresponding to the projection image signal I.sub.proj. In addition, the projection unit 70 may superimpose and combine the projection image light L.sub.proj with the light from the real space scene L.sub.scene having passed the beam splitter 10 and the light modulator 30 to emit a combined light as a user-perceivable light L.sub.user backwards, i.e., toward the eye of the user.
[0048] In the exemplary embodiment, the projection unit 70 may include a display 72 and a beam combiner 74. The display 72 may receive the projection image signal I.sub.proj from the control and projection image generation unit 40 and output the projection image light L.sub.proj corresponding to the projection image signal I.sub.proj. The beam combiner 74 may transmit the light from the real space scene L.sub.scene having passed the beam splitter 10 and the light modulator 30 and reflect the projection image light L.sub.proj from the display 72 toward the eye of the user. Accordingly, the beam combiner 74 may superimpose and combine the projection image light L.sub.proj with the light from the real space scene L.sub.scene to emit the combined light as the user-perceivable light L.sub.user toward the eye of the user.
[0050] The control and projection image generation unit 40 may include at least one processor 42, a memory 44, a communication interface 48, an input interface 50, and an output interface 52. The control and projection image generation unit 40 may further include a storage device 46. The components of the control and projection image generation unit 40 may be connected to each other by a bus.
[0051] The processor 42 may execute program instructions stored in the memory 44 and/or the storage device 46. The processor 42 may include at least one central processing unit (CPU) or a graphics processing unit (GPU), or may be implemented by another kind of dedicated processor suitable for performing the method of the present disclosure.
[0052] The memory 44 may include, for example, a volatile memory such as a random access memory (RAM), and a non-volatile memory such as a read only memory (ROM). The memory 44 may load the program instructions stored therein or the program instructions stored in the storage device 46 to the processor 42 so that the processor 42 may execute the program instructions. The storage device 46 may be a recording medium suitable for storing the program instructions and data and may include, for example, a flash memory, an erasable programmable ROM (EPROM), or a semiconductor storage device manufactured therefrom, such as a solid-state drive (SSD).
[0053] The program instructions, when executed by the processor 42, may cause the processor 42 to receive the real space image, select a target object, detect a target object area associated with the target object in the real space image, generate the masking image based on the target object area so as to cause the light modulator to block the foreground of the real space scene and pass the background of the real space scene, and generate the projection image based on the real space image to provide the projection image to the projection unit.
[0054] The communication interface 48 enables the control and projection image generation unit 40 to communicate with an external device and may include an Ethernet interface, a WiFi interface, a Bluetooth interface, an RF interface for allowing communications according to a certain wireless protocol such as a 4G LTE, 5G NR, or 6G communication protocol, or another interface. The input interface 50 enables the user to input operation commands or information. In addition, the input interface 50 may receive the real space image signal I.sub.scene from the image sensor 20 to provide to the processor 42. The input interface 50 may include at least one button or a touch sensor. The output interface 52 may display an operating state or guide information of the display apparatus. In addition, the output interface 52 may supply the masking image signal I.sub.mask and the projection image signal I.sub.proj to the light modulator 30 and the projection unit 70, respectively. The input interface 50 and the output interface 52 may allow one or more cameras or another external device to be connected to the control and projection image generation unit 40 as described below.
[0055] In an exemplary embodiment, the control and projection image generation unit 40 may be a data processing device such as a personal computer (PC) or a smartphone. The control and projection image generation unit 40 may be configured in a compact form based on an embedded CPU.
[0057] The target object selection and tracking module 60 may select a target object to be deleted from the real space scene. In an exemplary embodiment, the target object selection and tracking module 60 may automatically select the target object that meets a certain requirement in the real space image I.sub.scene. In an exemplary embodiment, the target object selection and tracking module 60 may include an artificial neural network trained to select the target object. Alternatively, however, the target object selection and tracking module 60 may select the target object in response to a user input. That is, the target object selection and tracking module 60 may select the target object in the real space image I.sub.scene according to a detection of a button input, a touch screen input, or a gesture of the user. Once the target object is selected, the target object selection and tracking module 60 may track the target object in frame sequences of the real space image I.sub.scene until the target setting is released.
[0058] The target object area detection module 62 may detect a target object area occupied by the target object in the real space image I.sub.scene. As described above, the target object may be automatically detected by program instructions or may be manually specified by the user. The user may select the target object to be deleted from the real space image through the input interface 50. The target object area includes all pixels belonging to the selected target object. The target object area may be detected by a known method based on at least one of pixel values of pixels around the target object, edges of the objects, or textures of the objects.
[0059] The mask generation module 64 may generate the masking image I.sub.mask, based on target object area information from the target object area detection module 62, that controls the transmittance of the light modulator 30 so that the light modulator 30 may block the light from the real space scene L.sub.scene for a target object area. The transmittance of the light modulator 30 may be set independently for each pixel according to the masking image I.sub.mask. For example, in case that the light from the real space scene L.sub.scene is to be transmitted to the user with the target object area blocked, the masking image I.sub.mask may be configured such that the transmittance of the light modulator 30 may be 0% at pixels belonging to the target object area while the transmittance may have a value of 100% at pixels belonging to a background area other than the target object area.
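As an illustration of this rule, the following is a minimal sketch (with hypothetical names and an assumed 8-bit per-pixel transmittance convention, where 0 maps to 0% transmittance and 255 to 100%) of deriving the masking image from a boolean target-object area; it is not the patented implementation.

```python
import numpy as np

def generate_masking_image(target_area: np.ndarray) -> np.ndarray:
    """target_area: boolean array, True where the target object lies.

    Returns an 8-bit mask: 0 (block) at foreground pixels belonging to
    the target object area, 255 (pass) at background pixels.
    """
    return np.where(target_area, 0, 255).astype(np.uint8)

# The target object occupies the center of a 4x4 field of view.
area = np.zeros((4, 4), dtype=bool)
area[1:3, 1:3] = True
mask = generate_masking_image(area)
```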
[0060] The target object area deletion module 66 may delete the target object area from the real space image I.sub.scene received from the image sensor 20. The deletion of the target object area may be performed for a substitution of the target object area in the real space image I.sub.scene with the projection image I.sub.proj in the target object area. The projection image I.sub.proj for the target object area may be used to generate the projection image light L.sub.proj, which is to be superimposed with the light from the real space scene L.sub.scene from which the portion for the target object area has been excluded. Meanwhile, the mask generation module 64 may construct the masking image I.sub.mask based on the real space image I.sub.scene from which the target object area has been removed by the target object area deletion module 66 instead of the target object area information from the target object area detection module 62.
[0061] The projection image generation module 68 may generate the projection image I.sub.proj for generating the projection image light L.sub.proj to be superimposed on the light from the real space scene L.sub.scene passing through the beam splitter 10 and the light modulator 30. In particular, when transmitting the light from the real space scene L.sub.scene to the user in a state that the portion of the target object area has been removed, the projection image generation module 68 may generate an inpainting image I.sub.inpaint by filling each pixel of the target object area with a different pixel value using information on the pixels around the target object area in the real space scene L.sub.scene from which the target object area has been deleted. The projection image generation module 68 may output the inpainting image I.sub.inpaint to the projection unit 70 as the projection image I.sub.proj. Further, the projection image generation module 68 may correct the inpainting image I.sub.inpaint and output a corrected inpainting image I.sub.inpaint_comp to reduce a distortion in the superimposed image.
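The inpainting step above can be sketched as follows. This is a deliberately simplified stand-in, not the patent's algorithm: it fills the target object area by iteratively diffusing the values of the four nearest neighbors of each hole pixel inward, which realizes "filling each pixel of the target object area using information on the pixels around it" in its crudest form.

```python
import numpy as np

def naive_inpaint(image: np.ndarray, hole: np.ndarray, iters: int = 50) -> np.ndarray:
    """Fill the True pixels of `hole` from surrounding image values.

    Jacobi-style diffusion: each iteration replaces every hole pixel with
    the average of its 4-neighborhood, holding non-hole pixels fixed.
    """
    out = image.astype(float).copy()
    out[hole] = 0.0
    for _ in range(iters):
        # Average of the 4-neighborhood, computed by shifting the array.
        avg = (np.roll(out, 1, 0) + np.roll(out, -1, 0) +
               np.roll(out, 1, 1) + np.roll(out, -1, 1)) / 4.0
        out[hole] = avg[hole]  # update only the hole pixels
    return out

# A constant background with a 2x2 hole: diffusion restores the background.
img = np.full((6, 6), 100.0)
hole = np.zeros((6, 6), dtype=bool)
hole[2:4, 2:4] = True
filled = naive_inpaint(img, hole)
```

A production system would instead use a dedicated inpainting method (patch-based or learned), but the interface is the same: image plus hole mask in, completed image out.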
[0062] An operation of the display apparatus according to an exemplary embodiment of the present invention will now be described in more detail.
[0063] Modeling and Determination of Parameters
[0064] First, a mathematical model for the display apparatus and a parameter determination method according to an exemplary embodiment will be described.
[0065] In the display apparatus shown in
L.sub.user=α·d(L.sub.scene)○T.sub.lm(I.sub.mask)+β·R.sub.om(I.sub.proj) [Equation 1]
[0066] where ‘α’ and ‘β’ denote weights, and a small circle (○) indicates a pixelwise multiplication operator. ‘d( )’ represents the degree of attenuation that the light from the real space scene L.sub.scene experiences in the optical system and may be different for each pixel. ‘T.sub.lm( )’ is the transmittance of the light modulator 30 and may vary according to a control signal, i.e., the masking image signal I.sub.mask. ‘R.sub.om( )’ represents the degree of distortion that the projection image signal I.sub.proj suffers in the process of being converted into light by the display 72 and reflected by the beam combiner 74 in the projection unit 70.
[0067] Accordingly, once the attenuation d( ) in the optical system, the transmittance T.sub.lm( ) of the light modulator 30, and the degree of distortion R.sub.om( ) in the projection unit 70 are determined, the user-perceivable light L.sub.user may be calculated by Equation 1.
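The combining model can be illustrated with a toy numerical sketch. The specific functions below (uniform attenuation, linear transmittance from an 8-bit mask value, a constant-gain distortion) and the unit weights are illustrative assumptions only, standing in for the calibrated functions of the display apparatus.

```python
import numpy as np

# Illustrative stand-ins for the calibrated functions (assumptions):
d = lambda L: 0.8 * L                 # per-pixel optical attenuation
T_lm = lambda I_mask: I_mask / 255.0  # transmittance from mask value
R_om = lambda I_proj: 0.9 * I_proj    # projection-path distortion

def combine(L_scene, I_mask, I_proj, alpha=1.0, beta=1.0):
    # Weighted sum of the two light paths: the attenuated scene light
    # gated pixelwise ("○") by the modulator transmittance, plus the
    # distorted projected light.
    return alpha * d(L_scene) * T_lm(I_mask) + beta * R_om(I_proj)

L_scene = np.array([[100.0, 100.0]])
I_mask = np.array([[255.0, 0.0]])   # pass the left pixel, block the right
I_proj = np.array([[0.0, 50.0]])    # project only where the scene is blocked
L_user = combine(L_scene, I_mask, I_proj)
```

At the passed pixel the user sees only attenuated scene light; at the blocked pixel only the projected light.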
[0068] In an exemplary embodiment, a camera radiometric calibration, for example, may be performed before the display apparatus is assembled to estimate the attenuation d( ), the transmittance T.sub.lm( ), and the degree of distortion R.sub.om( ). In general, when a light entering the camera is converted to an electrical signal representing an intensity of the image, the signal is distorted by nonlinear color response characteristics, gcam, of the camera. The camera radiometric calibration refers to a process of estimating the nonlinear color response characteristics by a linear model, for example, and applying an inverse function of the nonlinear color response characteristics to the image signal. However, the method of determining the attenuation d( ), the transmittance T.sub.lm( ), and the degree of distortion R.sub.om( ) is not limited to the camera radiometric calibration.
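The linearization idea behind the radiometric calibration can be sketched by modeling the camera response as a simple gamma curve (an assumption for illustration; an actual calibration estimates the measured response) and applying its inverse to recover scene-linear intensities.

```python
import numpy as np

GAMMA = 2.2  # assumed response exponent, not a calibrated value

def g_cam(radiance: np.ndarray) -> np.ndarray:
    """Nonlinear camera response mapping [0, 1] radiance to [0, 1] code."""
    return np.clip(radiance, 0.0, 1.0) ** (1.0 / GAMMA)

def g_cam_inv(code: np.ndarray) -> np.ndarray:
    """Inverse response: recover linear radiance from the image signal."""
    return np.clip(code, 0.0, 1.0) ** GAMMA

# Applying the inverse to the camera output recovers the linear radiance.
radiance = np.array([0.0, 0.25, 1.0])
recovered = g_cam_inv(g_cam(radiance))
```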
[0070] The attenuation d( ) may be estimated based on the image displayed on the monitor 100 and the user-perceivable light L.sub.user captured by the second camera 120. In detail, Equation 1 may be changed into a form of Equation 2 in this case.
[0071] That is, since the projection image signal I.sub.proj and the projection image light L.sub.proj are not generated in the projection unit 70, the representation of the user-perceivable light L.sub.user may be simplified as in Equation 2. In case that the color response characteristics, g.sub.cam1 and g.sub.cam2 (i.e., color image intensity characteristics with respect to scene radiances), of the first camera (i.e., the image sensor) 20 and the second camera 120 are already known, Equation 2 may be modified into Equation 3 to express the user-perceivable light L.sub.user using inverse functions, g.sub.cam1.sup.−1 and g.sub.cam2.sup.−1, of the color response characteristics.
L.sub.user=d(L.sub.scene) [Equation 2]
g.sub.cam2.sup.−1(I.sub.user)=d(g.sub.cam1.sup.−1(I.sub.scene)) [Equation 3]
[0072] As can be seen in Equation 3, the attenuation d( ) may be determined by estimating the user-perceivable light L.sub.user based on the image signal I.sub.user detected by the second camera 120 and the color response characteristics, g.sub.cam2, of the second camera 120, estimating the light from the real space scene L.sub.scene based on the image signal I.sub.scene detected by the first camera 20 and the color response characteristics, g.sub.cam1, of the first camera 20, and calculating pixelwise ratios of the intensities of the two image signals, i.e., g.sub.cam2.sup.−1(I.sub.user) and g.sub.cam1.sup.−1(I.sub.scene).
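The ratio-based estimation of the attenuation d( ) can be sketched as follows, following the pixelwise-ratio reading of Equation 3. The gamma-style inverse responses for the two cameras and the synthetic data with a known attenuation of 0.5 are assumptions for illustration.

```python
import numpy as np

def estimate_attenuation(I_user, I_scene, g2_inv, g1_inv, eps=1e-9):
    # d is the pixelwise ratio of the linearized perceived light to the
    # linearized scene light (eps guards against division by zero).
    return g2_inv(I_user) / (g1_inv(I_scene) + eps)

g1_inv = lambda I: I ** 2.2  # assumed inverse response, first camera
g2_inv = lambda I: I ** 2.0  # assumed inverse response, second camera

# Synthetic data: a true attenuation of 0.5 applied in linear light.
L_scene = np.array([[0.2, 0.8]])
I_scene = L_scene ** (1 / 2.2)            # first camera records the scene
I_user = (0.5 * L_scene) ** (1 / 2.0)     # second camera, attenuated light
d_hat = estimate_attenuation(I_user, I_scene, g2_inv, g1_inv)
```

The estimate recovers the known 0.5 attenuation at every pixel.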
[0074] The transmittance T.sub.lm( ) may be estimated based on the image displayed on the monitor 100 and the user-perceivable light L.sub.user captured by the second camera 120. In detail, Equation 1 may be changed into a form of Equation 4 in this case.
[0075] That is, since the projection image signal I.sub.proj and the projection image light L.sub.proj are not generated in the projection unit 70 and the white light is used as the light from the real space scene L.sub.scene, the representation of the user-perceivable light L.sub.user may be simplified as in Equation 4. In case that the color response characteristics, g.sub.cam2, of the second camera 120 are already known, Equation 4 may be modified into Equation 5 to express the user-perceivable light L.sub.user using an inverse function, g.sub.cam2.sup.−1, of the color response characteristics of the second camera 120.
L.sub.user=T.sub.lm(I.sub.mask) [Equation 4]
g.sub.cam2.sup.−1(I.sub.user)=T.sub.lm(I.sub.mask) [Equation 5]
[0076] As can be seen in Equation 5, the transmittance T.sub.lm( ) may be determined by estimating the user-perceivable light L.sub.user based on the image signal I.sub.user detected by the second camera 120 and the color response characteristics, g.sub.cam2, of the second camera 120 and calculating pixelwise ratios of the intensities of the two image signals, i.e., g.sub.cam2.sup.−1(I.sub.user) and I.sub.mask containing the known color pattern.
[0078] The degree of distortion R.sub.om( ) may be estimated based on the projection image light L.sub.proj and the user-perceivable light L.sub.user captured by the second camera 120. In detail, Equation 1 may be changed into a form of Equation 6 in this case.
[0079] That is, since no light from the real space scene L.sub.scene is incident on the display device, the representation of the user-perceivable light L.sub.user may be simplified as in Equation 6. In case that the color response characteristics, g.sub.cam2, of the second camera 120 are already known, Equation 6 may be modified into Equation 7 to express the user-perceivable light L.sub.user using the inverse function, g.sub.cam2.sup.−1, of the color response characteristics of the second camera 120.
L.sub.user=R.sub.om(I.sub.proj) [Equation 6]
g.sub.cam2.sup.−1(I.sub.user)=R.sub.om(I.sub.proj) [Equation 7]
[0080] As can be seen in Equation 7, the degree of distortion R.sub.om( ) in the projection unit 70 may be determined by estimating the user-perceivable light L.sub.user based on the image signal I.sub.user detected by the second camera 120 and the color response characteristics, gcam2, of the second camera 120 and calculating pixelwise ratios of the intensities of the two image signals, i.e., (g.sub.cam2.sup.−1(I.sub.user)) and I.sub.proj containing the known color pattern.
[0081] In the process of estimating the attenuation d( ) in the optical system, the transmittance T.sub.lm( ) of the light modulator 30, and the degree of distortion R.sub.om( ) in the projection unit 70, it is assumed that the output of the display apparatus is calibrated in advance before the light from the real space scene L.sub.scene, the masking image signal I.sub.mask, or the projection image signal I.sub.proj is applied to the display apparatus. Further, each of the functions used for the estimations may be calculated and determined only once in advance and reused as fixed functions as long as the optical system is maintained.
[0082] Generation of Projection Image with Target Object Removed
[0083] As described above, the projection image signal I.sub.proj representing the projection image light L.sub.proj to be combined with the light from the real space scene L.sub.scene is generated by the control and projection image generation unit 40. Further, the projection unit 70 may generate the projection image light L.sub.proj corresponding to the projection image signal I.sub.proj. Among the various types of projection images, an image generated by removing the target object area from the real space image I.sub.scene and filling pixels in the target object area with different pixel values using information on the area around the target object area is referred to as an ‘inpainting image’ herein. In addition, an image signal representing the inpainting image I.sub.inpaint will be referred to as an ‘inpainting image signal’. Further, the indication ‘I.sub.inpaint’ will be used herein to denote both the inpainting image and the inpainting image signal similarly to the other images and signals. The inpainting image I.sub.inpaint is one kind of projection image generated by the control and projection image generation unit 40.
[0084] Unlike other kinds of display apparatuses, which can completely remove a portion of an image, it is difficult in the transmissive display apparatus to completely substitute a portion of an image with another. That is, when the projection image light L.sub.proj corresponding to the projection image I.sub.proj, such as the inpainting image I.sub.inpaint, is simply overlaid on the light from the real space scene L.sub.scene, the target object area to be removed may remain in the light from the real space scene L.sub.scene after the overlapping, which may distort information and cause confusion to the user.
[0085] To solve such a problem, in an exemplary embodiment of the present invention, a foreground portion L.sub.scene_fg corresponding to the target object area in the light from the real space scene L.sub.scene is blocked by the light modulator 30, and only a background portion L.sub.scene_bg is left in the light from the real space scene L.sub.scene. Afterwards, an inpainting image light L.sub.inpaint is overlapped with the background portion L.sub.scene_bg of the real space scene L.sub.scene.
[0086] In this regard, Equation 1 may be rearranged by dividing it into a foreground part and a background part as shown in Equation 8.
L.sub.user_fg=d(L.sub.scene_fg)○T.sub.lm(0)+R.sub.om(I.sub.inpaint_fg)=R.sub.om(I.sub.inpaint_fg)
L.sub.user_bg=d(L.sub.scene_bg)○T.sub.lm(255)+R.sub.om(0)=d(L.sub.scene_bg) [Equation 8]
[0087]
[0088] Referring to Equations 1 and 8, the foreground and background portions of the user-perceivable light L.sub.user may be described as follows.
[0089] In the foreground portion I.sub.inpaint_fg of the inpainting image generated by the control and projection image generation unit 40, the target object area is removed from the real space image I.sub.scene and the pixels in the target object area are filled with different pixel values using the information on the area around the target object area. On the other hand, all pixels in the background portion I.sub.inpaint_bg of the inpainting image may have values of zero.
[0090] The user-perceivable light L.sub.user is the sum of the portion of the light from the real space scene L.sub.scene transmitted through the light modulator 30 and the inpainting image light L.sub.inpaint, i.e., R.sub.om(I.sub.inpaint_fg). Accordingly, the foreground portion L.sub.user_fg of the user-perceivable light L.sub.user is the same as the foreground portion of the inpainting image light, since the light from the real space scene L.sub.scene is blocked by the masking of the light modulator 30 and is not included in that portion. Meanwhile, the background portion L.sub.user_bg of the user-perceivable light L.sub.user is almost the same as the background portion of the light from the real space scene L.sub.scene, possibly with some deviation due to the non-ideal factors related to the attenuation, transmittance, and distortion described above, and the component of the inpainting image light L.sub.inpaint is not included in that portion.
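The composition in Equation 8 can be sketched numerically as follows. The functions d( ), T.sub.lm( ), and R.sub.om( ) below are hypothetical elementwise placeholders (a scalar attenuation, a linear transmittance, and a scalar projection gain); the actual functions are obtained by calibration as described above.

```python
import numpy as np

# Hypothetical placeholder models; the real d(), T_lm(), and R_om()
# are calibrated in advance and are not given in the text.
def d(L):            # attenuation in the optical system (assumed scalar factor)
    return 0.8 * L

def T_lm(mask):      # light-modulator transmittance per pixel, mask in [0, 255]
    return mask / 255.0

def R_om(I):         # projection-unit response (assumed scalar gain)
    return 0.9 * I

# Toy 4x4 scene: left half is the target-object foreground, right half background.
L_scene = np.full((4, 4), 100.0)
fg = np.zeros((4, 4), dtype=bool)
fg[:, :2] = True

mask = np.where(fg, 0, 255)              # block foreground, pass background
I_inpaint = np.where(fg, 50.0, 0.0)      # projection only in the foreground area

# Equation 8: L_user = d(L_scene) * T_lm(mask) + R_om(I_inpaint)
L_user = d(L_scene) * T_lm(mask) + R_om(I_inpaint)

print(L_user[0, 0])   # foreground: R_om(50) = 45.0
print(L_user[0, 3])   # background: d(100) = 80.0
```

As in Equation 8, the foreground depends only on the projected inpainting light, and the background only on the attenuated scene light.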
[0091] Since the generation of the inpainting image light L.sub.inpaint may involve the deletion of the target object area using the information on the target object area itself as well as the information on the pixels around the target object, the component R.sub.om(I.sub.inpaint) of the inpainting image I.sub.inpaint in the user-perceivable light L.sub.user may be expressed by Equation 9.
R.sub.om(I.sub.inpaint)=R.sub.om(I.sub.proj), with I.sub.scene=g.sub.cam(L.sub.scene) [Equation 9]
[0092] where g.sub.cam denotes the color response characteristics of the image sensor 20.
[0093] However, even when the inpainting image is generated in the manner described above, the user-perceivable light L.sub.user from which the target object is actually removed differs from an image in which the target object had never existed. That is, when the inpainting image light L.sub.inpaint from which the target object is removed is projected through the beam combiner 74, distortion is inevitable unless the deleted object is white. This is similar to the way the colors or color patterns of an image projected on a wall vary according to the color or color pattern of the wall unless the wall is white. In an exemplary embodiment, the inpainting image I.sub.inpaint used for generating the inpainting image light L.sub.inpaint to be projected to the beam combiner 74 may be corrected to solve this problem. A corrected inpainting image foreground portion I.sub.inpaint_fg_comp may be obtained through Equations 10 through 12.
[0094] If it is assumed that the target object is actually deleted from the real space scene, the user-perceivable light foreground portion L.sub.user_fg_diminish may be expressed by Equation 10, which follows the background form of Equation 8, where L.sub.scene_fg_diminish denotes the light that would arrive from the target object area if the target object were absent.
L.sub.user_fg_diminish=d(L.sub.scene_fg_diminish)○T.sub.lm(255)+R.sub.om(0)=d(L.sub.scene_fg_diminish) [Equation 10]
[0095] Ideally, the foreground portion L.sub.user_fg of the user-perceivable light L.sub.user in Equation 8 is to be the same as the user-perceivable light foreground portion L.sub.user_fg_diminish in Equation 10. Accordingly, the corrected inpainting image foreground portion I.sub.inpaint_fg_comp may be expressed by Equation 11.
R.sub.om(I.sub.inpaint_fg_comp)=d(L.sub.scene_fg_diminish) I.sub.inpaint_fg_comp=R.sub.om.sup.−1(d(L.sub.scene_fg_diminish)) [Equation 11]
[0096] Taking Equation 9 into account, the corrected inpainting image foreground portion I.sub.inpaint_fg_comp may be expressed by Equation 12.
I.sub.inpaint_fg_comp=R.sub.om.sup.−1(d(g.sub.cam.sup.−1(I.sub.inpaint_fg))) [Equation 12]
[0097] That is, the corrected inpainting image foreground portion I.sub.inpaint_fg_comp may be derived by removing the target object area from the real space image I.sub.scene, filling the pixels in the target object area with different pixel values using information on the area around the target object area to acquire the inpainting image I.sub.inpaint, applying an inverse function of the nonlinear color response characteristics of the image sensor 20 to the foreground portion I.sub.inpaint_fg of the inpainting image I.sub.inpaint, applying the attenuation d( ) in the optical system, and applying an inverse function of the distortion R.sub.om( ) in the projection unit 70.
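The correction chain just described can be sketched as follows, assuming hypothetical invertible models (a gamma curve for the sensor response, scalar factors for the attenuation and the projection-unit distortion); the actual functions are measured during calibration and are not specified here.

```python
import numpy as np

# Hypothetical invertible models; the real g_cam, d(), and R_om()
# are measured during calibration (operation 200).
GAMMA = 2.2

def g_cam_inv(I):          # inverse sensor color response (assumed gamma curve)
    return np.power(I / 255.0, GAMMA)

def d(L):                  # optical attenuation (assumed scalar factor)
    return 0.8 * L

def R_om_inv(L):           # inverse projection-unit distortion (assumed scalar gain)
    return L / 0.9

def correct_inpaint_fg(I_inpaint_fg):
    """Sketch of Equation 12: apply the inverse sensor response, then the
    attenuation, then the inverse projection distortion."""
    return R_om_inv(d(g_cam_inv(I_inpaint_fg)))
```

For a full-scale pixel value of 255, the corrected value is 0.8/0.9 of the linearized radiance under these placeholder models.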
[0098] Process of Generating Diminished Reality Image
[0099]
[0100] First, parameters indicating the characteristics of the display apparatus are determined (operation 200). The parameters may include the attenuation d( ) in the optical system, the transmittance T.sub.lm( ) of the light modulator 30, the degree of distortion R.sub.om( ) in the projection unit 70, and the color response characteristics g.sub.cam of the image sensor 20.
[0101] If the user wears the display apparatus, which may have the shape of glasses, for example, and watches the real space (operation 210), the light from the real space scene L.sub.scene is incident on the beam splitter 10. A portion of the light from the real space scene L.sub.scene is split off by the beam splitter 10 through reflection and is incident on the image sensor 20. The image sensor 20 converts this split portion of the light into the real space image signal I.sub.scene, which is an electrical signal (operation 220).
[0102] Afterwards, a target object to be deleted from the real space scene L.sub.scene may be selected (operation 230). In an exemplary embodiment, the target object may be selected by a program executed in the control and projection image generation unit 40. In an exemplary embodiment, the program may include a trained artificial neural network. Alternatively, however, the target object may be manually selected by the user. The user may select the target object in the real space scene by manipulating a button or a touch switch provided as the input interface 50 or by utilizing a gesture sensing function provided by the display apparatus.
[0103] After the target object is determined, the target object area including all pixels associated with the target object in the real space image I.sub.scene from the image sensor 20 may be detected (operation 240). In addition, once the target object is selected, it may be tracked until the target setting is released. As the position of the target object in the real space scene varies due to a translational or rotational movement of the display apparatus or a movement of the target object itself, the position and size of the target object area may be continuously updated by the tracking function.
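For illustration only, the tracking step can be sketched as a simple association of the previously known target object area with the best-overlapping detection in the current frame; the actual tracking method is not specified in this disclosure.

```python
def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes (x0, y0, x1, y1)."""
    ix0, iy0 = max(a[0], b[0]), max(a[1], b[1])
    ix1, iy1 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix1 - ix0) * max(0, iy1 - iy0)
    area = lambda r: (r[2] - r[0]) * (r[3] - r[1])
    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def update_target_area(prev_box, detections, min_iou=0.3):
    """Associate the tracked target object area with the detection that
    overlaps it most; keep the previous box if nothing overlaps enough
    (e.g., the target is briefly occluded)."""
    best = max(detections, key=lambda det: iou(prev_box, det), default=None)
    if best is not None and iou(prev_box, best) >= min_iou:
        return best
    return prev_box
```

This updates both the position and the size of the target object area from frame to frame, as described above.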
[0104] Subsequently, the projection image I.sub.proj for forming the projection image light L.sub.proj to be combined with the light from the real space scene L.sub.scene may be generated (operation 250). The projection image I.sub.proj may have various forms. One example of the projection image I.sub.proj may be the inpainting image I.sub.inpaint which may be formed by removing the target object area from the real space image I.sub.scene and filling pixels in the target object area with different pixel values using information on the area around the target object area. Such an inpainting image I.sub.inpaint may be used in diminished reality applications. Further, in order to reduce the distortion of the combined image, the inpainting image I.sub.inpaint may be corrected to yield the corrected inpainting image I.sub.inpaint_comp as the projection image I.sub.proj.
[0105] After the projection image I.sub.proj is generated, the masking image I.sub.mask may be generated based on the target object area detected in the operation 240 and provided to the light modulator 30 (operation 260). Accordingly, the light modulator 30 may completely block the light from the real space scene L.sub.scene in the target object area while passing the light in the background area. In addition, the display apparatus may combine the corrected inpainting image light L.sub.inpaint_comp with the light from the real space scene L.sub.scene having passed the beam splitter 10 and the light modulator 30 to output the combined light as the user-perceivable light L.sub.user to the user (operation 270). Accordingly, a diminished reality image light which is similar to the real space scene L.sub.scene except that the target object area is removed may be perceived by the eye of the user.
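Operations 220 through 270 can be summarized in a single per-frame routine. All of the callables passed in below are placeholders for the calibrated functions and the inpainting routine, and the use of the captured image values to stand in for the scene radiance is an illustrative assumption only.

```python
import numpy as np

def render_diminished_frame(I_scene, target_mask, d, T_lm, R_om, R_om_inv,
                            g_cam_inv, inpaint):
    """One frame of the diminished-reality pipeline (operations 220-270).
    The calibrated functions and the inpainting routine are injected as
    callables; none of them is specified by the text itself."""
    # Operation 250: inpaint the target object area, then correct the
    # foreground per Equation 12 (zero outside the target object area).
    I_inpaint = inpaint(I_scene, target_mask)
    I_proj = np.where(target_mask,
                      R_om_inv(d(g_cam_inv(I_inpaint))),
                      0.0)
    # Operation 260: masking image - 0 blocks the foreground, 255 passes background.
    I_mask = np.where(target_mask, 0, 255)
    # Operation 270: combine per Equation 8 to form the user-perceivable light.
    L_scene = I_scene  # illustrative: treat captured values as scene radiance
    return d(L_scene) * T_lm(I_mask) + R_om(I_proj)
```

With identity placeholder functions, the result equals the inpainted value inside the target object area and the scene value elsewhere, which is the diminished reality behavior described above.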
[0106] As mentioned above, the apparatus and method according to exemplary embodiments of the present disclosure can be implemented by computer-readable program codes or instructions stored on a non-transitory computer-readable recording medium. The computer-readable recording medium includes all types of recording devices storing data which can be read by a computer system. The computer-readable recording medium may be distributed over computer systems connected through a network so that the computer-readable programs or codes may be stored and executed in a distributed manner.
[0107] The computer-readable recording medium may include a hardware device specially configured to store and execute program instructions, such as a ROM, RAM, and flash memory. The program instructions may include not only machine language codes generated by a compiler, but also high-level language codes executable by a computer using an interpreter or the like.
[0108] Some aspects of the present disclosure described above in the context of the device may indicate corresponding descriptions of the method according to the present disclosure, and the blocks or devices may correspond to operations of the method or features of the operations. Similarly, some aspects described in the context of the method may be expressed by features of blocks, items, or devices corresponding thereto. Some or all of the operations of the method may be performed by use of a hardware device such as a microprocessor, a programmable computer, or electronic circuits, for example. In some exemplary embodiments, one or more of the most important operations of the method may be performed by such a device.
[0109] In some exemplary embodiments, a programmable logic device such as a field-programmable gate array may be used to perform some or all of functions of the methods described herein. In some exemplary embodiments, the field-programmable gate array may be operated with a microprocessor to perform one of the methods described herein. In general, the methods are preferably performed by a certain hardware device.
[0110] The description of the disclosure is merely exemplary in nature and, thus, variations that do not depart from the substance of the disclosure are intended to be within the scope of the disclosure. Such variations are not to be regarded as a departure from the spirit and scope of the disclosure. Thus, it will be understood by those of ordinary skill in the art that various changes in form and details may be made without departing from the spirit and scope as defined by the following claims.