Method for modifying stereoscopic pairs of images and apparatus

11595629 · 2023-02-28

Abstract

The present disclosure provides a method for modifying pairs of images for improved three-dimensional displaying. The method comprises analyzing the images for detecting the angle of incidence of the illuminating light in the images, and modifying the luminance values of at least a section of at least one of the images of each pair based on the angle of incidence and anatomical details of a human face. Further, the present disclosure provides a respective apparatus.

Claims

1. A method for modifying pairs of images for improved three-dimensional displaying, the method comprising: analyzing the images for detecting an angle of incidence of illuminating light in the images; and modifying luminance values of at least a section of at least one of the images of each pair based on the angle of incidence and anatomical details of a human face, wherein the anatomical details of the human face refer to a human nose; wherein the analyzing comprises analyzing one image of each pair of images and determining the angle of incidence based on luminance distribution and/or luminance histogram for the respective image, and wherein the determining the angle of incidence comprises extracting objects in the respective image and determining the luminance distribution in the extracted objects.

2. The method according to claim 1, comprising determining a main light source in the respective image, wherein the angle of incidence is determined for the light of the main light source.

3. The method according to claim 1, wherein the determining the angle of incidence comprises determining the luminance distribution in the respective image for surrounding spaces of the extracted objects.

4. The method according to claim 1, wherein modifying the luminance values comprises only modifying the luminance values if the angle of incidence indicates a position of a light source of the incident light that is left or right of an image center.

5. The method according to claim 1, wherein modifying the luminance values comprises darkening a left image under a line that divides the left image starting from a right top corner of the left image, an angle of the line being defined by the angle of incidence, or darkening a right image under a line that divides the right image starting from a left top corner of the right image, an angle of the line that divides the right image being defined by the angle of incidence.

6. A method for modifying pairs of images for improved three-dimensional displaying, the method comprising: analyzing the images for detecting an angle of incidence of illuminating light in the images; and modifying luminance values of at least a section of at least one of the images of each pair based on the angle of incidence and anatomical details of a human face; wherein the analyzing comprises analyzing one image of each pair of images and determining the angle of incidence based on luminance distribution and/or luminance histogram for the respective image, and the determining the angle of incidence comprises extracting objects in the respective image and determining the luminance distribution in the extracted objects.

7. A method for modifying pairs of images for improved three-dimensional displaying, the method comprising: analyzing the images for detecting an angle of incidence of illuminating light in the images; and modifying luminance values of at least a section of at least one of the images of each pair based on the angle of incidence and anatomical details of a human face, wherein the modifying the luminance values comprises darkening a left image under a line that divides the left image starting from a right top corner of the left image, an angle of the line being defined by the angle of incidence, or darkening a right image under a line that divides the right image starting from a left top corner of the right image, an angle of the line that divides the right image being defined by the angle of incidence.

8. A method for modifying pairs of images for improved three-dimensional displaying, the method comprising: analyzing the images for detecting an angle of incidence of illuminating light in the images; and modifying luminance values of at least a section of at least one of the images of each pair based on the angle of incidence and anatomical details of a human face, wherein the anatomical details of the human face refer to a human nose; wherein the modifying the luminance values comprises darkening a left image under a line that divides the left image starting from a right top corner of the left image, an angle of the line being defined by the angle of incidence, or darkening a right image under a line that divides the right image starting from a left top corner of the right image, an angle of the line that divides the right image being defined by the angle of incidence.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) For a more complete understanding of the present disclosure and advantages thereof, reference is now made to the following description taken in conjunction with the accompanying drawings. The disclosure is explained in more detail below using exemplary embodiments which are specified in the schematic figures of the drawings, in which:

(2) FIG. 1 shows a flow diagram of an embodiment of a method according to the present disclosure;

(3) FIG. 2 shows a block diagram of an embodiment of an apparatus according to the present disclosure;

(4) FIG. 3 shows an image with a scene for use with the present disclosure;

(5) FIG. 4 shows another image with a scene for use with the present disclosure; and

(6) FIG. 5 shows a section of a human face for use with the present disclosure.

(7) In the figures, like reference signs denote like elements unless stated otherwise.

DETAILED DESCRIPTION

(8) For the sake of clarity, in the following description of the method shown in FIG. 1, the reference signs used in the description of FIGS. 3-5 are maintained.

(9) FIG. 1 shows a flow diagram of a method for modifying pairs of images 101 for improved three-dimensional displaying.

(10) The method comprises analyzing S1 the images 523, 524 for detecting the angle of incidence of the illuminating light 211, 311, 525, 526, 527, 528 in the images 523, 524, and modifying S2 the luminance values of at least a section of at least one of the images 523, 524 of each pair based on the angle of incidence and anatomical details of the human face.

(11) The step of analyzing S1 may, e.g., comprise analyzing one image 523, 524 of each pair of images 523, 524 and determining the angle of incidence based on the luminance distribution and/or luminance histogram for the respective image 523, 524.

(12) Further, the method may comprise determining a main light source in the respective image 523, 524, wherein the angle of incidence is determined for the light 211, 311, 525, 526, 527, 528 of the main light source. Determining the angle of incidence may comprise extracting objects in the respective image 523, 524 and determining the luminance distribution in the extracted objects. In addition, determining the angle of incidence may comprise determining the luminance distribution in the respective image 523, 524 for the surrounding spaces of the extracted objects.
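The object-based determination described above can be sketched as follows. This is a minimal sketch assuming a centroid-offset estimator: the offset between an object's geometric centroid and its luminance-weighted centroid points toward the light source. The function name, the array layout, and the estimator itself are illustrative assumptions; the disclosure leaves the concrete algorithm open.

```python
import numpy as np

def estimate_light_angle(luma, mask):
    """Estimate the angle of incidence (degrees, 0 = light from the
    right, 90 = from above) for one extracted object.

    luma : 2-D array of luminance values for the whole image.
    mask : boolean 2-D array marking the extracted object's pixels.

    Sketch only: the offset from the geometric centroid to the
    luminance-weighted centroid points toward the light source.
    """
    ys, xs = np.nonzero(mask)
    geo = np.array([xs.mean(), ys.mean()])       # geometric centroid
    w = luma[ys, xs]
    lum = np.array([np.average(xs, weights=w),   # luminance-weighted
                    np.average(ys, weights=w)])  # centroid
    dx, dy = lum - geo                           # points toward the light
    return float(np.degrees(np.arctan2(-dy, dx)))  # image y grows downward
```

With a luminance gradient that brightens toward the right of the object, the estimator returns an angle near 0°, i.e., a light source to the right of the object, matching the tree example of FIG. 3.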

(13) Modifying S2 the luminance values may comprise only modifying the luminance values if the angle of incidence indicates a position of a light source of the incident light 211, 311, 525, 526, 527, 528 that is left or right of the image center. Further, modifying S2 the luminance values may comprise darkening the left image 523 under a line that divides the image 523 starting from the right top corner of the image 523, the angle of the line being defined by the angle of incidence, or darkening the right image 524 under a line that divides the image 524 starting from the left top corner of the image 524, the angle of the line being defined by the angle of incidence.
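The darkening of the left image under a dividing line can be sketched as follows. The interpretation of the line's angle as its slope below the horizontal, the fixed darkening factor, and the function name are assumptions; the text only states that the angle is defined by the angle of incidence.

```python
import numpy as np

def darken_left_image(img, angle_deg, factor=0.8):
    """Darken the left image of a stereo pair under a line that starts
    at the right top corner (step S2).

    angle_deg : angle of the dividing line below the horizontal
                (assumed to be derived from the angle of incidence).
    factor    : luminance scale applied to the darkened section.
    """
    h, w = img.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    slope = np.tan(np.radians(angle_deg))
    # Line through the right top corner (x = w - 1, y = 0), falling
    # toward the left edge of the image.
    line_y = slope * ((w - 1) - xs)
    below = ys > line_y
    out = img.astype(float)
    out[below] *= factor
    return out.clip(0, 255).astype(img.dtype)
```

The right image would be treated symmetrically, with the line starting at the left top corner.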

(14) FIG. 2 shows a block diagram of an apparatus 100. The apparatus 100 comprises an image analyzer 102 that receives the pairs of images 101 and is coupled to an image processor 103.

(15) The image analyzer 102 analyzes the images and detects the angle of incidence of the illuminating light in the images. This information is then provided together with the images 101 to the image processor 103. The image processor 103 modifies the luminance values of at least a section of at least one of the images of each pair based on the angle of incidence and anatomical details of the human face and outputs the modified image(s).

(16) The image analyzer 102 may, e.g., analyze a single image of each pair of images and determine the angle of incidence based on the luminance distribution and/or luminance histogram for the respective image. Further, the image analyzer 102 may determine a main light source in the respective image and may determine the angle of incidence for the light of the determined main light source. The image analyzer 102 may extract objects in the respective image and determine the luminance distribution in the extracted objects to determine the angle of incidence. The image analyzer 102 may also determine the luminance distribution in the respective image for the surrounding spaces of the extracted objects to determine the angle of incidence.

(17) The image processor 103 may, e.g., only modify the luminance values if the angle of incidence indicates a position of a light source of the incident light that is left or right of the image center.

(18) The image processor 103 may darken the left image under a line that divides the image starting from the right top corner of the image, the angle of the line being defined by the angle of incidence, or darken the right image under a line that divides the image starting from the left top corner of the image, the angle of the line being defined by the angle of incidence.

(19) FIG. 3 shows an image with a scene showing a tree 210 with incident light 211 from the right. It can be seen that darker areas or a shadow 212 are formed in the left sections of the tree 210 that are not directly illuminated by the light 211.

(20) In FIG. 3 it can be seen how the angle of incidence of the light 211 may be determined based on detecting single objects in the scene. For example, the tree 210 may be extracted with adequate algorithms from the scene and then the luminance distribution in the extracted tree 210 may be analyzed.

(21) In the case of tree 210, it would be determined that the dark sections of the tree 210 are on the left of the tree and that therefore, the source of the light 211 must be on the right side of the tree.

(22) It is understood that the extent of the shadow 212 may be used as a measure regarding the angle. The larger the shadow 212 is, the more to the back of the tree the light source of the light 211 must be positioned. If the shadow 212 for example covers about half the tree, the source of the light 211 may be assumed to be at an angle of 0° to the tree 210. If no shadow were visible on the tree 210, the source of the light 211 might be assumed to be positioned at an angle of 90° to the tree 210.
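The mapping above gives only two anchor points: about half the object in shadow corresponds to 0°, no shadow to 90°. A linear interpolation between these anchors is one possible realization; the interpolation itself, like the function name, is an assumption.

```python
def shadow_extent_to_angle(shadow_fraction):
    """Map the fraction of an object covered by its own shadow to an
    assumed angle between light source and object.

    Anchors from the description: 0.5 (half in shadow) -> 0 degrees,
    0.0 (no shadow) -> 90 degrees.  Linear in between (assumption).
    """
    f = min(max(shadow_fraction, 0.0), 0.5)  # clamp to the valid range
    return 90.0 * (1.0 - f / 0.5)
```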

(23) FIG. 4 shows another image with a scene showing a tree 310. FIG. 4 serves for explaining how the surroundings of the detected objects may also serve to determine where the source of the incident light 311 is positioned. In FIG. 4 the shadow does not refer to a shadow on the tree 310 but to the shadow that is cast by the tree 310.

(24) If an object is detected, in this case tree 310, the surroundings of the object in the scene or image may be analyzed to detect a shadow 312 cast by the respective object. The position and size of the shadow 312 may then be used to determine the position of the source of light 311. It is especially possible to determine if the light source is parallel to the object or in front or behind the object.

(25) If the shadow 312 is oriented from the tree 310 or respective object to the lower edge of the scene or image, the source of the light 311 may be assumed to be behind the tree 310 or the respective object. If the shadow 312 is oriented from the tree 310 or respective object to the upper edge of the scene or image, the source of the light 311 may be assumed to be in front of the tree 310 or the respective object. If the shadow 312 is oriented from the tree 310 or respective object to the side, the source of the light 311 may be assumed to be in a line with the tree 310 or the respective object.
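The three cases above can be sketched as a small classifier over the cast-shadow direction. Representing the shadow as a vector from the object to the shadow's tip, and deciding by the dominant component, are assumptions; the text describes only the three qualitative cases.

```python
def classify_light_position(shadow_dx, shadow_dy):
    """Classify the light-source position relative to an object from the
    direction of the shadow the object casts (FIG. 4).

    (shadow_dx, shadow_dy) is the vector from the object to the tip of
    its shadow, with y growing toward the lower edge of the image.
    """
    if abs(shadow_dy) > abs(shadow_dx):
        # Shadow toward the lower edge -> light behind the object;
        # shadow toward the upper edge -> light in front of the object.
        return "behind" if shadow_dy > 0 else "in front"
    # Shadow to the side -> light in a line with the object.
    return "in line"
```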

(26) FIG. 5 shows a section of a human face with a nose 520 sitting between a left eye 521 and a right eye 522. In front of each eye 521, 522 a corresponding image is shown. Further, incident light 525, 526, 527, 528 is shown. Incident light 525 originates at the left and, compared to the image plane or the plane of the user's eyes, has a rather small angle; incident light 526 also originates at the left and comprises a larger angle. Incident light 527 originates at the right and comprises the same angle as incident light 526 but from the other direction. Incident light 528 originates at the right and comprises the same angle as incident light 525 but from the other direction.

(27) It can be seen how in the images 523, 524, the sections of the images 523, 524 that are affected by the shadow of the nose 520 change according to the angle of incidence of the incident light 525, 526, 527, 528.

(28) It is understood that the start of the line that defines the section that is to be darkened may also be provided on any point of the right edge in the left image 523 or the left edge in the right image 524, or on the top edge.

(29) The start of the line may, e.g., depend on the size of the nose. The method may therefore, e.g., provide a number of standard models that may be chosen by the user. For example, such standard models may refer to a small nose, a medium nose and a large nose.

(30) Exemplarily, with the small nose chosen, the start of the line may be low on the right edge or the left edge, as indicated above. With the medium nose chosen, the start of the line may be in the upper half of the respective edges of the images 523, 524. With the large nose chosen, the start of the line may be in the respective upper corner of the image.
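The three standard nose models can be sketched as a lookup that places the vertical start of the darkening line on the inner edge of the image. The concrete fractions of the image height are assumptions; the description only orders the three models qualitatively (low on the edge, upper half, upper corner).

```python
def line_start(nose_model, height):
    """Pick the vertical start (pixel row, 0 = top) of the darkening
    line on the inner edge of an image for a chosen standard nose model.

    The fractions below are illustrative assumptions matching the
    qualitative ordering: small -> low on the edge, medium -> upper
    half, large -> upper corner.
    """
    starts = {
        "small": int(height * 0.75),   # low on the edge
        "medium": int(height * 0.25),  # in the upper half
        "large": 0,                    # at the upper corner
    }
    return starts[nose_model]
```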

(31) Although specific embodiments have been illustrated and described herein, it will be appreciated by those of ordinary skill in the art that a variety of alternate and/or equivalent implementations exist. It should be appreciated that the exemplary embodiment or exemplary embodiments are only examples, and are not intended to limit the scope, applicability, or configuration in any way. Rather, the foregoing summary and detailed description will provide those skilled in the art with a convenient road map for implementing at least one exemplary embodiment, it being understood that various changes may be made in the function and arrangement of elements described in an exemplary embodiment without departing from the scope as set forth in the appended claims and their legal equivalents. Generally, this application is intended to cover any adaptations or variations of the specific embodiments discussed herein.

(32) The present disclosure provides a method for modifying pairs of images 101 for improved three-dimensional displaying, the method comprising analyzing the images 523, 524 for detecting the angle of incidence of the illuminating light 211, 311, 525, 526, 527, 528 in the images 523, 524 and modifying the luminance values of at least a section of at least one of the images 523, 524 of each pair based on the angle of incidence and anatomical details of the human face. Further, the present disclosure provides a respective apparatus 100.

LIST OF REFERENCE SIGNS

(33)
100 apparatus
101 pairs of images
102 image analyzer
103 image processor
104 modified image
210, 310 tree
211, 311 incident light
212, 312 shadow
520 nose
521 left eye
522 right eye
523 left image
524 right image
525, 526, 527, 528 incident light
S1, S2 method steps