Overscan for 3D display
11496724 · 2022-11-08
Assignee
Inventors
CPC classification
H04N13/302
ELECTRICITY
H04N13/111
ELECTRICITY
H04N13/122
ELECTRICITY
International classification
H04N13/305
ELECTRICITY
H04N13/122
ELECTRICITY
Abstract
A display processor and computer-implemented method are provided for processing three-dimensional [3D] image data for display on a 3D display. The 3D display is arranged for emitting a series of views of the 3D image data which enables stereoscopic viewing of the 3D image data at multiple viewing positions. The series of views may be displayed on the 3D display using overscan. The degree of overscan may be determined as a function of one or more depth range parameters, the one or more depth range parameters characterizing, at least in part, a degree of depth perceived by a viewer when the series of views is displayed on the 3D display.
Claims
1. A display processor for processing three-dimensional [3D] image data for display on a 3D display, the 3D display being arranged for adjacently emitting a series of views of the 3D image data comprising two-dimensional [2D] image data and depth-related data, the series of views enabling stereoscopic viewing of the 3D image data at multiple viewing positions, wherein the display processor is configured to: generate the series of views of the 3D image data; use overscan for displaying the 3D image data on the 3D display so as to reduce or avoid de-occlusion artifacts at the bezels of the 3D display; and determine a degree of the overscan as a function of one or more depth range parameters, the degree of the overscan providing a degree of cropping and scaling to be applied to the 2D image data, and the one or more depth range parameters characterizing, at least in part, a range of depth perceived by a viewer when the series of views is displayed on the 3D display, wherein the one or more depth range parameters comprise one or more mapping parameters defining a mapping to be applied to values of the depth-related data when generating the series of views of the 3D image data, and the degree of overscan is determined based on said one or more mapping parameters.
2. The display processor according to claim 1, wherein the depth-related data includes depth-related values mapped to parallax shift values by which image data of the 2D image data is locally displaced across the series of views.
3. The display processor according to claim 1, wherein the mapping comprises a gain parameter and an offset parameter.
4. The display processor according to claim 3, wherein the display processor is configured to determine the degree of overscan as a function of a multiplicative product of a nominal overscan value and the gain parameter.
5. The display processor according to claim 4, wherein the display processor is configured to determine the degree of overscan as a sum of said multiplicative product and an absolute value of the offset parameter.
6. The display processor according to claim 1, wherein the one or more depth range parameters comprise one or more content parameters which are indicative of a depth range of the content of the 3D image data.
7. The display processor according to claim 6, wherein the one or more content parameters represent a measurement of the depth range of the content of the 3D image data.
8. The display processor according to claim 7, wherein the one or more content parameters are indicative of the depth range within an image and/or, if the 3D image data represents a 3D video, the depth range over multiple images.
9. The display processor according to claim 6, wherein the one or more content parameters are indicative of the depth range within a video shot.
10. A 3D display comprising the display processor according to claim 1.
11. A non-transitory computer readable medium comprising 3D image data and metadata associated with the 3D image data, the metadata representing the one or more content parameters as defined by claim 6.
12. A computer-implemented method of processing three-dimensional [3D] image data for display on a 3D display, the 3D display being arranged for adjacently emitting a series of views of the 3D image data comprising two-dimensional [2D] image data and depth-related data, the series of views enabling stereoscopic viewing of the 3D image data at multiple viewing positions, wherein the method comprises: generating the series of views of the 3D image data; using overscan for displaying the 3D image data on the 3D display so as to reduce or avoid de-occlusion artifacts at the bezels of the 3D display; and determining a degree of the overscan as a function of one or more depth range parameters, the degree of the overscan providing a degree of cropping and scaling to be applied to the 2D image data, and the one or more depth range parameters characterizing, at least in part, a range of depth perceived by a viewer when the series of views is displayed on the 3D display, wherein the one or more depth range parameters comprise one or more mapping parameters defining a mapping to be applied to values of the depth-related data when generating the series of views of the 3D image data, and the degree of overscan is determined based on said one or more mapping parameters.
13. A non-transitory computer readable medium comprising data representing instructions arranged to cause a processor system to perform the method according to claim 12.
14. The display processor according to claim 1, wherein the display processor is configured to determine the degree of overscan as a function of a multiplicative product of a nominal overscan value and a gain parameter controlling a magnitude of depth differences within the 3D image data.
15. An apparatus comprising: an autostereoscopic three-dimensional [3D] display configured to adjacently emit a series of views of 3D image data, the series of views enabling stereoscopic viewing of the 3D image data at multiple viewing positions; and a processor operationally coupled to the display and configured to: generate the series of views of the 3D image data; use overscan for displaying the 3D image data on the 3D display so as to reduce or avoid de-occlusion artifacts at the bezels of the 3D display; and determine a degree of the overscan as a function of a multiplicative product of a nominal overscan value and a gain parameter controlling a magnitude of depth differences within the 3D image data.
16. The display processor according to claim 1, wherein the mapping includes a gain parameter and an offset parameter which are applied to a depth value when mapping the depth value to a parallax shift value during rendering of the 3D image data.
17. The display processor according to claim 1, wherein the degree of the overscan for displaying the 3D image data on the 3D display is changed dynamically so that the degree of overscan is increased with increase in the range of depth and decreased with a decrease in the range of depth.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) These and other aspects of the invention are apparent from and will be elucidated with reference to the embodiments described hereinafter and shown in the drawings.
(10) It should be noted that items which have the same reference numbers in different Figures, have the same structural features and the same functions, or are the same signals. Where the function and/or structure of such an item has been explained, there is no necessity for repeated explanation thereof in the detailed description.
LIST OF REFERENCE AND ABBREVIATIONS
(11) The following list of references and abbreviations is provided for facilitating the interpretation of the drawings and shall not be construed as limiting the claims.
0-5 series of views
100 series of repeated viewing cones
102-106 viewing cones
110 first viewing position
112 second viewing position
120 display processor
122 data representing series of images
140 3D display
142 light generating portion
144 optical means
200 2D image
210 depth map
220 view at second viewing position without overscan
222 de-occlusion artifacts
224 measure of degree of overscan
230 view at second viewing position with overscan
300 method for processing 3D image data
310 determining degree of overscan
320 generating series of views
350 computer readable medium
360 non-transitory data representing instructions
DETAILED DESCRIPTION OF EMBODIMENTS
(13) The 3D display 140 further comprises optical means 144 for redirecting light generated by the light generating portion 142 into different directions. The light generating portion 142 may be suitably arranged and cooperative with the optical means 144 such that a series of views 0-5 are emitted from the 3D display 140 in the form of a viewing cone 104. Moreover, the 3D display 140 may be arranged for, when being provided with a series of images 122, adjacently emitting said images in the series of views 0-5. Thus, the viewer will perceive, when viewing one of the series of views 0-5, a respective one of the series of images 122. The series of images 122 may correspond to a camera facing a scene comprised in 3D image data and moving from left to right in front of, and relative to, said scene. Hence, a viewer positioned at viewing position 110 within the viewing cone 104 may perceive two different ones 2, 3 of the series of views 0-5 and thereby may obtain stereoscopic viewing of said scene.
(14) It is noted that 3D displays of the above configuration, and the manner of processing a series of images 122 for display as the series of views 104, are known per se. For example, U.S. Pat. No. 6,064,424 discloses an autostereoscopic display apparatus having lenticular elements as optical means 144 and discusses the relationship between display elements and the lenticular elements. Also, autostereoscopic displays are known which comprise so-called parallax barriers as optical means 144.
(17) Effectively, the 3D display may appear to the viewer to be a window behind which the scene of
(18) In this respect, it is noted that in the above and following, the term ‘depth map’ refers to depth data which is arranged in rows and columns. Moreover, the adjective ‘depth’ is to be understood as being indicative of the distance of portions of an image to the camera. Therefore, the depth map may be constituted by depth values, but also by, e.g., disparity values or parallactic shift values. Essentially, the depth map may therefore constitute a disparity map or a parallactic shift map. Here, the term disparity refers to a difference in position of an object when perceived with the left eye or the right eye of the user. The term parallactic shift refers to a displacement of the object between two views so as to provide said disparity to the user. Disparity and parallactic shift are generally negatively correlated with distance or depth. Devices and methods for conversion between all of the above types of maps and/or values are known.
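The negative correlation between distance and disparity noted above can be sketched as follows. This is a minimal illustration, not from the patent text: the function names and the scale constant k are assumptions made purely for this example.

```python
# Illustrative 1/x relation between viewer-to-object distance and
# disparity: nearer objects receive a larger parallactic shift.
# The constant k is a hypothetical calibration factor.

def distance_to_disparity(distance, k=1.0):
    # Objects far from the viewer get a small disparity.
    return k / distance

def disparity_to_distance(disparity, k=1.0):
    # The conversion is its own inverse up to the constant k.
    return k / disparity

near = distance_to_disparity(1.0)    # object close to the viewer
far = distance_to_disparity(10.0)    # object ten times farther away
assert near > far                    # negative correlation with distance
```

A real display would calibrate k per device; the point here is only the inverse relation between the two map types.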
(21) In this respect, it is noted that
(22) This degree of overscan may be determined in various ways. A first example is to analyze the depth range of the content itself, and determine how much overscan is needed to render the content with enough ‘look-around’ image data remaining at the borders of the image. For example, the absolute depth and depth variation of the content at the image borders may be analyzed. Such analysis may be performed by the display processor, but also by a third party, e.g., by a content author or content provider. The latter may analyze the content in an offline manner, e.g., by analyzing whole temporal fragments such as video shots, and then determining the necessary overscan per temporal fragment. This may ensure temporal stability compared to a dynamic variation of the degree of overscan per image (video frame). A parameter representing a determined amount of overscan may be transmitted as metadata along with the content, e.g., the 3D image data. Additionally or alternatively, the depth range of a video shot may be transmitted at the start of the video shot.
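The border analysis described above can be sketched as follows. This is a hypothetical implementation: the border width and the mapping from depth spread to an overscan fraction are illustrative assumptions, not values prescribed by the text.

```python
# Hypothetical sketch: measure the depth spread near the image borders
# and map it to an overscan fraction. Larger depth variation at the
# borders calls for more 'look-around' image data, i.e. more overscan.

def border_depth_range(depth_map, border=2):
    """Return (min, max) depth over a frame of 'border' pixels."""
    rows, cols = len(depth_map), len(depth_map[0])
    values = [depth_map[r][c]
              for r in range(rows) for c in range(cols)
              if r < border or r >= rows - border
              or c < border or c >= cols - border]
    return min(values), max(values)

def estimate_overscan(depth_map, border=2, scale=0.001):
    # The scale constant is an assumed tuning parameter.
    d_min, d_max = border_depth_range(depth_map, border)
    return scale * (d_max - d_min)

# A flat depth map needs no overscan at all:
flat = [[128] * 8 for _ in range(8)]
assert estimate_overscan(flat) == 0.0
```

A content author running this offline per video shot, as the paragraph suggests, would then transmit the resulting value as metadata rather than recompute it per frame.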
(23) Additionally or alternatively to the above-described determination of the degree of overscan, said overscan may also be made dependent on mapping parameters used in autostereoscopic displays which indicate the amount and forward/backward positioning of depth. This amount and positioning may in a 3D display be controlled by the user using a ‘factor’ and ‘offset’ control function, with the ‘factor’ representing a gain factor. Such controls typically have a direct effect on the amount of disparity which is presented on the display and therefore on the degree of overscan which is needed.
(24) For example, the degree of overscan may be determined as a nominal overscan which represents an (optimal) trade-off between amount of distortion due to stretching and the degree of de-occlusion at the bezels of the display, e.g., for average content at default factor (e.g., 100%, corresponding to a gain of 1.0) and default offset (e.g., 0, which may be defined relative to a ‘neutral’ display depth at display plane). The actual overscan may then be based on the nominal overscan adjusted to the current settings of the factor and offset, e.g., as selected by the user or automatically.
(25) For example, in the extreme case of the factor being 0 and the offset being 0, the scene becomes flat and is displayed at the display plane, and no overscan is needed. However, if the factor is doubled to 200%, then any scene behind the display may need twice as much overscan. As such, the nominal overscan n may be multiplied by the factor f (assumed to be normalized) to arrive at the actual overscan a:
a=f*n
(26) The factor f may be a combination of a user-controlled factor f.sub.u (which may have a range which may be suitably selected for the user, e.g., with 100% corresponding to the nominal depth a given display may show) and a display-specific factor f.sub.s which for a specific type of display may determine how the user setting is scaled to the nominal depth. The latter setting may be used to ensure that for different types of displays, which may need a different amount of disparity (in terms of number of pixels of parallax shift) to be generated (e.g., more or fewer pixels of disparity depending on the resolution, or the DPI, of the display), the user setting 100% may provide an adequate amount of depth. In some embodiments, f.sub.s may already be taken into account in the nominal overscan, since both are one-time settings relating to the specifics of the display and the trade-offs made for optimal performance. In other embodiments, f.sub.s may not be taken into account in the nominal overscan, but a change in f.sub.s with respect to a nominal f.sub.s.sup.n may be taken into account as follows:
a=f.sub.u*f.sub.s/f.sub.s.sup.n*n
In addition to the factor, another mapping parameter may be the offset o, which effectively pulls the scene forward or pushes the scene backward with respect to a neutral display depth. Assuming that the depth range, at least in terms of the amount of disparity generated in front of the display (negative disparity) or behind the display (positive disparity), is symmetrical with respect to the display plane, applying an offset may increase the maximum disparity that can be generated, irrespective of whether the offset is positive or negative. Accordingly, the absolute value of the offset may be added to the above term when determining the degree of overscan:
a=f.sub.u*f.sub.s/f.sub.s.sup.n*n+|o|
Here, o may be scaled so that an absolute value of 1 corresponds to the maximum disparity magnitude for the nominal factor. Depending on the order in which the factor and offset are applied to the depth values, one may also use:
a=f.sub.u*f.sub.s/f.sub.s.sup.n*(n+|o|)
(27) Yet another option, which again assumes that the offset is normalized, is to also apply the offset via a multiplication factor:
a=f.sub.u*f.sub.s/f.sub.s.sup.n*(1+|o|)*n
(28) Alternatively, only an offset causing image content to be positioned behind the display plane may be considered, e.g., by not taking the absolute value of the offset but rather by clipping the offset to the range by which the scene is moved backward, thereby using zero instead of the absolute value for offsets that pull the scene closer to the viewer. However, even though it is not a de-occlusion per se, content placed in front of the display plane may also create a need for fill-in, as more of the foreground object should become visible when looking from the side (even though this constitutes a window violation). As such, it may be preferred to use the absolute value of the offset rather than said clipping.
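The gain-and-offset computation developed above can be sketched as a single function. This is a minimal sketch, assuming a normalized user factor f_u (1.0 corresponding to the 100% setting), a display-specific factor f_s with nominal value f_s_n, a nominal overscan n, and a normalized offset o; all parameter names are illustrative.

```python
# Sketch of a = f_u * f_s / f_s_n * n + |o| : the nominal overscan is
# scaled by the effective gain, and the absolute value of the offset is
# added, since both a positive and a negative offset increase the
# maximum disparity that can be generated.

def actual_overscan(f_u, f_s, f_s_n, n, o):
    return f_u * f_s / f_s_n * n + abs(o)

# Factor 0 and offset 0: the scene is flat at the display plane,
# so no overscan is needed.
assert actual_overscan(0.0, 1.0, 1.0, 0.04, 0.0) == 0.0

# Doubling the factor to 200% doubles the factor-dependent overscan:
assert (actual_overscan(2.0, 1.0, 1.0, 0.04, 0.0)
        == 2 * actual_overscan(1.0, 1.0, 1.0, 0.04, 0.0))
```

Using abs(o) rather than clipping reflects the preference stated above: content pulled in front of the display plane also creates a need for fill-in.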
(29) As indicated earlier, metadata may be provided for the 3D image data, which may indicate a content-dependent scaling factor for the overscan, e.g., to enable a content author or content provider to influence the amount of overscan during display. If such metadata is available to the display processor, the content-dependent scaling factor may be used as a(nother) scaling factor for the nominal overscan. Additionally or alternatively, metadata may be provided which indicates a depth range of the content. This metadata may be used to refine the scaling of the nominal overscan. For example, a video shot that has a small depth range may not need a significant amount of overscan, even if the factor or offset are set high, and conversely, content that has a very large depth range may need a large overscan even when the factor or offset are set to nominal. Given d.sup.− and d.sup.+ (the minimum and maximum of the depth range, expressed as disparity values), the amount of depth relative to the current offset may be computed as max(|d.sup.+−o|, |o−d.sup.−|), where in this case the offset may still be in the same range as the depth, which may be normalized (as above) if the depth is also normalized and centered around screen depth. The ratio of this number compared to a nominal amount of depth d.sup.n for which the nominal overscan was determined may be used to compute the scaled actual overscan:
a=f.sub.u*f.sub.s/f.sub.s.sup.n*max(|d.sup.+−o|,|o−d.sup.−|)/d.sup.n*n.
(30) If no metadata is available indicating the depth range, one may assume d.sup.+=d.sup.n and d.sup.−=−d.sup.n, in which case the above formula reverts to the previous version with the offset accounted for as a multiplication factor, assuming offset o is normalized with respect to the nominal depth range, and interpreted such that the value zero corresponds to a depth corresponding to the display plane for the values of d.sup.−, d.sup.+ and d.sup.n as well. Note that there are variations of the above formula, e.g., variations which take into account that neither the depth values nor the offset need be centered around 0. In general, the above formula assumes that the depth values already represent disparity/parallax. If the depth values rather represent the distance from the viewer, the formulas should be modified to take into account the 1/x relation between distance and disparity. Such conversion is known per se in the field of 3D displays and processing.
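The depth-range-aware formula above can be sketched as follows. This assumes, for illustration only, that d_plus and d_minus are the maximum and minimum content disparities from the metadata, o is the offset in the same normalized range as the depth, and d_n is the nominal depth range for which the nominal overscan n was tuned.

```python
# Sketch of a = f_u * f_s / f_s_n * max(|d+ - o|, |o - d-|) / d_n * n :
# the nominal overscan is additionally scaled by the content's depth
# range relative to the current offset.

def scaled_overscan(f_u, f_s, f_s_n, d_plus, d_minus, o, d_n, n):
    # Amount of depth relative to the current offset.
    depth = max(abs(d_plus - o), abs(o - d_minus))
    return f_u * f_s / f_s_n * depth / d_n * n

# With d_plus = d_n, d_minus = -d_n and o = 0 the formula reverts to
# the plain factor-scaled overscan:
assert scaled_overscan(1.0, 1.0, 1.0, 1.0, -1.0, 0.0, 1.0, 0.04) == 0.04
```

A shot with a small depth range (d_plus and d_minus close to zero) then yields little overscan even at a high factor, matching the example given in the paragraph above.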
(31) It is noted that, in general, the overscan may be applied in a manner in which the aspect ratio of the content is preserved, e.g., equally in horizontal and vertical direction. Alternatively, the overscan may only be applied horizontally or vertically, which may (slightly) modify the aspect ratio of the content. In general, the overscan may be applied equally to either side of the content along each respective direction, e.g., to the left and the right in the horizontal direction and to the top and the bottom in the vertical direction, but also to selected ones from the four sides (left, right, top, bottom), and/or in an unequal manner to different ones of the four sides. If a parameter representing a determined amount of overscan is made available, e.g., as metadata, the parameter may define the overscan in accordance with the above.
(32) The overscan may be applied by cropping the side(s) of the image data of the generated views, and scaling the cropped image data to the desired dimensions, e.g., of the image data before cropping. Effectively, the overscan may be applied to the generated views. Alternatively, the overscan may be partly integrated into the view rendering or view synthesis. For example, the view rendering or view synthesis may be configured to generate an up-scaled view which is then cropped afterwards. The scaling may thus be performed by the view rendering or synthesis. Moreover, instead of explicitly cropping image data of the views, the view rendering or view synthesis may be configured to omit generating such otherwise cropped image data. In general, any scaling for overscan may be performed before, during or after view rendering. Any scaling for overscan may be combined with one or more other scaling steps. It will be appreciated that various other ways of applying overscan are equally conceivable.
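The crop-and-scale application of overscan described above can be sketched on a plain list-of-lists image. Nearest-neighbour scaling is used here only to keep the sketch dependency-free; a real display processor would use proper resampling, and the parameter a (the fraction cropped from each side) is an illustrative convention.

```python
# Illustrative crop-and-scale overscan: crop a fraction 'a' from every
# side of the image, then scale the cropped data back to the original
# dimensions, preserving the aspect ratio by cropping equally in both
# directions.

def apply_overscan(image, a):
    rows, cols = len(image), len(image[0])
    dr, dc = int(rows * a), int(cols * a)
    # Crop the same amount from every side of the image.
    cropped = [row[dc:cols - dc] for row in image[dr:rows - dr]]
    cr, cc = len(cropped), len(cropped[0])
    # Scale the cropped data back up to the original dimensions
    # (nearest-neighbour, for simplicity of the sketch).
    return [[cropped[r * cr // rows][c * cc // cols] for c in range(cols)]
            for r in range(rows)]

frame = [[r * 10 + c for c in range(10)] for r in range(10)]
out = apply_overscan(frame, 0.1)
assert len(out) == 10 and len(out[0]) == 10  # original dimensions kept
```

As the paragraph notes, the same effect can instead be folded into view rendering by synthesizing an up-scaled view and cropping it, or by never generating the cropped image data at all.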
(35) The method 300 may be implemented on a processor system, e.g., on a computer as a computer implemented method, as dedicated hardware, or as a combination of both.
(36) It should be noted that the above-mentioned embodiments illustrate rather than limit the invention, and that those skilled in the art will be able to design many alternative embodiments.
(37) In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. Use of the verb “comprise” and its conjugations does not exclude the presence of elements or steps other than those stated in a claim. The article “a” or “an” preceding an element does not exclude the presence of a plurality of such elements. The invention may be implemented by means of hardware comprising several distinct elements, and by means of a suitably programmed computer. In the device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.