Method and Head-Up Display for the Perspective Transformation and Displaying of Image Content, and Vehicle
20190005628 · 2019-01-03
Inventors
CPC classification
G06T3/08
PHYSICS
B60K35/29
PERFORMING OPERATIONS; TRANSPORTING
B60K2360/186
PERFORMING OPERATIONS; TRANSPORTING
G02B2027/011
PHYSICS
G06T1/20
PHYSICS
G06T3/4038
PHYSICS
G09G2360/18
PHYSICS
H04N7/18
ELECTRICITY
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
International classification
G09G5/36
PHYSICS
G09G5/00
PHYSICS
Abstract
A method and a head-up display for the perspective transformation and display of rendered image content, as well as a corresponding vehicle, are provided. In the method, the image content to be displayed is subdivided into a plurality of tiles, and each tile is individually transformed in perspective. The perspectively transformed tiles are then combined to form a transformed image content, and the image content transformed in perspective is projected onto a projection surface assigned to the head-up display.
Claims
1. A method for perspective transformation and display of a rendered image content by a head-up display, comprising the acts of: dividing the rendered image content into a plurality of tiles; transforming individual ones of the plurality of tiles by a respective perspective transformation into individual perspectively transformed tiles; combining the individual perspectively transformed tiles into a perspectively transformed image content; and projecting the perspectively transformed image content onto a projection surface assigned to the head-up display.
2. The method as claimed in claim 1, wherein during the perspective transforming act each tile of individual ones of the plurality of tiles is perspectively transformed with regard to one or more of the following features: trapezoid shape, curvature, stretching, compression, rotation, and offset.
3. The method as claimed in claim 2, wherein adjoining tiles of the plurality of tiles have overlapping image contents.
4. The method as claimed in claim 2, wherein the rendered image content is in the form of raster graphics, and individual pixels of the individual ones of the plurality of tiles are displaced by the perspective transformation.
5. The method as claimed in claim 4, wherein during the perspective transforming act, interpolation is performed between at least a portion of adjoining ones of the individual pixels of the individual ones of the plurality of tiles.
6. The method as claimed in claim 5, wherein during the perspective transforming act, the individual ones of the plurality of tiles are perspectively transformed by multiplication by a transformation matrix.
7. The method as claimed in claim 6, wherein the transformation matrix is chosen from a plurality of predetermined transformation matrices.
8. The method as claimed in claim 2, wherein during the perspective transforming act, the perspective transformation of the individual ones of the plurality of tiles is carried out by a graphics module via a graphics programming interface.
9. The method as claimed in claim 2, further comprising the act of: storing the perspectively transformed tiles in a buffer memory.
10. The method as claimed in claim 9, wherein only a portion of the individual ones of the plurality of tiles is perspectively transformed, and during the act of combining the individual perspectively transformed tiles, perspectively transformed tiles corresponding to the remaining ones of the plurality of tiles, retrieved from the buffer memory, are combined with the perspectively transformed portion of the plurality of tiles to form the perspectively transformed image content.
11. The method as claimed in claim 10, wherein the portion of the individual ones of the plurality of tiles that is perspectively transformed includes those tiles of the plurality of tiles having elements of the image content changed with respect to image content previously stored in the buffer memory.
12. The method as claimed in claim 1, wherein different portions of the plurality of tiles have different color depths.
13. The method as claimed in claim 1, wherein during the perspective transformation act, the plurality of tiles are mirrored by point mirroring or line mirroring, and during the combining act are combined to form a perspectively transformed, mirrored image content.
14. A head-up display for the perspective transformation and display of a rendered image content, comprising: a control unit including a first module configured to divide the rendered image content into a plurality of tiles, a second module configured to transform individual ones of the plurality of tiles by perspective transformation, and a third module configured to combine the individual perspectively transformed tiles into a transformed image content; and a projection unit configured to project the perspectively transformed image content onto a projection surface assigned to the head-up display.
15. A vehicle, comprising: a vehicle front window; and a head-up display including a control unit including a first module configured to divide the rendered image content into a plurality of tiles, a second module configured to transform individual ones of the plurality of tiles by perspective transformation, and a third module configured to combine the individual perspectively transformed tiles into a transformed image content; and a projection unit configured to project the perspectively transformed image content onto the vehicle front window.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0036]
[0037]
[0038]
[0039]
[0040]
[0041]
[0042]
DETAILED DESCRIPTION OF THE DRAWINGS
[0043]
[0044]
[0045] The rendered image content 1 is divided into a plurality of regions, in particular tiles 2, preferably rectangular tiles 2, in method step 101. With further preference, the tiles 2 are square. The edges of the tiles 2 are marked by horizontal and vertical lines 11 in the exemplary embodiment shown, for the sake of better clarity.
[0046] Preferably, individual tiles 2 enclose individual elements 7, 8, 9, 10 or portions of the elements 7, 8, 9, 10, i.e. that portion of the entire image content 1 which is contained in the individual tiles 2 represents individual elements 7, 8, 9, 10 or portions of the elements 7, 8, 9, 10.
[0047] Preferably, different tiles 2 have different color depths, in particular 32, 24, 16, 8 or 4 bits. With further preference, different tiles can also have a color depth that lies between these values. With further preference, different tiles can also have a color depth that is greater than 32 bits or less than 4 bits. With further preference, the color depth of a tile 2 has a magnitude only such as is necessary for clear and distinct, i.e. true-color, representation of that portion of the image content 1 which is contained in the tile 2. In particular, tiles 2 that contain no elements 7, 8, 9, 10 of the image content 1 have a particularly low color depth, for example 4 bits or 1 bit. Preferably, tiles 2 that contain single-colored elements 7, 8, 9, 10 or single-colored portions of the elements 7, 8, 9, 10, in particular text characters or lettering and/or number characters, have a low color depth, in particular 16, 8 or 4 bits, or a value between these values. The memory space of the rendered image content 1 or of the individual tiles 2 can be significantly reduced as a result.
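The memory reduction described above can be illustrated with a small sketch; the tile size, tile counts and color depths below are hypothetical and not taken from the embodiment:

```python
# Sketch: per-tile color depth reduces buffer memory (hypothetical values).
def tile_memory_bytes(width_px, height_px, color_depth_bits):
    """Memory needed for one tile at the given color depth."""
    return width_px * height_px * color_depth_bits // 8

# A 64x64 tile at full 32-bit depth vs. a 4-bit depth for low-content tiles.
full = tile_memory_bytes(64, 64, 32)   # full-color tile
low = tile_memory_bytes(64, 64, 4)     # tile without colored elements

# An image of 50 tiles of which only 8 contain full-color elements:
mixed = 8 * full + 42 * low
uniform = 50 * full
print(full, low, mixed, uniform)
```

With these illustrative numbers, storing low-content tiles at 4 bits instead of 32 shrinks the buffer to roughly a quarter of the uniform-depth size.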
[0048]
[0049] Preferably, the perspective transformation 3 is implemented by a matrix multiplication, in particular multiplication by a transformation matrix, in method step 102. In this case, the individual pixels of the tile 2, in particular its four corner points, which are indicated by support vectors, are multiplied by a matrix that generates a perspective transformation, said tile preferably being present as raster graphics. Support vectors are vectors that specify the position of the pixels with respect to an origin, in particular with respect to one of the four corners or the midpoint of the tile 2 or of the image composed of the tiles 2, i.e. of the rendered image content 1.
[0050] As a result of the perspective transformation 3, in particular as a result of the multiplication of the support vectors by a transformation matrix, the four corner points of the tile 2 are assigned a new position in method step 102 in the exemplary embodiment shown, indicated by the four dashed arrows 12. In particular, the magnitude, i.e. length, and direction, i.e. orientation, of the support vectors of the pixels to be perspectively transformed change in this case.
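The multiplication of support vectors by a transformation matrix can be sketched as follows; the 3x3 homogeneous matrix values, the tile size and the function name are illustrative assumptions, not taken from the embodiment:

```python
# Sketch: perspectively transforming a tile's four corner points by a
# 3x3 homogeneous transformation matrix (matrix values are illustrative).

def transform_point(m, x, y):
    """Multiply the support vector (x, y, 1) by matrix m and
    dehomogenize, yielding the new pixel position."""
    xh = m[0][0] * x + m[0][1] * y + m[0][2]
    yh = m[1][0] * x + m[1][1] * y + m[1][2]
    w = m[2][0] * x + m[2][1] * y + m[2][2]
    return (xh / w, yh / w)

# Illustrative matrix: a perspective term in the bottom row produces a
# trapezoid-like distortion, plus a small offset.
M = [[1.0, 0.0, 5.0],
     [0.0, 1.0, 2.0],
     [0.001, 0.0, 1.0]]

corners = [(0, 0), (64, 0), (0, 64), (64, 64)]  # square tile
print([transform_point(M, x, y) for x, y in corners])
```

Because the bottom row of the matrix is not (0, 0, 1), the division by w changes both the length and the direction of the support vectors, which is exactly the effect described for the dashed arrows 12.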
[0051] If the perspective transformation 3 in method step 102 gives rise to gaps between the pixels of the perspectively transformed tile 4, said gaps are preferably closed by an interpolation, in particular by a linear interpolation, in method step 103. In particular, pixels which adjoin and/or lie or are arranged between pixels displaced by the perspective transformation 3 are assigned a value by the interpolation, such that the perspectively transformed tile 4 has a gap-free portion of the transformed image content 5, i.e. is representable without artifacts. Preferably, a smoothing of the perspectively transformed image content is carried out by the linear interpolation in method step 103, such that said image content is representable in a manner free of artifacts by a head-up display.
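A minimal sketch of the gap-closing interpolation in method step 103, simplified to one scanline and assuming gap pixels are marked as None (both the representation and the function name are illustrative):

```python
# Sketch: closing gaps left by the forward perspective transformation
# by linear interpolation along a scanline (gap pixels marked None).

def fill_gaps(scanline):
    """Linearly interpolate None entries between known pixel values."""
    out = list(scanline)
    i = 0
    while i < len(out):
        if out[i] is None:
            left = i - 1                      # last known pixel
            right = i
            while right < len(out) and out[right] is None:
                right += 1                    # next known pixel
            for j in range(i, right):
                t = (j - left) / (right - left)
                out[j] = (1 - t) * out[left] + t * out[right]
            i = right
        else:
            i += 1
    return out

print(fill_gaps([10, None, None, 40, None, 60]))
```

In a real tile the interpolation runs in two dimensions, but the principle is the same: pixels lying between displaced pixels receive values so that the transformed tile 4 is gap-free and representable without artifacts.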
[0052]
[0053] Individual tiles 2 are transformed by respectively different perspective transformations 3, in particular respectively different transformation matrices, i.e. modularly, during the perspective transformation 3 in method step 102 and are combined to form a perspectively transformed image content 5 in method step 104. In method step 105, the perspectively transformed image content 5 is preferably projected by a head-up display onto a projection surface assigned to the head-up display, where it is visually perceived by a user, in particular the driver of the vehicle equipped with the head-up display, as an undistorted image.
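The combination of the perspectively transformed tiles in method step 104 can be sketched as compositing small pixel arrays into one output buffer at their target offsets; the data layout, sizes and function name here are illustrative assumptions:

```python
# Sketch: combining perspectively transformed tiles into one image
# buffer (method step 104). A tile is a 2D list of pixel values placed
# at its target offset in the output image.

def combine_tiles(width, height, placed_tiles):
    """placed_tiles: list of (x_offset, y_offset, tile). Each tile's
    pixels are copied into the output image at its offset."""
    image = [[0] * width for _ in range(height)]
    for x0, y0, tile in placed_tiles:
        for dy, row in enumerate(tile):
            for dx, value in enumerate(row):
                image[y0 + dy][x0 + dx] = value
    return image

tile_a = [[1, 1], [1, 1]]
tile_b = [[2, 2], [2, 2]]
img = combine_tiles(4, 2, [(0, 0, tile_a), (2, 0, tile_b)])
print(img)
```

The resulting buffer then holds the perspectively transformed image content 5, ready for projection in method step 105.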
[0054] As a result of the separate, i.e. modular, perspective transformation 3 in method step 102, each perspectively transformed tile 4 preferably acquires a different shape than adjoining perspectively transformed tiles 4. In particular, this results in a closed or continuous, preferably artifact-free, transformed image content 5 that can be represented or displayed as a closed or continuous, preferably artifact-free, image. Preferably, by means of the separate perspective transformations 3, adjoining tiles 2 are perspectively transformed in a manner such that their edges adjoin one another in a closed, i.e. flush, manner. With further preference, for each tile 2 a perspective transformation 3 is chosen from a multiplicity of perspective transformations 3, or the perspective transformation 3 is adapted, in particular separately, for each tile 2, such that after the perspective transformation 3 and combination with adjacent perspectively transformed tiles 4, a closed, artifact-free, perspectively transformed image content 5 results. In particular, the respective perspective transformations 3 are chosen or adapted for individual tiles 2 such that after combination to form the perspectively transformed image content 5, no edges, bends, discontinuities and/or noise occur in the perspectively transformed image content 5 or the representable, perspectively transformed image.
[0055] With further preference, the rendered image content 1 is divided into tiles 2 having overlapping edges in method step 101 before the perspective transformation 3, i.e. a portion of the rendered image content 1 is represented by more than one tile 2 or is contained in more than one tile 2. In particular, the edges of adjoining tiles 2 overlap. The overlapping edges, i.e. the overlapping region, can have different widths, in particular 1, 2, 3 or more pixels. With further preference, the width of the overlapping edges is chosen or adapted such that during the process of combining the perspectively transformed tiles 4 to form a perspectively transformed image content 5 in method step 104, each region of the perspectively transformed image that is representable or displayable and/or imageable by the perspectively transformed image content 5 is covered by at least one perspectively transformed tile 4. As a result, the edges of the perspectively transformed tiles 4 need not adjoin one another in a closed, i.e. flush, manner, with the result that there is greater freedom in the choice of the appropriate perspective transformations 3 for an individual tile 2, or the adaptation of the perspective transformation 3 to an individual tile 2. In particular, as a result, perspectively transformed tiles 4 have to adjoin one another only in a substantially closed, i.e. flush, manner.
[0056] Preferably, during the process of combining to form a perspectively transformed image content 5 in method step 104, the overlapping region or the overlapping edges of the perspectively transformed tiles 4 is or are smoothed, in particular averaged or interpolated, in a manner such that a seamless transition between adjoining perspectively transformed tiles 4 results. Artifacts, in particular edges, bends, discontinuities and/or noise, in the combined perspectively transformed image content 5 are avoided particularly reliably as a result.
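The averaging of overlapping edges can be sketched for two horizontally adjoining tiles with a 1-pixel overlap; the tile contents, overlap width and function name are illustrative assumptions:

```python
# Sketch: smoothing the overlap between two adjoining transformed tiles
# by averaging the shared columns (illustrative 1-pixel overlap).

def blend_overlap(left_tile, right_tile, overlap_px):
    """Average the overlapping columns of two tiles row by row and
    return the seamlessly joined rows."""
    joined = []
    for lrow, rrow in zip(left_tile, right_tile):
        shared = [(a + b) / 2
                  for a, b in zip(lrow[-overlap_px:], rrow[:overlap_px])]
        joined.append(lrow[:-overlap_px] + shared + rrow[overlap_px:])
    return joined

left = [[10, 10, 20]]    # rightmost column overlaps
right = [[30, 40, 40]]   # leftmost column overlaps
print(blend_overlap(left, right, 1))
```

Averaging the shared pixels replaces a potential hard edge between the tiles with an intermediate value, which is one simple way to obtain the seamless transition described above.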
[0057]
[0058]
[0059]
[0060]
[0061]
[0062]
[0063]
[0064] Preferably, the perspective transformation 3 of an individual tile 2 is composed of a plurality of basic perspective transformations 3, the results of which are shown in
[0065] In order to determine how a rendered image content 1 has to be perspectively transformed in method step 102 so that it is displayed to the driver of a vehicle as an undistorted image in method step 105, i.e. how a rendered image content 1 has to be adapted to the beam path of the head-up display (from the image-generating unit of the head-up display via the projection surface to the driver's eye), a regular, in particular conventional, grid 13 is distorted such that the distortion corresponds to the adaptation to the beam path.
[0066]
[0067] Preferably, the distortion of the regular grid 13, i.e. the conversion into a distorted grid 14, is performed by a perspective transformation 3 or a combination of the basic perspective transformations 3, the results of which are shown in
[0068] With further preference, the grid points 15 of the regular grid 13 indicate the position of the tiles 2. With further preference, the grid points 16 of the distorted grid 14 indicate the position of the perspectively transformed tiles 4. In particular, the grid points 15 of the regular grid 13 function as initial support points of the tiles 2, preferably as the midpoint thereof or one of their corner points. In particular, the grid points 16 of the distorted grid 14 function as target support points of the perspectively transformed tiles 4, preferably as the midpoint thereof or one of their corner points.
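The pairing of initial and target support points can be sketched as follows; the grid dimensions, the spacing, and in particular the distortion function are illustrative assumptions standing in for the beam-path-dependent distortion:

```python
# Sketch: pairing grid points of the regular grid 13 (initial support
# points) with grid points of the distorted grid 14 (target support
# points). The distortion function is purely illustrative.

def build_support_pairs(cols, rows, spacing, distort):
    """Return (initial_point, target_point) pairs, one per tile, from
    which the per-tile perspective transformation can be derived."""
    pairs = []
    for j in range(rows):
        for i in range(cols):
            initial = (i * spacing, j * spacing)
            pairs.append((initial, distort(*initial)))
    return pairs

def distort(x, y):
    """Illustrative radial distortion toward the grid center."""
    cx, cy = 32, 32
    k = 0.002
    r2 = (x - cx) ** 2 + (y - cy) ** 2
    s = 1 + k * r2 / 100
    return (cx + (x - cx) / s, cy + (y - cy) / s)

pairs = build_support_pairs(3, 3, 32, distort)
print(pairs[0], pairs[4])
```

Each pair of corresponding grid points then determines where a tile's support point has to be moved, i.e. it parameterizes the perspective transformation 3 for that tile.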
[0069]
[0070] The position of the grid points 16 of the distorted grid 14 preferably lies within a target region 18, illustrated by a thin dotted line, which in particular is part of the transformation region 17. The target region 18 functions as an envelope of the grid points 16 of the distorted grid 14 or of the target support points of the perspectively transformed tiles 4. The target region 18 is preferably smaller than the transformation region 17 since the perspectively transformed tiles 4 have a lateral extent; this prevents perspectively transformed tiles 4, and thus parts of the perspectively transformed image content 5, from projecting beyond the transformation region 17.
[0071] The position of the grid points 15 of the regular grid 13 is delimited by the initial region 19, illustrated by a solid thin line. The initial region 19 functions as an envelope of the grid points 15 of the regular grid 13 or of the initial support points of the tiles 2. The initial region 19 is preferably smaller than the transformation region 17 since the perspective transformation 3 can displace individual tiles 2 toward the edge of the display 6 or of the transformation region 17, or can increase the lateral extent of individual tiles 2. This reliably ensures that rendered image content 1 divided into a plurality of tiles 2 does not project beyond the display 6 or the transformation region 17 of the display 6 after the perspective transformation 3 and the combination of the individual tiles 2 to form a perspectively transformed image content 5.
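The containment guarantee described above can be sketched as a simple bounds check on the transformed corner points; the region dimensions and names are illustrative assumptions:

```python
# Sketch: verifying that perspectively transformed tile corners stay
# inside the transformation region of the display (illustrative bounds).

def within_region(corner_points, region):
    """region = (x_min, y_min, x_max, y_max); True if every given
    corner point lies inside the region."""
    x_min, y_min, x_max, y_max = region
    return all(x_min <= x <= x_max and y_min <= y <= y_max
               for x, y in corner_points)

region = (0, 0, 800, 480)                      # transformation region
corners_ok = [(10, 10), (790, 10), (10, 470), (790, 470)]
corners_bad = [(10, 10), (810, 10)]            # projects beyond the edge
print(within_region(corners_ok, region), within_region(corners_bad, region))
```

Shrinking the initial region 19 relative to the transformation region 17 is what makes such a check pass for all admissible perspective transformations 3.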
[0072] The foregoing disclosure has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the appended claims and equivalents thereof.
LIST OF REFERENCE SIGNS
[0073] 1 Rendered image content
[0074] 2 Tile
[0075] 3 Perspective transformation
[0076] 4 Perspectively transformed tile
[0077] 5 Transformed image content
[0078] 6 Display
[0079] 7 Traffic regulations
[0080] 8 Warning indication
[0081] 9 Navigation information
[0082] 10 Speed information
[0083] 11 Tile edge
[0084] 12 Arrow
[0085] 13 Regular grid
[0086] 14 Distorted grid
[0087] 15 Grid point of the regular grid
[0088] 16 Grid point of the distorted grid
[0089] 17 Transformation region
[0090] 18 Target region
[0091] 19 Initial region
[0092] 100 Method for the perspective transformation and display of a rendered image content by a head-up display
[0093] 101 Dividing the rendered image content into a plurality of tiles
[0094] 102 Perspectively transforming a plurality of tiles
[0095] 103 Interpolation
[0096] 104 Combining perspectively transformed tiles to form a perspectively transformed image content
[0097] 105 Projecting perspectively transformed image content onto a projection surface by means of a head-up display