Method, head-up display and output system for the perspective transformation and outputting of image content, and vehicle
10460428 · 2019-10-29
CPC classification
G06T3/08 (Physics)
B60K35/29 (Performing Operations; Transporting)
B60K2360/186 (Performing Operations; Transporting)
G02B2027/011 (Physics)
G06T1/20 (Physics)
G06T3/4038 (Physics)
G09G2360/18 (Physics)
H04N7/18 (Electricity)
B60K35/00 (Performing Operations; Transporting)
International classification
H04N7/18 (Electricity)
G09G5/36 (Physics)
G09G5/00 (Physics)
B60K35/00 (Performing Operations; Transporting)
G06T3/40 (Physics)
Abstract
A method, a head-up display and a display system for the perspective transformation and displaying of rendered image content, as well as a corresponding vehicle, are provided. In the perspective transformation and outputting method, the image content to be displayed is subdivided into a plurality of tiles, and the individual tiles are each transformed in perspective. The perspectively transformed tiles are then combined to form a transformed image content, and the image content transformed in perspective is projected onto a projection area of the head-up display or displayed on a display unit.
Claims
1. A method for perspective transformation and display of a rendered image content by a head-up display, comprising the acts of: dividing the rendered image content into a plurality of tiles; transforming individual ones of the plurality of tiles by perspective transformation; storing the perspectively transformed tiles in a buffer memory; combining the perspectively transformed tiles into a transformed image content; and projecting the perspectively transformed image content onto a projection surface assigned to the head-up display, wherein only a portion of the individual ones of the plurality of tiles is perspectively transformed, the portion of the individual ones of the plurality of tiles that is perspectively transformed includes individual ones of the plurality of tiles which are changed relative to tiles of a perspectively transformed image previously stored in the buffer memory, and during the act of combining the individual perspectively transformed tiles, the remaining individual ones of the plurality of tiles which were not changed relative to the perspectively transformed image previously stored in the buffer memory are retrieved from the buffer memory and combined with the portion of the individual ones of the plurality of tiles perspectively transformed to form the perspectively transformed image content.
2. The method as claimed in claim 1, wherein the perspectively transformed image content is displayed on a display unit.
3. The method as claimed in claim 2, wherein the individual ones of the plurality of tiles are transformed such that the perspectively transformed image content is represented by the display unit in an uncompressed manner in at least one first region and in a compressed manner in at least one second region.
4. The method as claimed in claim 1, wherein during the transforming act, each of the individual ones of the plurality of tiles is perspectively transformed with regard to one or more of trapezoid shape, curvature, stretching, compression, rotation, and offset.
5. The method as claimed in claim 4, wherein adjoining tiles of the plurality of tiles have overlapping image contents.
6. The method as claimed in claim 1, wherein the image content is in the form of raster graphics, and individual pixels of the individual ones of the plurality of tiles are displaced by the perspective transformation.
7. The method as claimed in claim 6, wherein during the perspective transforming act, interpolation is performed between at least a portion of adjoining ones of the individual pixels of the individual ones of the plurality of tiles.
8. The method as claimed in claim 6, wherein during the perspective transforming act, the individual ones of the plurality of tiles are perspectively transformed by multiplication by a transformation matrix.
9. The method as claimed in claim 8, wherein the transformation matrix is chosen from a multiplicity of transformation matrices.
10. The method as claimed in claim 1, wherein during the perspective transforming act, the perspective transformation of the individual ones of the plurality of tiles is carried out by a graphics module via a graphics programming interface.
11. The method as claimed in claim 1, wherein the portion of the individual ones of the plurality of tiles perspectively transformed includes the individual ones of the plurality of tiles having elements of the image content changed with respect to image content previously stored in the buffer memory.
12. The method as claimed in claim 1, wherein different portions of the plurality of tiles have different color depths.
13. The method as claimed in claim 1, wherein during the perspective transformation act, the plurality of tiles are mirrored by a point or line mirroring, and during the combining act are combined to form a perspectively transformed, mirrored image content.
14. A head-up display for the perspective transformation and display of a rendered image content, comprising: a control unit including a first module configured to divide the rendered image content into a plurality of tiles and to store the perspectively transformed tiles in a buffer memory, a second module configured to transform individual ones of the plurality of tiles by perspective transformation, and a third module configured to combine the individual perspectively transformed tiles into a transformed image content; and a projection unit configured to project the perspectively transformed image content onto a projection surface assigned to the head-up display, wherein the second module is configured to perspectively transform only a portion of the individual ones of the plurality of tiles, the portion of the individual ones of the plurality of tiles includes individual ones of the plurality of tiles which are changed relative to tiles of a perspectively transformed image previously stored in the buffer memory, and the third module is configured to, during the combining of the individual perspectively transformed tiles, retrieve from the buffer memory previously stored remaining individual ones of the plurality of tiles which were not changed relative to the perspectively transformed image, and combine previously stored remaining individual ones of the plurality of tiles which were not changed relative to the perspectively transformed image with the individual ones of the plurality of tiles perspectively transformed to form the perspectively transformed image content.
15. A vehicle, comprising: a vehicle front window; and a head-up display including a control unit including a first module configured to divide the rendered image content into a plurality of tiles and to store the perspectively transformed tiles in a buffer memory, a second module configured to transform individual ones of the plurality of tiles by perspective transformation, and a third module configured to combine the individual perspectively transformed tiles into a transformed image content; and a projection unit configured to project the perspectively transformed image content onto the vehicle front window, wherein the second module is configured to perspectively transform only a portion of the individual ones of the plurality of tiles, and the portion of the individual ones of the plurality of tiles includes individual ones of the plurality of tiles which are changed relative to tiles of a perspectively transformed image previously stored in the buffer memory, and the third module is configured to, during the combining of the individual perspectively transformed tiles, retrieve from the buffer memory previously stored remaining individual ones of the plurality of tiles which were not changed relative to the perspectively transformed image, and combine previously stored remaining individual ones of the plurality of tiles which were not changed relative to the perspectively transformed image with the individual ones of the plurality of tiles perspectively transformed to form the perspectively transformed image content.
16. A vehicle, comprising: a control unit including a first module configured to divide the rendered image content into a plurality of tiles and to store the perspectively transformed tiles in a buffer memory, a second module configured to transform individual ones of the plurality of tiles by perspective transformation, and a third module configured to combine the individual perspectively transformed tiles into a transformed image content; and an output unit configured to output the perspectively transformed image content on a display unit in the vehicle, wherein the second module is configured to perspectively transform only a portion of the individual ones of the plurality of tiles, and the portion of the individual ones of the plurality of tiles includes individual ones of the plurality of tiles which are changed relative to tiles of a perspectively transformed image previously stored in the buffer memory, and the third module is configured to, during the combining of the individual perspectively transformed tiles, retrieve from the buffer memory previously stored remaining individual ones of the plurality of tiles which were not changed relative to the perspectively transformed image, and combine previously stored remaining individual ones of the plurality of tiles which were not changed relative to the perspectively transformed image with the individual ones of the plurality of tiles perspectively transformed to form the perspectively transformed image content.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE DRAWINGS
(11) The rendered image content 1 is divided into a plurality of regions, in particular tiles 2, preferably rectangular tiles 2, in method step 101. With further preference, the tiles 2 are square. The edges of the tiles 2 are marked by horizontal and vertical lines 11 in the exemplary embodiment shown, for the sake of better clarity.
(12) Preferably, individual tiles 2 enclose individual elements 7, 8, 9, 10 or portions of the elements 7, 8, 9, 10, i.e. that portion of the entire image content 1 which is contained in the individual tiles 2 represents individual elements 7, 8, 9, 10 or portions of the elements 7, 8, 9, 10.
(13) Preferably, different tiles 2 have different color depths, in particular 32, 24, 16, 8 or 4 bits. With further preference, different tiles can also have a color depth that lies between these values, or one that is greater than 32 bits or less than 4 bits. With further preference, the color depth of a tile 2 is only as large as is necessary for a clear and distinct, i.e. true-color, representation of that portion of the image content 1 which is contained in the tile 2. In particular, tiles 2 that contain no elements 7, 8, 9, 10 of the image content 1 have a particularly low color depth, for example 4 bits or 1 bit. Preferably, tiles 2 that contain single-colored elements 7, 8, 9, 10 or single-colored portions of the elements 7, 8, 9, 10, in particular text characters or lettering and/or number characters, have a low color depth, in particular 16, 8 or 4 bits, or a value between these values. The memory requirement of the rendered image content 1 or of the individual tiles 2 can be significantly reduced as a result.
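The effect of per-tile color depth on the memory requirement can be illustrated with a small calculation sketch in Python; the tile size, tile counts and bit depths below are purely illustrative assumptions and do not come from the patent.

```python
# Sketch: memory requirement of one tiled frame when each tile only stores
# the color depth it actually needs (all numbers are illustrative).

TILE_W, TILE_H = 64, 64                      # assumed tile size in pixels
tile_groups = {
    "empty background": {"count": 200, "bits_per_px": 4},
    "text / lettering": {"count": 40,  "bits_per_px": 8},
    "full-color icons": {"count": 16,  "bits_per_px": 32},
}

def frame_bytes(groups):
    """Total size of one frame with per-tile color depth."""
    px = TILE_W * TILE_H
    return sum(g["count"] * px * g["bits_per_px"] // 8 for g in groups.values())

total_tiles = sum(g["count"] for g in tile_groups.values())
uniform = total_tiles * TILE_W * TILE_H * 4  # everything stored at 32 bit
print(f"uniform 32-bit frame:  {uniform / 1024:.0f} KiB")
print(f"per-tile color depth:  {frame_bytes(tile_groups) / 1024:.0f} KiB")
```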
(15) Preferably, the perspective transformation 3 is implemented by a matrix multiplication, in particular the multiplication by a transformation matrix, for instance a 3×3 matrix, in method step 102. In this case, the individual pixels of the tile 2, in particular its four corner points, which are indicated by support vectors, are multiplied by a matrix that generates a perspective transformation, the tile 2 preferably being present as raster graphics. Support vectors are vectors that specify the position of the pixels with respect to an origin, in particular with respect to one of the four corners of the tile 2 or of the image composed of the tiles 2, i.e. the rendered or captured image content 1, 1′, or the midpoint of the tile 2 or of the image composed of the tiles 2, i.e. the rendered or captured image content 1, 1′.
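One way to picture this multiplication is the following Python sketch, in which the four corner support vectors of a tile are written in homogeneous coordinates and multiplied by an assumed 3×3 transformation matrix. The matrix values and the tile size of 64 pixels are illustrative assumptions, not values from the patent.

```python
import numpy as np

# Assumed 3x3 transformation matrix; the non-zero bottom row makes the
# mapping perspective rather than purely affine (values are illustrative).
H = np.array([[1.00, 0.08, 5.0],
              [0.02, 0.95, 3.0],
              [1e-4, 2e-4, 1.0]])

# Support vectors of the four corner points of a 64 x 64 pixel tile,
# relative to the tile origin, written in homogeneous coordinates (x, y, 1).
corners = np.array([[ 0.0,  0.0, 1.0],
                    [64.0,  0.0, 1.0],
                    [64.0, 64.0, 1.0],
                    [ 0.0, 64.0, 1.0]])

# Multiplying each support vector by the matrix and dehomogenizing assigns
# the corner points their new positions (the arrows 12 in the description).
mapped = corners @ H.T
mapped = mapped[:, :2] / mapped[:, 2:3]
print(np.round(mapped, 2))
```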
(16) As a result of the perspective transformation 3, in particular as a result of the multiplication of the support vectors by a transformation matrix, in the exemplary embodiment shown, the four corner points of the tile 2 are assigned a new position in method step 102, indicated by the four dashed arrows 12. In particular, magnitude, i.e. length, and direction, i.e. orientation, of the support vectors of the pixels to be perspectively transformed change in this case.
(17) If the perspective transformation 3 in method step 102 gives rise to gaps between the pixels of the perspectively transformed tile 4, said gaps are preferably closed by an interpolation, in particular by a linear interpolation, in method step 103. In particular, pixels which adjoin and/or lie or are arranged between pixels displaced by the perspective transformation 3 are assigned a value by the interpolation, such that the perspectively transformed tile 4 has a gap-free portion of the transformed image content 5, i.e. is representable without artifacts. Preferably, a smoothing of the perspectively transformed image content is carried out by the linear interpolation in method step 103, such that said image content is representable in a manner free of artifacts by a head-up display or a display unit.
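A common way to obtain such a gap-free result is inverse mapping with bilinear interpolation: each pixel of the transformed tile is traced back through the inverse of the transformation matrix and sampled between the four surrounding source pixels. The sketch below is a minimal NumPy illustration of this idea, not the patent's specific implementation.

```python
import numpy as np

def warp_tile(tile, H, out_shape):
    """Perspectively transform a single-channel tile with matrix H using
    inverse mapping and bilinear interpolation, so that the transformed
    tile contains no gaps between displaced pixels."""
    h_out, w_out = out_shape
    H_inv = np.linalg.inv(H)

    # Homogeneous coordinates of every pixel of the transformed tile.
    ys, xs = np.mgrid[0:h_out, 0:w_out]
    target = np.stack([xs, ys, np.ones_like(xs)], axis=-1).reshape(-1, 3).astype(float)

    # Trace each target pixel back into the source tile.
    source = target @ H_inv.T
    sx = source[:, 0] / source[:, 2]
    sy = source[:, 1] / source[:, 2]

    # Bilinear interpolation between the four neighbouring source pixels.
    x0 = np.clip(np.floor(sx).astype(int), 0, tile.shape[1] - 2)
    y0 = np.clip(np.floor(sy).astype(int), 0, tile.shape[0] - 2)
    fx = np.clip(sx - x0, 0.0, 1.0)
    fy = np.clip(sy - y0, 0.0, 1.0)
    top = (1 - fx) * tile[y0, x0] + fx * tile[y0, x0 + 1]
    bottom = (1 - fx) * tile[y0 + 1, x0] + fx * tile[y0 + 1, x0 + 1]
    return ((1 - fy) * top + fy * bottom).reshape(h_out, w_out)
```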
(19) Individual tiles 2 are transformed by respectively different perspective transformations 3, in particular respectively different transformation matrices, i.e. modularly, during the perspective transformation 3 in method step 102 and are combined to form a perspectively transformed image content 5 in method step 104. In method step 105, the perspectively transformed image content 5 is preferably projected by a head-up display onto a projection surface assigned to the head-up display, or is displayed by a display unit, where it is visually perceived by a user, in particular the driver, as an undistorted image. In method step 105, however, the perspectively transformed image content 5 can also be output, in particular by an output unit, for example as data via a corresponding data interface. Preferably, the perspectively transformed image content 5 is made available to a driver assistance device in this case.
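Taken together, method steps 101 to 105 can be sketched as a small pipeline in which every tile is transformed with its own matrix and then written back into the combined image. The tile size, the callback names (for instance the warp_tile() sketch above) and the simplification of writing each transformed tile back into its original raster cell are assumptions made for illustration; in the patent, the target position follows from the distorted grid described further below.

```python
import numpy as np

TILE = 64  # assumed square tile size in pixels

def divide(image):
    """Method step 101: divide the image content into square tiles."""
    h, w = image.shape
    return {(y, x): image[y:y + TILE, x:x + TILE]
            for y in range(0, h, TILE) for x in range(0, w, TILE)}

def transform_and_combine(image, matrix_for_tile, warp):
    """Method steps 102-104: transform every tile with its own matrix and
    recombine the transformed tiles into one transformed image content."""
    out = np.zeros_like(image, dtype=float)
    for (y, x), tile in divide(image).items():
        H = matrix_for_tile(y, x)                 # per-tile matrix (step 102)
        out[y:y + TILE, x:x + TILE] = warp(tile, H, tile.shape)
    return out                                    # step 105: hand over for output
```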
(20) As a result of the separate, i.e. modular, perspective transformation 3 in method step 102, each perspectively transformed tile 4 preferably acquires a different shape than adjoining perspectively transformed tiles 4. In particular, this results in a closed or continuous, preferably artifact-free, transformed image content 5 that can be represented or displayed as a closed or continuous, preferably artifact-free, image. Preferably, by means of the separate perspective transformations 3, adjoining tiles 2 are perspectively transformed in a manner such that their edges adjoin one another in a closed, i.e. flush, manner. With further preference, for each tile 2 a perspective transformation 3 is chosen from a multiplicity of perspective transformations 3, or the perspective transformation 3 is adapted, in particular separately, for each tile 2, such that after the perspective transformation 3 and combination with adjacent perspectively transformed tiles 4, a closed, artifact-free, perspectively transformed image content 5 results. In particular, the respective perspective transformations 3 are chosen or adapted for individual tiles 2 such that after combination to form the perspectively transformed image content 5, no edges, bends, discontinuities and/or noise occur(s) in the perspectively transformed image content 5 or the representable, perspectively transformed image.
(21) With further preference, the rendered or captured image content 1, 1′ is divided into tiles 2 having overlapping edges in method step 101 before the perspective transformation 3, i.e. a portion of the rendered or captured image content 1, 1′ is represented by more than one tile 2 or is contained in more than one tile 2. In particular, the edges of adjoining tiles 2 overlap. The overlapping edges, i.e. the overlapping region, can have different widths, in particular 1, 2, 3 or more pixels. With further preference, the width of the overlapping edges is chosen or adapted such that during the process of combining the perspectively transformed tiles 4 to form a perspectively transformed image content 5 in method step 104, each region of the perspectively transformed image that is representable or displayable and/or imageable by the perspectively transformed image content 5 is covered by at least one perspectively transformed tile 4. As a result, the edges of the perspectively transformed tiles 4 need not adjoin one another in a closed, i.e. flush, manner, with the result that there is greater freedom in the choice of the appropriate perspective transformations 3 for an individual tile 2, or the adaptation of the perspective transformation 3 to an individual tile 2. In particular, as a result, perspectively transformed tiles 4 have to adjoin one another only in a substantially closed, i.e. flush, manner.
(22) Preferably, during the process of combining to form a perspectively transformed image content 5 in method step 104, the overlapping region or the overlapping edges of the perspectively transformed tiles 4 is or are smoothed, in particular averaged or interpolated, in a manner such that a seamless transition between adjoining perspectively transformed tiles 4 results. Artifacts, in particular edges, bends, discontinuities and/or noise, in the combined perspectively transformed image content 5 are avoided particularly reliably as a result.
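One common way to realize such overlapping tiles and the averaging of the overlap region is weight accumulation: every transformed tile is added into the target image together with a coverage count, and the sum is normalized at the end. The following Python sketch illustrates the idea; the overlap width of 2 pixels, the tile size and the uniform averaging are illustrative assumptions and not taken from the patent.

```python
import numpy as np

TILE, OVERLAP = 64, 2   # assumed tile size and overlap width in pixels

def divide_with_overlap(image):
    """Method step 101 with overlapping edges: each tile extends OVERLAP
    pixels beyond its nominal raster cell (clipped at the image border)."""
    h, w = image.shape
    tiles = {}
    for y in range(0, h, TILE):
        for x in range(0, w, TILE):
            y0, x0 = max(y - OVERLAP, 0), max(x - OVERLAP, 0)
            tiles[(y0, x0)] = image[y0:y + TILE + OVERLAP, x0:x + TILE + OVERLAP]
    return tiles

def combine_with_overlap(warped_tiles, out_shape):
    """Method step 104: accumulate the perspectively transformed tiles and
    average the overlapping pixels, so that no seams or edges remain."""
    acc = np.zeros(out_shape, dtype=float)
    weight = np.zeros(out_shape, dtype=float)
    for (y, x), tile in warped_tiles.items():
        h, w = tile.shape
        acc[y:y + h, x:x + w] += tile
        weight[y:y + h, x:x + w] += 1.0
    weight[weight == 0] = 1.0      # leave uncovered pixels untouched
    return acc / weight            # overlap regions are averaged (smoothed)
```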
(28) Preferably, the perspective transformations 3 shown in the drawings comprise basic transformations such as a trapezoidal distortion, a curvature, a stretching or compression, a rotation and an offset of the tile 2.
(31) Preferably, the perspective transformation 3 of an individual tile 2 is composed of a plurality of basic perspective transformations 3, the results of which are shown in the drawings.
(32) In order to determine the manner in which a rendered or captured image content 1, 1′ has to be perspectively transformed in method step 102 in order to be displayed to the driver of a vehicle as an undistorted image in method step 105, a regular, in particular conventional, grid 13 is distorted, such that the distortion corresponds to an adaptation to the beam path of the head-up display or the imaging properties of the display unit.
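The following Python sketch illustrates the idea of distorting a regular grid: every grid point 15 is moved by a distortion function standing in for the measured beam path of the head-up display or the imaging properties of the display unit. The radial polynomial, its coefficients and the grid dimensions are illustrative assumptions only.

```python
import numpy as np

def regular_grid(width, height, step):
    """Grid points 15 of the regular grid 13, shape (ny, nx, 2)."""
    xs = np.arange(0.0, width + 1, step)
    ys = np.arange(0.0, height + 1, step)
    return np.stack(np.meshgrid(xs, ys), axis=-1)

def distort(points, k=(-2e-7, 1e-13), center=(400.0, 200.0)):
    """Grid points 16 of the distorted grid 14: an assumed radial model
    standing in for the measured beam path / imaging properties."""
    cx, cy = center
    dx, dy = points[..., 0] - cx, points[..., 1] - cy
    r2 = dx ** 2 + dy ** 2
    scale = 1.0 + k[0] * r2 + k[1] * r2 ** 2
    return np.stack([cx + dx * scale, cy + dy * scale], axis=-1)

grid_13 = regular_grid(800, 400, 64)   # regular grid 13
grid_14 = distort(grid_13)             # distorted grid 14
```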
(34) Preferably, the distortion of the regular grid 13, i.e. the conversion into a distorted grid 14, is performed by a perspective transformation 3 or a combination of the basic perspective transformations 3, the results of which are shown in the drawings.
(35) With further preference, the grid points 15 of the regular grid 13 indicate the position of the tiles 2. With further preference, the grid points 16 of the distorted grid 14 indicate the position of the perspectively transformed tiles 4. In particular, the grid points 15 of the regular grid 13 function as initial support points of the tiles 2, preferably as the midpoint thereof or one of their corner points. In particular, the grid points 16 of the distorted grid 14 function as target support points of the perspectively transformed tiles 4, preferably as the midpoint thereof or one of their corner points.
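If the four corners of a regular grid cell are taken as initial support points and the four corners of the corresponding distorted grid cell as target support points, the per-tile 3×3 matrix follows from a standard four-point homography fit. The sketch below uses OpenCV's getPerspectiveTransform as one possible realization; the patent itself only speaks of a graphics module and a graphics programming interface, so the use of OpenCV is an assumption.

```python
import numpy as np
import cv2  # one possible maths backend for the four-point fit (assumption)

def tile_matrices(grid_13, grid_14):
    """One 3x3 transformation matrix per tile: the corners of each regular
    grid cell (initial support points) are mapped onto the corners of the
    corresponding distorted grid cell (target support points)."""
    ny, nx = grid_13.shape[0] - 1, grid_13.shape[1] - 1
    matrices = {}
    for j in range(ny):
        for i in range(nx):
            src = np.float32([grid_13[j, i], grid_13[j, i + 1],
                              grid_13[j + 1, i + 1], grid_13[j + 1, i]])
            dst = np.float32([grid_14[j, i], grid_14[j, i + 1],
                              grid_14[j + 1, i + 1], grid_14[j + 1, i]])
            matrices[(j, i)] = cv2.getPerspectiveTransform(src, dst)
    return matrices
```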
(37) Before the perspective transformation in method step 102, rendered image content would be represented without distortion for example within an initial region 19 on the display 6. In order to ensure a distortion-free visual perception by a user during an imaging of the rendered image content by a head-up display, however, the representation of the rendered image content on the display 6 has to be adapted to the beam path of the head-up display. The correspondingly perspectively transformed image content is then represented in a distorted manner within a target region 18 on the display 6, such that the distortion of the represented image content that is caused by the beam path of the head-up display is precisely compensated for.
(38) By contrast, before the perspective transformation in method step 102, image content captured in a sensor-based manner, for instance by means of one or more cameras having fish-eye lenses, would be represented in a distorted manner for example within the target region 18 on the display 6. In order to ensure a distortion-free visual perception by the user during an imaging of the captured image content by the display 6, however, the captured image content therefore has to be adapted to the imaging properties of the display 6, in particular with regard to the imaging properties of the capturing sensor device. The correspondingly perspectively transformed image content is then represented without distortion within the initial region 19 on the display 6.
(39) In the case of rendered image content, the position of the grid points 16 of the distorted grid 14 illustrated in the drawings is indicated by the target region 18, which functions as an envelope of the grid points 16 of the distorted grid 14 or of the target support points of the perspectively transformed tiles 4.
(40) Correspondingly, in this case, the position of the grid points 15 of the regular grid 13 is indicated by the initial region 19, illustrated by a solid thin line. The initial region 19 functions as an envelope of the grid points 15 of the regular grid 13 or of the initial support points of the tiles 2. The initial region 19 is preferably smaller than the transformation region 17, since the position of individual tiles 2 can be displaced by the perspective transformation 3 toward the edge of the display 6 or toward the edge of the transformation region 17, or the lateral extent of individual tiles 2 can increase. This reliably ensures that rendered image content 1 divided into a plurality of tiles 2 does not project beyond the display 6 or the transformation region 17 of the display 6 after the perspective transformation 3 and combination of the individual tiles 2 to form a perspectively transformed image content 5.
(41) An image content 1′ captured by a camera is illustrated in the left-hand section of the drawing.
(42) For the undistorted representation of the captured image content 1′ on a display, the captured image content 1′ is divided into a plurality of regions, in particular tiles 2, by lines 11 depicted by way of example. In this case, the lines 11 define the edges of the tiles 2.
(43) In the present example, said regions, as illustrated, are not rectangular. In particular, the tiles 2 are not uniform, i.e. they have different shapes and/or sizes depending on their position within the captured image content 1′.
(44) Each of the tiles 2 is perspectively transformed, in particular by a matrix multiplication, preferably using a 3×3 transformation matrix. Preferably, in this case, each of the tiles 2, depending on its position within the captured image content 1′, is perspectively transformed with the aid of a transformation matrix from a group of different transformation matrices.
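A simple way to realize such a choice of a transformation matrix from a group of different transformation matrices is a lookup keyed by the tile's position, for example by its radial distance from the centre of the fish-eye capture. The zoning, the matrices and the zone width in the following sketch are illustrative assumptions, not calibrated values from the patent.

```python
import numpy as np

# Assumed group of precomputed 3x3 transformation matrices, e.g. calibrated
# per radial zone of the fish-eye capture (zone 0 = centre, zone 2 = edge).
MATRIX_GROUP = {
    0: np.eye(3),
    1: np.array([[1.05, 0.00,  -8.0], [0.00, 1.05,  -8.0], [0.0, 0.0, 1.0]]),
    2: np.array([[1.15, 0.02, -20.0], [0.02, 1.15, -20.0], [0.0, 0.0, 1.0]]),
}

def matrix_for_tile(cx, cy, image_center, zone_width=120.0):
    """Choose the matrix for a tile from the group, depending on how far
    the tile centre (cx, cy) lies from the centre of the captured image."""
    r = np.hypot(cx - image_center[0], cy - image_center[1])
    zone = min(int(r // zone_width), max(MATRIX_GROUP))
    return MATRIX_GROUP[zone]
```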
(45) Preferably, during the process of combining the perspectively transformed tiles 4, this results in a perspectively transformed image content 5 that corresponds to a rectified image of the image content 1′ captured in a sensor-based manner. This is illustrated in the right-hand section of the drawing.
(46) Alternatively, however, a partly distorted perspectively transformed image content 5 can also be generated. In particular, by way of example, edge regions of the perspectively transformed image content 5 could be compressed in order to bring about an increased information density, for example an extended field of view, in the edge region.
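Such a deliberate compression of edge regions can be expressed as an additional anisotropic scaling folded into the matrices of the outer tiles. In the following sketch the compression factor and the anchor coordinate are illustrative assumptions.

```python
import numpy as np

def compress_edge_tile(H, edge_factor=0.7, anchor_x=0.0):
    """Fold an additional horizontal compression (edge_factor < 1) into the
    matrix of an edge tile, so that more captured content fits into the same
    strip of the display (extended field of view); anchor_x stays fixed."""
    S = np.array([[edge_factor, 0.0, (1.0 - edge_factor) * anchor_x],
                  [0.0,         1.0, 0.0],
                  [0.0,         0.0, 1.0]])
    return S @ H   # compression is applied after the perspective transform
```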
(47) The method according to the invention can be used, in particular, to fuse image contents 1′ captured separately by a plurality of cameras. Here the plurality of image contents 1′ are in each case divided into a plurality of regions, in particular tiles 2, and the perspective transformations for the individual regions 2 in the plurality of image contents 1′ are chosen in such a way that the resulting perspectively transformed tiles 4 can be combined to form a single, contiguous perspectively transformed image content 5.
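As a sketch of this fusion, the following Python function transforms the tiles of several cameras into one common canvas, re-using warp and combine helpers such as warp_tile() and combine_with_overlap() from the earlier sketches. The data layout of the cameras argument and the assumption that each tile's target position in the canvas is known from calibration are illustrative and not taken from the patent.

```python
def fuse_cameras(cameras, canvas_shape, warp, combine):
    """Fuse image contents 1' captured separately by several cameras: every
    tile is perspectively transformed with its camera- and tile-specific
    matrix, placed at its calibrated position in a common canvas, and all
    tiles are combined into one contiguous transformed image content 5."""
    warped = {}
    for cam in cameras:
        for pos, tile in cam["tiles"].items():
            H = cam["matrices"][pos]                 # per-camera, per-tile matrix
            target = cam["target_positions"][pos]    # placement in the canvas
            warped[target] = warp(tile, H, tile.shape)
    return combine(warped, canvas_shape)             # e.g. combine_with_overlap()
```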
(48) The foregoing disclosure has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the appended claims and equivalents thereof.
LIST OF REFERENCE SIGNS
(49)
1 Rendered image content
1′ Captured image content
2 Tile
3 Perspective transformation
4 Perspectively transformed tile
5 Transformed image content
6 Display
7 Traffic regulations
8 Warning indication
9 Navigation information
10 Speed information
11 Tile edge
12 Arrow
13 Regular grid
14 Distorted grid
15 Grid point of the regular grid
16 Grid point of the distorted grid
17 Transformation region
18 Target region
19 Initial region
100 Method for the perspective transformation and outputting of a rendered or captured image content by a head-up display or an output unit
101 Dividing the rendered or captured image content into a plurality of tiles
102 Perspectively transforming a plurality of tiles
103 Interpolation
104 Combining perspectively transformed tiles to form a perspectively transformed image content
105 Projecting or outputting perspectively transformed image content onto a projection surface by means of a head-up display or by means of an output unit