METHOD FOR GENERATING A PERSPECTIVE-CORRECTED AND/OR TRIMMED OVERLAY FOR AN IMAGING SYSTEM OF A MOTOR VEHICLE
20220262127 · 2022-08-18
Inventors
CPC classification
B60W50/14
PERFORMING OPERATIONS; TRANSPORTING
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
G06V20/58
PHYSICS
B62D15/0295
PERFORMING OPERATIONS; TRANSPORTING
B60W2552/15
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/8086
PERFORMING OPERATIONS; TRANSPORTING
International classification
G06V20/58
PHYSICS
Abstract
The present invention relates to a computer-implemented method for generating a perspective-corrected overlay for an imaging system of a motor vehicle, to a method for generating a trimmed overlay for an imaging system of a motor vehicle, to devices for carrying out respective methods and to motor vehicles comprising an imaging system and such a device.
Claims
1-21. (canceled)
22. A computer-implemented method for generating a perspective-corrected overlay or trimmed overlay for a 2D image representing an environment of a vehicle for an imaging system of the vehicle, comprising: receiving 3D data of at least one part of the vehicle's environment represented in the 2D image; determining, based at least in part on a steering angle of the vehicle, a predicted path of travel of the vehicle's wheels which when displayed as an overlay in the 2D image forms together with the 2D image a combined 2D image; obtaining, based at least in part on the predicted path of travel, an adapted path of travel which corresponds to a perspective-corrected sub-section or trimmed sub-section of the predicted path of travel and which when displayed as the overlay in the 2D image appears to follow a surface topography of the environment in the 2D image and appears to terminate at an obstacle representing at least one boundary that is impassable for the vehicle.
23. The computer-implemented method of claim 22, wherein obtaining the adapted path of travel further comprises: fragmenting the perspective-corrected or trimmed sub-section of the predicted path of travel into at least two fragments; and determining the adapted path of travel based at least in part on the 3D data associated via the 2D image with at least one fragment, wherein the step of fragmenting comprises dividing the perspective-corrected or trimmed sub-section of the predicted path of travel into at least two fragments being equally-distributed across or along the predicted path of travel and being rectangular-shaped.
24. The computer-implemented method of claim 23, wherein determining the adapted path of travel comprises: generating the combined 2D image by combining the 2D image and the predicted path of travel; and determining, for each fragment, based at least in part on the combined 2D image, a collection of 3D data corresponding to a part of the environment represented in the combined 2D image that is enclosed by boundaries of the fragment.
25. The computer-implemented method of claim 24, wherein determining the adapted path of travel comprises: determining, for each fragment, based at least on the collection of 3D data, an averaged value of a certain property of a part of the environment corresponding to the collection of 3D data of that fragment; and adapting, for each fragment, a shape and location of the fragment in a coordinate system of the 2D image and of the combined 2D image, based at least in part on the averaged value, for creating a perspective-corrected appearance of the fragment when displayed as an overlay in the 2D image.
26. The computer-implemented method of claim 25, wherein determining the adapted path of travel comprises: adapting, for each fragment, a hue of a color of the fragment based on (i) the averaged value, (ii) the location of the fragment within the adapted path of travel, and (iii) a distance between the fragment and the vehicle in the 2D image and in the combined 2D image; and repeating the adapting steps for each fragment until all fragments have been adapted so that the adapted path of travel is obtained.
27. The computer-implemented method of claim 25, wherein determining the adapted path of travel further comprises: determining, for each fragment, a normal vector associated with the part of the environment corresponding to the collection of 3D data of that fragment, based on the collection of 3D data and the averaged value of that fragment, and calculating an angle between the normal vector and a reference vector, the reference vector pointing in a direction corresponding to a light ray emanating from a light source, wherein (i) the light source is a virtual light source, (ii) the light ray emanating from the light source is a directional light ray, (iii) the light source has a direction, (iv) the light source has a position above a scene shown in the 2D image, or (v) the light ray has a direction aligned to a sunlight direction at a time of processing.
28. The computer-implemented method according to claim 25, wherein determining the adapted path of travel further comprises: adapting, for each fragment, a brightness of a color of the fragment based at least in part on the averaged value and within a range bounded by a minimum brightness value and a maximum brightness value.
29. The computer-implemented method of claim 22, wherein obtaining the adapted path of travel further comprises, determining a start point of the perspective-corrected sub-section or trimmed sub-section of the predicted path of travel close to the vehicle and an end point of the perspective-corrected sub-section or trimmed sub-section of the predicted path of travel distant to the vehicle, based at least on the 3D data, the predicted path of travel and auxiliary data related to the environment, wherein (a) the start point of the perspective-corrected sub-section or trimmed sub-section of the predicted path of travel corresponds to the start point of the predicted path of travel, (b) the 3D data and the auxiliary data indicates obstacles in the environment intersecting with the predicted path of travel, (c) the end point of the perspective-corrected sub-section or trimmed sub-section of the predicted path of travel is determined based on a location of a first obstacle along the predicted path of travel from near to distant intersecting with the predicted path of travel at the location of the first obstacle intersecting with the predicted path of travel, (d) an obstacle is identified as intersecting with the predicted path of travel if the obstacle has at least one expansion, at least one height, at least one orientation or at least one location exceeding at least one predefined threshold value, and (e) the ground's slope, the angle of driving slope and/or the vehicle's ground clearance is taken into account for identifying an intersecting obstacle.
30. The computer-implemented method of claim 22, wherein obtaining the adapted path of travel further comprises the step of adapting a determined sub-section of the predicted path of travel based on object or scene classification relying on the 2D image data, the 3D data and/or the auxiliary data.
31. The computer-implemented method of claim 25, wherein the perspective-corrected sub-section or trimmed sub-section of the predicted path of travel is identical to the entire predicted path of travel; and the certain property of the part of the environment corresponding to the collection of 3D data comprises a slope with respect to a reference slope, an orientation with respect to a reference orientation, a height with respect to a reference height, a location with respect to a reference location, and/or an expansion of the part of the environment.
32. The computer-implemented method of claim 25, wherein obtaining the adapted path of travel further comprises: determining a start point of the perspective-corrected sub-section or trimmed sub-section of the predicted path of travel close to the vehicle and an end point of the perspective-corrected sub-section or trimmed sub-section of the predicted path of travel distant to the vehicle, based at least on the 3D data and/or the predicted path of travel; wherein the 3D data indicates obstacles in the environment intersecting with the predicted path of travel and the end-point of the perspective-corrected sub-section or trimmed sub-section of the predicted path of travel is determined based on a location of a first obstacle along the predicted path of travel from near to distant intersecting with the predicted path of travel at the location of the first obstacle intersecting with the predicted path of travel.
33. The computer-implemented method of claim 32, wherein the start point of the perspective-corrected sub-section or trimmed sub-section of the predicted path of travel corresponds to the start point of the predicted path of travel.
34. The computer-implemented method of claim 32, wherein an obstacle is identified as intersecting with the predicted path of travel if the obstacle has at least one expansion, at least one height, at least one orientation and/or at least one location exceeding at least one predefined threshold value concerning, respectively, the expansion, the height, the orientation and the location.
35. The computer-implemented method of claim 32, wherein, the ground's slope, the angle of driving slope and/or the vehicle's ground clearance is taken into account for identifying an intersecting obstacle; and obtaining the adapted path of travel further comprises the step of adapting a determined sub-section of the predicted path of travel based on object and/or scene classification relying on the 2D image data, the 3D data and/or the auxiliary data.
36. The computer-implemented method of claim 22, further comprising: displaying the 2D image with the adapted path of travel as overlay on at least one display unit of the vehicle, wherein the display unit comprises at least one monitor, at least one head-up display, at least one projector and/or at least one touch display; and displaying further at least one visualization of at least one end point of the adapted path of travel, the visualization being in the form of at least one marking element which (a) hugs the contour of the respective obstacle which defines the end of the adapted path of travel and (b) is aligned with the most distant fragment of the adapted path of travel.
37. The computer-implemented method of claim 22, further comprising receiving the 2D image data and auxiliary data, wherein (i) the 2D image is represented by the 2D image data, (ii) the 2D image data is sampled 2D image data, (iii) the 3D data is sampled 3D data, (iv) the auxiliary data is sampled auxiliary data, (v) the 2D image data is received from at least one first data source, (vi) the 3D data is received from at least one second data source, (vii) the auxiliary data is received from at least one third data source, (viii) the 2D image data is associated with the respective 3D data, and each sample of the sampled 2D image data is associated with at least one sample of the sampled 3D data, and (ix) at least one part of the auxiliary data is based on the 3D data or is identical to at least one part of the 3D data.
38. The computer-implemented method of claim 37, wherein, the first data source, the second data source and the third data source include at least one time-of-flight (TOF) sensor, at least one LIDAR sensor, at least one ultrasonic sensor, at least one radar sensor, at least one camera sensor, at least one stereo camera, or at least two camera sensors arranged for stereo vision, and/or at least two of the first, second and third data sources are at least partly identical.
39. The computer-implemented method of claim 22, wherein the at least one part of the vehicle's environment represented in the 2D image is an environment to the rear or the front of the vehicle; and the steering angle is a current steering angle.
40. A data processing device comprising means for carrying out the steps of the method of claim 22.
41. A motor vehicle comprising at least one imaging system and a data processing device according to claim 40.
42. The motor vehicle according to claim 41, wherein the motor vehicle further comprises (a) at least one time-of-flight (TOF) sensor, (b) at least one LIDAR sensor, (c) at least one ultrasonic sensor, (d) at least one radar sensor, (e) at least one camera sensor adapted to evaluate the data of the camera sensor by means of at least one structure from motion approach, at least one scene classification approach and/or at least one object classification approach, (f) at least one stereo camera, (g) at least two camera sensors arranged for stereo vision and/or (h) at least one display unit.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0053] The following drawings show aspects of the invention in connection with some exemplary illustrations in order to improve the understanding of the invention, wherein
[0054]
[0055]
[0056]
[0057]
[0058]
[0059]
[0060]
[0061]
DETAILED DESCRIPTION
[0062]
[0063] In a step 101, 3D data of at least one part of the vehicle's environment represented in the 2D image is received. The environment is especially the environment to the rear of the vehicle. Displaying the 2D image to the driver while driving in reverse allows the driver to control the driving operation without looking back and, furthermore, to be particularly aware of the part of the environment that the vehicle body obscures from the driver's field of view.
[0064] In a step 103 at least one predicted path of travel of the vehicle's wheels is determined based on the steering angle, which preferably is the current steering angle of the vehicle. This predicted path of travel can be conventionally used as an overlay in the 2D image. It is well known to the person skilled in the art how to determine such a conventional predicted path of travel and, therefore, it is not necessary to explain it in further detail here.
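Purely by way of illustration, and not as part of the claimed method, the following Python sketch shows one conventional approach: a kinematic bicycle model in which a constant steering angle yields circular arcs for the two wheel paths. The function name and the default parameters (wheelbase, track width, sampling step) are assumptions of this sketch.

```python
import math

def predict_wheel_paths(steering_angle_rad, wheelbase_m=2.7, track_m=1.6,
                        length_m=8.0, step_m=0.25):
    """Kinematic bicycle model sketch: a constant steering angle yields a
    signed turn radius R = wheelbase / tan(angle); the left and right wheel
    paths are concentric arcs offset by half the track width."""
    samples = int(length_m / step_m) + 1
    if abs(steering_angle_rad) < 1e-6:  # straight-line limit, no curvature
        left = [(-track_m / 2.0, i * step_m) for i in range(samples)]
        right = [(track_m / 2.0, i * step_m) for i in range(samples)]
        return left, right
    radius = wheelbase_m / math.tan(steering_angle_rad)  # signed turn radius
    paths = []
    for offset in (-track_m / 2.0, track_m / 2.0):  # left, right wheel
        path = []
        for i in range(samples):
            phi = (i * step_m) / radius  # arc angle travelled so far
            x = radius + (offset - radius) * math.cos(phi)
            y = (radius - offset) * math.sin(phi)
            path.append((x, y))
        paths.append(path)
    return paths[0], paths[1]
```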
[0065] When the predicted path of travel is displayed as overlay in the 2D image it forms together with the 2D image a combined 2D image.
[0066]
[0067] It is noted in general that the combined 2D image does not necessarily have to be actually generated for the method to operate properly. It might also be sufficient that the relationship between the predicted path of travel and the 2D image (and/or the respective 2D image data) is known or obtainable.
[0068] Based on the predicted path of travel, the 2D image data (which represents the 2D image) and the 3D data, an adapted path of travel is obtained. This adapted path of travel corresponds to at least one perspective-corrected sub-section of the predicted path of travel.
[0069] Obtaining the adapted path of travel comprises, in a step 105, determining the sub-section of the predicted path of travel based on the predicted path of travel and/or the 3D data. The start point of the sub-section of the predicted path of travel might correspond to the start point of the predicted path of travel. The end point of the sub-section might be determined based on the location of the first obstacle along the predicted path of travel, from near to distant, intersecting with the predicted path of travel. In this regard, the 3D data indicates obstacles in the environment possibly intersecting with the predicted path of travel, and an obstacle is identified as intersecting with the predicted path of travel if the obstacle has at least one expansion and/or at least one location exceeding at least one predefined threshold value concerning, respectively, the expansion and the location. This means that the 3D data might indicate many obstacles, but only some of them, or even none of them, are actually intersecting, depending on, e.g., the threshold values and other definitions in this regard.
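As a non-limiting sketch of this near-to-distant search, the following function walks the sampled path and stops at the first obstacle whose height and expansion exceed assumed threshold values; the dictionary-based obstacle representation is an assumption made only for this illustration.

```python
def find_path_end_index(path_points, obstacles,
                        height_threshold_m=0.15, extent_threshold_m=0.10):
    """Walk the path samples from near to distant and return the index at
    which the first 'too large' obstacle intersects the path; the path is
    then truncated at that index. `obstacles` is assumed to be a list of
    dicts with 'position' (x, y), 'height_m' and 'extent_m' entries
    derived from the 3D data."""
    for index, (px, py) in enumerate(path_points):
        for obstacle in obstacles:
            ox, oy = obstacle["position"]
            dist = ((ox - px) ** 2 + (oy - py) ** 2) ** 0.5
            intersects = dist < obstacle["extent_m"]
            exceeds = (obstacle["height_m"] > height_threshold_m
                       and obstacle["extent_m"] > extent_threshold_m)
            if intersects and exceeds:
                return index  # end point of the sub-section
    return len(path_points)  # no intersecting obstacle: keep the whole path
```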
[0070] Of course, if no obstacle is identified as intersecting with the predicted path of travel, the sub-section of the predicted path of travel might comprise the entire predicted path of travel, i.e. be identical to it. However, determining the sub-section allows the finally obtained adapted path of travel to terminate at obstacles which are not passable by the vehicle, e.g. because they are too large. This step can also be regarded as optional, since the overlay would otherwise still appear to hug the large obstacle. However, determining an appropriate sub-section might improve the driver's understanding that the obstacle represents an impassable region.
[0071] Obtaining the adapted path of travel comprises in a step 107 fragmenting the sub-section of the predicted path of travel. This in turn comprises dividing the (sub-section of the) predicted path of travel into fragments. In this embodiment the fragments are equally distributed along the (sub-section of the) predicted path of travel and are rectangular-shaped.
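A minimal sketch of such a fragmenting step, assuming the left and right wheel paths are sampled with equal counts (as in the sketch further above), might look as follows:

```python
def fragment_path(left, right, num_fragments=12):
    """Divide the band between the sampled left and right wheel paths into
    equally distributed, rectangle-like fragments, each described by its
    four ground-plane corner points."""
    assert len(left) == len(right), "wheel paths must be sampled alike"
    per_fragment = max(1, (len(left) - 1) // num_fragments)
    fragments = []
    for f in range(num_fragments):
        near = f * per_fragment
        far = min(near + per_fragment, len(left) - 1)
        # Corner order: near-left, near-right, far-right, far-left.
        fragments.append([left[near], right[near], right[far], left[far]])
    return fragments
```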
[0072]
[0073] Obtaining the adapted path of travel further comprises, in a step 109, determining the adapted path of travel based at least on the 3D data associated, via the 2D image data of the 2D image, with each fragment. This is accomplished in a step 109a (which might be regarded as a sub-step of step 109) by determining, for each fragment, the collection of 3D data corresponding to the part of the environment represented in the 2D image (or in the combined 2D image) that is enclosed by the boundaries of the fragment.
[0074] Thus, once the area in the 2D image enclosed by the boundaries of the fragment is determined, it is for example possible to determine the collection of 3D data since the 2D image data (representing the 2D image) is associated with the respective 3D data.
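One possible realization of this association, assuming (for this sketch only) that the 3D data is stored as a per-pixel point map aligned with the 2D image, and using matplotlib's polygon containment test:

```python
import numpy as np
from matplotlib.path import Path

def fragment_pixel_mask(corners_px, height, width):
    """Boolean mask of all image pixels enclosed by the four projected
    corner points (in pixel coordinates) of one fragment."""
    ys, xs = np.mgrid[0:height, 0:width]
    pixels = np.column_stack([xs.ravel(), ys.ravel()])
    inside = Path(corners_px).contains_points(pixels)
    return inside.reshape(height, width)

def collect_fragment_3d(points_3d, mask):
    """points_3d: (H, W, 3) array of 3D coordinates associated pixel-wise
    with the 2D image; returns the (N, 3) collection for one fragment."""
    return points_3d[mask]
```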
[0075] In a step 109b (which might be regarded as a sub-step of step 109), for each fragment, at least one averaged value of, respectively, a slope and a height (i.e. certain properties) of the part of the environment corresponding to the collection of 3D data of that fragment is determined based at least on the collection of 3D data. In other words, in this embodiment a local averaged value of each of the two properties (slope and height) of the part of the environment which is covered by the 3D data (hence covered by the fragment in the 2D image/combined 2D image) is determined.
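A minimal sketch of this local averaging, assuming a least-squares plane fit supplies the slope and the mean z-coordinate supplies the height:

```python
import numpy as np

def fragment_slope_and_height(points_3d):
    """points_3d: (N, 3) collection of 3D data for one fragment.
    Height is taken as the mean z-coordinate; slope as the gradient
    magnitude of a least-squares plane fit z = a*x + b*y + c."""
    design = np.column_stack([points_3d[:, 0], points_3d[:, 1],
                              np.ones(len(points_3d))])
    (a, b, c), *_ = np.linalg.lstsq(design, points_3d[:, 2], rcond=None)
    mean_height = float(points_3d[:, 2].mean())
    slope = float(np.hypot(a, b))  # 0 for a level surface
    return slope, mean_height, (a, b, c)
```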
[0076] In a step 109c (which might be regarded as a sub-step of step 109), for each fragment, the shape and/or the location of the fragment is adapted for creating the perspective-corrected appearance of the fragment when displayed as an overlay in the 2D image. In this embodiment, that adaptation is based on the averaged values, but it is also possible, alternatively or in addition, to incorporate for example the 2D data, the location of the fragment or the extension of the fragment into the process of adapting the fragment. This adapting, in other words, basically means that the 2D style of the fragment, which can be regarded as part of the predicted path of travel, is adapted such that the fragment, when displayed in the 2D image as an overlay, appears to follow or hug the contour (i.e. the topography) of the environment in that area of the 2D image.
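This adapting can, for example, be sketched as lifting the fragment's ground-plane corners to the locally averaged surface height and reprojecting them with a standard pinhole camera model; the intrinsics K and extrinsics R, t below are assumed to be known and are not specified by the method itself:

```python
import numpy as np

def project_fragment(corners_ground, mean_height, K, R, t):
    """Lift each ground-plane corner (x, y) of a fragment to the locally
    averaged surface height and reproject it into the 2D image with a
    pinhole model: u ~ K (R p + t). K, R and t are the (assumed known)
    camera intrinsics and extrinsics."""
    corners_px = []
    for x, y in corners_ground:
        p_cam = R @ np.array([x, y, mean_height]) + t
        u, v, w = K @ p_cam
        corners_px.append((u / w, v / w))
    return corners_px
```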
[0077] In a step 109d (which might be regarded as a sub-step of step 109), for each fragment, at least one normal vector associated with the part of the environment corresponding to the collection of 3D data of that fragment is determined. This determination is based on the collection of 3D data of that fragment and/or on the averaged value (determined in step 109b). In other words, if for example the slope of the part of the environment represented by the collection of 3D data (i.e. covered by the fragment in the 2D image) is determined, the normal vector can be calculated based on that value.
[0078] Still in step 109d, at least one angle between that normal vector and at least one reference vector is then calculated. For example, the reference vector might correspond to light rays emanating from a virtual light source. The light rays might, for example, be directional, i.e. the direction of the light does not depend on the position of the illuminated region.
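As a sketch under the plane-fit assumption above, the normal vector is proportional to (-a, -b, 1) and the angle to an assumed directional light ray follows from the normalized dot product; the default light direction is an illustrative assumption:

```python
import numpy as np

def angle_to_light(a, b, light_dir=(0.3, 0.2, -1.0)):
    """The (un-normalised) normal of the fitted plane z = a*x + b*y + c is
    (-a, -b, 1). Returns the angle between that normal and a directional
    light ray, together with its cosine."""
    normal = np.array([-a, -b, 1.0])
    normal /= np.linalg.norm(normal)
    to_light = -np.asarray(light_dir, dtype=float)  # points towards the light
    to_light /= np.linalg.norm(to_light)
    cos_angle = float(np.clip(normal @ to_light, -1.0, 1.0))
    return np.arccos(cos_angle), cos_angle
```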
[0079]
[0080] In a step 109e (which might be regarded as a sub-step of step 109), for each fragment, the brightness of the color of the fragment is adapted based on the cosine of the angle calculated in step 109d.
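A hedged sketch of this brightness adaptation, with the base value and the bounds of the brightness range chosen arbitrarily for illustration (cf. the minimum/maximum brightness values of claim 28):

```python
def fragment_brightness(cos_angle, base=0.8, min_b=0.35, max_b=1.0):
    """Lambert-style shading: brightness scales with the cosine of the
    angle between surface normal and light ray, clamped to a bounded
    range."""
    return max(min_b, min(max_b, base * max(0.0, cos_angle)))
```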
[0081] In a step 109f (which might be regarded as a sub-step of step 109), for each fragment, the hue of the color of the fragment is adapted based on the location of the fragment within the adapted path of travel. In the present embodiment this might be equivalent to setting the hue of the color of the fragment based on the distance between the fragment and the vehicle in the 2D image. Even if the vehicle is not shown in the 2D image, the person skilled in the art will understand that in such a case the distance is calculated based on the hypothetical position of the vehicle located outside of the 2D image.
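For example, the hue might be ramped from green for near fragments to red for distant ones; the particular color ramp and maximum distance below are illustrative assumptions:

```python
import colorsys

def fragment_hue_rgb(distance_m, max_distance_m=10.0):
    """Map the fragment's distance from the vehicle to a hue ramp from
    green (near) to red (far) and return the resulting RGB colour."""
    t = min(1.0, max(0.0, distance_m / max_distance_m))
    hue_deg = (1.0 - t) * 120.0  # 120 deg = green, 0 deg = red
    return colorsys.hsv_to_rgb(hue_deg / 360.0, 1.0, 1.0)
```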
[0082] The steps 109a-109f are repeated for each fragment until all fragments have been processed and adapted, whereupon the adapted path of travel is obtained. In other words, each fragment is adapted (e.g. its shape, the hue of its color and the brightness of its color) so that the predicted path of travel is finally transformed into the adapted path of travel.
[0083] The adapted path of travel in this embodiment corresponds to the entirety of the adapted fragments. If the adapted path of travel is displayed as an overlay in the 2D image, it appears to follow, at least area by area, at least one surface topography of the environment in the 2D image, and it also appears to terminate at an obstacle representing a boundary of the region passable for the vehicle.
[0084] In a step 111 the 2D image is displayed with the adapted path of travel as an overlay. In addition, at least one visualization of the end of the adapted path of travel in the form of at least one line-shaped marking element might also be displayed. The marking element might then hug the contour of the respective obstacle which defines the end of the adapted path of travel. The marking element might not be displayed if there is no obstacle present which intersects with the predicted path of travel.
[0085]
[0086] Further, in
[0087]
[0088]
[0089]
[0090] The method 300 comprises the steps 301, 303, 305 and 307 which basically correspond to the steps 101, 103, 105 and 111, respectively, of the method 100 according to the first aspect of the invention described above with reference to the flow chart of
[0091] It is, therefore, not required to explain all these steps here again but reference is made to the respective passages provided above with respect to method 100 which apply here mutatis mutandis, too.
[0092] The method of flow chart 300 thus determines the adapted path of travel based on a predicted path of travel and 3D data of the environment of the vehicle, with essentially the same result as the method of flow chart 100 above, but without adapting the predicted path of travel such that it appears to follow the topography.
[0093] The features disclosed in the claims, the specification and the drawings may be essential for different embodiments of the claimed invention, both separately and in any combination with each other.
REFERENCE SIGNS
[0094] 100 Flow chart
[0095] 101 Step
[0096] 103 Step
[0097] 105 Step
[0098] 107 Step
[0099] 109 Step
[0100] 109a Step
[0101] 109b Step
[0102] 109c Step
[0103] 109d Step
[0104] 109e Step
[0105] 109f Step
[0106] 111 Step
[0107] 201 Combined 2D image
[0108] 203 2D image
[0109] 205 Path of travel
[0110] 207 Combined 2D image
[0111] 209 2D image
[0112] 211 Path of travel
[0113] 213 Fragment
[0114] 215a, 215b Area
[0115] 217a, 217b Normal vector
[0116] 219a, 219b Light ray
[0117] 221a, 221b Angle
[0118] 223, 223′, 223″ 2D image
[0119] 225, 225′, 225″ Path of travel
[0120] 227 Bend
[0121] 229a, 229b, 229c, 229d Section
[0122] 229a′, 229b′, 229c′ Section
[0123] 229a″, 229b″ Section
[0124] 231′, 231″ Marking element
[0125] 233′ Curb
[0126] 235″ Wall
[0127] 300 Flow chart
[0128] 301 Step
[0129] 303 Step
[0130] 305 Step
[0131] 307 Step