Method and Vehicle Control System for Producing Images of a Surroundings Model, and Corresponding Vehicle
20190311523 · 2019-10-10
Assignee
Inventors
CPC classification
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
G06T3/08
PHYSICS
G08G1/168
PHYSICS
G06T7/80
PHYSICS
B60R2300/304
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/20
PERFORMING OPERATIONS; TRANSPORTING
G05D1/0251
PHYSICS
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
International classification
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
B60K35/00
PERFORMING OPERATIONS; TRANSPORTING
G06T7/80
PHYSICS
Abstract
The invention relates to a method for producing images of a stored three-dimensional model (30) of the surroundings of a vehicle (20), said images having been corrected for perspective. A camera picture (32) is produced by a camera device (21) of the vehicle, and the produced camera picture (32) is projected onto a projection surface (31) in the surroundings model. A region (33) relevant for driving is marked in said stored three-dimensional surroundings model (30), and this marked region (33) is projected onto a corresponding projection surface area (35) of the projection surface (31). An image of the projection surface (31) with the region (33) projected onto the projection surface area (35) is produced and output by means of a virtual camera (72) that can move freely in the surroundings model (30).
Claims
1. A method for producing images of a stored three-dimensional model (30) of the surroundings of a vehicle (20), said images having been corrected for perspective, having the steps of:
producing (S1) at least one camera picture (32) by a camera device (21) of the vehicle (20);
projecting (S2) the produced camera picture (32) onto a projection surface (31) in the stored three-dimensional model (30) of the surroundings of the vehicle (20);
marking (S3) a region (33) relevant for driving in the stored three-dimensional surroundings model (30);
projecting (S4) the marked region (33) onto a corresponding projection surface area (35) of the projection surface (31) in order to identify an image region of the camera picture (32) projected onto the projection surface (31) corresponding to the marked region (33) as having been corrected for perspective;
producing (S5) an image of the projection surface (31) with the region (33) projected onto the projection surface area (35) by means of a virtual camera (72) that can move freely in the surroundings model (30); and
outputting (S6) the produced image.
2. The method according to claim 1, wherein the marked region is projected onto the projection surface (31), in that a respective point (37) of the marked region (33) is imaged onto an intersection (39) of a corresponding connecting line (38) with the projection surface (31), wherein the connecting line (38) connects this point (37) of the marked region to a predefined reference point (34) of the surroundings model (30).
3. The method according to claim 2, wherein the camera device (21) has at least one vehicle camera (21a-21d), and wherein the reference point (34) of the surroundings model (30) corresponds to a spatial position of that vehicle camera which produces the camera picture (32) imaged onto the projection surface (31).
4. The method according to claim 1, wherein a camera position and/or camera alignment of the freely movable virtual camera (72) is/are determined on the basis of sensor data produced by sensors (93) of the vehicle (20) and/or captured parameters of the vehicle (20).
5. The method according to claim 1, wherein a driver assistance system (94) controls a function of the vehicle (20) on the basis of the output images.
6. The method according to claim 1, wherein the output images are displayed to a driver of the vehicle (20) on a display apparatus.
7. The method according to claim 1, wherein the region relevant for driving is marked in the stored three-dimensional surroundings model (30) on the basis of sensor data produced by sensors (93) of the vehicle (20) and/or captured parameters of the vehicle (20).
8. A vehicle control system (90) of a vehicle (20) for producing images of a stored three-dimensional model (30) of the surroundings of the vehicle (20), said images having been corrected for perspective, having:
a camera device (21) which is configured to produce at least one camera picture (32);
a computing device (91) which is configured: to project the produced camera picture (32) onto a projection surface (31) in the stored three-dimensional model (30) of the surroundings of the vehicle (20), to mark a region (33) relevant for driving in the stored three-dimensional surroundings model (30), to project the marked region onto a corresponding projection surface area (35) of the projection surface (31), to identify an image region of the camera picture (32) projected onto the projection surface (31) corresponding to the marked region (33) as having been corrected for perspective, and to produce an image of the projection surface (31) with the region (33) projected onto the projection surface area (35) by means of a virtual camera (72) that can move freely in the surroundings model (30); and
an output device (92) which is configured to output the produced image.
9. The vehicle control system (90) according to claim 8, wherein the computing device (91) is further configured to project the marked region (33) onto the projection surface (31), in that it images a respective point (37) of the marked region (33) onto an intersection (39) of a corresponding connecting line (38) with the projection surface (31), wherein the connecting line (38) connects this point (37) of the marked region (33) to a predefined reference point (34) of the surroundings model (30).
10. The vehicle control system (90) according to claim 9, wherein the camera device (21) has at least one vehicle camera (21a-21d), and wherein the reference point (34) of the surroundings model (30) corresponds to a spatial position of that vehicle camera which produces the camera picture (32) imaged onto the projection surface (31).
11. The vehicle control system (90) according to claim 8, having at least one sensor (93) of the vehicle (20) for producing sensor data and/or for capturing parameters of the vehicle (20); wherein the computing device (91) is further configured to mark the region relevant for driving in the stored three-dimensional surroundings model (30) on the basis of the produced sensor data and/or on the basis of the captured parameters of the vehicle (20).
12. The vehicle control system (90) according to claim 11, wherein the computing device (91) is further configured to determine a camera position and/or camera alignment of the freely movable virtual camera (72) on the basis of the produced sensor data and/or on the basis of the captured parameters of the vehicle (20).
13. The vehicle control system (90) according to claim 8, having a driver assistance system (94) which is configured to control a function of the vehicle (20) on the basis of the output images.
14. The vehicle control system (90) according to claim 8, wherein the output device (92) comprises a display apparatus, on which the output images can be displayed to a driver of the vehicle (20).
15. A vehicle (20) having a vehicle control system (90) according to claim 8.
Description
[0027] The present invention is explained in greater detail below on the basis of the embodiment examples indicated in the schematic figures of the drawings.
[0038] In all of the figures, the same reference numerals are used for identical elements and apparatuses, or for elements and apparatuses having similar functions. The method steps are numbered for reasons of clarity. This is not, in general, intended to imply a specific temporal sequence. In particular, multiple method steps can be carried out simultaneously. Furthermore, the various embodiments can be combined with one another as desired, insofar as this is meaningful.
[0040] In
[0041] In a first method step S1, at least one camera picture of the surroundings of the vehicle 20 is produced by the camera device 21 of the vehicle 20.
[0042] A three-dimensional model 30 of the surroundings of the vehicle 20 is further provided, which is schematically illustrated in
[0043] In a second method step S2, the produced camera picture 32 is projected onto the projection surface 31.
[0044] In a further method step S3, a region 33 relevant for driving is marked in the stored three-dimensional surroundings model. The marking S3 is preferably carried out automatically, in particular on the basis of sensor data and/or parameters of the vehicle, which are preferably produced or established by at least one sensor of the vehicle 20. For example, a region 33 relevant for driving which corresponds to an expected trajectory of the vehicle 20 is determined on the basis of an angular position of the wheels of the vehicle measured by the at least one vehicle sensor, and is marked in the surroundings model. The region 33 relevant for driving can be highlighted in color in the surroundings model. A reference point 34 of the surroundings model 30 is further identified, which corresponds to the spatial position of that vehicle camera which produces the camera picture 32 imaged onto the projection surface 31.
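The determination of an expected trajectory from the measured wheel angle, as described above, can be sketched with a simple kinematic bicycle model. This Python sketch is purely illustrative and not part of the patent; the wheelbase, arc length, and coordinate convention (x forward, y left, z up, ground at z = 0) are assumptions.

```python
import numpy as np

def expected_trajectory(steering_angle_rad, wheelbase_m=2.7,
                        arc_length_m=6.0, n_points=20):
    """Sample the expected driving path on the ground plane (z = 0).

    Kinematic bicycle model: at a fixed steering angle delta the
    vehicle follows a circular arc of radius L / tan(delta).
    Returns an (n_points, 3) array in vehicle coordinates; these
    points would form the region relevant for driving (33) to be
    marked in the surroundings model.
    """
    s = np.linspace(0.0, arc_length_m, n_points)      # distance travelled
    if abs(steering_angle_rad) < 1e-6:                # wheels straight
        x, y = s, np.zeros_like(s)
    else:
        r = wheelbase_m / np.tan(steering_angle_rad)  # turn radius
        x = r * np.sin(s / r)
        y = r * (1.0 - np.cos(s / r))
    return np.stack([x, y, np.zeros_like(s)], axis=1)
```

With the wheels straight, the sampled region runs straight ahead along the x axis; a positive steering angle bends it to the left (positive y).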
[0045] With respect to this reference point 34, the marked region 33 relevant for driving is projected onto a corresponding projection surface area 35 of the projection surface 31 in a further method step S4. The projection surface area 35 is the image region of the camera picture projected onto the projection surface which corresponds to the marked region 33, and which is thereby identified as having been corrected for perspective. To this end, an intersection 39 of the projection surface 31 with a connecting line 38 is determined for each point 37 of the marked region 33, wherein the connecting line 38 connects this point 37 to the reference point 34.
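The projection of a point 37 onto the intersection 39 of its connecting line 38 with the projection surface can be sketched as follows, under the simplifying assumption of a flat, planar projection surface; a real surround-view projection surface is often curved (for example bowl-shaped), in which case the same ray would be intersected with that mesh instead. All numeric values in the usage example are hypothetical.

```python
import numpy as np

def project_point_to_surface(point, reference, plane_point, plane_normal):
    """Image a point (37) of the marked region onto the intersection (39)
    of its connecting line (38) with a planar projection surface.

    The connecting line is  p(t) = reference + t * (point - reference);
    the surface is the plane  dot(x - plane_point, plane_normal) = 0.
    Returns the intersection, or None if the line is parallel to the plane.
    """
    point = np.asarray(point, float)
    reference = np.asarray(reference, float)       # e.g. camera position (34)
    plane_point = np.asarray(plane_point, float)
    plane_normal = np.asarray(plane_normal, float)
    direction = point - reference
    denom = np.dot(plane_normal, direction)
    if abs(denom) < 1e-12:                         # line parallel to surface
        return None
    t = np.dot(plane_normal, plane_point - reference) / denom
    return reference + t * direction

# Hypothetical example: camera 1.2 m above ground, a ground point 3 m
# ahead, and a vertical projection wall at x = 5.
hit = project_point_to_surface([3.0, 0.0, 0.0], [0.0, 0.0, 1.2],
                               [5.0, 0.0, 0.0], [1.0, 0.0, 0.0])
```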
[0046] In a further method step S5, an image of the projection surface 31 is produced with the region 33 projected onto the projection surface area 35 by means of a virtual camera that can move freely in the surroundings model 30, and the produced image is output in a subsequent method step S6.
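Producing an image with a freely movable virtual camera amounts to transforming points of the projection surface into the virtual camera's frame and applying a pinhole projection. The following sketch assumes a simple look-at convention and hypothetical intrinsic parameters (focal length, principal point); it is illustrative only and not the patent's implementation.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 0.0, 1.0)):
    """World-to-camera rotation for a virtual camera at `eye` looking
    at `target` (degenerate if the view direction is parallel to `up`)."""
    f = np.asarray(target, float) - np.asarray(eye, float)
    f /= np.linalg.norm(f)                 # forward (camera z)
    r = np.cross(f, up)
    r /= np.linalg.norm(r)                 # right (camera x)
    u = np.cross(r, f)                     # true up
    return np.stack([r, -u, f])            # rows: x right, y down, z forward

def render_points(points, eye, target, f_px=500.0, cx=320.0, cy=240.0):
    """Project 3-D points into the image of a freely movable virtual
    camera (72); points behind the camera are discarded."""
    R = look_at(eye, target)
    eye = np.asarray(eye, float)
    out = []
    for p in np.asarray(points, float):
        pc = R @ (p - eye)                 # camera coordinates
        if pc[2] <= 0:                     # behind the camera
            continue
        out.append((f_px * pc[0] / pc[2] + cx,
                    f_px * pc[1] / pc[2] + cy))
    return out
```

A point the camera looks at directly lands on the principal point (cx, cy), which gives a quick sanity check of the convention.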
[0048] A fundamental point of the invention is that the region relevant for driving is projected onto the projection surface and the projection surface with the region projected onto the projection surface area is imaged. This differs from the image which would be produced by acquiring the projection surface and the region relevant for driving itself.
[0049] By way of comparison, such a method is illustrated in
[0050] In contrast thereto,
[0052] As a result, the method makes it possible to correct the perspective of the image by preventing an aberration effect.
[0053] According to one further embodiment, the camera position and/or camera alignment of the freely movable virtual camera 72 is/are determined on the basis of sensor data produced by sensors of the vehicle 20 and/or captured parameters of the vehicle 20. The camera position of the virtual camera can thus be displaced continuously and smoothly, and corresponding continuous images can be produced.
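One way to realize such a continuous displacement is exponential smoothing of the virtual camera position toward a target pose derived from the current sensor data. The mapping from steering angle to target position below is purely hypothetical, as is the smoothing factor; this is a sketch, not the patent's method.

```python
import numpy as np

def update_virtual_camera(current_pos, steering_angle_rad, alpha=0.1):
    """Smoothly displace the virtual camera (72) toward a pose derived
    from vehicle sensor data (here: the steering angle).

    The hypothetical target swings sideways with the turn so that the
    expected trajectory stays in view; exponential smoothing with
    factor `alpha` keeps the motion continuous between frames.
    """
    target = np.array([-6.0, -4.0 * np.sin(steering_angle_rad), 3.0])
    return np.asarray(current_pos, float) + alpha * (target - np.asarray(current_pos, float))
```

Called once per frame, the camera position converges smoothly to the target pose instead of jumping, which matches the continuous image sequence described above.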
[0054] According to one embodiment, the produced images are output to a driver assistance system which controls a function of the vehicle on the basis of the output images. For example, a camera view which is advantageous for the driver assistance system can be selected, one distinguished by an optimum perspective view, as a result of which the computing time and computing performance required by the driver assistance system for evaluation can be reduced. Starting from this camera perspective, the driver assistance system can control the vehicle partially autonomously or autonomously. The driver assistance system can only control the vehicle precisely if the regions relevant for driving are marked correctly.
[0055] According to one preferred further development, the output images are displayed to a driver of the vehicle 20 on a display apparatus of the vehicle 20.
[0057] The vehicle control system 90 comprises a computing device 91 which is configured to project the produced camera picture onto a projection surface 31 in the stored three-dimensional surroundings model of the vehicle. The projection surface 31 can be predefined or can be determined by the computing device 91 itself.
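A projection surface determined by the computing device could, for instance, be a simple bowl around the vehicle: flat near the vehicle and rising toward the edges. The following sketch generates such a surface as a point set; all dimensions and the quadratic wall profile are assumptions chosen for illustration, not values from the patent.

```python
import numpy as np

def bowl_projection_surface(n_radial=10, n_angular=36,
                            flat_radius=5.0, max_radius=10.0,
                            wall_height=3.0):
    """Vertices of a bowl-shaped projection surface (31) around the
    vehicle origin: flat ground out to `flat_radius`, then a wall
    rising quadratically to `wall_height` at `max_radius`.
    Returns an (n_radial * n_angular, 3) array of x, y, z points.
    """
    verts = []
    for r in np.linspace(0.0, max_radius, n_radial):
        if r <= flat_radius:
            z = 0.0                                   # flat ground region
        else:                                         # rising wall region
            z = wall_height * ((r - flat_radius) / (max_radius - flat_radius)) ** 2
        for a in np.linspace(0.0, 2.0 * np.pi, n_angular, endpoint=False):
            verts.append((r * np.cos(a), r * np.sin(a), z))
    return np.asarray(verts)
```

Camera pictures are then textured onto such a surface, and the rays from the reference point are intersected with it when projecting the marked region.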
[0058] The computing device 91 is further configured to mark a region relevant for driving in the stored three-dimensional surroundings model. To this end, the vehicle control system 90 can optionally have at least one vehicle sensor 93 which is configured to produce sensor data and/or capture parameters of the vehicle 20. Such vehicle sensors 93 comprise radar systems, lidar systems, optical cameras, infrared cameras or laser systems. The region relevant for driving which corresponds, for example, to a parking space, a trajectory of the vehicle or an obstacle, can be recognized on the basis of the sensor data by the computing device 91 and recorded and marked in the surroundings model 30.
[0059] The computing device 91 is further configured to project the marked region 33 onto a corresponding projection surface area 35 of the projection surface 31. As a result, the computing device 91 identifies the image region of the camera picture 32 projected onto the projection surface 31 which corresponds to the marked region 33 as having been corrected for perspective. The computing device 91 is further configured to produce an image of the projection surface with the region 33 projected onto the projection surface area 35 by means of a virtual camera 72 that can move freely in the surroundings model 30.
[0060] The vehicle control system 90 further has an output device 92 which is configured to output the produced image. The output device 92 can have an interface, in particular a cable connection, a USB interface or a wireless interface. The produced images can, in particular, be transmitted by means of the output device 92 to further units or via car-to-car communication to further vehicles.
[0061] Optionally, the vehicle control system 90 further comprises a driver assistance system 94 which is configured to control a function of the vehicle 20 on the basis of the output images.
[0062] According to one preferred further development, the output device 92 has a display apparatus which is arranged in an interior of the vehicle 20 and displays the output images to a driver of the vehicle 20.
LIST OF REFERENCE NUMERALS
[0064] 20 Vehicle
[0065] 21 Camera device
[0066] 21a to 21d Vehicle cameras
[0067] 22, 23 Further road users
[0068] 24 Boundary posts
[0069] 30 Surroundings model
[0070] 31 Projection surface
[0071] 32 Camera picture
[0072] 33 Region relevant for driving
[0073] 34 Reference point
[0074] 35 Projection surface area
[0075] 36 Origin of coordinates
[0076] 37 Point of the marked region
[0077] 38 Connecting line
[0078] 39 Intersection
[0079] 40 First image
[0080] 54 Camera spatial point
[0081] 55 Displaced position
[0082] 60 Second image
[0083] 71 Further camera spatial point
[0084] 72 Virtual camera
[0085] 80 Third image
[0086] 90 Vehicle control system
[0087] 91 Computing device
[0088] 92 Output device
[0089] 93 Vehicle sensors
[0090] 94 Driver assistance system