DISPLAY OF A VEHICLE ENVIRONMENT FOR MOVING THE VEHICLE TO A TARGET POSITION
20220297605 · 2022-09-22
CPC classification
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/308
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/303
PERFORMING OPERATIONS; TRANSPORTING
B60R1/27
PERFORMING OPERATIONS; TRANSPORTING
G08G1/168
PHYSICS
G06V20/58
PHYSICS
B60R2300/301
PERFORMING OPERATIONS; TRANSPORTING
International classification
B60R1/27
PERFORMING OPERATIONS; TRANSPORTING
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
Abstract
The invention relates to a method for: displaying an environment (16) of a vehicle (10), the vehicle (10) having a camera-based environment detection system (14) for detecting the environment (16) of the vehicle (10); and moving the vehicle (10) to a target position (24) in the environment (16). The method comprises the steps: providing images of the environment (16) of the vehicle (10) using the camera-based environment detection system (14); generating an environment image (26) from a bird's eye view based on the images of the environment (16) of the vehicle (10) provided by the camera-based environment detection system (14); determining at least one target position (24) in the environment (16) of the vehicle (10); displaying the at least one target position (24) in a first superimposition plane which covers the environment (16) of the vehicle (10); and superimposing the first superimposition plane on the environment image (26). The invention also relates to a corresponding driving assistance system (12) designed to perform the above method.
Claims
1. A method of representing a surrounding of a vehicle, the vehicle having a camera-based surround capture system for capturing the surrounding of the vehicle for the purposes of moving the vehicle to a target position in the surrounding, the method comprising: providing images of the surrounding of the vehicle using the camera-based surround capture system; generating a surround image from a bird's eye view on the basis of the images of the surrounding of the vehicle provided by the camera-based surround capture system; determining at least one target position in the surrounding of the vehicle; representing the at least one target position in a first overlay plane which covers the surrounding of the vehicle; and overlaying the first overlay plane on the surround image.
2. The method as claimed in claim 1, further comprising: establishing a non-drivable area in the surrounding of the vehicle; representing the non-drivable area in a second overlay plane which covers the surrounding of the vehicle; and overlaying the second overlay plane on the surround image.
3. The method as claimed in claim 2, wherein the representation of the non-drivable area in a second overlay plane which covers the surrounding of the vehicle comprises a generation of a representation of the non-drivable area in a side view on the basis of the images of the surrounding of the vehicle that were provided by the camera-based surround capture system.
4. The method as claimed in claim 1, further comprising: establishing at least one obstacle in the surrounding of the vehicle; representing the at least one obstacle in a third overlay plane which covers the surrounding of the vehicle; and overlaying the third overlay plane on the surround image.
5. The method as claimed in claim 4, wherein the representation of the at least one obstacle in a third overlay plane comprises a representation of boundaries of the at least one obstacle.
6. The method as claimed in claim 4, further comprising: identifying the at least one obstacle, wherein the representation of the at least one obstacle in a third overlay plane which covers the surrounding of the vehicle comprises a representation of the at least one obstacle on the basis of the identification of the at least one obstacle.
7. The method as claimed in claim 4, wherein the representation of the at least one obstacle in a third overlay plane which covers the surrounding of the vehicle comprises a provision of a camera image of the at least one obstacle.
8. The method as claimed in claim 4, wherein the representation of the at least one obstacle in a third overlay plane which covers the surrounding of the vehicle comprises a distance-dependent representation of the at least one obstacle.
9. The method as claimed in claim 4, wherein the representation of the at least one obstacle in a third overlay plane which covers the surrounding of the vehicle comprises a generation of a representation of the at least one obstacle in a side view on the basis of the images of the surrounding of the vehicle provided by the camera-based surround capture system.
10. The method as claimed in claim 1, wherein the determination of at least one target position in the surrounding of the vehicle and/or the determination of a non-drivable area in the surrounding of the vehicle and/or the establishment of the at least one obstacle in the surrounding of the vehicle is implemented taking account of the images of the surrounding of the vehicle that were provided by the camera-based surround capture system.
11. The method as claimed in claim 1, further comprising: receiving sensor information from at least one further surround sensor, in the form of a lidar-based surround sensor, a radar sensor and/or a plurality of ultrasound sensors, which registers at least a portion of the surrounding of the vehicle, wherein the determination of at least one target position in the surrounding of the vehicle and/or the determination of a non-drivable area in the surrounding of the vehicle and/or the establishment of the at least one obstacle in the surrounding of the vehicle is implemented taking account of the sensor information of the at least one further surround sensor.
12. The method as claimed in claim 1, wherein the generation of a surround image from a bird's eye view on the basis of the images of the surrounding of the vehicle that were provided by the camera-based surround capture system comprises a generation of a bowl view-type surround image.
13. The method as claimed in claim 1, wherein the representation of the at least one target position in a first overlay plane which covers the surrounding of the vehicle comprises a representation of a trajectory for moving the vehicle to reach the target position.
14. The method as claimed in claim 13, wherein the representation of a trajectory for moving the vehicle to reach the target position comprises a representation of an area the vehicle passes over when driving along the trajectory.
15. The method as claimed in claim 1, further comprising: storing the images of the surrounding of the vehicle that were provided by the camera-based surround capture system, wherein the generation of a surround image from a bird's eye view comprises a generation of at least one first area of the surround image on the basis of images of the surrounding of the vehicle provided by the camera-based surround capture system and at least one second area on the basis of the stored images of the surrounding of the vehicle.
16. A driver assistance system for representing a surrounding of a vehicle, comprising: a camera-based surround capture system for capturing the surrounding of the vehicle and a processing unit which receives images of the surrounding of the vehicle, wherein the driver assistance system is configured to perform the method as claimed in claim 1.
Description
[0052] The driving assistance system 12 comprises a camera-based surround capture system 14 which carries out a 360° capture of a surround 16 of the vehicle 10. In this exemplary embodiment, the camera-based surround capture system 14, presented here for simplicity as an individual device, comprises four individual surround view cameras, which are not depicted individually in the figures and which are attached to the vehicle 10. In detail, one of the four cameras is attached to each side of the vehicle 10. The four cameras are preferably wide-angle cameras with an aperture angle of approximately 170°-180°. The four cameras each provide one image; together, the images completely cover the surround 16 of the vehicle 10, that is to say they facilitate a 360° view.
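By way of illustration only, the camera arrangement described above can be captured in a minimal Python sketch; the data structure, names and concrete aperture values are assumptions for illustration and not part of the disclosure.

from dataclasses import dataclass

@dataclass
class SurroundViewCamera:
    side: str        # mounting side on the vehicle 10
    fov_deg: float   # aperture angle, approximately 170-180 degrees
    yaw_deg: float   # viewing direction relative to the vehicle heading

# One wide-angle camera per vehicle side; together the four cameras
# cover the full 360-degree surround 16.
CAMERAS = [
    SurroundViewCamera("front", 175.0, 0.0),
    SurroundViewCamera("right", 175.0, 90.0),
    SurroundViewCamera("rear", 175.0, 180.0),
    SurroundViewCamera("left", 175.0, 270.0),
]

assert sum(c.fov_deg for c in CAMERAS) >= 360.0  # overlapping coverage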
[0053] The driving assistance system 12 furthermore comprises a processing unit 18 which receives images from the camera-based surround capture system 14 via a data bus 20.
[0054] Moreover, the driving assistance system 12 comprises a surround sensor 22, which is in the form of a radar sensor or lidar-based sensor in this exemplary embodiment. The surround sensor 22 transfers sensor information relating to the surround 16 of the vehicle 10 to the processing unit 18 via the data bus 20. In an alternative embodiment, the surround sensor 22 is designed as an ultrasound sensor unit with a plurality of individual ultrasound sensors.
[0055] The driving assistance system 12 provides a driving assistance function in which the surround 16 of the vehicle 10 is registered in order to assist with the determination of target positions 24 and, optionally, to determine an optimal trajectory which the vehicle driver can follow in order to move the vehicle 10 to the target position 24.
[0056] Accordingly, the driving assistance system 12 in this exemplary embodiment is designed to carry out a method of representing the surround 16 of the vehicle 10 in order to move the vehicle 10 to a target position 24 in the surround 16. The method is reproduced in
[0057] Accordingly, the target position 24 in this exemplary embodiment is a parking space 24 for parking the vehicle 10.
[0058] The method starts with step S100, which comprises a provision of images of the surround 16 of the vehicle 10 by means of the camera-based surround capture system 14. The four individual images are transmitted together via the data bus 20 to the processing unit 18 of the driving assistance system 12.
[0059] Step S110 relates to a generation of a surround image 26 from a bird's eye view on the basis of the images of the surround 16 of the vehicle 10 that were provided by the camera-based surround capture system 14. Accordingly, the surround image 26 is generated by processing the individual images which are provided together by the camera-based surround capture system 14. The individual images are processed and/or combined appropriately in order to generate the 360° view. A corresponding representation with the surround image 26 in the bird's eye view is shown in
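One way the individual images may be combined into a top-down surround image 26 is inverse perspective mapping; the following is a minimal sketch assuming OpenCV and per-camera ground-plane homographies from an offline calibration, neither of which is specified in the disclosure.

import cv2
import numpy as np

def birds_eye_view(images, homographies, out_size=(800, 800)):
    """Warp each camera image onto the ground plane and combine them."""
    surround = np.zeros((out_size[1], out_size[0], 3), dtype=np.uint8)
    for img, H in zip(images, homographies):
        # Inverse perspective mapping of one camera image to the top view.
        warped = cv2.warpPerspective(img, H, out_size)
        mask = warped.any(axis=2)  # take warped pixels where non-empty
        surround[mask] = warped[mask]
    return surround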
[0060] In an alternative, fourth embodiment, which is depicted in
[0061] In an alternative, fifth embodiment, which is depicted in
[0062] Step S120 relates to a reception of sensor information from the surround sensor 22, which registers at least a portion of the surround 16 of the vehicle 10. The sensor information of the surround sensor 22 is transferred to the processing unit 18 via the data bus 20.
[0063] Step S130 relates to a determination of at least one target position 24 in the surround 16 of the vehicle 10. In this exemplary embodiment, the determination of the at least one target position 24 relates to the establishment of a parking space as a target position 24. In this exemplary embodiment, the at least one target position 24 is determined taking into account the sensor information from the surround sensor 22 together with the sensor information from the camera-based surround capture system 14, that is to say the images provided by the camera-based surround capture system 14. The sensor information from the surround sensor 22 and the camera-based surround capture system 14 is processed together in order to register the at least one parking space 24. In the process, an optional fusion of the sensor information from the surround sensor 22 and from the camera-based surround capture system 14 is carried out.
[0064] The at least one parking space 24 in the surround 16 of the vehicle 10 can be established in different ways. To this end, the sensor information from the surround sensor 22 can be processed directly together with the sensor information from the camera-based surround capture system 14, in order to determine the at least one parking space 24. Alternatively or in addition, a surround map may be generated on the basis of the sensor information and said surround map serves as a basis for establishing the parking space 24.
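As one illustrative possibility for the surround-map variant, a fused occupancy grid can be searched for a free rectangle of parking-space size; the grid resolution and footprint values below are assumptions, not taken from the disclosure.

import numpy as np

CELL_M = 0.1               # assumed grid resolution: 0.1 m per cell
SPACE_W, SPACE_L = 25, 55  # assumed footprint in cells (2.5 m x 5.5 m)

def find_parking_spaces(occupancy):
    """Return top-left indices of free rectangles large enough for the vehicle.

    occupancy: boolean numpy array, True where the surround map is occupied.
    """
    spaces = []
    rows, cols = occupancy.shape
    for r in range(rows - SPACE_L + 1):
        for c in range(cols - SPACE_W + 1):
            if not occupancy[r:r + SPACE_L, c:c + SPACE_W].any():
                spaces.append((r, c))
    return spaces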
[0065] Step S140 relates to a representation of the at least one target position 24, that is to say the at least one parking space 24, in a first overlay plane, which covers the surround 16 of the vehicle 10. Representing the at least one parking space 24 in the first overlay plane relates to a representation of the parking space 24 for parking the vehicle 10. In this exemplary embodiment, the parking space 24 is represented by a boundary line, which completely surrounds the parking space 24. Alternatively or in addition, the parking space 24 can be represented by an area that is colored differently.
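The boundary-line representation of the parking space 24 in the first overlay plane could, for example, be realized as a transparent RGBA plane onto which a closed polygon is drawn; OpenCV is assumed here, and the corner coordinates are taken to come from the parking-space determination.

import cv2
import numpy as np

def first_overlay_plane(shape_hw, corners_px):
    """Create the first overlay plane with the parking-space boundary."""
    h, w = shape_hw
    plane = np.zeros((h, w, 4), dtype=np.uint8)  # fully transparent plane
    pts = np.array(corners_px, dtype=np.int32).reshape(-1, 1, 2)
    # Closed boundary line completely surrounding the parking space 24.
    cv2.polylines(plane, [pts], isClosed=True, color=(0, 255, 0, 255), thickness=3)
    return plane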
[0066] Step S150 relates to an establishment of a non-drivable area 30 in the surround 16 of the vehicle 10. The non-drivable area 30 can either be established directly or it is possible to initially establish a drivable area 28, and the non-drivable area 30 is established by inverting the drivable area 28.
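If the drivable area 28 is available as a boolean mask, the inversion mentioned above is a one-line complement; the mask shape and contents below are purely illustrative.

import numpy as np

drivable = np.zeros((800, 800), dtype=bool)  # assumed mask from sensor fusion
drivable[300:500, 100:700] = True            # illustrative drivable corridor

non_drivable = ~drivable                     # area 30 = inverted area 28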
[0067] The non-drivable area 30 in the surround 16 of the vehicle 10 is also established taking into account the sensor information from the surround sensor 22 together with the sensor information from the camera-based surround capture system 14, that is to say the images provided by the camera-based surround capture system 14. The sensor information from the surround sensor 22 and the camera-based surround capture system 14 is processed together in order to register the non-drivable area 30. In the process, an optional fusion of the sensor information from the surround sensor 22 and from the camera-based surround capture system 14 is carried out.
[0068] The non-drivable area 30 in the surround 16 of the vehicle 10 can be established in different ways. To this end, the sensor information from the surround sensor 22 can be processed directly together with the sensor information from the camera-based surround capture system 14, in order to determine the non-drivable area 30. Alternatively or in addition, a surround map may be generated on the basis of the sensor information and said surround map serves as a basis for establishing the non-drivable area 30.
[0069] Step S160 relates to a representation of the non-drivable area 30 in a second overlay plane which covers the surround 16 of the vehicle 10. The second overlay plane is an overlay plane in addition to the first overlay plane. In the representations of
[0070] In the fourth or fifth embodiment, depicted accordingly in
[0071] Step S170 relates to an establishment of at least one obstacle 32 in the surround 16 of the vehicle 10.
[0072] The at least one obstacle 32 in the surround 16 of the vehicle 10 is also established taking into account the sensor information from the surround sensor 22 together with the sensor information from the camera-based surround capture system 14, that is to say the images provided by the camera-based surround capture system 14. The sensor information from the surround sensor 22 and the camera-based surround capture system 14 is processed together in order to register the at least one obstacle 32. In the process, an optional fusion of the sensor information from the surround sensor 22 and from the camera-based surround capture system 14 is carried out.
[0073] The at least one obstacle 32 can be established directly on the basis of the sensor information from the surround sensor 22 together with the camera-based surround capture system 14. Alternatively or in addition, a surround map may be generated on the basis of the sensor information and said surround map serves as a basis for establishing the at least one obstacle 32.
[0074] Step S180 relates to a representation of the at least one obstacle 32 in a third overlay plane which covers the surround 16 of the vehicle 10.
[0075] In the first exemplary embodiment, which is depicted in
[0076] In the second exemplary embodiment, which is depicted in
[0077] On the basis of the identification, a representation of the obstacle 32 in the third overlay plane is chosen in correspondence with the respective class. In this exemplary embodiment, the obstacle 32 is represented in a top view in correspondence with the representation of the surround image 26. Alternatively, the obstacle 32 can be represented in a side view. Accordingly, the at least one obstacle 32 is represented on the basis of the identification.
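Choosing the representation from the identified class can be sketched as a simple lookup; the class names and icon paths below are hypothetical, since the disclosure only specifies that the representation follows the identification.

# Hypothetical mapping from obstacle class to a top-view icon.
TOP_VIEW_ICONS = {
    "vehicle": "icons/vehicle_top.png",
    "pedestrian": "icons/pedestrian_top.png",
    "pillar": "icons/pillar_top.png",
}

def representation_for(obstacle_class, default="icons/generic_top.png"):
    """Pick the top-view representation used in the third overlay plane."""
    return TOP_VIEW_ICONS.get(obstacle_class, default)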
[0078] In the third exemplary embodiment, which is depicted in
[0079] Step S190 relates to the surround image 26 being overlaid with the first, second and third overlay planes. Within the scope of overlaying, parts of the surround image 26 are replaced or complemented by the information of the overlay planes, for example by way of a partly transparent overlay. It is not necessary for the surround image 26 to completely fill the image area; where it does not, the surround image 26 is merely complemented with image information from the overlay planes. The overlay planes may in principle be arranged in any sequence and may overlay one another. The surround image 26 overlaid in this way may be output by way of the user interface of the vehicle 10 and displayed to the vehicle driver.
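The partly transparent overlaying of step S190 corresponds to standard alpha compositing; a minimal sketch, assuming the overlay planes are RGBA arrays of the same size as the surround image 26:

import numpy as np

def composite(surround_rgb, overlay_planes):
    """Alpha-blend partly transparent RGBA planes onto the RGB surround image."""
    out = surround_rgb.astype(np.float32)
    for plane in overlay_planes:  # plane order is in principle arbitrary
        rgb = plane[..., :3].astype(np.float32)
        alpha = plane[..., 3:4].astype(np.float32) / 255.0
        out = alpha * rgb + (1.0 - alpha) * out
    return out.astype(np.uint8)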
[0080] Overlaying the overlay planes on the surround image 26 yields a combined representation of the surround 16, as perceived by the vehicle driver, together with the parking information, which in this case relates to a position of the parking space 24 in the surround 16 of the vehicle 10.
LIST OF REFERENCE SIGNS
[0081] 10 Vehicle
[0082] 12 Driving assistance system
[0083] 14 Camera-based surround capture system
[0084] 16 Surround
[0085] 18 Processing unit
[0086] 20 Data bus
[0087] 22 Surround sensor
[0088] 24 Target position, parking space
[0089] 26 Surround image
[0090] 28 Drivable area
[0091] 30 Non-drivable area
[0092] 32 Obstacle