DISPLAY OF A VEHICLE ENVIRONMENT FOR MOVING THE VEHICLE TO A TARGET POSITION

20220297605 · 2022-09-22

Abstract

The invention relates to a method for: displaying an environment (16) of a vehicle (10), the vehicle (10) having a camera-based environment detection system (14) for detecting the environment (16) of the vehicle (10); and moving the vehicle (10) to a target position (24) in the environment (16). The method comprises the steps: providing images of the environment (16) of the vehicle (10) using the camera-based environment detection system (14); generating an environment image (26) from a bird's eye view based on the images of the environment (16) of the vehicle (10) provided using the camera-based environment detection system (14); determining at least one target position (24) in the environment (16) of the vehicle (10); displaying the at least one target position (24) in a first superimposition plane which covers the environment (16) of the vehicle (10); and superimposing the first superimposition plane on the environment image (26). The invention also relates to a corresponding driving assistance system (12) designed to perform the above method.

Claims

1. A method of representing a surrounding of a vehicle, the vehicle having a camera-based surround capture system for capturing the surrounding of the vehicle for the purposes of moving the vehicle to a target position in the surrounding, the method comprising: providing images of the surrounding of the vehicle using the camera-based surround capture system; generating a surround image from a bird's eye view on the basis of the images of the surrounding of the vehicle provided by the camera-based surround capture system; determining at least one target position in the surrounding of the vehicle; representing the at least one target position in a first overlay plane which covers the surrounding of the vehicle; and overlaying the first overlay plane on the surround image.

2. The method as claimed in claim 1, further comprising: establishing a non-drivable area in the surrounding of the vehicle; representing the non-drivable area in a second overlay plane which covers the surrounding of the vehicle; and overlaying the second overlay plane on the surround image.

3. The method as claimed in claim 2, wherein the representation of the non-drivable area in a second overlay plane which covers the surrounding of the vehicle comprises a generation of a representation of the non-drivable area in a side view on the basis of the images of the surrounding of the vehicle that were provided by the camera-based surround capture system.

4. The method as claimed in claim 1, further comprising: establishing at least one obstacle in the surrounding of the vehicle; representing the at least one obstacle in a third overlay plane which covers the surrounding of the vehicle; and overlaying the third overlay plane on the surround image.

5. The method as claimed in claim 4, wherein the representation of the at least one obstacle in a third overlay plane comprises a representation of boundaries of the at least one obstacle.

6. The method as claimed in claim 4, further comprising: identifying the at least one obstacle, wherein the representation of the at least one obstacle in a third overlay plane which covers the surrounding of the vehicle comprises a representation of the at least one obstacle on the basis of the identification of the at least one obstacle.

7. The method as claimed in claim 4, wherein the representation of the at least one obstacle in a third overlay plane which covers the surrounding of the vehicle comprises a provision of a camera image of the at least one obstacle.

8. The method as claimed in claim 4, wherein the representation of the at least one obstacle in a third overlay plane which covers the surrounding of the vehicle comprises a distance-dependent representation of the at least one obstacle.

9. The method as claimed in claim 4, wherein the representation of the at least one obstacle in a third overlay plane which covers the surrounding of the vehicle comprises a generation of a representation of the at least one obstacle in a side view on the basis of the images of the surrounding of the vehicle provided by the camera-based surround capture system.

10. The method as claimed in claim 1, wherein the determination of at least one target position in the surrounding of the vehicle and/or the establishment of a non-drivable area in the surrounding of the vehicle and/or the establishment of the at least one obstacle in the surrounding of the vehicle is implemented taking account of the images of the surrounding of the vehicle that were provided by the camera-based surround capture system.

11. The method as claimed in claim 1, further comprising: receiving sensor information from at least one further surround sensor, in the form of a lidar-based surround sensor, a radar sensor and/or a plurality of ultrasound sensors, which registers at least a portion of the surrounding of the vehicle, wherein the determination of at least one target position in the surrounding of the vehicle and/or the establishment of a non-drivable area in the surrounding of the vehicle and/or the establishment of the at least one obstacle in the surrounding of the vehicle is implemented taking account of the sensor information from the at least one further surround sensor.

12. The method as claimed in claim 1, wherein the generation of a surround image from a bird's eye view on the basis of the images of the surrounding of the vehicle that were provided by the camera-based surround capture system comprises a generation of a bowl view-type surround image.

13. The method as claimed in claim 1, wherein the representation of the at least one target position in a first overlay plane which covers the surrounding of the vehicle comprises a representation of a trajectory for moving the vehicle to reach the target position.

14. The method as claimed in claim 13, wherein the representation of a trajectory for moving the vehicle to reach the target position comprises a representation of an area the vehicle passes over when driving along the trajectory.

15. The method as claimed in claim 1, further comprising: storing the images of the surrounding of the vehicle that were provided by the camera-based surround capture system, wherein the generation of a surround image from a bird's eye view comprises a generation of at least one first area of the surround image on the basis of images of the surrounding of the vehicle provided by the camera-based surround capture system and of at least one second area on the basis of the stored images of the surrounding of the vehicle.

16. A driver assistance system for representing a surrounding of a vehicle, comprising: a camera-based surround capture system for capturing the surrounding of the vehicle and a processing unit which receives images of the surrounding of the vehicle, wherein the driver assistance system is configured to perform the method as claimed in claim 1.

Description

[0043] In the drawings:

[0044] FIG. 1 shows a schematic depiction of a vehicle with a driving assistance system according to a first, preferred embodiment in a side view,

[0045] FIG. 2 shows a first exemplary depiction of the vehicle with the surround around the vehicle in correspondence with the first embodiment,

[0046] FIG. 3 shows a second exemplary depiction of the vehicle with the surround around the vehicle in correspondence with a second embodiment,

[0047] FIG. 4 shows a third exemplary depiction of the vehicle with the surround around the vehicle in correspondence with a third embodiment,

[0048] FIG. 5 shows a fourth exemplary depiction of the vehicle with the surround around the vehicle in the style of a bowl view and in correspondence with a fourth embodiment,

[0049] FIG. 6 shows a fifth exemplary depiction of the vehicle with the surround around the vehicle in the style of an adaptive bowl view and in correspondence with a fifth embodiment, and

[0050] FIG. 7 shows a flowchart of a method for representing the surround of the vehicle from FIG. 1 in correspondence with the vehicle and the driving assistance system of the first exemplary embodiment.

[0051] FIG. 1 shows a vehicle 10 with a driving assistance system 12 according to a first, preferred embodiment. In principle, the vehicle 10 can be any vehicle 10 that is preferably designed for autonomous or partly autonomous maneuvering, for example for parking the vehicle 10. In the case of autonomous driving, the vehicle driver may already leave the vehicle 10 before a corresponding parking process is carried out.

[0052] The driving assistance system 12 comprises a camera-based surround capture system 14 which carries out a 360° capture of a surround 16 of the vehicle 10. In this exemplary embodiment, the camera-based surround capture system 14, presented here for simplicity as an individual device, comprises four individual surround view cameras, which are not depicted individually in the figures and which are attached to the vehicle 10. In detail, one of the four cameras is attached to each side of the vehicle 10. The four cameras are preferably wide-angle cameras with an aperture angle of approximately 170°-180°. Each of the four cameras provides an image, the four images together completely covering the surround 16 of the vehicle 10, that is to say facilitating a 360° view.
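For illustration only (not part of the original disclosure), the following minimal Python sketch shows one possible way to describe such a four-camera surround view setup in software; the names, mounting offsets and the 178° aperture angle are assumptions chosen for the example.

    # Minimal sketch of a four-camera surround view configuration (illustrative only).
    # Mounting offsets, yaw angles and the 178-degree aperture angle are assumptions.
    from dataclasses import dataclass

    @dataclass
    class SurroundCamera:
        name: str        # mounting side of the vehicle
        x_m: float       # longitudinal offset from the vehicle reference point, in metres
        y_m: float       # lateral offset from the vehicle reference point, in metres
        yaw_deg: float   # viewing direction relative to the vehicle's forward axis
        fov_deg: float   # horizontal aperture angle of the wide-angle lens

    # One camera per vehicle side; the four images together cover the full 360° surround.
    SURROUND_CAMERAS = [
        SurroundCamera("front",  3.7,  0.0,    0.0, 178.0),
        SurroundCamera("rear",  -1.0,  0.0,  180.0, 178.0),
        SurroundCamera("left",   1.5,  0.9,   90.0, 178.0),
        SurroundCamera("right",  1.5, -0.9,  -90.0, 178.0),
    ]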

[0053] The driving assistance system 12 furthermore comprises a processing unit 18 which receives images from the camera-based surround capture system 14 via a data bus 20.

[0054] Moreover, the driving assistance system 12 comprises a surround sensor 22, which is in the form of a radar sensor or lidar-based sensor in this exemplary embodiment. The surround sensor 22 transfers sensor information relating to the surround 16 of the vehicle 10 to the processing unit 18 via the data bus 20. In an alternative embodiment, the surround sensor 22 is designed as an ultrasound sensor unit with a plurality of individual ultrasound sensors.

[0055] The driving assistance system 12 provides a driver assistance function or, more generally, a driving assistance function, in which the surround 16 of the vehicle 10 is registered in order to assist with the determination of target positions 24 and optionally to determine an optimal trajectory which a vehicle driver can follow in order to move the vehicle 10 to the target position 24.

[0056] Accordingly, the driving assistance system 12 in this exemplary embodiment is designed to carry out a method of representing the surround 16 of the vehicle 10 in order to move the vehicle 10 to a target position 24 in the surround 16. The method is reproduced in FIG. 7 as a flowchart and is described below with additional reference to FIGS. 2 to 6. FIGS. 2 to 6 show different representations of the surround 16 of the vehicle 10, which can all be generated with the same driving assistance system 12. A change in the representation merely requires an altered configuration or programming of the driving assistance system 12. The driving assistance system 12 in the first exemplary embodiment is designed for autonomous parking of the vehicle 10.

[0057] Accordingly, the target position 24 in this exemplary embodiment is a parking space 24 for parking the vehicle 10.

[0058] The method starts with step S100, which comprises a provision of images of the surround 16 of the vehicle 10 by means of the camera-based surround capture system 14. The four individual images are transmitted together via the data bus 20 to the processing unit 18 of the driving assistance system 12.

[0059] Step S110 relates to a generation of a surround image 26 from a bird's eye view on the basis of the images of the surround 16 of the vehicle 10 that were provided by the camera-based surround capture system 14. Accordingly, the surround image 26 is generated by processing the individual images which are provided together by the camera-based surround capture system 14. The individual images are processed and/or combined appropriately in order to generate the 360° view. A corresponding representation with the surround image 26 in the bird's eye view is shown in FIGS. 2 to 4.
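A minimal sketch of how step S110 could be realized, assuming a ground-plane homography per camera obtained from four known point correspondences; the function names, calibration points and the simple mask-based stitching are illustrative assumptions and not taken from the disclosure.

    # Minimal sketch of step S110 (illustrative only): warp each camera image onto the
    # ground plane and combine the per-camera tiles.  Calibration points and masks are
    # assumed to be available; real systems additionally blend the seams.
    import cv2
    import numpy as np

    def warp_to_birds_eye(camera_img, img_pts, ground_pts, out_size):
        """Warp one camera image into a top-down (bird's eye) tile.

        img_pts    -- four pixel positions of known ground points in the camera image
        ground_pts -- the same four points in the bird's eye output image
        out_size   -- (width, height) of the output tile
        """
        H = cv2.getPerspectiveTransform(np.float32(img_pts), np.float32(ground_pts))
        return cv2.warpPerspective(camera_img, H, out_size)

    def stitch_surround_image(birds_eye_tiles, masks, canvas_size):
        """Combine the four top-down tiles into one 360° surround image."""
        canvas = np.zeros((canvas_size[1], canvas_size[0], 3), dtype=np.uint8)
        for tile, mask in zip(birds_eye_tiles, masks):
            canvas[mask > 0] = tile[mask > 0]
        return canvas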

[0060] In an alternative, fourth embodiment, which is depicted in FIG. 5, the surround image 26 is generated in the style of a bowl view. The bowl view is a special view in the style of a bowl, in which the edge regions of the image are pulled upward such that, in contrast to a representation in the bird's eye view, they are represented at least partially in a side view. The representation of the surround image 26 is consequently implemented in the style of a bowl view.
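As a rough illustration of such a bowl-shaped projection surface (an assumption for the example, not a formula from the disclosure), the following sketch keeps the surface flat near the vehicle and pulls it upward beyond an assumed radius, so that texturing this surface with the camera images shows distant content partly in a side view.

    # Minimal sketch of a bowl-shaped projection surface (illustrative only).
    # FLAT_RADIUS_M and RIM_EXPONENT are assumptions, not values from the disclosure.
    import numpy as np

    FLAT_RADIUS_M = 6.0   # the surface stays flat up to this distance from the vehicle
    RIM_EXPONENT = 2.0    # controls how strongly the rim is "pulled upward"

    def bowl_height(x_m, y_m):
        """Height of the projection surface at ground position (x, y) around the vehicle."""
        r = np.hypot(x_m, y_m)
        return np.maximum(r - FLAT_RADIUS_M, 0.0) ** RIM_EXPONENT

    # Sampling the surface on a grid yields a mesh that can be textured with the
    # camera images to obtain the bowl-view representation of the surround image 26.
    xs, ys = np.meshgrid(np.linspace(-15, 15, 61), np.linspace(-15, 15, 61))
    zs = bowl_height(xs, ys)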

[0061] In an alternative, fifth embodiment, which is depicted in FIG. 6, the surround image 26 is generated in the style of an adaptive bowl view.

[0062] Step S120 relates to a reception of sensor information from the surround sensor 22, which registers at least a portion of the surround 16 of the vehicle 10. The sensor information of the surround sensor 22 is transferred to the processing unit 18 via the data bus 20.

[0063] Step S130 relates to a determination of at least one target position 24 in the surround 16 of the vehicle 10. In this exemplary embodiment, the determination of the at least one target position 24 relates to the establishment of a parking space as a target position 24. In this exemplary embodiment, the at least one target position 24 is determined taking account of the sensor information from the surround sensor 22 together with the sensor information from the camera-based surround capture system 14, that is to say the images provided by the camera-based surround capture system 14. The sensor information from the surround sensor 22 and the camera-based surround capture system 14 is processed together in order to register the at least one parking space 24. In the process, a fusion of the sensor information from the surround sensor 22 and the camera-based surround capture system 14, which is optional per se, is carried out.

[0064] The at least one parking space 24 in the surround 16 of the vehicle 10 can be established in different ways. To this end, the sensor information from the surround sensor 22 can be processed directly together with the sensor information from the camera-based surround capture system 14, in order to determine the at least one parking space 24. Alternatively or in addition, a surround map may be generated on the basis of the sensor information and said surround map serves as a basis for establishing the parking space 24.
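A minimal sketch of the surround-map variant mentioned above, assuming the fused sensor information has already been rasterized into a boolean occupancy grid; the cell size, the required parking-space footprint and the brute-force search are illustrative assumptions.

    # Minimal sketch of step S130 via a fused surround map (illustrative only).
    # Cell size, footprint and the exhaustive search are assumptions for the example.
    import numpy as np

    CELL_M = 0.1              # edge length of one grid cell, in metres
    SPACE_CELLS = (25, 55)    # (rows, cols) of the required free footprint, ~2.5 m x 5.5 m at CELL_M

    def fuse_into_grid(camera_occupied, sensor_occupied):
        """A cell counts as occupied if either the camera images or the surround sensor say so."""
        return np.logical_or(camera_occupied, sensor_occupied)

    def find_parking_space(occupied):
        """Return the top-left cell of the first fully free SPACE_CELLS window, or None."""
        h, w = SPACE_CELLS
        rows, cols = occupied.shape
        for r in range(rows - h + 1):
            for c in range(cols - w + 1):
                if not occupied[r:r + h, c:c + w].any():
                    return (r, c)
        return None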

[0065] Step S140 relates to a representation of the at least one target position 24, that is to say the at least one parking space 24, in a first overlay plane, which covers the surround 16 of the vehicle 10. Representing the at least one parking space 24 in the first overlay plane relates to a representation of the parking space 24 for parking the vehicle 10. In this exemplary embodiment, the parking space 24 is represented by a boundary line, which completely surrounds the parking space 24. Alternatively or in addition, the parking space 24 can be represented by an area that is colored differently.
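A minimal sketch of step S140, assuming the parking-space corners are known in the pixel grid of the surround image; the colours, corner coordinates and the use of an RGBA plane are illustrative assumptions.

    # Minimal sketch of step S140 (illustrative only): the first overlay plane is an RGBA
    # image that is transparent everywhere except for the parking-space marking.
    import cv2
    import numpy as np

    def render_target_overlay(canvas_hw, space_corners_px, fill=False):
        """Draw the parking space 24 as a closed boundary line (and optionally a tinted area)."""
        h, w = canvas_hw
        plane = np.zeros((h, w, 4), dtype=np.uint8)             # alpha = 0: fully transparent
        pts = np.int32(space_corners_px).reshape(-1, 1, 2)
        if fill:
            cv2.fillPoly(plane, [pts], (0, 255, 0, 120))        # differently coloured, semi-transparent area
        cv2.polylines(plane, [pts], True, (0, 255, 0, 255), 3)  # boundary line surrounding the parking space
        return plane

    # Example with assumed corner pixels in a 600 x 800 overlay plane:
    first_overlay = render_target_overlay((600, 800), [(500, 200), (620, 200), (620, 430), (500, 430)])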

[0066] Step S150 relates to an establishment of a non-drivable area 30 in the surround 16 of the vehicle 10. The non-drivable area 30 can either be established directly, or a drivable area 28 can be established first and the non-drivable area 30 then obtained by inverting the drivable area 28.
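A minimal sketch of the inversion variant, assuming the drivable area 28 is available as a boolean mask in the pixel grid of the surround image 26.

    # Minimal sketch of step S150 by inversion (illustrative only).
    import numpy as np

    def non_drivable_from_drivable(drivable_mask):
        """Obtain the non-drivable area 30 by inverting the drivable-area mask 28."""
        return np.logical_not(drivable_mask)

    # Example: a toy 4x4 mask in which only the left half is drivable.
    drivable = np.zeros((4, 4), dtype=bool)
    drivable[:, :2] = True
    non_drivable = non_drivable_from_drivable(drivable)   # right half is non-drivable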

[0067] The non-drivable area 30 in the surround 16 of the vehicle 10 is also established taking account of the sensor information from the surround sensor 22 together with the sensor information from the camera-based surround capture system 14, that is to say the images provided by the camera-based surround capture system 14. The sensor information from the surround sensor 22 and the camera-based surround capture system 14 is processed together in order to register the non-drivable area 30. In the process, a fusion of the sensor information from the surround sensor 22 and the camera-based surround capture system 14, which is optional per se, is carried out.

[0068] The non-drivable area 30 in the surround 16 of the vehicle 10 can be established in different ways. To this end, the sensor information from the surround sensor 22 can be processed directly together with the sensor information from the camera-based surround capture system 14, in order to determine the non-drivable area 30. Alternatively or in addition, a surround map may be generated on the basis of the sensor information and said surround map serves as a basis for establishing the non-drivable area 30.

[0069] Step S160 relates to a representation of the non-drivable area 30 in a second overlay plane which covers the surround 16 of the vehicle 10. The second overlay plane is an additional overlay plane in addition to the first overlay plane. In the representations of FIGS. 2 to 4, the non-drivable area 30 is marked by a uniform area with a given color such that the surround image 26 is covered by this area in the non-drivable area 30 and cannot be perceived there.
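A minimal sketch of step S160, assuming the non-drivable area is available as a boolean mask; the grey colour value and the RGBA representation are illustrative assumptions. Full opacity is what hides the underlying surround image in the marked area.

    # Minimal sketch of step S160 (illustrative only): a uniform, fully opaque colour over
    # the non-drivable area 30; the colour value is an assumption for the example.
    import numpy as np

    def render_non_drivable_overlay(non_drivable_mask, color=(90, 90, 90)):
        """RGBA plane: opaque over the non-drivable area 30, transparent everywhere else."""
        h, w = non_drivable_mask.shape
        plane = np.zeros((h, w, 4), dtype=np.uint8)
        plane[non_drivable_mask] = (*color, 255)   # alpha 255: the surround image cannot be perceived here
        return plane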

[0070] In the fourth or fifth embodiment, depicted accordingly in FIGS. 5 and 6, a side view on the basis of the images of the surround 16 of the vehicle 10 that were provided by the camera-based surround capture system 14 is generated for representing the non-drivable area 30. In this case, the side view is preferably generated dynamically and adapted continuously. Such a side view corresponds to a representation as is used, for example, in a bowl view, as illustrated in FIG. 5, or an adaptive bowl view, which is illustrated in FIG. 6. In the adaptive bowl view, the side view is generated with reduced distortions, for the purposes of which appropriate image processing of the images from the camera-based surround capture system 14 is implemented.

[0071] Step S170 relates to an establishment of at least one obstacle 32 in the surround 16 of the vehicle 10.

[0072] The at least one obstacle 32 in the surround 16 of the vehicle 10 is also established taking account of the sensor information from the surround sensor 22 together with the sensor information from the camera-based surround capture system 14, that is to say the images provided by the camera-based surround capture system 14. The sensor information from the surround sensor 22 and the camera-based surround capture system 14 is processed together in order to register the at least one obstacle 32. In the process, a fusion of the sensor information from the surround sensor 22 and the camera-based surround capture system 14, which is optional per se, is carried out.

[0073] The at least one obstacle 32 can be established directly on the basis of the sensor information from the surround sensor 22 together with the camera-based surround capture system 14. Alternatively or in addition, a surround map may be generated on the basis of the sensor information and said surround map serves as a basis for establishing the at least one obstacle 32.

[0074] Step S180 relates to a representation of the at least one obstacle 32 in a third overlay plane which covers the surround 16 of the vehicle 10.

[0075] In the first exemplary embodiment, which is depicted in FIG. 2, the representation of the at least one obstacle 32 in the third overlay plane comprises a representation of boundary lines of the at least one obstacle 32. In this exemplary embodiment, the boundary lines merely mark the sides of the obstacle 32 facing the vehicle 10. Alternatively or in addition, the obstacles 32 can be represented by an area that is colored differently. In this case, a distance-dependent representation of the obstacles 32 with different colors is possible, in a manner not depicted in FIG. 2. In this exemplary embodiment, close regions of an obstacle 32 are represented using a red color, for example, while distant regions of the obstacle 32 are represented using a green color or using a black or gray color. Such a representation lends itself especially to regions that are no longer actively registered by the surround sensors. It can indicate to the user that a region was previously registered by one of the surround sensors but is no longer actively registered.
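A minimal sketch of the distance-dependent colouring described above; the colour scheme (red for close regions, green for distant regions, grey for regions that are no longer actively registered) follows the paragraph, while the numeric distance thresholds are assumptions.

    # Minimal sketch of the distance-dependent obstacle colouring (illustrative only).
    # NEAR_M and FAR_M are assumed thresholds, not values from the disclosure.
    NEAR_M, FAR_M = 1.0, 3.0

    def obstacle_color(distance_m, actively_registered=True):
        """Pick an RGBA colour for a region of an obstacle 32 depending on its distance."""
        if not actively_registered:
            return (128, 128, 128, 255)    # grey: region known only from earlier measurements
        if distance_m < NEAR_M:
            return (255, 0, 0, 255)        # red: close region of the obstacle
        if distance_m > FAR_M:
            return (0, 200, 0, 255)        # green: distant region of the obstacle
        t = (distance_m - NEAR_M) / (FAR_M - NEAR_M)
        return (int(255 * (1 - t)), int(200 * t), 0, 255)   # blend in between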

[0076] In the second exemplary embodiment, which is depicted in FIG. 3, a step of identifying the at least one obstacle 32 is carried out initially. This comprises a classification of the obstacles 32, for example in order to identify third-party vehicles, trees, persons, buildings, garbage cans or other obstacles 32. In the exemplary embodiments shown here, the obstacles 32 are third-party vehicles.

[0077] On the basis of the identification, a representation of the obstacle 32 is chosen in correspondence with the respective class used in the third overlay plane. In this exemplary embodiment, the obstacle 32 is represented in a top view in correspondence with the representation of the surround image 26. Alternatively, the obstacle 32 can be represented in a side view. Accordingly, the at least one obstacle 32 is represented on the basis of the identification.
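A minimal sketch of the class-based choice of representation; the class labels and icon file names are illustrative assumptions, not identifiers from the disclosure.

    # Minimal sketch (illustrative only): select a top-view representation for an obstacle 32
    # from its identified class.  Class labels and icon paths are assumptions.
    CLASS_ICONS = {
        "vehicle":     "icons/car_topview.png",
        "person":      "icons/pedestrian_topview.png",
        "tree":        "icons/tree_topview.png",
        "garbage_can": "icons/garbage_can_topview.png",
    }
    FALLBACK_ICON = "icons/generic_obstacle.png"

    def icon_for_obstacle(obstacle_class):
        """Return the icon to be drawn into the third overlay plane for this obstacle class."""
        return CLASS_ICONS.get(obstacle_class, FALLBACK_ICON)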

[0078] In the third exemplary embodiment, which is depicted in FIG. 4, the representation of the at least one obstacle 32 in the third overlay plane comprises a provision of a camera image of the at least one obstacle 32, as recorded by the camera-based surround capture system 14. As a result, a realistic representation of the at least one obstacle 32 is generated. In this case, the camera image is preferably generated in a top view in correspondence with the representation of the surround image 26, or the camera image is projected into the top view. Alternatively, the camera image of the at least one obstacle 32 is generated in a side view on the basis of the images of the surround 16 of the vehicle 10 that were provided by the camera-based surround capture system 14. Consequently, the at least one obstacle 32 is visualized in the style of a representation as is used, for example, in the case of an adaptive bowl view. The side view is preferably generated without distortions or with reduced distortions, for the purposes of which appropriate image processing of the images from the camera-based surround capture system 14 is carried out.

[0079] Step S190 relates to the surround image 26 being overlaid with the first, second and third overlay planes. Within the scope of overlaying, available parts of the surround image 26 are overlaid with or complemented by the information of the overlay planes, for example by way of a partly transparent overlay. It is not necessary for the surround image 26 to completely fill an image area; where it does not, the surround image 26 is merely complemented with image information of the overlay planes. The overlay planes may in principle be arranged in any sequence and may possibly overlay one another. The surround image 26 overlaid in this way may be output by way of the user interface of the vehicle 10 and displayed to the vehicle driver.
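A minimal sketch of step S190, assuming the overlay planes are RGBA images of the same size as the surround image; the sequential alpha blending shown here is one possible realization of the partly transparent overlay.

    # Minimal sketch of step S190 (illustrative only): composite the first, second and third
    # overlay planes onto the surround image 26 by alpha blending.
    import numpy as np

    def composite(surround_image, overlay_planes):
        """surround_image: HxWx3 uint8; overlay_planes: list of HxWx4 uint8 RGBA planes."""
        out = surround_image.astype(np.float32)
        for plane in overlay_planes:                              # order is in principle arbitrary
            rgb = plane[..., :3].astype(np.float32)
            alpha = plane[..., 3:4].astype(np.float32) / 255.0    # 0 = transparent, 1 = opaque
            out = alpha * rgb + (1.0 - alpha) * out               # partly transparent overlay
        return out.astype(np.uint8)

    # display = composite(surround_image, [first_overlay, second_overlay, third_overlay])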

[0080] By overlaying the overlay planes on the surround image 26, a combined representation of the surround 16, as perceived by the vehicle driver, is obtained together with the parking information, which in this case relates to the position of the parking space 24 in the surround 16 of the vehicle 10.

LIST OF REFERENCE SIGNS

[0081] 10 Vehicle
[0082] 12 Driving assistance system
[0083] 14 Camera-based surround capture system
[0084] 16 Surround
[0085] 18 Processing unit
[0086] 20 Data bus
[0087] 22 Surround sensor
[0088] 24 Target position, parking space
[0089] 26 Surround image
[0090] 28 Drivable area
[0091] 30 Non-drivable area
[0092] 32 Obstacle