METHOD AND DEVICE FOR THE DISTORTION-FREE DISPLAY OF AN AREA SURROUNDING A VEHICLE
20170341582 · 2017-11-30
Assignee
Inventors
CPC classification
B60R2300/306
PERFORMING OPERATIONS; TRANSPORTING
H04N7/181
ELECTRICITY
B60R2300/602
PERFORMING OPERATIONS; TRANSPORTING
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
G06V20/58
PHYSICS
B60R2300/307
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/301
PERFORMING OPERATIONS; TRANSPORTING
International classification
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A camera surround view system for a vehicle includes at least one vehicle camera that supplies camera images, which are processed by a data processing unit to generate an image of the surroundings. The image of the surroundings is displayed on a display unit. The data processing unit re-projects textures, which are detected by the vehicle cameras, on an adaptive re-projection surface, which is similar to the area surrounding the vehicle, the re-projection surface being calculated based on sensor data provided by vehicle sensors. The data processing unit adapts the re-projection surface depending on a position and an orientation of a virtual camera.
Claims
1. A camera surround view system for a vehicle, the system comprising: at least one vehicle camera supplying camera images; a data processing unit processing the supplied camera images and generating an image of the surroundings; and a display unit displaying the image of the surroundings; wherein the data processing unit re-projects textures, which are detected by the vehicle cameras, on an adaptive re-projection surface, which is similar to an area surrounding the vehicle, the re-projection surface being calculated on the basis of sensor data provided by vehicle sensors, wherein the sensor data provided by the vehicle sensors reproduce the area surrounding the vehicle, wherein the sensor data comprise parking distance data, radar data, LiDAR data, laser scanning data and/or movement data, and wherein the data processing unit adapts the re-projection surface depending on a position and an orientation of a virtual camera.
2. The camera surround view system of claim 1, wherein the calculated adaptive re-projection surface comprises a dynamically modifiable grid.
3. The camera surround view system of claim 2, wherein the grid of the re-projection surface is a three-dimensional grid which can be dynamically modified depending on the sensor data provided.
4. The camera surround view system of claim 1, wherein the display unit is a touchscreen and the position and the orientation of the virtual camera are adjusted by a user.
5. A driver assistance system for a vehicle having a camera surround view system, the camera surround view system comprising: at least one vehicle camera supplying camera images; a data processing unit that processes the supplied camera images to generate an image of the surroundings; and a display unit displaying the image of the surroundings; wherein the data processing unit re-projects textures, which are detected by the vehicle cameras, on an adaptive re-projection surface, which is similar to an area surrounding the vehicle, the re-projection surface being calculated on the basis of sensor data provided by vehicle sensors, wherein the sensor data provided by the vehicle sensors reproduce the area surrounding the vehicle, wherein the sensor data comprise parking distance data, radar data, LiDAR data, laser scanning data and/or movement data, and wherein the data processing unit adapts the re-projection surface depending on a position and an orientation of a virtual camera.
6. A method for a distortion-free display of an area surrounding a vehicle, the method comprising: generating camera images of the area surrounding the vehicle by vehicle cameras; processing the generated camera images to generate an image of the area surrounding the vehicle; and re-projecting textures detected by the vehicle cameras on an adaptive re-projection surface, which is similar to the area surrounding the vehicle, the re-projection surface calculated based on sensor data provided by vehicle sensors, wherein the sensor data provided by the vehicle sensors show the area surrounding the vehicle, and the sensor data include parking distance data, radar data, LiDAR data, laser scanning data and/or movement data; and adapting the re-projection surface depending on a position and/or an orientation of a virtual camera which supplies a bird's eye perspective camera image of the vehicle.
7. The method of claim 6, wherein the adaptive re-projection surface comprises a dynamically modifiable grid.
8. The method according to claim 7, wherein the grid of the re-projection surface is a three-dimensional grid which is dynamically modified depending on the sensor data provided.
9. The method of claim 8, wherein the position and the orientation of the virtual camera are adjusted by a user via a user interface.
10. A computer program comprising commands which, when executed, perform a method for a distortion-free display of an area surrounding a vehicle, the method comprising: generating camera images of the area surrounding the vehicle by vehicle cameras; processing the generated camera images to generate an image of the area surrounding the vehicle; and re-projecting textures detected by the vehicle cameras on an adaptive re-projection surface, which is similar to the area surrounding the vehicle, the re-projection surface calculated based on sensor data provided by vehicle sensors, wherein the sensor data provided by the vehicle sensors show the area surrounding the vehicle, and the sensor data include parking distance data, radar data, LiDAR data, laser scanning data and/or movement data; and adapting the re-projection surface depending on a position and/or an orientation of a virtual camera which supplies a bird's eye perspective camera image of the vehicle.
11. A road vehicle having a driver assistance system and a camera surround view system, the camera surround view system comprising: at least one vehicle camera supplying camera images; a data processing unit that processes the supplied camera images to generate an image of the surroundings; and a display unit displaying the image of the surroundings; wherein the data processing unit re-projects textures, which are detected by the vehicle cameras, on an adaptive re-projection surface, which is similar to an area surrounding the vehicle, the re-projection surface being calculated on the basis of sensor data provided by vehicle sensors, wherein the sensor data provided by the vehicle sensors reproduce the area surrounding the vehicle, wherein the sensor data comprise parking distance data, radar data, LiDAR data, laser scanning data and/or movement data, and wherein the data processing unit adapts the re-projection surface depending on a position and an orientation of a virtual camera.
Description
DESCRIPTION OF DRAWINGS
[0017] Like reference symbols in the various drawings indicate like elements.
DETAILED DESCRIPTION
[0018] Referring to
[0019] The sensors 5 shown in
[0020] The re-projection surface calculated by the data processing unit 3 based on the sensor data may include a dynamically modifiable grid or mesh. In some examples, this grid of the re-projection surface is dynamically modified depending on the sensor data provided. The grid of the re-projection surface may be a three-dimensional grid. The re-projection surface calculated by the data processing unit 3 is not static, but is dynamically and adaptively adjusted to the current sensor data supplied by the vehicle sensors 5. In some examples, these vehicle sensors 5 include a mono front camera or a stereo camera. In addition, the sensor units 5 can include a LiDAR system which supplies LiDAR data, or a radar system which transmits radar data from the surroundings to the data processing unit 3. The data processing unit 3 may contain one or more microprocessors that process the sensor data and calculate a re-projection surface therefrom in real time. Textures, which are detected by the vehicle cameras 2, are projected or re-projected onto this calculated re-projection surface, which is similar to the area surrounding the vehicle. The arrangement of the vehicle cameras 2 may vary. In some examples, the vehicle has four vehicle cameras 2 on four different sides of the vehicle. The vehicle may be a road vehicle, for example a truck or a car. The textures of the surroundings detected by the cameras 2 of the camera system are re-projected onto the adaptive re-projection surface by the camera surround view system 1 to reduce or eliminate the aforementioned artifacts. The quality of the displayed image of the area surrounding the vehicle is therefore considerably improved by the camera surround view system 1. Objects in the area surrounding the vehicle, for example other vehicles parked in the vicinity or persons located in the vicinity, appear less distorted than in systems which use a static re-projection surface.
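To make the idea of a dynamically modifiable three-dimensional grid concrete, the following is a minimal sketch of one way such a surface could be deformed from distance measurements. It is not the patented implementation: the sensor format (one obstacle distance per bearing around the vehicle), the function name, and the linear wall profile are all assumptions chosen for illustration.

```python
import numpy as np

def adaptive_reprojection_surface(distances_m, grid_size=32, extent_m=10.0):
    """Deform a flat ground grid into an adaptive re-projection surface.

    distances_m: hypothetical per-bearing obstacle distances (e.g. from
    parking sensors, radar, or LiDAR), indexed by angle around the vehicle.
    Vertices beyond the measured distance are raised, approximating a
    'bowl' whose wall follows the real obstacles instead of a fixed radius.
    """
    n_bearings = len(distances_m)
    xs = np.linspace(-extent_m, extent_m, grid_size)
    ys = np.linspace(-extent_m, extent_m, grid_size)
    gx, gy = np.meshgrid(xs, ys)
    r = np.hypot(gx, gy)                        # radial distance of each vertex
    theta = np.mod(np.arctan2(gy, gx), 2 * np.pi)
    bearing = (theta / (2 * np.pi) * n_bearings).astype(int) % n_bearings
    wall = np.asarray(distances_m)[bearing]     # obstacle distance per vertex
    # Height 0 on the ground plane, rising linearly past the measured obstacle.
    z = np.maximum(0.0, r - wall)
    return np.stack([gx, gy, z], axis=-1)       # (grid_size, grid_size, 3) mesh

# Eight hypothetical sensor bearings; a close obstacle on one side pulls
# the bowl wall inward there, so textures of that obstacle land on a
# nearby surface instead of being smeared across a distant static wall.
surface = adaptive_reprojection_surface([5.0, 2.0, 5.0, 5.0, 5.0, 5.0, 5.0, 5.0])
print(surface.shape)  # (32, 32, 3)
```

Recomputing this mesh each frame from the latest sensor data is what makes the surface adaptive rather than static.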
[0021] The data processing unit 3 controls a virtual camera 6 as shown in

[0023] In a first step S1, camera images of the area surrounding the vehicle are generated by cameras 2 of the vehicle F. For example, the camera images are generated by multiple vehicle cameras 2 that are mounted on different sides of the vehicle.
[0024] The generated camera images are then processed in step S2 to generate an image of the area surrounding the vehicle. In some examples, the processing of the generated camera images is carried out by a data processing unit 3, as shown in
[0025] In a further step S3, a re-projection surface is first calculated based on the sensor data provided, and textures, which are detected by the vehicle cameras, are subsequently re-projected onto this calculated adaptive re-projection surface. The adaptive re-projection surface includes a dynamically modifiable grid which is dynamically modified depending on the sensor data provided. This grid may be a three-dimensional grid.
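Re-projecting textures onto the surface amounts to finding, for each surface vertex, where that vertex appears in a camera image. A common way to do this is a pinhole projection; the sketch below assumes this model, and the intrinsic matrix values are purely illustrative, not calibration data from the disclosure.

```python
import numpy as np

def texture_lookup(vertices, K, width, height):
    """Project 3-D surface vertices into a camera image (pinhole model)
    to obtain per-vertex texture coordinates.

    vertices: (N, 3) points in the camera frame (z pointing forward, metres).
    K: 3x3 intrinsic matrix; width/height: image size in pixels.
    """
    v = np.asarray(vertices, dtype=float)
    uvw = (K @ v.T).T                  # homogeneous pixel coordinates
    uv = uvw[:, :2] / uvw[:, 2:3]      # perspective divide
    # Normalise to [0, 1] texture coordinates, clamping off-image points.
    return np.clip(uv / [width, height], 0.0, 1.0)

K = np.array([[500.0,   0.0, 320.0],   # fx, cx (hypothetical calibration)
              [  0.0, 500.0, 240.0],   # fy, cy
              [  0.0,   0.0,   1.0]])
# A vertex straight ahead of the camera maps to the principal point,
# i.e. the centre of the texture.
print(texture_lookup([[0.0, 0.0, 5.0]], K, 640, 480))  # [[0.5 0.5]]
```

In a full system this lookup would run per vertex and per camera on the GPU, with lens-distortion correction applied before the pinhole projection.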
[0026] In a step S4, the re-projection surface is adapted by the data processing unit 3 depending on a position and/or an orientation of a virtual camera 6 that supplies a bird's eye perspective camera image of the vehicle F from above.
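The virtual camera pose on which step S4 depends can be described by a standard look-at view matrix. The helper below is a generic sketch of that construction, assuming a z-up world frame with the vehicle at the origin; for the bird's-eye perspective, the eye sits above the vehicle and the target is the vehicle itself. The resulting pose can then drive how the re-projection surface is adapted, for example by refining the grid in the region the virtual camera looks at.

```python
import numpy as np

def look_at(eye, target, up=(0.0, 1.0, 0.0)):
    """Build a right-handed view matrix for a virtual camera.

    eye/target are hypothetical world-space positions (z-up frame);
    for a bird's-eye view the eye sits above the vehicle and the
    target is the vehicle's position on the ground.
    """
    eye, target, up = (np.asarray(v, dtype=float) for v in (eye, target, up))
    fwd = target - eye
    fwd /= np.linalg.norm(fwd)
    right = np.cross(fwd, up)
    right /= np.linalg.norm(right)
    true_up = np.cross(right, fwd)
    view = np.eye(4)
    view[0, :3], view[1, :3], view[2, :3] = right, true_up, -fwd
    view[:3, 3] = -view[:3, :3] @ eye      # translate world into camera frame
    return view

# Bird's-eye virtual camera 10 m above the vehicle, looking straight down.
view = look_at((0.0, 0.0, 10.0), (0.0, 0.0, 0.0))
# The vehicle origin lands 10 m in front of the camera (negative z forward).
print(view @ np.array([0.0, 0.0, 0.0, 1.0]))  # [  0.   0. -10.   1.]
```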
[0027] In some implementations, the method shown in
[0028] A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.