Vehicle with surroundings-monitoring device and method for operating such a monitoring device
10710506 · 2020-07-14
Assignee
Inventors
CPC classification
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/802
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/303
PERFORMING OPERATIONS; TRANSPORTING
H04N7/181
ELECTRICITY
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
G06V20/58
PHYSICS
B60R1/002
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/607
PERFORMING OPERATIONS; TRANSPORTING
G06T3/4038
PHYSICS
G06V20/56
PHYSICS
B60R2300/70
PERFORMING OPERATIONS; TRANSPORTING
International classification
H04N7/18
ELECTRICITY
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
G06T3/40
PHYSICS
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
Abstract
A vehicle with a surroundings-monitoring device contains an image-capturing device with two cameras and records surroundings images of the vehicle. A first camera is arranged in the region of a first edge of the vehicle, at which edge a first vehicle side face and a vehicle front face or a vehicle rear face abut one another. A second camera is arranged in the region of a second edge of the vehicle, which differs from the first edge, and at which a second vehicle side face which differs from the first vehicle side face and the vehicle front face or the vehicle rear face abut one another. The image-capturing device is arranged such that the image-detection region of the first camera includes at least part of the surroundings of the first vehicle side face and at least part of the surroundings of the vehicle front face or at least part of the surroundings of the vehicle rear face, and the image-detection region of the second camera includes at least part of the surroundings of the second vehicle side face and at least part of the surroundings of the vehicle front face or at least part of the surroundings of the vehicle rear face.
Claims
1. A vehicle having a surroundings monitoring device, which monitors surroundings of the vehicle, comprising an image capture device with at least two cameras which captures images of the surroundings of the vehicle and/or of the vehicle itself; a first camera arrangement of the image capture device, in the case of which a first camera is arranged in the region of a first edge of the vehicle at which a first vehicle side surface and a vehicle front surface or a vehicle rear surface converge, and in the case of which a second camera is arranged in the region of a second edge, which differs from the first edge, of the vehicle at which a second vehicle side surface, which differs from the first vehicle side surface, and the vehicle front surface or the vehicle rear surface converge; and an image evaluation device having individual object identification algorithms assigned to each camera, wherein the first camera arrangement is further arranged such that the image capture area of the first camera encompasses at least a part of surroundings of the first vehicle side surface between planes containing the front and rear surfaces of the vehicle, the image capture area of the second camera encompasses at least a part of surroundings of the second vehicle side surface between the planes containing the front and rear surfaces of the vehicle, and the image capture areas of the first and second cameras encompass at least a part of the surroundings of the vehicle front surface or at least a part of the surroundings of the vehicle rear surface between planes containing the first or second side surfaces.
2. The vehicle as claimed in claim 1, further comprising: a second camera arrangement of the image capture device, in the case of which a) a third camera is arranged at a third edge, which differs from the first and second edges, of the vehicle at which the first vehicle side surface and the vehicle front surface or the vehicle rear surface converge, and in the case of which a fourth camera is arranged at a fourth edge, which differs from the first, second and third edges, of the vehicle at which the second vehicle side surface and the vehicle front surface or the vehicle rear surface converge, wherein b) the image capture area of the third camera encompasses at least a part of the surroundings of the first vehicle side surface and at least a part of the surroundings of the vehicle front surface, if the at least one part of the surroundings of the vehicle front surface is not encompassed by the image capture area of the first camera, or encompasses at least a part of the surroundings of the vehicle rear surface, if the at least one part of the surroundings of the vehicle rear surface is not encompassed by the image capture area of the first camera, and c) the image capture area of the fourth camera encompasses at least a part of the surroundings of the second vehicle side surface and at least a part of the surroundings of the vehicle front surface, if the at least one part of the surroundings of the vehicle front surface is not encompassed by the image capture area of the second camera, or encompasses at least a part of the surroundings of the vehicle rear surface, if the at least one part of the surroundings of the vehicle rear surface is not encompassed by the image capture area of the second camera.
3. The vehicle as claimed in claim 2, wherein the first camera and the second camera, and/or the third camera and the fourth camera, are arranged in each case in the region of a highest point on the respectively associated edge.
4. The vehicle as claimed in claim 3, wherein the first image capture area and the second image capture area and/or the third image capture area and the fourth image capture area have in each case a central axis which has a vertical component.
5. The vehicle as claimed in claim 4, wherein the image evaluation device is further configured such that a) the images captured by the first camera arrangement and/or by the second camera arrangement and input into the image evaluation device are projected into the ground plane by way of a homographic transformation, b) based on the images projected into the ground plane, at least one object possibly situated in the surroundings of the vehicle is identified by way of integrated object identification algorithms, and the position of said object relative to the vehicle is determined, c) the images projected into the ground plane are amalgamated in a single representation, and said representation is generated as an aerial perspective, d) the aerial perspective is input into an image display device in order to be displayed.
6. The vehicle as claimed in claim 5, further comprising: a warning device which interacts with the image evaluation device such that a warning signal is generated if at least one identified object undershoots a predefined minimum distance to the respective vehicle surface or to the vehicle.
7. The vehicle as claimed in claim 1, wherein the vehicle front surface forms the foremost surface of the vehicle and the vehicle rear surface forms the rearmost surface of the vehicle.
8. The vehicle as claimed in claim 1, wherein said vehicle is a commercial vehicle, and the vehicle front surface comprises a front surface of a driver's cab of the commercial vehicle.
9. The vehicle as claimed in claim 1, wherein the vehicle is a single vehicle or a vehicle combination.
10. The vehicle as claimed in claim 2, wherein on the first vehicle side surface and on the second vehicle side surface, there is additionally arranged in each case at least one further camera which captures a surroundings area of the vehicle not captured by the image capture areas of the first camera and of the second camera and/or of the third camera and of the fourth camera.
11. The vehicle as claimed in claim 1, wherein the first camera and the second camera are arranged in each case in the region of a highest point on the respectively associated edge.
12. The vehicle as claimed in claim 11, wherein the first image capture area and the second image capture area have in each case a central axis which has a vertical component.
13. The vehicle as claimed in claim 12, further comprising: an image evaluation device which is designed such that a) the images captured by the first camera and/or by the second camera and input into the image evaluation device are projected into the ground plane by way of a homographic transformation, b) based on the images projected into the ground plane, at least one object possibly situated in the surroundings of the vehicle is identified by way of integrated object identification algorithms, and the position of said object relative to the vehicle is determined, c) the images projected into the ground plane are amalgamated in a single representation, and said representation is generated as an aerial perspective, d) the aerial perspective is input into an image display device in order to be displayed.
14. The vehicle as claimed in claim 13, further comprising: a warning device which interacts with the image evaluation device such that a warning signal is generated if at least one identified object undershoots a predefined minimum distance to the respective vehicle surface or to the vehicle.
15. A method for operating a surroundings monitoring device of a vehicle, which surroundings monitoring device comprises at least one image capture device, one image evaluation device and one image display device, the method comprising the steps of: a) the image capture device comprises at least two cameras which are arranged at vehicle edges of the vehicle and whose image capture areas encompass at least a part of surroundings of a vehicle front surface between planes containing the vehicle front surface and a vehicle rear surface or of the vehicle rear surface between the planes containing the vehicle front surface and the vehicle rear surface, and at least a part of surroundings of two vehicle side surfaces between planes containing the front and rear surfaces of the vehicle, the image capture device being configured to capture images of the surroundings of the vehicle and input signals representing said images into the image evaluation device; b) the images captured by the image capture device and input into the image evaluation device, the image evaluation device having individual object identification algorithms for each camera, are projected into the ground plane by way of a homographic transformation; c) based on the images projected into the ground plane, at least one object possibly situated in the surroundings of the vehicle is identified by way of the individual object identification algorithms, and the position of said object relative to the vehicle is determined; d) the images projected into the ground plane are amalgamated in a single representation, and said representation is generated as an aerial perspective; e) the aerial perspective is input into the image display device in order to be displayed.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE DRAWINGS
(23) With the preferred exemplary embodiment of the invention described below, the disadvantages that exist in the case of the above-described prior art are avoided.
(26) As is easily contemplated, the commercial vehicle 1 has four vehicle edges at which the two vehicle side surfaces 10, 12 transition into the vehicle front surface 14 and into the vehicle rear surface 16.
(27) Such transition or vehicle edges are therefore to be understood to mean substantially vertical and edge-like lines of convergence, and also rounded lines of convergence, at which the vehicle front surface 14 converges on or transitions into the two vehicle side surfaces 10, 12, and the latter converge on or transition into the vehicle rear surface 16. Accordingly, the vehicle edges form lines or linear structures on the outer skin or the bodyshell of the commercial vehicle at which the vehicle front surface 14 and the vehicle rear surface 16 transition into the two vehicle side surfaces 10, 12 with a change in direction, for example of 90 degrees.
(28) In the case of a first camera arrangement of the image capture device, a first camera 2a is arranged in the region of a first edge of the commercial vehicle 1, at which edge the first vehicle side surface 10 and the vehicle front surface 14 converge, and a second camera 2b is arranged in the region of a second edge, which differs from the first edge, at which the second vehicle side surface 12 and the vehicle front surface 14 converge.
(29) The image capture area 3a of the first camera 2a encompasses at least a part of the surroundings of the first vehicle side surface 10 and at least a part of the surroundings of the vehicle front surface 14. Correspondingly, the image capture area 3b of the second camera 2b encompasses at least a part of the surroundings of the second vehicle side surface 12 and at least a part of the surroundings of the vehicle front surface 14.
(30) Furthermore, the image capture device 110 also comprises a second camera device, in the case of which a third camera 2c is arranged at a third edge, which differs from the first and second edges, of the commercial vehicle at which the first vehicle side surface 10 and the vehicle rear surface 16 converge, and in the case of which a fourth camera 2d is arranged at a fourth edge, which differs from the first, second and third edges, of the commercial vehicle 1 at which the second vehicle side surface 12 and the vehicle rear surface 16 converge.
(31) Here, the image capture area 3c of the third camera 2c encompasses at least a part of the surroundings of the first vehicle side surface 10 and at least a part of the surroundings of the vehicle rear surface 16. The image capture area 3d of the fourth camera 2d encompasses at least a part of the surroundings of the second vehicle side surface 12 and at least a part of the surroundings of the vehicle rear surface 16.
(32) In other words, it is then the case that, at all four vehicle edges, there is provided in each case one camera 2a to 2d, the image capture areas 3a to 3d of which cameras encompass in each case at least a part of the surroundings of a vehicle side surface 10, 12 and at least a part of the surroundings of the vehicle front surface 14 or of the vehicle rear surface 16. Thus, all-round monitoring of the vehicle surroundings is possible with only four cameras.
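The all-round coverage with only four corner cameras can be illustrated with a minimal sketch; the set representation below is purely illustrative (the camera labels follow the reference designations 2a to 2d, the surface names are informal stand-ins for surfaces 10, 12, 14, 16):

```python
# Each corner camera monitors the two vehicle surfaces that converge
# at its edge; the union of the four capture areas covers all sides.
coverage = {
    "2a": {"front", "left"},    # first camera, front-left edge
    "2b": {"front", "right"},   # second camera, front-right edge
    "2c": {"rear", "left"},     # third camera, rear-left edge
    "2d": {"rear", "right"},    # fourth camera, rear-right edge
}
all_round = set().union(*coverage.values())
print(all_round == {"front", "rear", "left", "right"})  # True
```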
(33) In alternative embodiments, it is also possible for only a first camera device having a first camera 2a and having a second camera 2b at the two front vehicle edges or only a second camera device having a third camera 2c and having a fourth camera 2d at the two rear vehicle edges to be provided, wherein it is then consequently the case that either the front surroundings and the two side surroundings or the rear surroundings and the two side surroundings of the commercial vehicle 1 are monitored.
(34) The cameras 2a to 2d are in each case arranged in the region of a highest point on the respectively associated vehicle edge, so that their image capture areas 3a to 3d are directed obliquely downward into the surroundings of the commercial vehicle 1.
(35) If it is assumed that the funnel-shaped or cone-shaped image capture areas 3a to 3d have in each case one imaginary central axis, said central axes then, when viewed as a vector, have in each case one vertical component. In other words, the central axes of the image capture areas 3a to 3d of the four cameras 2a to 2d then point downward.
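The statement that each central axis, viewed as a vector, has a vertical (downward) component can be sketched as follows; the mounting angles used here are illustrative assumptions, not values from the description:

```python
import numpy as np

def central_axis(yaw_deg: float, pitch_deg: float) -> np.ndarray:
    """Unit vector of a camera's imaginary central axis.

    yaw is measured in the ground plane (0 = vehicle forward),
    pitch is the downward tilt from horizontal.
    """
    yaw, pitch = np.radians(yaw_deg), np.radians(pitch_deg)
    return np.array([
        np.cos(pitch) * np.cos(yaw),   # forward component
        np.cos(pitch) * np.sin(yaw),   # lateral component
        -np.sin(pitch),                # vertical component (negative = downward)
    ])

# A corner camera tilted 35 degrees below horizontal: its central
# axis has a negative vertical component, i.e. it points downward.
axis = central_axis(yaw_deg=45.0, pitch_deg=35.0)
print(axis[2] < 0)  # True
```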
(41) To obtain an improved single representation 8 of the object 6 in the aerial perspective, the blending axis 9 in the overlap area 4 of the cameras involved can be adjusted, for example pivoted, such that it does not pass through the object 6.
(42) If, in the case of the above-described surroundings monitoring with four cameras 2a to 2d, monitoring gaps arise in particular in the case of long vehicles, then at least one further camera may additionally be arranged on each of the two vehicle side surfaces 10, 12, which further camera captures a surroundings area of the vehicle not captured by the image capture areas 3a to 3d of the cameras 2a to 2d arranged at the vehicle edges.
(43) In addition to the four cameras 2a to 2d arranged at the vehicle edges, it is possible for any desired number of further cameras to be arranged on all surfaces of the commercial vehicle and in particular on the vehicle side surfaces 10, 12, on the vehicle front surface 14 and on the vehicle rear surface 16.
(46) As per step 1 of the method, images of the surroundings of the commercial vehicle 1 are captured by the cameras 2a to 2d of the image capture device 110, and signals representing said images are input into the image evaluation device 120.
(47) In step 2, it is for example the case that the distortion of the fish-eye lenses is compensated in the image evaluation device.
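A minimal sketch of such a distortion compensation, assuming a simple first-order radial model rather than the full fish-eye model an actual system would use (the coefficient k1 is an illustrative value):

```python
import numpy as np

def undistort(pts_dist: np.ndarray, k1: float, iters: int = 5) -> np.ndarray:
    """Invert the radial distortion x_d = x_u * (1 + k1 * |x_u|^2)
    by fixed-point iteration on normalized image coordinates."""
    und = pts_dist.copy()
    for _ in range(iters):
        r2 = np.sum(und ** 2, axis=1, keepdims=True)
        und = pts_dist / (1.0 + k1 * r2)
    return und

# Distort a known point, then recover it.
true_pt = np.array([[0.5, 0.2]])
k1 = 0.1
distorted = true_pt * (1.0 + k1 * np.sum(true_pt ** 2))
recovered = undistort(distorted, k1)
```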
(48) In step 3, the images, compensated with regard to distortion, are subjected to a homographic transformation in order to transform the images into a ground contact surface of the vehicle, that is to say into the ground surface.
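The homographic projection of step 3 can be sketched as a plain application of a 3×3 homography to pixel coordinates; the matrix H below is a made-up placeholder, whereas in practice it comes from each camera's calibration:

```python
import numpy as np

def project_to_ground(points_px: np.ndarray, H: np.ndarray) -> np.ndarray:
    """Map image pixel coordinates (N x 2) to ground-plane
    coordinates via a 3x3 homography H."""
    pts = np.column_stack([points_px, np.ones(len(points_px))])  # homogeneous
    mapped = pts @ H.T
    return mapped[:, :2] / mapped[:, 2:3]   # dehomogenize

# Illustrative homography (scale + offset), not a real calibration.
H = np.array([[0.01, 0.0, -3.2],
              [0.0, 0.01, -2.4],
              [0.0, 0.0, 1.0]])
ground = project_to_ground(np.array([[320.0, 240.0]]), H)
print(ground)  # [[0. 0.]]
```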
(49) Then (step 4), the images transformed into the ground surface are analyzed, in particular compared with one another, in the image evaluation device. The evaluation or analysis of the images is for example performed with regard to whether the images projected into the ground surface 18 differ, and if so, whether it is then the case that at least one three-dimensional object with a height projecting above the ground surface 18 is situated within the image capture areas 3a to 3d of the cameras 2a to 2d. This is because, in this case, an object also appears in different representations from different capture angles such as are provided by multiple cameras 2a to 2d, as has already been illustrated above.
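The comparison described in step 4 can be sketched as a per-pixel difference of two ground projections; the arrays here are synthetic toy data, whereas a real system would compare calibrated, registered projections:

```python
import numpy as np

def object_mask(proj_a: np.ndarray, proj_b: np.ndarray,
                threshold: float = 0.1) -> np.ndarray:
    """Pixels where two ground-projected views disagree are candidates
    for objects rising above the ground plane: flat ground texture
    projects identically from both cameras, a 3D object does not."""
    return np.abs(proj_a.astype(float) - proj_b.astype(float)) > threshold

a = np.zeros((4, 4))
b = np.zeros((4, 4))
b[1:3, 1:3] = 1.0          # "smeared" footprint of a raised object in view b
mask = object_mask(a, b)
print(int(mask.sum()))  # 4 differing pixels flag a possible object
```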
(50) Then, in step 5, the position of the identified object or of the identified objects in the surroundings of the vehicle is determined from the captured stereo images projected onto the ground surface and from the different representations (viewing angles) from the different cameras.
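The position determination of step 5 can be sketched as intersecting the ground-plane bearings from two cameras; the camera positions and object location below are illustrative values, not dimensions from the description:

```python
import numpy as np

def intersect_bearings(p1: np.ndarray, d1: np.ndarray,
                       p2: np.ndarray, d2: np.ndarray) -> np.ndarray:
    """Least-squares intersection of two ground-plane rays, each given
    by a camera position p and a unit direction d toward the object."""
    # Solve p1 + t1*d1 = p2 + t2*d2 for t1, t2.
    A = np.column_stack([d1, -d2])
    t = np.linalg.lstsq(A, p2 - p1, rcond=None)[0]
    return p1 + t[0] * d1

# Two front-corner cameras (positions in metres) both sight an object;
# their bearing rays cross at the object's ground position.
p1, p2 = np.array([0.0, -1.2]), np.array([0.0, 1.2])
obj = np.array([4.0, 0.5])
d1 = (obj - p1) / np.linalg.norm(obj - p1)
d2 = (obj - p2) / np.linalg.norm(obj - p2)
print(intersect_bearings(p1, d1, p2, d2))  # ≈ [4.0, 0.5]
```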
(51) Then, in step 6, the images projected onto the ground surface are amalgamated into a single image (stitching or blending) in order to obtain an aerial perspective of the entire surroundings of the vehicle. At the same time, depending on the determined position of the object or depending on the determined positions of the identified objects, it is also possible for a warning signal to be generated if necessary.
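The amalgamation of step 6 can be sketched as a hard seam along a blending axis; real systems typically feather the seam in the overlap area and, as described above for the blending axis 9, may move the seam so that it does not cut through an identified object. The arrays are synthetic toy data:

```python
import numpy as np

def stitch(proj_a: np.ndarray, proj_b: np.ndarray, axis_col: int) -> np.ndarray:
    """Amalgamate two ground-projected images into one aerial view,
    taking pixels left of the blending axis from camera A and pixels
    right of it from camera B (hard seam, no feathering)."""
    out = proj_a.copy()
    out[:, axis_col:] = proj_b[:, axis_col:]
    return out

a = np.full((4, 6), 1.0)   # ground projection of camera A
b = np.full((4, 6), 2.0)   # ground projection of camera B
top_view = stitch(a, b, axis_col=3)
```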
(52) The execution of steps 2 to 6, which relate to the evaluation of the images captured in step 1, is performed in the image evaluation device 120 (see
(53) Finally, in a step 7, the individual images amalgamated to form a single representation (stitching or blending) are displayed by way of the image display device 130, for example on a monitor in the driver's cab 140. Here, it is additionally also possible for the warning signal to be output visually and/or acoustically.
(54) In summary, the method comprises the following steps:
(55) a) At least one image capture device 110, which comprises at least two cameras 2a, 2b and 2c, 2d which are arranged at vehicle edges of the vehicle and whose image capture areas 3a, 3b and 3c, 3d encompass at least a part of the surroundings of a vehicle front surface 14 or of a vehicle rear surface 16 and at least a part of the surroundings of the two vehicle side surfaces 10, 12, captures images of the surroundings of the vehicle 1 and inputs signals representing said images into an image evaluation device 120.
(56) b) The images captured by the image capture device 110 and input into the image evaluation device 120 are projected into the ground plane 18 by way of a homographic transformation.
(57) c) Based on the images projected into the ground plane 18, at least one object 6 situated in the surroundings of the vehicle is identified by way of integrated object identification algorithms, and the position of said object relative to the vehicle 1 is determined.
(58) d) The images projected into the ground plane 18 are amalgamated in a single representation 8 (stitched), and said representation 8 is generated as an aerial perspective.
(59) e) The aerial perspective 8 is input into the image display device 130 in order to be displayed there.
(60) f) Depending on the determined position of an identified object 6, a warning signal is generated.
(61) The warning device 135 interacts with the image evaluation device 120 such that the warning signal is generated if at least one identified object 6 undershoots a predefined minimum distance to the respective vehicle surface or to the vehicle 1.
(62) The above step f) and the warning device 135 are merely optional, and are provided for example in the context of a driver assistance system. By contrast, it may be sufficient if the driver can assess whether or not a risk of collision exists on the basis of the (all-round) representation 8 shown on the image display device 130.
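The optional warning step f) reduces to a simple distance check; the 2 m threshold below is an illustrative stand-in for the predefined minimum distance:

```python
import math

def warning_signal(objects, min_distance=2.0):
    """Return True if any identified object's ground-plane position
    (x, y, in metres relative to the vehicle reference point)
    undershoots the predefined minimum distance."""
    return any(math.hypot(x, y) < min_distance for (x, y) in objects)

print(warning_signal([(5.0, 1.0), (1.2, 0.5)]))  # True: second object is within 2 m
print(warning_signal([(5.0, 1.0)]))              # False: no object closer than 2 m
```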
LIST OF REFERENCE DESIGNATIONS
(63)
1 Vehicle
2 Camera
3 Image capture area
4 Overlap area
5 Stereo recording area
6 Aerial perspective of a single camera
7 Stereo image of at least two cameras
8 Display of amalgamated images
9 Blending axis
10 First vehicle side surface
12 Second vehicle side surface
14 Vehicle front surface
16 Vehicle rear surface
100 Surroundings monitoring device
110 Image capture device
120 Image evaluation device
130 Image display device
135 Warning device
140 Driver's cab
(64) The foregoing disclosure has been set forth merely to illustrate the invention and is not intended to be limiting. Since modifications of the disclosed embodiments incorporating the spirit and substance of the invention may occur to persons skilled in the art, the invention should be construed to include everything within the scope of the appended claims and equivalents thereof.