Method for determining a current distance and/or a current speed of a target object based on a reference point in a camera image, camera system and motor vehicle
11236991 · 2022-02-01
CPC classification
G01B11/14
PHYSICS
International classification
G01B11/14
PHYSICS
Abstract
A method for determining a current distance and/or a current speed of a target object relative to a motor vehicle based on an image of the target object, in which the image is provided by a camera of the motor vehicle, where characteristic features of the target object are extracted from the image and a reference point associated with the target object is determined based on the characteristic features for determining the distance and/or the speed, wherein the distance and/or the speed are determined based on the reference point, and a baseline is determined in the image based on the characteristic features, which is in a transition area from the depicted target object to a ground surface depicted in the image, and a point located on the baseline is determined as the reference point.
Claims
1. A method for determining a distance and/or a speed of a target object relative to a motor vehicle based on at least one image of the target object, the method comprising: extracting characteristic features of the target object from the at least one image, the at least one image being provided by a single camera of a plurality of cameras of the motor vehicle; determining a reference point associated with the target object based on the characteristic features; determining the distance and/or the speed of the target object based on the reference point; and determining a baseline in the at least one image based on the characteristic features, the baseline being in a transition area where the target object intersects a ground surface in the at least one image, wherein a point located on the baseline is determined as the reference point, wherein the baseline is a straight line in the at least one image and delimits a bottom edge of the target object, and wherein determining the baseline comprises: determining an orientation of the baseline in the at least one image, determining an auxiliary line along the orientation and disposed above the target object in the at least one image, measuring a distance in the at least one image between each of the characteristic features and the auxiliary line, and selecting, among the characteristic features, a ground feature that has a greatest distance to the auxiliary line, wherein the baseline is parallel to the auxiliary line and extends through the ground feature, and wherein the characteristic features of the target object are tracked over a sequence of images, wherein optical flow vectors are determined as being the characteristic features, directional values of the optical flow vectors characterizing a direction of movement for each of the characteristic features over the sequence, and wherein an orientation of the baseline is determined based on the directional values of the optical flow vectors.
2. The method according to claim 1, wherein a subset of the directional values is selected from the directional values of the optical flow vectors by filtering, and wherein the orientation of the baseline is determined based on the selected subset of the directional values.
3. The method according to claim 2, wherein the filtering is performed by means of a histogram.
4. The method according to claim 3, wherein, for providing the histogram, a plurality of value intervals for the directional values are defined, wherein, a number of the directional values is determined for each of the plurality of value intervals, wherein the filtering includes that a main interval is detected, the main interval having the greatest number of the directional values, and wherein for the subset for determining the orientation of the baseline, exclusively those directional values are selected that are in the main interval, in particular in the main interval and additionally in preset value intervals around the main interval.
5. The method according to claim 2, wherein an average value is calculated from the selected subset of the directional values as the orientation of the baseline.
6. The method according to claim 1, wherein the ground feature represents an exterior feature of the target object at a greatest distance, among the characteristic features, from the auxiliary line in a direction perpendicular to the orientation of the baseline.
7. The method according to claim 1, wherein the reference point is determined as an intersection between the baseline and a lateral bounding line, which laterally bounds the target object in the image.
8. The method according to claim 7, wherein the bounding line is a line vertically oriented in the image, which extends through a characteristic feature, which represents an exterior feature of the target object in a direction perpendicular to the vertical bounding line.
9. A camera system for a motor vehicle comprising: a camera that provides at least one image of an environmental region of the motor vehicle; and an image processing device configured to perform the method according to claim 1.
10. A motor vehicle including the camera system according to claim 9.
Description
(1) Now, the invention is explained in more detail based on a preferred embodiment as well as with reference to the attached drawings.
(8) A motor vehicle 1 illustrated in
(9) The camera 3 is a front camera disposed in the front area of the motor vehicle 1, for example on a front bumper 7. The camera 3 is therefore disposed on the front of the motor vehicle 1. The second camera 4 is for example a rear-view camera, which is disposed in the rear area, for example on a rear bumper 8 or a tailgate. The lateral cameras 5, 6 can be integrated into the respective exterior mirrors.
(10) The first camera 3 captures an environmental region 9 in front of the motor vehicle 1. Correspondingly, the camera 4 captures an environmental region 10 behind the motor vehicle 1. The lateral cameras 5, 6 capture environmental regions 11 and 12, respectively, laterally beside the motor vehicle 1. The cameras 3, 4, 5, 6 can for example be so-called fish-eye cameras with a relatively wide opening angle, which can for example lie in a range from 160° to 200°. The cameras 3, 4, 5, 6 can be CCD cameras or CMOS cameras. They can also be video cameras, each able to provide a plurality of frames per second. These images are communicated to a central electronic image processing device 13, which processes the images of all of the cameras 3, 4, 5, 6.
(11) Optionally, the image processing device 13 can be coupled to an optical display device 14, for example an LCD display. Very different views, which can be selected according to the driving situation, can then be presented on the display 14. For example, the image processing device 13 can generate, from the images of all of the cameras 3, 4, 5, 6, an overall presentation showing the motor vehicle 1 and its environment 9, 10, 11, 12 from a bird's eye view, and thus from a point of view located above the motor vehicle 1. Such a bird's eye view is already prior art and can be generated by image processing.
(12) In the images of the cameras 3, 4, 5, 6, the image processing device 13 can also identify target objects, in particular other vehicles. An exemplary image 15 of one of the cameras 3, 4, 5, 6 is shown in
(13) Thus, characteristic features 18 are detected in the image 15, and those features 18 associated with the target object 17 are, for example, combined into a cluster. The target object 17 can then also be tracked over the sequence of images, for example by means of the Lucas-Kanade method.
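The directional values used in claim 1 follow from the optical flow vectors of the tracked features. A minimal numpy-only sketch (the function name and the representation of features as (x, y) coordinate arrays are illustrative assumptions, not part of the patent):

```python
import numpy as np

def flow_directions(prev_pts, curr_pts):
    """Optical flow vectors between matched feature positions in two
    consecutive frames, and the direction (in degrees, relative to the
    image x axis) of each vector.

    prev_pts, curr_pts: arrays of shape (N, 2) with (x, y) coordinates
    of the same N characteristic features in frames t-1 and t.
    """
    flow = np.asarray(curr_pts, float) - np.asarray(prev_pts, float)
    # Angle of each movement; note that the image y axis points downward
    angles = np.degrees(np.arctan2(flow[:, 1], flow[:, 0]))
    return flow, angles
```

In practice the matched positions would come from a tracker such as a pyramidal Lucas-Kanade implementation; the direction computation itself is independent of how the correspondences are obtained.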
(14) With reference now to
(15) With reference again to
(16) First, an orientation of the baseline 21 in the image 15 is determined, i.e. an angle α between the baseline 21 and the x axis of the image frame. In determining the orientation α, a histogram 23 according to
(17) Then, the average value mentioned above is used as the orientation α of the baseline 21 according to
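The histogram-based filtering of claims 3 to 5 (a histogram over preset value intervals, detection of the main interval with the greatest count, optionally extended by neighbouring intervals, then an average over the selected subset) can be sketched as follows; the bin width, the neighbour count, and the names are illustrative assumptions:

```python
import numpy as np

def baseline_orientation(directions_deg, bin_width=5.0, neighbor_bins=1):
    """Estimate the baseline orientation from flow direction values.

    Builds a histogram over preset value intervals, detects the main
    interval (greatest number of directional values), keeps only the
    directions falling into the main interval and `neighbor_bins`
    adjacent intervals, and returns their average as the orientation.
    """
    d = np.asarray(directions_deg, float)
    edges = np.arange(-180.0, 180.0 + bin_width, bin_width)
    counts, edges = np.histogram(d, bins=edges)
    main = int(np.argmax(counts))                  # main interval
    lo = edges[max(main - neighbor_bins, 0)]
    hi = edges[min(main + 1 + neighbor_bins, len(edges) - 1)]
    subset = d[(d >= lo) & (d < hi)]               # filtered subset
    return subset.mean()
```

Outliers such as flow vectors from the background or from reflections fall into sparsely populated intervals and are discarded, so the average is taken only over the consistent majority of directions.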
(18) The position of the baseline 21 is then defined by the ground feature 29 such that the baseline 21 extends through this ground feature 29. In other words, the auxiliary line 27 is displaced towards the ground feature 29.
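Selecting the ground feature as the characteristic feature with the greatest perpendicular distance to the auxiliary line, and displacing the line through it, might be sketched like this (numpy-only, hypothetical names):

```python
import numpy as np

def find_ground_feature(features, alpha_deg, aux_point):
    """Pick the ground feature: the characteristic feature with the
    greatest perpendicular distance to the auxiliary line, which runs
    at angle alpha_deg through aux_point above the target object.

    The baseline is then the parallel line through this feature,
    i.e. the auxiliary line displaced towards it.
    """
    f = np.asarray(features, float)
    a = np.radians(alpha_deg)
    dx, dy = np.cos(a), np.sin(a)
    rel = f - np.asarray(aux_point, float)
    # Magnitude of the 2D cross product with the line direction gives
    # the perpendicular distance of each feature to the auxiliary line
    dist = np.abs(dx * rel[:, 1] - dy * rel[:, 0])
    i = int(np.argmax(dist))
    return f[i], dist[i]
```

Because the auxiliary line lies above the depicted object and the image y axis points downward, the farthest feature is the lowest one, i.e. the feature at the transition to the ground surface.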
(19) Then, the reference point 20 is found on the baseline 21. For this purpose, a straight, vertical bounding line 30 is first defined, which extends parallel to the y axis of the image frame. This bounding line 30 extends through a feature 31 of the target object 17 which represents an exterior feature of the target object 17 in the x direction, i.e. in the direction perpendicular to the bounding line 30, and which is therefore located outermost. This feature 31 can also be referred to as the "farther-most feature". Viewed in the x direction, this feature 31 is closest to the camera of the motor vehicle 1.
(20) The reference point 20 of the image 15 is then defined as the intersection of the baseline 21 with the bounding line 30.
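The intersection of the baseline (through the ground feature at angle α to the x axis) with the vertical bounding line reduces to one line of trigonometry; a minimal sketch with assumed names:

```python
import math

def reference_point(ground_feature, alpha_deg, x_bound):
    """Reference point as the intersection of the baseline (passing
    through ground_feature = (x0, y0) at angle alpha_deg to the image
    x axis) with the vertical bounding line x = x_bound."""
    x0, y0 = ground_feature
    y = y0 + math.tan(math.radians(alpha_deg)) * (x_bound - x0)
    return (x_bound, y)
```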
(21) Additionally or alternatively, the orientation α of the baseline 21 can also be determined based on a main extension direction 32 of the roadway 16. To this end, the main extension direction 32 of the roadway 16 can first be detected based on the image 15. The direction 32 of the roadway 16 can be determined by a method such as the Hough transform, provided the roadway is visible and easily discernible.
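The patent only names a Hough-type method; a minimal, self-contained accumulator over (θ, ρ) that estimates the dominant direction among a set of edge points could look as follows. The parameter resolutions and names are assumptions, and a real system would typically use an optimized library implementation:

```python
import numpy as np

def dominant_direction(points, n_theta=180, rho_res=1.0):
    """Rough estimate of the dominant line direction among edge points
    using a straight-line Hough accumulator over (theta, rho), where
    theta is the angle of the line normal and rho = x*cos + y*sin.

    Returns the direction of the strongest line in degrees (0..179),
    i.e. perpendicular to the winning normal angle.
    """
    pts = np.asarray(points, float)
    thetas = np.radians(np.arange(n_theta))
    # rho for every (point, theta) pair, shape (N, n_theta)
    rho = pts[:, 0, None] * np.cos(thetas) + pts[:, 1, None] * np.sin(thetas)
    rho_max = np.abs(rho).max() + rho_res
    idx = np.round((rho + rho_max) / rho_res).astype(int)
    acc = np.zeros((int(2 * rho_max / rho_res) + 2, n_theta), dtype=int)
    cols = np.broadcast_to(np.arange(n_theta), idx.shape)
    np.add.at(acc, (idx, cols), 1)          # one vote per (rho, theta) pair
    _, theta_idx = np.unravel_index(np.argmax(acc), acc.shape)
    return (theta_idx + 90) % 180
```

With the coarse 1°/1-pixel quantization chosen here, the result can deviate by a few degrees from the true direction; finer resolutions trade accuracy against accumulator size.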
(22) A flow diagram of the above-described method is shown in
(23) As soon as the reference point 20 for the target object 17 is defined, the distance of the target object 17 (of the reference point 20) from the motor vehicle 1 and/or the relative speed (based on multiple images 15) can be determined.
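The patent does not spell out the projection model for this last step. One common way to turn a ground-contact reference point into a metric distance is a flat-ground pinhole model; the camera height, focal length in pixels, horizon row, and zero camera tilt are all assumptions of this sketch, not claims of the patent. The relative speed then follows from two consecutive distance estimates:

```python
def ground_distance(v_px, v0_px, focal_px, cam_height_m):
    """Distance to a point on a flat ground plane from its image row,
    assuming a pinhole camera at height cam_height_m with zero tilt:
    Z = f * h / (v - v0).

    v_px:  image row of the reference point (below the horizon).
    v0_px: image row of the horizon (principal point row for zero tilt).
    """
    dv = v_px - v0_px
    if dv <= 0:
        raise ValueError("reference point must lie below the horizon")
    return focal_px * cam_height_m / dv

def relative_speed(dist_prev_m, dist_curr_m, dt_s):
    """Relative speed from two consecutive distance estimates taken
    dt_s seconds apart (negative = target approaching)."""
    return (dist_curr_m - dist_prev_m) / dt_s
```

Because the reference point lies in the transition area between object and ground surface, this flat-ground assumption is exactly what makes a single camera sufficient for the distance estimate.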