Method and device for determining the orientation of a surface of an object
12136275 · 2024-11-05
CPC classification
G06V10/255
PHYSICS
G05D1/617
PHYSICS
G06V10/751
PHYSICS
B60W2552/53
PERFORMING OPERATIONS; TRANSPORTING
G06V20/58
PHYSICS
G06V20/588
PHYSICS
B60W30/09
PERFORMING OPERATIONS; TRANSPORTING
G05D1/619
PHYSICS
H04N13/264
ELECTRICITY
B60R2300/00
PERFORMING OPERATIONS; TRANSPORTING
B60W30/06
PERFORMING OPERATIONS; TRANSPORTING
G05D1/646
PHYSICS
H04N13/221
ELECTRICITY
International classification
G05D1/617
PHYSICS
B60W30/06
PERFORMING OPERATIONS; TRANSPORTING
B60W30/08
PERFORMING OPERATIONS; TRANSPORTING
B60W30/09
PERFORMING OPERATIONS; TRANSPORTING
G05D1/646
PHYSICS
G06V10/75
PHYSICS
G06V20/56
PHYSICS
Abstract
A method for determining the orientation of a surface of an object in a detection region in front of or behind a vehicle by means of a camera of the vehicle comprises the following steps: detecting a first image of the detection region by means of the camera, detecting a second image of the detection region, following in time on the detecting of the first image, by means of the camera, generating first image data corresponding to the first image and second image data corresponding to the second image, determining eight image coordinates of four pixels each in the first image and the second image, corresponding to four points on the surface of the object, by means of the first image data and the second image data, determining a normal vector of the surface of the object by means of the eight image coordinates, and determining the orientation of the surface of the object by means of the normal vector.
Claims
1. A method for determining an orientation of a surface of an object in a detection region before or behind a vehicle using a camera of the vehicle, comprising: detecting a first image of the detection region using the camera, detecting a second image of the detection region following in time on the detecting of the first image using the camera, generating first image data corresponding to the first image and second image data corresponding to the second image, determining eight image coordinates of four pixels each in the first image and the second image, corresponding to four points on the surface of the object, using the first image data and the second image data, determining a normal vector of the surface of the object using the eight image coordinates, determining the orientation of the surface of the object using the normal vector, and using the orientation, by an autonomous driving system of the vehicle, to perform an avoidance or braking maneuver.
2. The method according to claim 1, wherein a homography is determined using the eight image coordinates, which converts the four pixels in the first image into the four pixels in the second image, wherein the homography is determined in the form of a homography matrix, and wherein the normal vector n is determined using the homography matrix.
3. The method according to claim 2, wherein the homography matrix is used to determine a distance between the camera and the surface of the object.
4. The method according to claim 2, wherein the homography matrix is determined in a numerical or analytical method.
5. The method according to claim 1, wherein the normal vector is used to determine a pitch angle, corresponding to a tilting of the normal vector about the transverse axis of the vehicle against the vertical axis of the vehicle-fixed system of coordinates, and wherein the pitch angle is used to determine the orientation of the surface of the object.
6. The method according to claim 1, wherein the first image data are used to determine an image region of the first image encompassing the surface of the object, and a subset of the first image data corresponding to the image region is used to determine image coordinates corresponding to the four pixels in the first image.
7. The method according to claim 6, wherein the image region is a rectangle minimally enclosing the surface of the object.
8. The method according to claim 1, wherein the first image data are used to determine, by using an image recognition method, the image coordinates corresponding to the four pixels in the first image.
9. The method according to claim 1, wherein the first image data, the second image data and the image coordinates corresponding to the four pixels in the first image are used to determine, making use of an image processing method, further image coordinates corresponding to the four pixels in the second image.
10. The method according to claim 1, wherein the orientation of the surface of the object is used to judge whether the vehicle can safely pass the object.
11. The method according to claim 1, wherein the orientation of the surface of the object is further processed as an input parameter of an object recognition method.
12. The method according to claim 1, wherein the orientation of the surface of the object is used to determine whether the object is a lane marking.
13. A device for determining an orientation of a surface of an object in a detection region before or behind a vehicle, comprising: a camera, which is adapted to: detect a first image of the detection region, detect a second image of the detection region following in time on the detecting of the first image, and generate first image data corresponding to the first image and second image data corresponding to the second image, and an image processing and evaluation unit, which is adapted to use the first image data and the second image data to determine eight image coordinates of four pixels each in the first image and the second image, corresponding to four points on the surface of the object, to determine, using the eight image coordinates, a normal vector n of the surface of the object, and to determine, using the normal vector n, the orientation of the surface of the object.
14. The device according to claim 13, wherein the camera is a mono camera.
15. The device according to claim 13, wherein the camera is firmly connected to the vehicle.
16. The method according to claim 8, wherein the image recognition method is a machine learning method.
17. The method according to claim 9, wherein the image processing method is a method based on optical flow or a machine learning method.
18. The device according to claim 13, wherein the vehicle is a highway vehicle.
Description
BRIEF DESCRIPTION OF THE SEVERAL VIEWS OF THE DRAWINGS
(1) Further features and benefits will emerge from the following description, which explains embodiments in more detail in connection with the figures.
DETAILED DESCRIPTION
(8) The object 12 is represented in
(9) The device 10 comprises a camera 16, which is designed as a mono camera and is firmly connected to the vehicle 14. The camera 16 is oriented in the direction of travel of the vehicle 14, such that it can detect a detection region on a traffic lane 18 in front of the vehicle 14. The device 10 moreover comprises an image processing and evaluation unit 20, which is connected by a cable 21 to the camera 16 and which forms a control unit of the device. The image processing and evaluation unit 20 is adapted to use image data generated from images 22, 24 (see
(10) By means of these eight image coordinates, the image processing and evaluation unit 20 determines a normal vector n of the surface 26 of the object 12. Moreover, the image processing and evaluation unit 20 determines by means of the normal vector n the orientation of the surface 26 of the object 12. The determination method is described more closely below with the aid of
(12) The position of the four points a to d on the surface 26 of the object 12 is represented in
(13) How the image coordinates of the four points a to d on the surface 26 of the object 12 will change depends on the one hand on the relative movement between the vehicle 14 and the object 12 and on the other hand on the orientation of the surface 26. Therefore, it is possible for the image processing and evaluation unit 20 to use the image data to determine the orientation of the surface 26 in the form of the normal vector n, being represented in
(17) In step S10, the sequence begins. In step S12, the camera 16 is used to record the first image 22 of the detection region. Next, in step S14, an image recognition method, especially a machine learning method, is used to determine whether a relevant object 12 is present in the first image 22.
(18) If no such object 12 can be found in the first image 22, the sequence ends in step S26. Otherwise, a suitable method is used to determine the image coordinates of the four pixels in the first image 22 corresponding to the four points a to d on the surface 26 of the object 12. The points are chosen such that three of the four points a to d are linearly independent.
(19) Next, in step S16, the second image 24 of the detection region is recorded, following step S12 in time. In step S18, the image recognition method is used to determine whether the object 12 is present in the second image 24. If the object 12 cannot be found again in the second image 24, the sequence ends in step S26. Otherwise, a suitable method, especially one based on optical flow, is used to determine the image coordinates of the four pixels in the second image 24 corresponding to the four points a to d on the surface 26 of the object 12.
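The point tracking between the two images is not spelled out in this text. As a minimal stand-in for the optical-flow step, the following sketch finds where the patch around a pixel of the first image reappears in the second image by sum-of-squared-differences block matching (grayscale frames as numpy arrays are assumed; `track_point` is an illustrative name, not from the patent):

```python
import numpy as np

def track_point(img1, img2, pt, patch=5, search=10):
    # Minimal SSD block matching: find where the patch around `pt`
    # (x, y) in img1 reappears in img2, assuming pure translation.
    x, y = pt
    tpl = img1[y - patch:y + patch + 1, x - patch:x + patch + 1]
    best, best_xy = np.inf, pt
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            u, v = x + dx, y + dy
            cand = img2[v - patch:v + patch + 1, u - patch:u + patch + 1]
            if cand.shape != tpl.shape:
                continue  # candidate window leaves the image
            ssd = float(((cand - tpl) ** 2).sum())
            if ssd < best:
                best, best_xy = ssd, (u, v)
    return best_xy
```

A production system would use a pyramidal Lucas-Kanade tracker or a learned matcher instead; the sketch only illustrates the correspondence step.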
(20) In step S20, the eight image coordinates are used to determine a homography matrix H, which converts the image coordinates of the four pixels in the first image 22 into the image coordinates of the four pixels in the second image 24.
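With four point correspondences, the homography matrix H of step S20 can be obtained by the standard direct linear transform (DLT). The following is a minimal numpy sketch of that construction, not the patent's implementation (`homography_from_points` is an illustrative name):

```python
import numpy as np

def homography_from_points(src, dst):
    # src, dst: (4, 2) arrays of pixel coordinates in the first and
    # second image. Returns the 3x3 homography H with dst ~ H @ src in
    # homogeneous coordinates, determined up to scale by the DLT.
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    A = np.asarray(A)
    # H (flattened) spans the null space of A: take the right singular
    # vector of the smallest singular value.
    _, _, Vt = np.linalg.svd(A)
    H = Vt[-1].reshape(3, 3)
    return H / H[2, 2]  # fix the scale so that H[2, 2] = 1
```

The four points must be in general position (no three collinear), otherwise A loses rank and H is not unique.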
(21) Next, in step S22, the normal vector n is calculated from elements of the homography matrix H. For this, first of all the matrix S is determined by symmetrizing the homography matrix H:
S = H^T H − I
(22) Here, I is the 3×3 identity matrix. In the following, the elements of the symmetric matrix S are designated s_ij, and the minor of S corresponding to the element s_ij is designated M_s_ij.
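The symmetrization and the minors can be sketched as follows (a minimal numpy illustration with illustrative function names; 0-based indices replace the 1-based s_ij of the text):

```python
import numpy as np

def symmetrize(H):
    # S = H^T H - I, as in the equation above.
    return H.T @ H - np.eye(3)

def minor(S, i, j):
    # M_s_ij: determinant of S with row i and column j removed
    # (0-based indices here).
    sub = np.delete(np.delete(S, i, axis=0), j, axis=1)
    return float(np.linalg.det(sub))
```

Note that S vanishes when H is a pure rotation, i.e., when the camera does not translate between the two images; in that degenerate case the surface normal cannot be recovered.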
(23)
(24) Here, ε_13 is the sign of the minor of S corresponding to the element s_13.
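The closed-form expression for n referenced at (23) is not reproduced in this text. For orientation, a candidate solution from the standard homography decomposition literature (Malis and Vargas), which matches the ε_13 notation used here, reads as follows; this is an assumption about the intended formula, not a quotation of the patent:

```latex
n' = \begin{pmatrix} s_{12} + \sqrt{M_{s_{33}}} \\ s_{22} \\ s_{23} - \epsilon_{13}\,\sqrt{M_{s_{11}}} \end{pmatrix},
\qquad
n = \frac{n'}{\lVert n' \rVert},
\qquad
\epsilon_{13} = \operatorname{sign}\bigl(M_{s_{13}}\bigr)
```

The decomposition generally yields several candidate normals; physical constraints (the surface must face the camera) select the valid one.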
(25) Next, in step S24, a pitch angle is determined from the normal vector n, corresponding to a tilting of the normal vector n about the y-axis, i.e., the transverse axis of the vehicle 14, against the z-axis, i.e., the vertical axis of the vehicle-fixed system of coordinates. The pitch angle is then used to determine the orientation of the surface 26 of the object 12. The method then ends in step S26.
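The pitch-angle computation of step S24 reduces to a two-argument arctangent of the normal vector's components. A minimal sketch, assuming a vehicle-fixed frame with x forward, y along the transverse axis, and z along the vertical axis (this axis convention is an assumption; `pitch_angle` is an illustrative name):

```python
import numpy as np

def pitch_angle(n):
    # Tilt of the normal vector about the transverse (y) axis, measured
    # against the vertical (z) axis of the vehicle-fixed frame, in degrees.
    nx, ny, nz = n
    return np.degrees(np.arctan2(nx, nz))
```

A pitch angle near zero then indicates a surface lying flat on the road (e.g., a lane marking), while an angle near 90 degrees indicates an upright obstacle.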
(26) The detection region in the embodiment shown is in front of the vehicle 14. Of course, the embodiments of the method shown can also be applied to a region behind the vehicle 14.
(27) In general, in the following claims, the terms used should not be construed to limit the claims to the specific embodiments disclosed in the specification and the claims, but should be construed to include all possible embodiments along with the full scope of equivalents to which such claims are entitled.