METHOD FOR ASCERTAINING A SPATIAL ORIENTATION OF A TRAILER
20220258800 · 2022-08-18
Inventors
CPC classification
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/808
PERFORMING OPERATIONS; TRANSPORTING
B60D1/62
PERFORMING OPERATIONS; TRANSPORTING
G06V10/44
PHYSICS
B60Y2300/28
PERFORMING OPERATIONS; TRANSPORTING
B62D13/06
PERFORMING OPERATIONS; TRANSPORTING
B62D13/00
PERFORMING OPERATIONS; TRANSPORTING
G06V20/56
PHYSICS
B62D15/021
PERFORMING OPERATIONS; TRANSPORTING
International classification
B62D15/02
PERFORMING OPERATIONS; TRANSPORTING
B60R11/04
PERFORMING OPERATIONS; TRANSPORTING
B62D13/06
PERFORMING OPERATIONS; TRANSPORTING
G06V10/44
PHYSICS
Abstract
A method for ascertaining a spatial orientation of a trailer of an autonomously driving towing vehicle with trailer. The method includes the steps of reading in image data of at least one rear-facing camera, assigning image points of the image data to the trailer or to the vehicle surrounding environment, ascertaining a rear trailer edge or at least one point of the rear trailer contour from the image points assigned to the trailer, and determining the trailer angle as a function of image coordinates of the trailer edge or of the point, the dimensions of the trailer, and the position of the camera relative to the support point of the trailer.
Claims
1-10. (canceled)
11. A method for ascertaining a spatial orientation of a trailer of an autonomously driving towing vehicle with trailer, comprising the following steps: reading in image data of at least one rear-facing camera; assigning image points of the image data to the trailer or to the vehicle surrounding environment; ascertaining a rear trailer edge or at least one point of a rear trailer contour, from the image points assigned to the trailer; and determining a trailer angle as a function of image coordinates of the trailer edge or of the point, dimensions of the trailer, and a position of the camera relative to a support point of the trailer.
12. The method as recited in claim 11, wherein the assignment of the image points is carried out using a machine learning model that was trained on the assignment of the image points to the trailer.
13. The method as recited in claim 11, wherein the rear trailer edge or the point is ascertained using a machine learning model.
14. The method as recited in claim 11, wherein an image processing algorithm is used to ascertain the rear trailer edge or the point.
15. The method as recited in claim 11, wherein the rear trailer edge or the point, and a distance thereto, are ascertained using a disparity determined by a stereo camera.
16. The method as recited in claim 11, wherein the trailer angle is calculated based on geometric and trigonometric relations between the image coordinates, the position of the camera relative to the support point of the trailer, and the dimensions of the trailer.
17. The method as recited in claim 11, wherein the trailer angle is determined based on a lower point of the trailer edge or an outermost point of the rear trailer contour.
18. The method as recited in claim 11, wherein a longitudinal and lateral position of the trailer edge or of the point, relative to the support point of the trailer or to the camera, is determined based on the trailer angle.
19. The method as recited in claim 11, wherein the dimensions of the trailer are ascertained via sensors of the towing vehicle with trailer.
20. A device, comprising: at least two rear-facing cameras configured to acquire image data; and a processing unit configured to ascertain a spatial orientation of a trailer of an autonomously driving towing vehicle with trailer, the processing unit configured to ascertain a rear trailer edge or at least one point of a rear trailer contour, and to ascertain a trailer angle as a function of image coordinates of the trailer edge or of the point, dimensions of the trailer, and a position of a respective camera relative to a support point of the trailer.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0023]
[0024]
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0025]
[0026] In a first step A, the image data recorded by a rear-facing camera 18 (see
[0027] In a next step C, on the basis of trailer 10 recognized in this way, a rear trailer edge 26 is ascertained. For this purpose, the image points of trailer 10 are input into a machine learning model that, based on its training, recognizes rear trailer edge 26. It is also possible to use only a single point of rear trailer edge 26 for the further method. In particular, a point 26 situated in a lower region of trailer edge 26 is used for this purpose. Such a point 26 has the advantage that a rolling movement of trailer 10, caused by a curve, has only a small influence on trailer angle α.
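The selection of a lower point of the trailer edge from the image points assigned to the trailer can be sketched as follows. This is an illustrative sketch only: the boolean segmentation mask, the function name, and the tie-breaking rule are assumptions, not part of the published method.

```python
import numpy as np

def lowest_rear_edge_point(trailer_mask: np.ndarray) -> tuple[int, int]:
    """Return the (row, col) pixel of the lowest trailer point in the mask.

    trailer_mask: boolean H x W array, True where a pixel was assigned to the
    trailer (e.g. by a segmentation model). In image coordinates the row index
    grows downward, so the lowest trailer point has the largest row index.
    """
    rows, cols = np.nonzero(trailer_mask)
    if rows.size == 0:
        raise ValueError("mask contains no trailer pixels")
    idx = np.argmax(rows)  # pixel furthest down in the image (first on ties)
    return int(rows[idx]), int(cols[idx])

# Toy 5x5 mask: the trailer occupies a small block; its lowest pixels are in row 3.
mask = np.zeros((5, 5), dtype=bool)
mask[1:4, 2:4] = True
print(lowest_rear_edge_point(mask))  # (3, 2)
```

Using a point from the lower edge region, as the description notes, reduces the influence of trailer roll on the computed angle.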
[0028] In a further step D, trailer angle α is determined. For this step, the dimensions of trailer 10 have to be determined ahead of time if they are not already stored. The dimensions only have to be determined once, at the beginning of the trip; they can be ascertained, for example, using a radar sensor installed in towing vehicle with trailer 14.
[0029]
[0030] The lines SF show the field of view of camera 18 in an optimally oriented position. In contrast, lines SF′ show the field of view of a camera 18 that is displaced from the optimal position by a correction angle γ. This correction angle γ has to be taken into account as a correction in the calculation, because center axis x of camera 18 is also displaced by this angle. Here, the point distance x_w represents the distance of point 26 from camera 18 along center axis x. This value is calculated according to x_w = r·cos(α+α₀) + c_x. Because the values α, α₀, and r are calculated starting from support point 34, the camera distance c_x between support point 34 and camera 18 also has to be added for the calculation of point distance x_w. Likewise, the point distance y_w represents the distance of the point from camera 18 along transverse axis y. This value is calculated according to y_w = r·sin(α+α₀) − c_y. Here, the camera distance c_y, which corresponds to the lateral distance between support point 34 and camera 18, has to be subtracted.
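The two distance formulas above can be expressed as a small helper. This is an illustrative sketch; the function name is assumed, while the symbols follow the description (r and α₀ measured from support point 34, c_x and c_y the camera offsets from the support point):

```python
import math

def point_position(r, alpha, alpha0, c_x, c_y):
    """Longitudinal and lateral distance of the trailer-edge point from the camera.

    r       distance from the support point to the trailer-edge point (trailer dimension)
    alpha   trailer angle (rad)
    alpha0  angular offset of the edge point at alpha = 0 (rad)
    c_x     camera offset from the support point along center axis x
    c_y     camera offset from the support point along transverse axis y
    """
    x_w = r * math.cos(alpha + alpha0) + c_x  # along center axis x
    y_w = r * math.sin(alpha + alpha0) - c_y  # along transverse axis y
    return x_w, y_w

# With alpha = alpha0 = 0 the point lies straight ahead: x_w = r + c_x, y_w = -c_y.
print(point_position(r=4.0, alpha=0.0, alpha0=0.0, c_x=1.0, c_y=0.0))  # (5.0, 0.0)
```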
[0031] An image of trailer 10 is recorded in camera 18 on an image plane 38 formed, for example, by an image sensor. For clearer illustration, image plane 38 is shown in front of camera 18. The value y_i describes the image distance between center axis x and point 26 of trailer edge 26 on image plane 38; it can be measured in the image. The image distance y_i relates to focal length f in the same way as the point distance y_w relates to x_w. Including the correction angle γ, the following results:
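The displayed equation is elided in the published text. From the similar-triangle relation just described, a plausible reconstruction (not the patent's actual displayed equation; the sign convention for γ is an assumption) is:

$$\frac{y_i}{f} = \tan\!\left(\arctan\frac{y_w}{x_w} - \gamma\right)$$

i.e., the angle measured in the image is the world-side angle of the point, corrected by the camera's misalignment γ.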
[0032] If the equations for x_w and y_w are substituted into this equation, the following results:
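This displayed equation is likewise elided. Substituting the expressions for x_w and y_w from paragraph [0030] into the reconstructed projection relation would give (again a plausible reconstruction, not the patent's own equation):

$$\frac{y_i}{f} = \tan\!\left(\arctan\frac{r\sin(\alpha+\alpha_0) - c_y}{r\cos(\alpha+\alpha_0) + c_x} - \gamma\right)$$

which contains the trailer angle α as the only unknown once y_i, f, γ, r, α₀, c_x, and c_y are known.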
[0033] If this equation is solved for trailer angle α, trailer angle α can be ascertained on the basis of the image on camera 18, the dimensions of trailer 10, and the position of camera 18 relative to support point 34. By substituting the trailer angle α determined in this way into the equations for x_w and y_w, the longitudinal and lateral position of point 26 of trailer edge 26 can then also be determined.
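The solution step can be sketched numerically. The projection model below is the hedged reconstruction discussed above (the sign of γ is an assumption), and the patent presumably solves for α in closed form; this sketch instead recovers α by bisection, which works because the image offset is monotonic in α over the physically plausible range:

```python
import math

def image_offset(alpha, r, alpha0, c_x, c_y, f, gamma):
    """Predicted image distance y_i for a given trailer angle alpha
    (reconstructed projection model; gamma sign convention assumed)."""
    x_w = r * math.cos(alpha + alpha0) + c_x
    y_w = r * math.sin(alpha + alpha0) - c_y
    return f * math.tan(math.atan2(y_w, x_w) - gamma)

def solve_trailer_angle(y_i, r, alpha0, c_x, c_y, f, gamma,
                        lo=-math.pi / 2, hi=math.pi / 2, tol=1e-10):
    """Recover alpha from the measured image distance y_i by bisection."""
    f_lo = image_offset(lo, r, alpha0, c_x, c_y, f, gamma) - y_i
    for _ in range(200):
        mid = 0.5 * (lo + hi)
        f_mid = image_offset(mid, r, alpha0, c_x, c_y, f, gamma) - y_i
        if f_lo * f_mid <= 0:
            hi = mid          # root lies in [lo, mid]
        else:
            lo, f_lo = mid, f_mid  # root lies in [mid, hi]
        if hi - lo < tol:
            break
    return 0.5 * (lo + hi)

# Round trip: project a known trailer angle, then recover it from the image offset.
params = dict(r=4.0, alpha0=0.1, c_x=1.0, c_y=0.2, f=0.008, gamma=0.02)
y_i = image_offset(0.3, **params)
print(round(solve_trailer_angle(y_i, **params), 6))  # 0.3
```

Substituting the recovered α back into the x_w and y_w formulas then yields the longitudinal and lateral position of the edge point, as paragraph [0033] states.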