Method for tracking a target vehicle approaching a motor vehicle by means of a camera system of the motor vehicle, camera system and motor vehicle

10276040 · 2019-04-30

Abstract

The invention relates to a method for tracking a target vehicle (9) approaching a motor vehicle (1) by means of a camera system (2) of the motor vehicle (1). A temporal sequence of images (10) of an environmental region of the motor vehicle (1) is provided by means of at least one camera (3) of the camera system (2). The target vehicle (9) is detected in an image (10) of the sequence by means of an image processing device (5) of the camera system (2) based on a feature of a front (11) or of a rear of the target vehicle (9), and the target vehicle (9) is then tracked over subsequent images (10) of the sequence based on the detected feature. At least a predetermined feature of a lateral flank (14) of the target vehicle (9) is detected in one of the subsequent images (10) of the sequence by the image processing device (5), and after detection of the feature of the lateral flank (14), the target vehicle (9) is tracked over further images (10) of the sequence based on the feature of the lateral flank (14).

Claims

1. A method for tracking a target vehicle approaching a motor vehicle by a camera system of the motor vehicle, the method comprising: providing a temporal sequence of images of an environmental region of the motor vehicle by at least one camera of the camera system; detecting the target vehicle in an image of the sequence by an image processing device of the camera system based on a feature of a front or a rear of the target vehicle; determining a first confidence value by the image processing device, which indicates the reliability of the detection of the feature of the front or of the rear of the target vehicle in tracking the target vehicle; tracking the target vehicle over subsequent images of the sequence based on the detected feature; detecting at least a predetermined feature of a lateral flank of the target vehicle in one of the subsequent images of the sequence by the image processing device, wherein the detection of the predetermined feature of the lateral flank is only effected when the first confidence value falls below a preset first threshold value; and after detection of the feature of the lateral flank: tracking the target vehicle over further images of the sequence based on the feature of the lateral flank.

2. The method according to claim 1, wherein a wheel arch of the target vehicle is detected as the feature of the lateral flank.

3. The method according to claim 1, wherein a wheel of the target vehicle is detected as the feature of the lateral flank.

4. The method according to claim 1, wherein the feature of the lateral flank is described with a Hough circle transform.

5. The method according to claim 1, further comprising: determining a second confidence value by the image processing device, which indicates the reliability of the detection of the feature of the lateral flank, wherein the tracking of the target vehicle by at least the feature of the front or of the rear of the target vehicle is effected if the second confidence value falls below a preset second threshold value, and the tracking of the target vehicle by at least the feature of the lateral flank is effected if the first confidence value falls below the preset first threshold value.

6. The method according to claim 1, wherein the target vehicle is tracked by the feature of the lateral flank if a determined distance between the motor vehicle and the target vehicle falls below a preset threshold value.

7. The method according to claim 1, wherein depending on a relative position of the target vehicle with respect to the motor vehicle in vehicle longitudinal direction: a front wheel arch or a rear wheel arch and/or a front wheel or a rear wheel are detected as the feature of the lateral flank.

8. The method according to claim 1, wherein the feature of the lateral flank is generalized by a Douglas-Peucker algorithm.

9. The method according to claim 1, wherein the detection of the front or of the rear further comprises determining a bounding box by the image processing device, in which the front or the rear is depicted, and exclusively a region of interest, which is determined depending on the bounding box, is taken as a basis for the detection of the feature of the lateral flank.

10. The method according to claim 1, wherein at least at a transition from tracking based on the front or the rear to tracking based on the lateral flank, the tracking of the target vehicle is supported by a Kalman filter.

11. The method according to claim 1, wherein the sequence of the images is provided by the at least one camera, the field of view of which has an opening angle greater than 150°, in particular greater than 160°, still more preferably greater than 180°.

12. The method according to claim 1, wherein, in detecting the feature of the lateral flank, a geometric shape of the feature is taken into account depending on calibration data of an exterior orientation of the camera and/or a position of the camera and/or distortion parameters of the camera.

13. The method according to claim 1, wherein the target vehicle is tracked in a blind spot area of the motor vehicle.

14. A camera system for a motor vehicle, comprising at least one camera for providing a sequence of images of an environmental region of the motor vehicle; and an image processing device configured to perform a method according to claim 1.

15. A motor vehicle, in particular a passenger car, comprising a camera system according to claim 14.

Description

(1) Now, the invention is explained in more detail based on a preferred embodiment as well as with reference to the attached drawings.

(2) The figures show:

(3) FIG. 1 in schematic plan view, a target vehicle approaching a motor vehicle with a camera system from behind;

(4) FIG. 2 in schematic illustration, an image of the target vehicle, wherein the image is provided by means of a camera of the camera system attached to the rear of the motor vehicle and the front of the target vehicle is detected in the image;

(5) FIG. 3 in schematic illustration, a gradient image with edges of the target vehicle;

(6) FIG. 4 in schematic illustration, a further image of the target vehicle, wherein a region of interest is determined and generalized edges are indicated by line segments;

(7) FIG. 5 in schematic illustration, the image according to FIG. 4, wherein a feature of a lateral flank of the target vehicle, in particular a wheel arch and/or a wheel, is detected;

(8) FIG. 6 in schematic plan view, an illustration analogous to FIG. 1, wherein the target vehicle has approached to the point that the feature of the lateral flank is tracked;

(9) FIG. 7 in schematic illustration, yet a further image, in which the target vehicle is depicted in the position according to FIG. 6;

(10) FIG. 8 in schematic plan view, an illustration analogous to FIG. 1, wherein the target vehicle has approached to the point that a rear feature of the lateral flank is detected; and

(11) FIG. 9 in schematic illustration, yet a further image, in which the target vehicle is depicted in the position according to FIG. 8.

(12) In FIG. 1, a plan view of a motor vehicle 1 with a camera system 2 according to an embodiment of the invention is schematically illustrated. The camera system 2 includes a camera 3 with a field of view 4 and an image processing device 5, which can for example be integrated in the camera 3. However, this image processing device 5 can also be a component separate from the camera 3, which can be disposed in any position in the motor vehicle 1. In the embodiment, the camera 3 is disposed on the rear of the motor vehicle 1 and captures an environmental region behind the motor vehicle 1. However, an application with a front camera is also possible.

(13) The field of view 4 angularly extends over 180° behind the motor vehicle 1, in particular symmetrically with respect to the center longitudinal axis of the motor vehicle 1. The motor vehicle 1 is on a left lane 7 of a two-lane road 6, while another vehicle, a target vehicle 9, is on a right lane 8. The target vehicle 9 approaches the motor vehicle 1 from behind and presumably will overtake it.

(14) The camera 3 has a horizontal capturing angle, which can for example be in a range of values from 120° to 200°, and a vertical capturing angle (not illustrated), which for example extends from the surface of the road 6 directly behind the motor vehicle 1 up to the horizon and beyond. These characteristics are achieved for example with a fish-eye lens.

(15) The camera 3 can be a CMOS camera or else a CCD camera or any image capturing device, by which target vehicles 9 can be detected.

(16) In the embodiment according to FIG. 1, the camera 3 is disposed in a rear region of the motor vehicle 1 and captures an environmental region behind the motor vehicle 1. However, the invention is not restricted to such an arrangement of the camera 3. The arrangement of the camera 3 can be different according to embodiment. For example, the camera 3 can also be disposed in a front region of the motor vehicle 1 and capture the environmental region in front of the motor vehicle 1. Several such cameras 3 can also be employed, which each are formed for detecting an object or target vehicle 9.

(17) The situation illustrated in FIG. 1 and FIG. 2 can analogously also occur if the field of view 4 of the camera 3 is directed forwards in the direction of travel or a front camera is employed. For example, this is the case if the motor vehicle 1 overtakes the target vehicle 9.

(18) The camera 3 is a video camera continuously capturing a sequence of images. The image processing device 5 then processes the sequence of images in real time and can recognize and track the target vehicle 9 based on this sequence of images. This means that the image processing device 5 can determine the respectively current position and movement of the target vehicle 9 relative to the motor vehicle 1.

(19) The camera system 2 is a blind spot warning system, which monitors a blind spot area 13 and is able to warn the driver of the motor vehicle 1 of a detected risk of collision with the target vehicle 9 by outputting a corresponding warning signal. The blind spot area is an environmental region of the motor vehicle 1, which cannot be seen, or can only be seen with difficulty, by a driver of the motor vehicle with the aid of side-view and/or rear-view mirrors. According to one definition of the blind spot area, it extends from the rear of the motor vehicle 1 by more than two vehicle lengths rearwards on adjacent lanes.

(20) FIG. 2 shows an exemplary image 10, which is provided by the camera 3 in the situation according to FIG. 1. The target vehicle 9 is detected in the image 10 based on a feature of a front 11 of the target vehicle 9 by means of the image processing device 5. This detection is identified with a rectangular frame or a bounding box 12 in FIG. 2. This bounding box 12 is output by the detection algorithm, which is executed by the image processing device 5 for detecting the target vehicle 9.

(21) The detection of the target vehicle 9 is first performed based on features of the front 11 of the target vehicle 9. However, the view of the target vehicle 9 changes while the target vehicle 9 approaches the motor vehicle 1. This is a challenge for the detector used here, which has been trained with features of the front 11 in front view. As a result, a confidence value decreases, which is a measure of the reliability of the detection. This is recognizable in FIG. 3 based on a gradient image 17, which shows edges 16 of the target vehicle 9. As is apparent from FIG. 3, the viewing angle changes with time such that a further detection of the target vehicle 9 based on the features of the front 11 cannot be reliably ensured anymore.

(22) In order to further ensure the reliable tracking of the target vehicle 9, in a next step, a feature of the lateral flank 14 of the target vehicle 9 is extracted. However, this feature is preferably not searched in the entire image 10, but only in a region of interest 15, as illustrated in the image 10 according to FIG. 4, which is determined depending on the bounding box 12 provided by the detection of the front 11. Thus, the region of interest 15 depends on the detection based on the feature of the front 11 and allows faster initialization of the tracking of the feature of the lateral flank 14.
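Although the patent discloses no code, the confidence-driven handover between the front feature and the lateral-flank feature (claims 1 and 5) can be sketched as follows; the function name, the returned labels, and the threshold values are illustrative assumptions, not part of the disclosure.

```python
def choose_feature(conf_front, conf_flank, t_front=0.5, t_flank=0.5):
    """One possible reading of claims 1 and 5: track by the front/rear
    feature while its confidence holds up, fall back to the lateral-flank
    feature once the front confidence drops below its threshold, and
    report a lost track if both confidences are below their thresholds."""
    if conf_front >= t_front:
        return "front"   # front/rear feature still reliable
    if conf_flank >= t_flank:
        return "flank"   # switch to wheel arch / wheel tracking
    return "lost"        # neither feature is currently reliable
```

For example, choose_feature(0.9, 0.2) selects the front feature, while choose_feature(0.3, 0.8) hands tracking over to the lateral flank.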

(23) In the next step, the gradient image 17 is calculated from the region of interest 15, as it is shown in FIG. 3. For calculating the gradient image 17, the region of interest 15 of the image 10 is converted to a grey-scale image, smoothed with a Gaussian filter and for example processed with a Canny edge detector.
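The preprocessing of paragraph (23) can be sketched as below. This pure-Python stand-in uses a Sobel operator for the gradient step rather than a full Canny detector, and the kernel values are conventional choices, not taken from the patent.

```python
def to_grey(rgb):
    """Convert an RGB image (nested lists of (r, g, b) tuples) to grey-scale luminance."""
    return [[0.299 * r + 0.587 * g + 0.114 * b for (r, g, b) in row] for row in rgb]

def convolve3x3(img, k):
    """Convolve with a 3x3 kernel, leaving the 1-pixel border at zero."""
    h, w = len(img), len(img[0])
    out = [[0.0] * w for _ in range(h)]
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            out[y][x] = sum(k[j][i] * img[y + j - 1][x + i - 1]
                            for j in range(3) for i in range(3))
    return out

# standard 3x3 Gaussian and Sobel kernels
GAUSS = [[c / 16.0 for c in row] for row in [[1, 2, 1], [2, 4, 2], [1, 2, 1]]]
SOBEL_X = [[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]]
SOBEL_Y = [[-1, -2, -1], [0, 0, 0], [1, 2, 1]]

def gradient_image(rgb):
    """Grey-scale conversion, Gaussian smoothing, then gradient magnitude."""
    smooth = convolve3x3(to_grey(rgb), GAUSS)
    gx = convolve3x3(smooth, SOBEL_X)
    gy = convolve3x3(smooth, SOBEL_Y)
    return [[(a * a + b * b) ** 0.5 for a, b in zip(rx, ry)]
            for rx, ry in zip(gx, gy)]
```

A vertical intensity step in the input then produces a high gradient magnitude along the edge and near-zero values in flat regions, which is what the subsequent edge-based feature extraction relies on.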

(24) The extraction of the feature of the lateral flank 14 is effected based on edges 16 of the gradient image 17. As a first feature of the lateral flank 14, a (here front) wheel arch 19 is chosen. The edges 16 of the wheel arch 19 are generalized with a generalization algorithm, in particular with a Douglas-Peucker algorithm. Generalized edges 18 result, as illustrated in FIG. 4. The generalization algorithm effects a reduction of the data volume, which results in an increase of the computing speed and facilitates the detection of the wheel arch 19, because wheel arches 19 of various models are now covered and can thus be better compared.
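The Douglas-Peucker generalization of the wheel-arch edges can be sketched with a minimal textbook implementation of the algorithm; the tolerance `eps` is an illustrative parameter, not a value from the patent.

```python
def rdp(points, eps):
    """Ramer-Douglas-Peucker: recursively keep only points deviating more
    than eps from the chord, reducing an edge contour to a few line segments."""
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = (dx * dx + dy * dy) ** 0.5 or 1.0
    # find the interior point with the largest perpendicular distance to the chord
    dmax, idx = 0.0, 0
    for i in range(1, len(points) - 1):
        px, py = points[i]
        d = abs(dy * (px - x1) - dx * (py - y1)) / norm
        if d > dmax:
            dmax, idx = d, i
    if dmax > eps:
        # split at the farthest point and simplify both halves
        left = rdp(points[:idx + 1], eps)
        right = rdp(points[idx:], eps)
        return left[:-1] + right
    return [points[0], points[-1]]
```

A nearly straight, noisy contour collapses to its two endpoints, while a genuinely curved contour, such as a wheel-arch edge, keeps its apex points.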

(25) The detection of the (generalized) wheel arch 19 is identified with a rectangle 20 in FIG. 5.

(26) If the target vehicle 9 has now approached the motor vehicle 1 to the point illustrated in FIG. 6, in a further step, after tracking the wheel arch 19, it is switched to tracking a wheel 21 of the target vehicle 9. The detection of the wheel 21 is effected with a description by a Hough transform, in particular a Hough circle transform. The Hough circle transform approximates a circle, which is generated by the edges 16, which arise due to the intensity difference between rim and tire or tire and background. An exemplary circle 22 in the image 10 shown in FIG. 7 exemplifies the result of the Hough circle transform.
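The circle detection can be illustrated with a minimal vote-accumulation sketch of the Hough circle transform. The radius is assumed known here for brevity, whereas a full implementation would also search over a range of radii; the function name and parameters are illustrative.

```python
import math
from collections import Counter

def hough_circle(edge_points, radius, steps=90):
    """Minimal Hough circle vote for a single known radius: every edge
    point votes for all centres lying `radius` away from it; the centre
    cell with the most votes wins."""
    acc = Counter()
    for (x, y) in edge_points:
        for s in range(steps):
            t = 2 * math.pi * s / steps
            cx = round(x - radius * math.cos(t))
            cy = round(y - radius * math.sin(t))
            acc[(cx, cy)] += 1
    (cx, cy), votes = acc.most_common(1)[0]
    return cx, cy, votes
```

Edge points sampled from the rim/tire boundary then concentrate their votes near the true wheel centre, which is how the circle 22 in FIG. 7 would be recovered.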

(27) In tracking the lateral flank 14, both features are temporarily tracked, the wheel arch 19 and the wheel 21, until the target vehicle 9 is at some point close enough that tracking can be switched exclusively to the wheel 21, namely a front wheel 23 and/or a rear wheel 24. The shorter the distance between the target vehicle 9 and the motor vehicle 1 is, the clearer the circular shape of the wheel 21 appears in the image 10. The requirement for the Hough transform is a predefined geometric shape, presently a circle. However, basically, other shapes such as for example ellipses are also conceivable.

(28) A change from the tracking of the front wheel 23 to the tracking of the rear wheel 24 is automatically effected if the target vehicle 9 is located next to the motor vehicle 1, as shown in FIG. 8, and the front wheel 23 is no longer in the field of view 4 of the camera 3. The description of the rear wheel 24 by the Hough circle transform is shown with a circle 25 in FIG. 9.

(29) The respective change of the features is effected depending on a prediction algorithm, in particular a Kalman filter. Hereby, the tracking of the new feature can be initialized faster and/or verified more precisely. A change between the features occurs first from features of the front 11 to features of the lateral flank 14 (first in particular to the wheel arch 19) and subsequently from the wheel arch 19 to the wheel 21, in particular first to the front wheel 23 and then to the rear wheel 24. In a transitional region, it is provided to track the respective old and new features at the same time and to change the features depending on the respective confidence value.
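The Kalman-filter support for the feature handover can be illustrated with a one-dimensional constant-velocity filter over a single image coordinate: the filter's prediction initializes the search for the new feature, and its innovation can verify the new detection. The state layout and the noise values `q` and `r` are illustrative assumptions, not values from the patent.

```python
class Kalman1D:
    """Constant-velocity Kalman filter over one image coordinate.
    State is (position, velocity); the measurement is the position of the
    currently tracked feature."""

    def __init__(self, pos, q=0.01, r=1.0):
        self.x = [pos, 0.0]                 # state estimate
        self.P = [[1.0, 0.0], [0.0, 1.0]]   # estimate covariance
        self.q, self.r = q, r               # process / measurement noise

    def predict(self, dt=1.0):
        p, v = self.x
        self.x = [p + dt * v, v]            # x = F x, F = [[1, dt], [0, 1]]
        (a, b), (c, d) = self.P
        # P = F P F^T + Q (Q added on the diagonal)
        self.P = [[a + dt * (b + c) + dt * dt * d + self.q, b + dt * d],
                  [c + dt * d, d + self.q]]
        return self.x[0]

    def update(self, z):
        s = self.P[0][0] + self.r           # innovation covariance, H = [1, 0]
        k0, k1 = self.P[0][0] / s, self.P[1][0] / s   # Kalman gain
        y = z - self.x[0]                   # innovation
        self.x = [self.x[0] + k0 * y, self.x[1] + k1 * y]
        (a, b), (c, d) = self.P
        self.P = [[(1 - k0) * a, (1 - k0) * b],       # P = (I - K H) P
                  [c - k1 * a, d - k1 * b]]
        return self.x[0]
```

Fed with the positions of the old feature, the filter's predicted position gives a good starting point for detecting the new feature in the transitional region.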

(30) In case the target vehicle 9 travels past the motor vehicle 1 even more than in FIG. 8, it is provided to continue the tracking with a camera attached to the lateral flank of the motor vehicle 1 and/or to fall back on the above mentioned front camera.

(31) The change of the features can also be effected in reverse order than described based on the figures. This is the case if the motor vehicle 1 overtakes the target vehicle 9. Then, the first tracked feature is the rear of the target vehicle 9. Next, the rear wheel arch 19 is incorporated and tracked as the feature of the lateral flank 14. Subsequently, it is changed from the rear wheel arch 19 to the rear wheel 24 and then to the front wheel 23.

(32) Moreover, a preset threshold value for the relative distance between the motor vehicle 1 and the target vehicle 9 serves as a further means for effecting the later feature detection: if the distance of the approaching target vehicle 9 falls below the preset threshold value, the detection and/or tracking of the feature of the lateral flank 14 commences.

(33) Furthermore, the searched shape of the features is dependent on the position of the target vehicle 9. Due to a calibration of the camera system 2 and the exterior orientation resulting from it, the shape of the features of target vehicles 9 visible in the image 10 can be predicted depending on their current position. Thus, target vehicles 9 farther away have rather elliptical shapes of the wheel arches 19 and of the wheels 21 in the images 10, while closer target vehicles 9 have substantially round shapes of the wheel arches 19 and the wheels 21.

(34) In addition, lens characteristics of the camera system 2 are used to compensate for distortions in the field of view 4. This is for example particularly helpful in case of the used fish-eye lens.

(35) In addition, the detection of the target vehicle 9 is effected depending on the recognition of the lanes 7, 8 with a lane recognition system. It provides information about the course of the road, in particular curves, which is used to calculate the probability that the target vehicle 9 enters the blind spot area 13. Thereby, the risk emanating from the target vehicle 9 can also be inferred.

(36) The lane recognition system is also used in the case of multi-lane roads 6 in order to determine if the target vehicle 9 in the blind spot area 13 is distant by more than an adjacent lane 7, 8 from the motor vehicle 1, in order to thus prevent a false alarm. Otherwise, it is assumed that the motor vehicle 1 would collide with the target vehicle 9 upon a single lane change.

(37) Lane recognition systems and/or roadway marking recognition systems also allow determining a rate of movement of static objects in the image 10, for example of traffic signs and/or other infrastructure. This is helpful to recognize erroneous detections of target vehicles 9 and to subsequently remove them.

(38) As an additional expansion, it can be provided that a trajectory or a travel course of the target vehicle 9 is recorded and then extrapolated with a prediction algorithm. The underlying idea is that a target vehicle 9 has to move in a certain manner due to its construction; thus, for example, lateral movement is not possible without longitudinal movement. The trajectory is used to render the tracking of the target vehicle 9 more robust and/or more accurate and to temporarily be able to further estimate the position of the target vehicle 9 under poor visibility conditions and/or partial occlusions.
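The trajectory extrapolation mentioned above can be sketched, under a constant-velocity assumption over the last two samples, as follows; the function name and signature are illustrative, and a real system would use the Kalman filter or another prediction algorithm as stated in the description.

```python
def extrapolate(track, steps):
    """Extend a recorded (x, y) trajectory by constant-velocity
    extrapolation over the last two samples -- a minimal stand-in for the
    prediction used to bridge poor visibility or partial occlusion."""
    (x0, y0), (x1, y1) = track[-2], track[-1]
    vx, vy = x1 - x0, y1 - y0   # displacement per frame
    return [(x1 + vx * k, y1 + vy * k) for k in range(1, steps + 1)]
```

For a vehicle last seen moving one unit right and two units forward per frame, the next two predicted positions simply continue that motion.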

(39) It can also be provided that data from a CAN bus, for example speed and/or steering angle of the motor vehicle 1, are used to predict the future direction of travel of the motor vehicle 1 and to estimate when a target vehicle 9 will enter the blind spot area 13.

(40) In a further supplement, the tracking of the target vehicle 9 based on the feature of the lateral flank 14 can be used for making an overtaking operation safer. In this case, it is indicated to the driver of the motor vehicle 1 when he has completely passed the target vehicle 9 and can change to the lane in front of the target vehicle 9.

(41) In further embodiments, the predetermined feature of the lateral flank 14 can additionally or alternatively also be a wing mirror or another feature of the lateral flank 14.