Method and device for estimating an inherent movement of a vehicle

10755113 · 2020-08-25

Abstract

A method for estimating an inherent movement of a vehicle. The method includes a step of classifying, a step of detecting, and a step of ascertaining. In the step of classifying, at least one portion of a camera image representing a classified object is classified into an object category which represents stationary objects. In the step of detecting, at least one detection point of the portion in the camera image classified into the object category is detected in the camera image by utilizing a detection algorithm. In the step of ascertaining, an estimated inherent movement of the vehicle is ascertained by utilizing the detection point.

Claims

1. A method for estimating an inherent movement of a vehicle, the method comprising: classifying at least one portion of a camera image representing a classified object into an object category which represents stationary objects; detecting at least one detection point of the portion in the camera image classified into the object category by utilizing a detection algorithm; and ascertaining an estimated inherent movement of the vehicle by utilizing the detection point.

2. The method as recited in claim 1, wherein, in the step of classifying, at least one further portion of the camera image representing one further classified object is classified into one further object category which represents moving objects, the steps of detecting and of ascertaining being carried out independently of the further object category.

3. The method as recited in claim 1, wherein, in the step of classifying, the portion of the camera image representing the classified object is classified into the object category when the object has been classified as at least one of a roadway surface, a streetlight, a building, a road sign, and/or vegetation.

4. The method as recited in claim 1, wherein, in the step of detecting, the at least one detection point is detected by utilizing a detection algorithm which is designed for detecting at least one of a corner, an edge, and/or a brightness difference of the portion in the camera image as the detection point.

5. The method as recited in claim 1, wherein, in the step of classifying, at least one additional portion of the camera image representing an additional classified object is classified into an additional object category which represents movable objects.

6. The method as recited in claim 5, further comprising: recognizing a movement state of the additional classified object.

7. The method as recited in claim 5, wherein the additional portion representing the additional classified object is classified into the object category when, in the step of recognizing, the movement state is recognized as being unmoved or is classified into the further object category when the movement state is recognized, in the step of recognizing, as being moved.

8. The method as recited in claim 1, further comprising: controlling a driver assistance system by utilizing the estimated inherent movement of the vehicle.

9. A device for estimating an inherent movement of a vehicle, the device designed to: classify at least one portion of a camera image representing a classified object into an object category which represents stationary objects; detect at least one detection point of the portion in the camera image classified into the object category by utilizing a detection algorithm; and ascertain an estimated inherent movement of the vehicle by utilizing the detection point.

10. A non-transitory machine-readable memory medium on which is stored a computer program for estimating an inherent movement of a vehicle, the computer program, when executed by a computer, causing the computer to perform: classifying at least one portion of a camera image representing a classified object into an object category which represents stationary objects; detecting at least one detection point of the portion in the camera image classified into the object category by utilizing a detection algorithm; and ascertaining an estimated inherent movement of the vehicle by utilizing the detection point.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 shows a device for estimating an inherent movement of a vehicle.

(2) FIG. 2 shows a device for estimating an inherent movement of a vehicle according to one exemplary embodiment.

(3) FIG. 3 shows a flow chart of a method for estimating an inherent movement of a vehicle according to one exemplary embodiment.

DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS

(4) In the description below of exemplary embodiments of the present approach, identical or similar reference numerals are used for the similarly functioning elements represented in the different figures, a repeated description of these elements being dispensed with.

(5) If an exemplary embodiment includes an "and/or" linkage between a first feature and a second feature, this is to be read as meaning that the exemplary embodiment, according to one specific embodiment, includes both the first feature and the second feature and, according to a further specific embodiment, includes either only the first feature or only the second feature.

(6) FIG. 1 shows a device 100 for estimating an inherent movement of a vehicle.

(7) In the typical visual odometry shown here, device 100 selects the most distinctive points in a camera image 105, for example, corners, edges, and high brightness differences. In typical road scenes, these points frequently lie on other moving vehicles, as is shown here. Device 100 utilizes a method based on the assumption that a rigid scene takes up as large a portion of camera image 105 as possible, while other moving objects take up only a small portion of camera image 105.

(8) In the method carried out by device 100, errors occur during the estimation of the inherent movement, in particular, when a large part of camera image 105 belongs to a moving object, for example, due to a preceding truck or at least one other preceding vehicle.

(9) FIG. 2 shows a device 200 for estimating an inherent movement of a vehicle according to one exemplary embodiment. Camera image 105 may be camera image 105 shown in FIG. 1.

(10) In contrast to the conventional device shown in FIG. 1, device 200 presented here includes a classification unit 205, a detection unit 210, and an ascertainment unit 215.

(11) Classification unit 205 is designed for classifying at least one portion of camera image 105 representing a classified object into an object category 220 which represents stationary objects.

(12) Detection unit 210 is designed for detecting at least one detection point of the portion in camera image 105 classified into object category 220 by utilizing a detection algorithm. Ascertainment unit 215 is designed for ascertaining an estimated inherent movement of the vehicle by utilizing the detection point.

(13) According to this exemplary embodiment, classification unit 205 classifies those portions of camera image 105 representing classified objects into object category 220 for which the classified objects have been classified as a roadway surface 225 and/or road signs 230 and/or vegetation 235.

(14) According to this exemplary embodiment, classification unit 205 is also designed for classifying at least one further portion of camera image 105 representing one further classified object into one further object category 240 which represents moving objects, detection unit 210 and ascertainment unit 215 operating or carrying out steps independently of further object category 240.

(15) According to this exemplary embodiment, classification unit 205 is also designed for classifying an additional portion of camera image 105 representing an additional classified object into an additional object category which represents movable objects. According to this exemplary embodiment, classification unit 205 classifies additional portions of camera image 105 representing a plurality of additional classified objects into the additional object category, for which the additional classified objects represent further vehicles 245.

(16) According to this exemplary embodiment, device 200 includes a recognition unit which is designed for recognizing a movement state of the additional objects, i.e., further vehicles 245 in this case.

(17) According to this exemplary embodiment, the recognition unit recognizes that further vehicles 245 are moving and, in response thereto, classifies the additional portions into further object category 240. When, according to one alternative exemplary embodiment, further vehicles 245 are recognized by the recognition unit as unmoved, the additional portions are classified into object category 220 in response thereto.

(18) According to this exemplary embodiment, device 200 also includes a masking unit 250 which is designed for masking out, in camera image 105, the further portions as well as those additional portions which have been classified into further object category 240.

(19) According to this exemplary embodiment, detection unit 210 detects the detection points by utilizing a detection algorithm which is designed for detecting a corner and/or an edge and/or a brightness difference of the portion in camera image 105 as the at least one detection point.

(20) Details of device 200 are described in greater detail in the following:

(21) Device 200 presented here allows for a camera-based movement estimation that takes semantic information into account, for automated or highly automated driving.

(22) A precise estimation of the inherent movement of a vehicle on the basis of camera images 105 from a vehicle camera of the vehicle is made possible. As compared to conventional methods, the method according to the present invention, which is implementable by device 200, is also capable of yielding a precise estimation when a large part of the scene represented in camera image 105 is made up of moving objects, for example, in a traffic jam or in a pedestrian zone.

(23) One feature of the present invention is a combination of a semantic segmentation with visual odometry. In this case, camera image 105 is initially classified, for example, into the classes pedestrian, vehicle, streetlight, building, roadway surface, and vegetation. Thereafter, distinctive points, the detection points in this case, are detected in camera image 105 as in conventional visual odometry. In contrast to conventional devices, however, device 200 only searches for points in areas which belong to object classes that are reliably unmoving, i.e., according to this exemplary embodiment, only on roadway surface 225, on road signs 230, or on vegetation 235, or, according to one alternative exemplary embodiment, on streetlights and/or on buildings. In this way, an incorrect estimation of the inherent movement based on moving objects, such as further vehicles 245 in this case, may be reliably ruled out.
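The restriction of the point search to reliably unmoving classes can be sketched as follows. This is a minimal illustration, not the patented implementation: the class names, the tiny 4x4 label grid, and the candidate points are assumptions made for demonstration only.

```python
# Illustrative class set of reliably unmoving objects (names assumed).
STATIC_CLASSES = {"roadway_surface", "road_sign", "vegetation",
                  "streetlight", "building"}

def static_mask(segmentation):
    """Return a boolean mask that is True where the pixel's class is static."""
    return [[label in STATIC_CLASSES for label in row] for row in segmentation]

def detect_points(candidates, mask):
    """Keep only candidate detection points that fall on static regions."""
    return [(r, c) for (r, c) in candidates if mask[r][c]]

# Toy per-pixel semantic segmentation of a camera image (assumed labels).
segmentation = [
    ["vehicle",         "vehicle",          "road_sign",       "vegetation"],
    ["vehicle",         "roadway_surface",  "roadway_surface", "vegetation"],
    ["roadway_surface", "roadway_surface",  "roadway_surface", "road_sign"],
    ["roadway_surface", "roadway_surface",  "roadway_surface", "roadway_surface"],
]
candidates = [(0, 0), (0, 2), (1, 1), (3, 3)]  # distinctive points (row, col)
mask = static_mask(segmentation)
points = detect_points(candidates, mask)  # (0, 0) lies on a vehicle and is dropped
```

A real system would obtain the segmentation from a trained network and the candidates from a corner detector; the filtering logic, however, stays the same.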

(24) According to this exemplary embodiment, all potentially moving or movable object classes are initially masked out with the aid of a semantic segmentation. Distinctive points are then sought again in the remaining image. The total number of the distinctive points in FIG. 1 and of the detection points in FIG. 2 is identical, although, in FIG. 2, all detection points advantageously lie on static or stationary objects.

(25) FIG. 3 shows a flow chart of a method 300 for estimating an inherent movement of a vehicle according to one exemplary embodiment. This may be a method 300 which is controllable or implementable on the basis of the device described with reference to FIG. 2.

(26) Method 300 includes a step 305 of classifying, a step 310 of detecting, and a step 315 of ascertaining. In step 305 of classifying, at least one portion of a camera image representing a classified object is classified into an object category which represents stationary objects. In step 310 of detecting, at least one detection point of the portion in the camera image classified into the object category is detected in the camera image by utilizing a detection algorithm. In step 315 of ascertaining, an estimated inherent movement of the vehicle is ascertained by utilizing the detection point.
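The three steps of method 300 can be illustrated end-to-end with a deliberately simplified sketch. It rests on a strong assumption not stated in the patent: that static scene points shift uniformly between two frames, so the vehicle's apparent image-plane motion is the negative of their mean displacement. All names and numbers are illustrative.

```python
def ascertain_inherent_movement(points_prev, points_curr):
    """Estimate the image-plane ego-motion from matched static detection points.

    Under the simplifying assumption of a pure image-plane translation,
    static points appear to move opposite to the vehicle's own motion.
    """
    n = len(points_prev)
    dx = sum(c1 - c0 for (_, c0), (_, c1) in zip(points_prev, points_curr)) / n
    dy = sum(r1 - r0 for (r0, _), (r1, _) in zip(points_prev, points_curr)) / n
    return (-dy, -dx)

# Detection points on static objects in two consecutive camera images.
points_prev = [(2, 2), (3, 5), (4, 4)]
points_curr = [(2, 1), (3, 4), (4, 3)]  # each point shifted one column left
motion = ascertain_inherent_movement(points_prev, points_curr)  # equals (0.0, 1.0)
```

A production visual odometry would instead solve for a full 6-DoF pose, e.g. from the essential matrix; this sketch only shows why points on stationary objects carry the ego-motion signal.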

(27) The exemplary embodiments described in the following, as well as an additional step 320 of recognizing, are optional.

(28) According to this exemplary embodiment, in step 305 of classifying, at least one further portion of the camera image representing one further classified object is classified into one further object category which represents moving objects, steps 310, 315 of detecting and of ascertaining being carried out independently of the further object category.

(29) According to this exemplary embodiment, in step 305 of classifying, the portion of the camera image representing the classified object may be classified into the object category when the object has been classified as a roadway surface and/or a streetlight and/or a building and/or a road sign and/or vegetation.

(30) In step 310 of detecting, the at least one detection point may be detected by utilizing a detection algorithm which is designed for detecting a corner and/or an edge and/or a brightness difference of the portion in the camera image as the detection point.
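A brightness-difference detector of the kind named in step 310 can be sketched as follows: a pixel becomes a detection point when its local intensity gradient exceeds a threshold. Real systems would use an established corner detector (e.g. Harris); the tiny grayscale grid and the threshold here are assumptions for illustration only.

```python
def detect_brightness_differences(image, threshold):
    """Return (row, col) points where horizontal + vertical contrast is high."""
    points = []
    for r in range(1, len(image) - 1):
        for c in range(1, len(image[0]) - 1):
            gx = abs(image[r][c + 1] - image[r][c - 1])  # horizontal difference
            gy = abs(image[r + 1][c] - image[r - 1][c])  # vertical difference
            if gx + gy > threshold:
                points.append((r, c))
    return points

# Toy grayscale image: a bright patch on a dark background.
image = [
    [10, 10,  10,  10],
    [10, 10, 200, 200],
    [10, 10, 200, 200],
    [10, 10,  10,  10],
]
points = detect_brightness_differences(image, threshold=150)
```

All four interior pixels border the bright patch and are reported; on a flat image the same call returns no points.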

(31) According to this exemplary embodiment, in step 305 of classifying, at least one additional portion of the camera image representing an additional classified object is classified into an additional object category which represents movable objects.

(32) In optional step 320 of recognizing, a movement state of the additional classified object is recognized. In response to step 320 of recognizing, the additional portion representing the additional classified object is classified into the object category when, in step 320 of recognizing, the movement state is recognized as being unmoved or is classified into the further object category when the movement state is recognized, in step 320 of recognizing, as being moved.
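The optional recognizing step can be sketched as a small reclassification rule: a portion in the movable category (e.g. a parked versus a driving vehicle) is reassigned to the stationary or the moving category based on its recognized movement state. The category names and dictionary layout are illustrative assumptions.

```python
STATIONARY, MOVING, MOVABLE = "stationary", "moving", "movable"

def reclassify(portion):
    """Assign a 'movable' portion to the stationary or moving category
    based on its recognized movement state."""
    if portion["category"] != MOVABLE:
        return portion["category"]
    # Recognizing step: use the observed movement state to decide.
    return MOVING if portion["is_moving"] else STATIONARY

parked_vehicle  = {"category": MOVABLE, "is_moving": False}
driving_vehicle = {"category": MOVABLE, "is_moving": True}
category_parked  = reclassify(parked_vehicle)   # "stationary"
category_driving = reclassify(driving_vehicle)  # "moving"
```

In this way a parked vehicle still contributes detection points to the ego-motion estimate, while a driving vehicle is masked out together with the other moving objects.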

(33) The method steps presented here may be repeated and may be carried out in a sequence other than that described.