METHOD AND DEVICE FOR THE ESTIMATION OF CAR EGO-MOTION FROM SURROUND VIEW IMAGES

20170345164 · 2017-11-30

Abstract

A method and device for determining an ego-motion of a vehicle are disclosed. Respective sequences of consecutive images are obtained from a front view camera, a left side view camera, a right side view camera and a rear view camera and merged. A virtual projection of the images to a ground plane is provided using an affine projection. An optical flow is determined from the sequence of projected images, an ego-motion of the vehicle is determined from the optical flow and the ego-motion is used to predict a kinematic state of the car.

Claims

1. A method for determining an ego-motion of a vehicle, the method comprising: recording a first sequence of consecutive images of a front view camera, a second sequence of consecutive images of a left side view camera, a third sequence of consecutive images of a right side view camera, and a fourth sequence of consecutive images of a rear view camera; merging the first sequence of consecutive images, the second sequence of consecutive images, the third sequence of consecutive images, and the fourth sequence of consecutive images to obtain a sequence of merged images; providing a virtual projection of the images of the sequence of merged images to a ground plane using an affine projection, thereby obtaining a sequence of projected images; determining an optical flow, based on the sequence of projected images, the optical flow comprising motion vectors of target objects in the surroundings of the vehicle; determining an ego-motion of the vehicle based on the optical flow; and predicting a kinematic state of the vehicle based on the ego-motion.

2. The method of claim 1, wherein the determination of the ego-motion comprises: deriving an angular velocity of the vehicle around an instantaneous center of curvature from the optical flow; and using the derived angular velocity to derive a velocity of the vehicle.

3. The method of claim 1, wherein the determination of the ego-motion comprises deriving a current position vector of a target object and a current velocity relative to the target object from a previous position of the target object, a previous velocity relative to the target object and an angular velocity with respect to a rotation around an instantaneous center of curvature.

4. The method of claim 1, wherein the determination of the ego-motion comprises: deriving an angular velocity of the vehicle around an instantaneous center of curvature from a wheel speed and a steering angle using an Ackermann steering model; and merging the determined ego-motion and the angular velocity of the vehicle in an incremental pose update.

5. The method of claim 1, wherein the determination of the ego-motion comprises: deriving motion vectors from the optical flow; and applying a RANSAC procedure to the motion vectors.

6. The method of claim 1, wherein the determination of the ego-motion comprises: deriving motion vectors from the optical flow; deriving a vector of ego-motion from the motion vectors of the optical flow; and applying a prediction filter to the vector of ego-motion for predicting a future position of the vehicle.

7. The method of claim 6, wherein an input to the prediction filter is derived from one or more vectors of ego-motion and from one or more motion sensor values.

8. The method of claim 1, further comprising detecting image regions that correspond to objects that are not at a ground level and masking out/disregarding the detected image regions.

9. A computer program product for executing a method for determining an ego-motion of a vehicle, the method comprising: recording a first sequence of consecutive images of a front view camera, a second sequence of consecutive images of a left side view camera, a third sequence of consecutive images of a right side view camera, and a fourth sequence of consecutive images of a rear view camera; merging the first sequence of consecutive images, the second sequence of consecutive images, the third sequence of consecutive images, and the fourth sequence of consecutive images to obtain a sequence of merged images; providing a virtual projection of the images of the sequence of merged images to a ground plane using an affine projection, thereby obtaining a sequence of projected images; determining an optical flow, based on the sequence of projected images, the optical flow comprising motion vectors of target objects in the surroundings of the vehicle; determining an ego-motion of the vehicle based on the optical flow; and predicting a kinematic state of the vehicle based on the ego-motion.

10. An ego-motion detection system for a motor vehicle comprising a computation unit, the computation unit comprising a first input connection for receiving data from a front view camera, a second input connection for receiving data from a right side view camera, a third input connection for receiving data from a left side view camera, and a fourth input connection for receiving data from a rear view camera, the computation unit comprising a processing unit for: obtaining a first sequence of consecutive images from the front view camera, a second sequence of consecutive images from the left side view camera, a third sequence of consecutive images from the right side view camera, and a fourth sequence of consecutive images from the rear view camera; merging the first sequence of consecutive images, the second sequence of consecutive images, the third sequence of consecutive images, and the fourth sequence of consecutive images to obtain a sequence of merged images; providing a virtual projection of the images of the sequence of merged images to a ground plane using an affine projection, thereby obtaining a sequence of projected images; determining an optical flow, based on the sequence of projected images, the optical flow comprising motion vectors of target objects in the surroundings of the vehicle; determining an ego-motion of the vehicle based on the optical flow; and predicting a kinematic state of the vehicle based on the ego-motion.

11. The ego-motion detection system of claim 10, comprising a front view camera that is connected to the first input, a right side view camera that is connected to the second input, a left side view camera that is connected to the third input, and a rear view camera that is connected to the fourth input.

12. A car with an ego-motion detection system of claim 11, wherein the front view camera is provided at a front side of the car, the right side view camera is provided at a right side of the car, the left side view camera is provided at a left side of the car, and the rear view camera is provided at a rear side of the car.

Description

DESCRIPTION OF DRAWINGS

[0067] FIG. 1 shows a car with a surround view system.

[0068] FIG. 2 illustrates a car motion of the car of FIG. 1 around an instantaneous center of rotation.

[0069] FIG. 3 illustrates a projection to a ground plane of an image point recorded with the surround view system of FIG. 1.

[0070] FIG. 4 illustrates in further detail the ground plane projection of FIG. 3.

[0071] FIG. 5 shows a procedure for deriving an ego-motion of the car.

[0072] Like reference symbols in the various drawings indicate like elements.

DETAILED DESCRIPTION

[0073] FIG. 1 shows a car 10 with a surround view system 11. The surround view system 11 includes a front view camera 12, a right side view camera 13, a left side view camera 14 and a rear view camera 15. The cameras 12-15 are connected to a CPU of a controller, which is not shown in FIG. 1. The controller is connected to further sensors and units, such as a velocity sensor, a steering angle sensor, a GPS unit, and acceleration and orientation sensors.

[0074] FIG. 2 illustrates a car motion of the car 10. A wheel base B of the car and a wheel track L are indicated. The car 10 is designed according to an Ackermann steering geometry in which an orientation of the steerable front wheels is adjusted such that all four wheels of the vehicle are oriented tangentially to circles around a common center of instantaneous rotation. An instantaneous center of curvature "ICC" lies in line with the rear axle of the car 10 at a distance R, where R is the radius of the car's instantaneous rotation with respect to the yaw movement.

[0075] A two-dimensional vehicle coordinate system is indicated, which is fixed to a reference point of the car and aligned along a longitudinal and a lateral axis of the car. A location of the instantaneous center of curvature relative to the vehicle coordinate system is indicated by a vector P_ICC.

[0076] In an Ackermann steering geometry according to FIG. 2, the angle α between an inner rear wheel, the instantaneous center of curvature and an inner front wheel is equal to the steering angle α of the inner front wheel. Herein, "inner wheel" refers to the respective wheel that is closer to the center of curvature. A motion of the inner front wheel relative to a ground plane is indicated by the letter v.

[0077] FIG. 3 shows a projection of an image point to a ground plane 16. An angle of inclination θ relative to the vertical may be estimated from the location of the image point on the image sensor of the right side view camera 13. If the image point corresponds to a feature of the road, the location of the corresponding object point is the projection of the image point onto the ground plane. In the example of FIG. 3, the camera 13 has an elevation H above the ground plane. Consequently, the corresponding object point is located at a horizontal distance H*tan(θ) from the right side of the car 10.

[0078] In some implementations, the vehicle 10 is tracked using a constant acceleration model and a one-step procedure, where the values of a car position X_k and a car velocity Ẋ_k = dX_k/dt at time k*Δt are predicted using the respective values of the position and velocity at the earlier time (k−1)*Δt according to the equations


X_k = X_{k−1} + Ẋ_{k−1}*Δt   (1)

Ẋ_k = ω × X_{k−1} + Ẋ_{k−1}*Δt   (2)

or

X_k = X_{k−1} + Ẋ_{k−1}   (1a)

Ẋ_k = ω × X_{k−1} + Ẋ_{k−1}   (2a)

for time units in which Δt=1. Herein, X_k and X_{k−1} refer to positions of the car relative to the vehicle coordinate system of FIG. 2, which is fixed to the car 10, where the positions X_k and X_{k−1} of the car 10 are evaluated at times k*Δt and (k−1)*Δt, respectively, and where the position of the vehicle coordinate system is evaluated at time (k−1)*Δt.

[0079] The car velocity at the reference point may be derived from the location of the reference point relative to the instantaneous center of curvature and the current angular velocity according to:


V_car = −ω × P_ICC   (3)

where ω is the vector of instantaneous rotation and P_ICC is the position of the instantaneous center of curvature relative to the vehicle coordinate system. The relationship according to equation (3) is used in equations (2) and (2a). In equations (1)-(3), all quantities are vectors; the vector arrows have been omitted for easier reading.
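By way of a hedged illustration, the update of equations (1)-(3) may be sketched in Python as follows; the planar cross-product helper, the function names and the default Δt = 1 are assumptions made for the example only and are not part of the disclosed method.

```python
import numpy as np

def cross_z(omega_z: float, p: np.ndarray) -> np.ndarray:
    """Planar cross product omega x p for a yaw rate omega_z about the vertical axis."""
    return omega_z * np.array([-p[1], p[0]])

def pose_update(x_prev: np.ndarray, v_prev: np.ndarray,
                omega_z: float, dt: float = 1.0):
    """One incremental update according to equations (1) and (2).

    x_prev, v_prev -- position and velocity at time (k-1)*dt in the vehicle frame
    omega_z        -- angular velocity of the rotation around the ICC
    """
    x_k = x_prev + v_prev * dt                    # equation (1)
    v_k = cross_z(omega_z, x_prev) + v_prev * dt  # equation (2); reduces to (2a) for dt = 1
    return x_k, v_k

def velocity_at_reference(omega_z: float, p_icc: np.ndarray) -> np.ndarray:
    """Equation (3): V_car = -omega x P_ICC."""
    return -cross_z(omega_z, p_icc)
```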

[0080] A vehicle position X_k′ relative to a fixed reference frame, also known as “world coordinate system”, is derived from the vector X_k and a location R of the vehicle coordinate system relative to the fixed reference frame. By way of example, the movement of the vehicle coordinate system may be derived using GPS and/or other sensors, such as a wheel speed sensor, a steering angle sensor, acceleration and orientation sensors.

[0081] In some implementations, the accuracy is improved by incorporating a time dependent acceleration according to:


Ẍ_k = ζ × X_{k−1} + ω × (ω × X_{k−1}) + Ẍ_{k−1}   (4)

where ζ is related or proportional to the time derivative of the angular velocity ω. The first term on the right hand side of equation (4) is also referred to as the "Euler acceleration" and the second term is also referred to as the "Coriolis acceleration". Under the assumption that the car stays on track, the centrifugal acceleration is compensated by the car's tires and does not contribute to the vehicle motion.
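A corresponding sketch of equation (4), again under the planar convention assumed in the previous example (the helper is repeated so the snippet stands on its own), might read:

```python
import numpy as np

def cross_z(omega_z: float, p: np.ndarray) -> np.ndarray:
    """Planar cross product omega x p about the vertical axis."""
    return omega_z * np.array([-p[1], p[0]])

def acceleration_update(x_prev: np.ndarray, a_prev: np.ndarray,
                        omega_z: float, zeta_z: float) -> np.ndarray:
    """Equation (4): Euler term zeta x X, rotational term omega x (omega x X),
    plus the previous acceleration."""
    euler = cross_z(zeta_z, x_prev)
    rotational = cross_z(omega_z, cross_z(omega_z, x_prev))
    return euler + rotational + a_prev
```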

[0082] In general, the angular velocity ω is time dependent. In some examples, the angular velocity ω at time (k−1)*Δt is used in a computation according to equations (2), (2a) or (3).

[0083] In some examples, a mean velocity v between times (k−2)*Δt and (k−1)*Δt can be derived from the comparison of two subsequent projections of camera images. In a first approximation, the mean velocity is used as the instant velocity at time (k−1)*Δt.

[0084] In some implementations, the angular velocity ω is derived from the steering angles of the front wheels and a rotation speed of a front wheel using an Ackermann steering model. The Ackermann steering model gives a good approximation for a car with Ackermann steering geometry, especially at low velocities when there is little or no slip between the tires and the road.

[0085] The steering angle of the front wheels may in turn be derived from an angular position of the steering column and the known lateral distance L between the front wheels. In some examples, the ego-motion, which is derived from the image sequences of the vehicle cameras, is used to derive the angular velocity ω.

[0086] With reference to FIG. 2, a radius of curvature R_2 with respect to the instantaneous center of curvature and the inner front wheel can be derived as R_2 = B/sin(α), where α is the steering angle of the inner front wheel and B is the wheel base of the car. If the inner front wheel moves with a velocity v, which can be derived from the rotation speed of the inner front wheel and the wheel's diameter, the angular velocity of the instantaneous rotation of the car in a horizontal plane, also known as "yaw", is ω = v/R_2 = v*sin(α)/B.
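As a short worked example of the relation ω = v*sin(α)/B, the following sketch may be used; the parameter names and the zero-steering-angle guard are assumptions made for the illustration:

```python
import math

def ackermann_yaw_rate(v_inner_front: float, alpha_rad: float, wheel_base_b: float) -> float:
    """Yaw rate from the Ackermann model: omega = v * sin(alpha) / B.

    v_inner_front -- speed of the inner front wheel, e.g. wheel rotation speed
                     times wheel circumference
    alpha_rad     -- steering angle of the inner front wheel in radians
    wheel_base_b  -- wheel base B of the car
    """
    if abs(math.sin(alpha_rad)) < 1e-9:
        return 0.0                                # straight driving: no rotation around an ICC
    r2 = wheel_base_b / math.sin(alpha_rad)       # radius of curvature R_2 at the inner front wheel
    return v_inner_front / r2                     # equivalently v * sin(alpha) / B

# Example: inner front wheel at 5 m/s, steering angle of 10 degrees, wheel base of 2.7 m
omega = ackermann_yaw_rate(5.0, math.radians(10.0), 2.7)   # about 0.32 rad/s
```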

[0087] For better accuracy, the instantaneous position may be computed using input from further odometric sensors, such as a GPS system, speed and acceleration sensors of the vehicle or other kinds of odometric sensors. For example, GPS position values can be used to correct a drift from the true position.
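As a deliberately simple, hedged illustration of using GPS fixes against odometric drift (the blending gain and the function name are assumptions; the disclosure does not specify how the correction is performed):

```python
import numpy as np

def correct_drift(position_odo: np.ndarray, position_gps: np.ndarray,
                  gain: float = 0.05) -> np.ndarray:
    """Pull the dead-reckoned position a small step toward the GPS fix whenever a
    fix is available; the gain trades smoothness against how quickly drift is removed."""
    return (1.0 - gain) * position_odo + gain * position_gps
```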

[0088] In some examples, the ego-motion is estimated from an affine projection or transformation to a ground plane, where the images of the cameras of the surround view system are merged into the projection to the ground plane.

[0089] FIGS. 3 and 4 show a projection to a ground plane 16. Under the assumption that an image point corresponds to an object point of an object on the ground plane, the image point may be projected to the location of the corresponding object point on the ground plane. An angle of incidence θ is derived from the location of the image point on the camera sensor. A location Y of the projection is then derived using the height H of the camera sensor above street level as Y = H*tan(θ).

[0090] FIG. 4 shows an isometric view of the affine projection of FIG. 3. In FIG. 4, a point in a view port plane 17 is denoted by p=(u, v) and a corresponding point in the ground plane 16 is denoted by P=(X, Y). A distance between the view port plane 17 and a projection center C is denoted by the letter “f”.
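A minimal projection sketch under a pinhole assumption is given below. It assumes a virtual camera looking straight down from height H, which is roughly what the virtual top-down projection provides, and consistent units for f, u, v and H; these are assumptions of the example, not statements of the disclosure.

```python
import math

def ground_distance_from_angle(height_h: float, theta_from_vertical: float) -> float:
    """Distance along the ground plane from the camera foot point to the object
    point, for a ray at angle theta measured from the vertical (FIG. 3)."""
    return height_h * math.tan(theta_from_vertical)

def ground_point_nadir_camera(u: float, v: float, f: float, height_h: float):
    """Pinhole sketch for a virtual camera looking straight down from height H:
    an image point p = (u, v) at focal distance f maps to the ground point
    P = (X, Y) by scaling with H/f (FIG. 4)."""
    return (height_h * u / f, height_h * v / f)
```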

[0091] In some implementations, the camera image is evaluated and the observed scene is reconstructed. In some examples, a sidewalk is detected and its height is estimated. Stationary objects, such as a lamp post or a tree, may be detected and their orientation relative to the ground plane estimated.

[0092] Objects which are not located at street level and/or which have a proper motion may distort the optical flow and lead to inaccuracies in the derived ego-motion. In some implementations, the optical flow vectors resulting from such objects are filtered out using a RANSAC (random sample consensus) procedure in which outliers are suppressed. In some examples, a road border is recognized using edge recognition and a digital map, which is stored in a computer memory of the car 10.
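The RANSAC step can be sketched as follows. This deliberately simple variant estimates only a dominant translation of the ground-plane flow and discards vectors that disagree with it; a full implementation may instead fit a rigid motion, and the iteration count and threshold are assumptions.

```python
import numpy as np

def ransac_dominant_flow(flow: np.ndarray, n_iter: int = 200,
                         inlier_tol: float = 0.5, rng=None) -> np.ndarray:
    """Random sample consensus over per-feature motion vectors.

    flow       -- (N, 2) array of motion vectors on the projected ground plane
    inlier_tol -- maximum deviation from the hypothesis to count as an inlier
    Returns the mean of the largest consensus set, i.e. the flow with outliers
    (for example moving or elevated objects) suppressed.
    """
    rng = np.random.default_rng() if rng is None else rng
    best_inliers = np.zeros(len(flow), dtype=bool)
    for _ in range(n_iter):
        hypothesis = flow[rng.integers(len(flow))]               # one-vector hypothesis
        inliers = np.linalg.norm(flow - hypothesis, axis=1) < inlier_tol
        if inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return flow[best_inliers].mean(axis=0)
```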

[0093] In some implementations, roll and pitch motions of the car are determined, for example by using acceleration and/or orientation sensors of the car, and the ego-motion vectors are corrected by subtracting or compensating the roll and pitch motions.

[0094] According to further modifications, the derived ego-motion is used for a lane-keeping application or for other electronic stabilization applications.

[0095] FIG. 5 shows, by way of example, a procedure for obtaining an ego-motion. In a step 30, camera images are acquired from the cameras 12-15. The camera images are combined into a combined image in a step 31. In a step 32, an image area is selected for the determination of ego-motion. For example, image areas that correspond to objects outside a street zone, such as buildings and other installations, may be clipped. In a step 33, the image points are projected to a ground surface, for example by applying an affine transformation or a perspective projection.

[0096] In a step 34, corresponding image points are identified in consecutive images. In a step 35, optical flow vectors are derived by comparing the locations of the corresponding image points, for example by computing the difference vector between the position vectors of the corresponding locations. In a step 36, a filter procedure is applied, such as a RANSAC procedure or another elimination of outliers and interpolation, or a Kalman filter. In some examples, the filtering may involve storing image values, such as image point brightness values, of a given time window in computer memory and computing an average of the image values. In a step 37, an ego-motion vector of the car is derived from the optical flow.
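Steps 33 to 37 might be sketched with OpenCV roughly as below. The feature-tracking parameters, the use of a similarity transform to summarize the flow, and the conversion of its rotation and translation into a yaw rate and a velocity are assumptions made for the illustration and are not the disclosed implementation.

```python
import cv2
import numpy as np

def ego_motion_step(prev_top, next_top, dt, pixels_per_meter):
    """Estimate planar ego-motion between two consecutive ground-plane (top-down)
    projections: track features, fit a transform with RANSAC, read off yaw rate
    and velocity. prev_top and next_top are 8-bit grayscale projected images."""
    pts_prev = cv2.goodFeaturesToTrack(prev_top, maxCorners=400,
                                       qualityLevel=0.01, minDistance=8)
    pts_next, status, _ = cv2.calcOpticalFlowPyrLK(prev_top, next_top, pts_prev, None)
    good_prev = pts_prev[status.ravel() == 1]
    good_next = pts_next[status.ravel() == 1]

    # Similarity transform fitted with RANSAC; flow vectors from moving or
    # elevated objects end up outside the inlier mask and are ignored.
    m, inliers = cv2.estimateAffinePartial2D(good_prev, good_next,
                                             method=cv2.RANSAC,
                                             ransacReprojThreshold=2.0)
    if m is None:
        return 0.0, np.zeros(2)
    # The transform describes the apparent motion of the ground in the image;
    # the vehicle's own motion is the opposite of this apparent motion.
    yaw = np.arctan2(m[1, 0], m[0, 0])          # rotation between the two frames
    t = m[:, 2] / pixels_per_meter              # translation in meters
    return -yaw / dt, -t / dt                   # yaw rate and planar velocity of the car
```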

[0097] The particular sequence of the steps of FIG. 5 is only provided by way of example. For example, the images of the cameras may also be combined after carrying out the projection to ground level.

[0098] A number of implementations have been described. Nevertheless, it will be understood that various modifications may be made without departing from the spirit and scope of the disclosure. Accordingly, other implementations are within the scope of the following claims.