System, vehicle and method for online calibration of a camera on a vehicle

09792683 · 2017-10-17

Abstract

A camera mounted on or for a vehicle has camera rotational parameters φ, θ, ψ and camera translational parameters x.sub.c, y.sub.c, z.sub.c in a camera image coordinate system, and the vehicle has a vehicle coordinate system. A method for online or on-the-fly calibration of the camera involves two independent steps, namely while the vehicle is moving relative to the ground, calibrating the camera rotational parameters using a parallel geometrical calibration process, and calibrating at least some of the camera translational parameters x.sub.c, y.sub.c independently of the camera rotational parameters.

Claims

1. A method for calibration of a camera on a vehicle driving on a ground in an environment, wherein camera rotational parameters and camera translational parameters are defined in a camera image coordinate system for the camera, wherein a vehicle coordinate system is defined for the vehicle, and wherein the method comprises steps: while moving the vehicle relative to the ground, calibrating the camera rotational parameters using a parallel geometrical calibration process, determining that the vehicle is moving along a curved path, and while the vehicle is moving along the curved path, calibrating at least selected ones of the camera translational parameters independently of the camera rotational parameters, wherein the calibrating of the selected camera translational parameters comprises the calibrating steps: tracking at least two features of the environment in at least two successive images captured by the camera over a period of time, mapping the at least two features to a ground plane, fitting an arc to the at least two features mapped to the ground plane, calculating a first vehicle turning center in the camera image coordinate system from the arc, determining a vehicle steering angle, calculating a second vehicle turning center in the vehicle coordinate system from the vehicle steering angle, determining a difference between the first vehicle turning center in the camera image coordinate system and the second vehicle turning center in the vehicle coordinate system, determining, from the difference, an offset to be applied to the selected camera translational parameters so that a position of the first vehicle turning center in the camera image coordinate system will correspond to the position of the second vehicle turning center in the vehicle coordinate system, and applying the offset to the selected camera translational parameters.

2. The method according to claim 1, wherein the camera rotational parameters include three parameters φ, θ and ψ respectively relating to rotational orientations about three orthogonal axes, wherein the camera translational parameters include a longitudinal translational parameter x.sub.c, a transverse translational parameter y.sub.c, and a height parameter z.sub.c, and wherein the step of calibrating at least selected ones of the camera translational parameters comprises calibrating the longitudinal translational parameter x.sub.c and the transverse translational parameter y.sub.c.

3. The method according to claim 2, further comprising: while the vehicle is traveling in a straight line, tracking at least two longitudinal features of the environment in at least two successive images captured by the camera over a predetermined period of time to provide two recorded trajectories, and calibrating the height parameter z.sub.c by determining a camera height of the camera by recording a vehicle speed of the vehicle while tracking the longitudinal features, and determining a first distance that a respective one of the features has moved in the successive images in the predetermined period of time.

4. The method according to claim 3, wherein the calibrating of the height parameter z.sub.c further comprises determining a difference between the first distance and a predicted distance calculated from the vehicle speed, calculating an offset in the camera height from the difference, and adding the offset to a stored value of the camera height.

5. The method according to claim 1, comprising performing the calibrating of the selected camera translational parameters after performing the calibrating of the camera rotational parameters.

6. The method according to claim 1, wherein the calibrating of the selected camera translational parameters is performed only while the vehicle is moving along the curved path.

7. The method according to claim 1, wherein the selected camera translational parameters are calibrated by repeating the calibrating respectively while the vehicle is moving in respective curved paths curving counter-clockwise and curving clockwise.

8. The method according to claim 1, wherein the selected camera translational parameters are calibrated by repeating the calibrating respectively while the vehicle is moving in respective curved paths having differing radii.

9. The method according to claim 1, further comprising, at a different time than the moving along the curved path, determining that the vehicle is traveling in a straight line, and performing the calibrating of the camera rotational parameters using the parallel geometrical calibration process while the vehicle is traveling in the straight line.

10. The method according to claim 9, wherein the parallel geometrical calibration process comprises: while the vehicle is traveling in the straight line tracking at least two longitudinal features of the environment in at least two successive images captured by the camera over a period of time to provide two recorded trajectories, and adjusting the camera rotational parameters such that the two recorded trajectories are parallel to one another or respectively aligned with one another when mapped from the camera image coordinate system to the vehicle coordinate system.

11. A system for a vehicle, for performing the method according to claim 1, comprising a calibration unit that comprises: a first input configured and adapted to receive images from the camera on the vehicle, a memory configured and adapted to store the camera translational parameters and the camera rotational parameters, a second input configured and adapted to receive an actual vehicle steering angle, a third input configured and adapted to receive an actual vehicle speed, and a processing arrangement configured and adapted to perform the method for the online calibration of the camera.

12. The system according to claim 11, further comprising the camera and a display configured and adapted to display images from the camera.

13. A non-transitory computer-readable medium storing a computer program embodied thereon, that when executed on a processor, causes the processor to perform the method according to claim 1.

14. A method of calibrating a camera on a vehicle, wherein a vehicle coordinate system is defined relative to the vehicle, wherein a longitudinal translational parameter, a transverse translational parameter and rotational parameters are defined in a camera image coordinate system for the camera, and the method comprises steps: a) while the vehicle is moving, obtaining first image data from the camera; b) evaluating the first image data and performing a parallel geometrical calibration process based thereon, to thereby calibrate the rotational parameters of the camera to the vehicle coordinate system; c) while the vehicle is moving along a curved path, obtaining second image data from the camera; and d) evaluating the second image data and performing a separate calibration process based thereon, to thereby calibrate the longitudinal translational parameter and the transverse translational parameter of the camera to the vehicle coordinate system, wherein the separate calibration process is performed separately and independently of the parallel geometrical calibration process and separately and independently of the rotational parameters, and wherein the separate calibration process for calibrating the longitudinal and transverse translational parameters comprises the calibrating steps: tracking at least two environment features in at least two successive images of the second image data, mapping the at least two environment features to a ground plane, fitting an arc to the at least two environment features mapped to the ground plane, calculating a first turning center of the vehicle in the camera image coordinate system from the arc, determining a steering angle of the vehicle, calculating a second turning center of the vehicle in the vehicle coordinate system from the steering angle, determining a difference between the first turning center in the camera image coordinate system and the second turning center in the vehicle coordinate system, determining, from the difference, an offset to be applied to the longitudinal and transverse translational parameters so that a position of the first turning center in the camera image coordinate system will correspond to the position of the second turning center in the vehicle coordinate system, and applying the offset to the longitudinal and transverse translational parameters.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) In order that the invention may be clearly understood, it will now be described in connection with example embodiments thereof, with reference to the accompanying drawings, wherein:

(2) FIGS. 1A and 1B illustrate schematic side and top views of a vehicle and a vehicle coordinate system;

(3) FIG. 2 illustrates a schematic side view of a vehicle including cameras and a system for calibrating the cameras;

(4) FIG. 3 illustrates a schematic block diagram of a method for online calibration of a camera on a vehicle;

(5) FIG. 4 illustrates a schematic diagram for explaining a method or process for calibrating camera translational parameters; and

(6) FIG. 5 illustrates a schematic diagram for explaining a method or process for calibrating camera rotational parameters.

DETAILED DESCRIPTION OF PREFERRED EXAMPLE EMBODIMENTS AND OF THE BEST MODE OF THE INVENTION

(7) FIG. 1 including FIGS. 1A and 1B illustrates schematic side and top views of a vehicle 1 and a vehicle coordinate system x.sub.v, y.sub.v, z.sub.v. The vehicle coordinate system is a Cartesian coordinate system with a datum which, in this embodiment, is taken as the center of the front axle. In the vehicle coordinate system, the direction x.sub.v extends in the direction of the length of the vehicle, the direction y.sub.v in the direction of the width of the vehicle and the direction z.sub.v in the direction of the height of the vehicle. Due to the datum being positioned in the center of the front axle, −y.sub.v extends to the left of the vehicle, +y.sub.v extends to the right of the vehicle, −x.sub.v is forward of the front axle, +x.sub.v is rearward of the front axle, +z.sub.v is above the front axle and −z.sub.v is below the front axle. Opposite orientations of one or more of the directions are possible in other examples, e.g. if the vehicle in FIG. 1 is reversed or flipped lengthwise so it faces with its front end to the right, whereby +x.sub.v would be forward of the front axle.

(8) FIG. 2 illustrates a schematic side view of a vehicle 1 including four cameras, of which three cameras 2, 3, 4 are illustrated or visible in this view of the vehicle 1. Camera 2 is positioned to capture the environment forward of the vehicle, camera 3 is positioned to capture the environment to the left side of the vehicle, camera 4 is positioned to capture the environment rearward of the vehicle, and a non-illustrated camera is positioned on the right-hand side of the vehicle to capture the environment to the right of the vehicle 1.

(9) The cameras have a wide field of view so that a complete 360° image can be captured of the immediate vicinity of the vehicle 1. However, the methods described herein may be used for cameras having a smaller field of view, and/or may be used for a vehicle including only a single camera or more or fewer than four cameras.

(10) A camera has intrinsic parameters and extrinsic parameters. The camera extrinsic parameters describe a camera image coordinate system which includes three rotational parameters φ, θ, ψ and three translational parameters x.sub.c, y.sub.c, z.sub.c. The three translational parameters x.sub.c, y.sub.c, z.sub.c are illustrated in FIG. 2.

(11) It is desirable that these parameters of the camera image coordinate system correspond to those of the real world, for example, to those of the vehicle coordinate system so that the positions of features captured in images by the camera (and respectively by all of the plural cameras) can be accurately mapped to their positions in the real world (and preferably in the vehicle coordinate system). This mapping may be used for detecting objects in the vicinity of the vehicle, for example hazards. This information may be used to warn the driver of the vehicle or may be used by driver assistance systems so that the vehicle driver assistance system automatically takes appropriate action to avoid the hazard.
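The mapping from the camera image coordinate system to the vehicle coordinate system described above can be sketched as a standard rigid-body transform built from the extrinsic parameters. The rotation order (roll about x, pitch about y, yaw about z) and the axis convention are assumptions for illustration only; the patent does not fix a particular convention:

```python
import numpy as np

def rotation_matrix(phi, theta, psi):
    """Combined rotation about x (roll), y (pitch), z (yaw), in radians.
    The Rz @ Ry @ Rx composition order is an illustrative assumption."""
    cx, sx = np.cos(phi), np.sin(phi)
    cy, sy = np.cos(theta), np.sin(theta)
    cz, sz = np.cos(psi), np.sin(psi)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def camera_to_vehicle(p_cam, phi, theta, psi, xc, yc, zc):
    """Map a 3-D point from the camera frame into the vehicle frame
    using the camera rotational and translational extrinsics."""
    R = rotation_matrix(phi, theta, psi)
    t = np.array([xc, yc, zc])
    return R @ np.asarray(p_cam, dtype=float) + t
```

With zero rotation the transform reduces to a pure translation by (x.sub.c, y.sub.c, z.sub.c), which is why errors in the translational parameters shift every mapped feature by the same amount.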

(12) The rotational parameters φ, θ, ψ and translational parameters x.sub.c, y.sub.c, z.sub.c of the camera extrinsics may be calibrated when the camera is mounted on the vehicle in the factory. However, the camera extrinsics may vary over time, for example because the position of the camera on the vehicle slightly changes, or a camera may require calibrating as it has been fitted to the vehicle after its production. One way of calibrating the camera is to calibrate the camera while the camera is moving and, since it is mounted on the vehicle, while the vehicle is moving, using an online or on-the-fly calibration method.

(13) The vehicle 1 includes a system 5 including a calibration unit 6 having a first input 7 coupled to the cameras so that it is able to receive images from these cameras and a display 11 for displaying images from the cameras. The calibration unit 6 further includes a second input 8 for receiving the actual (e.g. measured or sensed) vehicle steering angle and a third input 9 for receiving the actual (e.g. measured or sensed) vehicle speed. The vehicle steering angle data and vehicle speed data may be provided in any known manner, e.g. from sensors or systems typically already present in a conventional vehicle, or by additional sensors provided to sense these data values. The calibration unit 6 further includes a processing arrangement 10 for the online calibration of at least one of the cameras 2, 3, 4 mounted on the vehicle 1.

(14) FIG. 3 illustrates a schematic block diagram for explaining a method 20 for online calibration of a camera on a vehicle, such as one of the cameras 2, 3, 4 mounted on the vehicle 1 illustrated in FIG. 2, according to a first embodiment.

(15) While the vehicle is moving relative to the ground, the camera rotational parameters φ, θ, ψ are calibrated using a parallel geometrical calibration method 21, such as the parallel geometrical calibration method disclosed in WO 2012/143036 A1, in a first step. The camera longitudinal and transverse translational parameters x.sub.c, y.sub.c are calibrated in a second step 22, independently of the camera rotational parameters φ, θ, ψ and independently of the method 21 for calibrating the camera rotational parameters.

(16) The parallel geometrical calibration method 21 may be carried out first, with the second step 22 carried out afterwards. Therefore, after the rotational parameters φ, θ, ψ of the camera are calibrated, a further calibration is carried out to calibrate the camera translational parameters x.sub.c, y.sub.c. However, alternatively the two calibrating processes may be carried out in the opposite order, or simultaneously with one another, or partially overlapping with one another in either order.

(17) In the calibration method 22, the camera longitudinal and transverse translational parameters x.sub.c, y.sub.c are calibrated by determining a difference between the camera translational parameters relative to the vehicle coordinate system and, therefore, relative to the real world. If a difference is determined, an offset is applied to the translational parameters x.sub.c, y.sub.c in the camera image coordinate system to compensate for (e.g. zero-out) the difference.

(18) At least some of the camera translational parameters x.sub.c, y.sub.c (and optionally z.sub.c) are calibrated while the vehicle is moving along a curved path. Consequently, the second step 22 includes receiving data 23 from the vehicle, for example from a vehicle control unit, about the vehicle speed and steering wheel angle, in order to determine that the vehicle is moving relative to the ground and in a curved path. Images are received from the camera in step 24 and the images and data from the vehicle are recorded in step 25. In step 26, at least two features within the first image are identified. A subsequent image collected by the same camera is further analyzed to determine the position of the at least two features therein, and the position of these features is tracked over two or more successive images captured by the camera over a period of time. In this specification, the word “successive” is not limited to “directly successive” or “directly consecutive”, and thus also encompasses a sequence of two images that occur one after another but can have one or more other images occurring therebetween. This data (e.g. the results of the analysis of the moving positions of the tracked features in the successive images) is used to identify the vehicle turning center T.sub.c in the camera image coordinate system and the vehicle turning center T.sub.v in the vehicle coordinate system.
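The mapping of tracked image features to the ground plane mentioned above can be sketched as a ray-ground intersection under a pinhole-camera, flat-ground assumption. The function names, the intrinsic matrix K, and the choice of a world frame with the ground at z = 0 are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def pixel_to_ground(uv, K, R, cam_pos):
    """Project an image pixel onto the ground plane z = 0.

    uv: pixel (u, v); K: 3x3 intrinsic matrix; R: camera-to-world
    rotation matrix; cam_pos: camera position in world coords (z > 0).
    A pinhole model and flat ground are assumed for illustration.
    """
    # Back-project the pixel to a ray direction in the camera frame,
    # then rotate the ray into the world frame.
    ray_cam = np.linalg.inv(K) @ np.array([uv[0], uv[1], 1.0])
    ray_world = R @ ray_cam
    # Intersect the ray with the plane z = 0.
    if abs(ray_world[2]) < 1e-9:
        raise ValueError("ray is parallel to the ground plane")
    s = -cam_pos[2] / ray_world[2]
    if s <= 0:
        raise ValueError("pixel does not map to the ground")
    return cam_pos + s * ray_world
```

Applying this to the same feature in successive images yields the sequence of ground-plane positions to which an arc can then be fitted.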

(19) FIG. 4 illustrates a schematic diagram of a method for determining the vehicle turning center T.sub.c in the camera image coordinate system and the vehicle turning center T.sub.v in the vehicle coordinate system. A vehicle 1 with a rear camera 3 is illustrated, in a scenario in which the vehicle 1 has moved along a curved path. In the illustrated example, the position of two features 30, 31 relative to the ground for five images is determined, a respective arc 32, 33 is fitted to each of the trajectories formed by the successive images of the two features 30, 31 and the turning center T.sub.c of the vehicle in the camera coordinate system is calculated from these arcs 32, 33.
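The arc fitting and turning-center calculation described for FIG. 4 can be sketched with an algebraic least-squares (Kåsa-style) circle fit; the patent does not specify the fitting method, so this particular fit is an illustrative choice:

```python
import numpy as np

def fit_circle(points):
    """Least-squares (Kasa) circle fit to 2-D ground-plane points.

    Returns (center_x, center_y, radius); the center is the estimated
    turning center T_c in the camera-derived ground coordinates.
    Requires at least 3 non-collinear points.
    """
    pts = np.asarray(points, dtype=float)
    x, y = pts[:, 0], pts[:, 1]
    # Circle: x^2 + y^2 + a*x + b*y + c = 0. Solve for (a, b, c),
    # then center = (-a/2, -b/2) and r^2 = cx^2 + cy^2 - c.
    A = np.column_stack([x, y, np.ones_like(x)])
    rhs = -(x**2 + y**2)
    (a, b, c), *_ = np.linalg.lstsq(A, rhs, rcond=None)
    cx, cy = -a / 2.0, -b / 2.0
    r = np.sqrt(cx**2 + cy**2 - c)
    return cx, cy, r
```

Fitting one circle per tracked feature (arcs 32 and 33 in FIG. 4) and combining the centers, e.g. by averaging, gives the single turning center T.sub.c, since all points of a rigid vehicle turn about the same center.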

(20) The actual (e.g. sensed or measured) data from the vehicle regarding the steering wheel angle is used to calculate the vehicle turning center T.sub.v in the vehicle coordinate system. The respective positions of the vehicle turning center T.sub.c in the camera image coordinate system and the vehicle turning center T.sub.v in the vehicle coordinate system are compared and, if there is a difference 34 between them, then this difference is used to calculate an offset 35 which is applied (e.g. added) to the x.sub.c, y.sub.c parameters of the camera in order to shift the coordinates of T.sub.c (plus the offset) to match the coordinates of T.sub.v, thus calibrating the x.sub.c, y.sub.c translational parameters of the camera.
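The turning center T.sub.v computed from the steering angle, and the offset applied to x.sub.c, y.sub.c, can be sketched as follows. The kinematic bicycle model and the sign/axis conventions used here are illustrative assumptions; the patent does not specify how T.sub.v is derived from the steering angle:

```python
import math

def turning_center_from_steering(steering_angle, wheelbase):
    """Turning center in the vehicle frame, from the steering angle
    (radians) via a simple kinematic bicycle model (an illustrative
    assumption). The center lies on the extended rear-axle line at a
    lateral distance of wheelbase / tan(steering_angle)."""
    if abs(steering_angle) < 1e-9:
        raise ValueError("no finite turning center on a straight path")
    lateral = wheelbase / math.tan(steering_angle)
    # (longitudinal, lateral) with the rear axle at x = wheelbase
    # behind the front-axle datum; signs follow this sketch only.
    return (wheelbase, lateral)

def translational_offset(tc_camera, tc_vehicle):
    """Offset to add to (x_c, y_c) so that the camera-derived turning
    center coincides with the steering-derived one."""
    return (tc_vehicle[0] - tc_camera[0], tc_vehicle[1] - tc_camera[1])
```

The offset is simply the component-wise difference 34 between the two centers, applied as correction 35 to the stored x.sub.c, y.sub.c values.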

(21) Returning to FIG. 3, in the parallel geometrical method 21 used to calibrate the camera rotational parameters φ, θ, ψ, data from the cameras is collected in step 24 and data from the vehicle is collected while the vehicle is traveling in a straight line in step 25. Thus, step 25 can be regarded as separate for the two independent calibration processes, or can be regarded as a combined or composite or omnibus data gathering and recording step, which may be performed using a single memory or separate memories for the two independent calibration processes.

(22) If the vehicle is moving relative to the ground and in a straight line, the online calibration of the camera rotational parameters may be performed by tracking two or more longitudinal features in at least two successive images captured by the camera over a period of time. The parallel geometrical calibration method disclosed in WO 2012/143036 A1 may be used, for example.

(23) The camera rotational parameters φ, θ, ψ are adjusted (e.g. shifted or supplemented with an offset as necessary) so that the recorded trajectories of the longitudinal features are parallel when mapped from the camera image coordinate system to the vehicle coordinate system. This is illustrated schematically in FIG. 5.

(24) FIG. 5 illustrates a schematic view of images captured by two cameras mounted on a vehicle 40. In the image 41 captured by the front camera 45, two longitudinal features 42, 43, for example a lane marking on the road, are indicated. In the image 44 captured by the rear camera 46, the same two longitudinal features 42′, 43′ are captured. It can be seen that there is a difference 47 between the position of the longitudinal features 42, 43 captured in the image 41 and the same longitudinal features 42′, 43′ captured in the image 44. Because the two cameras 45, 46 captured the same longitudinal features (based on an image analysis of the images of the two cameras), the positions of these longitudinal features should be the same in both images 41, 44. Therefore, the difference 47 is used to determine a contrary offset to adjust the camera rotational parameters (e.g. to add to the rotational parameters of the camera 46) so that the positions of the longitudinal features 42, 43 are respectively the same in both images 41, 44.
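The parallelism check underlying this adjustment can be sketched as follows: estimate the heading of each recorded trajectory in the mapped (ground) coordinates and take the residual angle between them as the rotational correction. How that residual is distributed over φ, θ, ψ depends on the camera geometry and is not shown here; the function names are illustrative:

```python
import numpy as np

def trajectory_heading(traj):
    """Heading angle (radians) of a roughly straight recorded
    trajectory, taken from its first and last mapped 2-D points
    (a full line fit could be used instead)."""
    pts = np.asarray(traj, dtype=float)
    dx, dy = pts[-1] - pts[0]
    return np.arctan2(dy, dx)

def residual_angle(traj_a, traj_b):
    """Angle by which the two recorded trajectories fail to be
    parallel; zero means the rotational calibration is consistent
    with the straight-line motion."""
    return trajectory_heading(traj_b) - trajectory_heading(traj_a)
```

When the camera rotational parameters are correct, two lane-marking trajectories recorded during straight-line driving map to parallel lines and the residual angle vanishes.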

(25) The height z.sub.c of the camera may also be determined using the parallel geometrical calibration method by recording the speed of the vehicle while tracking the longitudinal features. This enables the distance that a feature has moved in the camera image in a predetermined period of time to be determined. Using triangulation, the height z.sub.c of the camera is determined.

(26) The height z.sub.c of the camera may also be calibrated by determining any existing difference between the distance measured from the camera image and a predicted distance calculated from the vehicle speed. If a difference is determined, this is used to calculate an offset which is added to a stored value of camera height.
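The height calibration of paragraphs (25) and (26) can be sketched numerically. Under a flat-ground, pinhole assumption, a ground distance recovered by projecting image motion onto the ground plane scales linearly with the assumed camera height, so the measured-versus-predicted distance ratio directly corrects the stored height; this scaling relation is an illustrative assumption rather than the patent's stated derivation:

```python
def height_offset(stored_height, measured_distance, vehicle_speed, dt):
    """Offset to add to the stored camera height z_c.

    measured_distance: ground distance a tracked feature appears to
    move in the images over time dt, computed with the stored height.
    predicted = vehicle_speed * dt is the true distance traveled, so
    the corrected height is stored_height * predicted / measured
    (linear-scaling assumption, flat ground, pinhole camera).
    """
    predicted = vehicle_speed * dt
    corrected = stored_height * predicted / measured_distance
    return corrected - stored_height
```

For example, if the stored height makes a feature appear to move 12 m while the vehicle actually traveled 10 m, the stored height is too large and receives a negative offset.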

(27) To summarize, the camera longitudinal and transverse translational parameters x.sub.c, y.sub.c are calibrated in a method step separate from a calibration of the camera rotational parameters φ, θ, ψ in order to avoid data ambiguity due to attempting to solve multiple dependent variables simultaneously, which may occur if the rotational parameters and translational parameters are calibrated in a single process or method. Any prior art parallel geometrical calibration process or method, such as the method disclosed in WO 2012/143036 A1, may be used to calibrate the camera rotational parameters φ, θ, ψ and, optionally, also the camera height z.sub.c.

(28) Although the invention has been described with reference to specific example embodiments, it will be appreciated that it is intended to cover all modifications and equivalents within the scope of the appended claims. It should also be understood that the present disclosure includes all possible combinations of any individual features recited in any of the appended claims. The abstract of the disclosure does not define or limit the claimed invention, but rather merely abstracts certain features disclosed in the application.