METHOD FOR CALIBRATING THE ALIGNMENT OF CAMERAS
20230230281 · 2023-07-20
Inventors
CPC classification
G06T7/80 (PHYSICS)
International classification
Abstract
A method for calibrating a multiplicity of cameras on a vehicle in a common coordinate system. The method includes: a) acquiring camera images using each camera of the multiplicity of cameras, b) ascertaining overlap regions of camera images acquired in step a), c) setting up a common optimization function, which describes the alignments of each camera of the multiplicity of cameras as a target variable of the optimization, based on the overlap regions, d) solving the common optimization function set up in step c) to ascertain the alignments of each camera.
Claims
1. A method for calibrating a multiplicity of cameras on a vehicle in a common coordinate system, the method comprising the following steps: a) acquiring camera images using each camera of the multiplicity of cameras; b) ascertaining overlap regions of the camera images acquired in step a); c) setting up a common optimization function, which describes alignments of each camera of the multiplicity of cameras as a target variable of the optimization, based on the overlap regions; and d) solving the common optimization function set up in step c) to ascertain the alignments of each camera of the multiplicity of cameras.
2. The method as recited in claim 1, wherein the common coordinate system is a vehicle coordinate system.
3. The method as recited in claim 1, wherein a rectification of the camera images is implemented.
4. The method as recited in claim 1, wherein the alignment of each camera of the multiplicity of cameras is described by a rotation matrix, which describes the alignment of the camera relative to the common coordinate system.
5. The method as recited in claim 1, wherein the optimization function includes a system matrix, which describes the relative alignment of the cameras with respect to one another in the form of variables that are identified with a minimum error when the optimization function is solved in step d).
6. The method as recited in claim 1, wherein an optimization, which is at least of the second order, is performed to solve the optimization function.
7. The method as recited in claim 1, wherein a Gauss-Newton algorithm is used to solve the optimization function.
8. The method as recited in claim 1, wherein the multiplicity of cameras forms a camera circle in which the cameras cover at least a portion of an environment of the vehicle.
9. The method as recited in claim 1, wherein absolute alignment information is inserted into the common coordinate system for at least one camera of the multiplicity of cameras when setting up the optimization function.
10. The method as recited in claim 1, wherein pixels in the overlap region of different camera images are allocated to one another when setting up the common optimization function in step c).
11. A non-transitory machine-readable memory medium on which is stored a computer program for calibrating a multiplicity of cameras on a vehicle in a common coordinate system, the computer program, when executed by a computer, causing the computer to perform the following steps: a) acquiring camera images using each camera of the multiplicity of cameras; b) ascertaining overlap regions of the camera images acquired in step a); c) setting up a common optimization function, which describes alignments of each camera of the multiplicity of cameras as a target variable of the optimization, based on the overlap regions; and d) solving the common optimization function set up in step c) to ascertain the alignments of each camera of the multiplicity of cameras.
12. A control device configured to calibrate a multiplicity of cameras on a vehicle in a common coordinate system, the control device configured to: a) acquire camera images using each camera of the multiplicity of cameras; b) ascertain overlap regions of the camera images acquired in step a); c) set up a common optimization function, which describes alignments of each camera of the multiplicity of cameras as a target variable of the optimization, based on the overlap regions; and d) solve the common optimization function set up in step c) to ascertain the alignments of each camera of the multiplicity of cameras.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF EXAMPLE EMBODIMENTS
[0104] In block 110, each camera 1, 2 of the multiplicity of cameras 1, 2 acquires camera images according to step a).
[0105] A rectification of the camera images may in particular be implemented in this context, for instance in such a way that virtual images are generated from the original or real images.
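Purely as an illustrative sketch (not part of the claimed method), such a rectification can be understood as re-rendering each real image into a virtual, rotated view. The following Python/NumPy snippet computes, for each pixel of a virtual image, the corresponding sub-pixel position in the real image under a simple pinhole model; the matrix names `K_real`, `K_virt`, and `R_rect` are assumptions for the example and are not specified in the document.

```python
import numpy as np

def rectification_map(K_real, K_virt, R_rect, width, height):
    """For each pixel of the virtual (rectified) image, compute the
    corresponding sub-pixel position in the real camera image.

    K_real, K_virt: 3x3 pinhole intrinsic matrices (illustrative names).
    R_rect: 3x3 rotation turning the real view into the virtual view.
    Returns an array of shape (height, width, 2) with (u, v) source
    coordinates, as used by a subsequent image-warping step.
    """
    u, v = np.meshgrid(np.arange(width), np.arange(height))
    pix = np.stack([u, v, np.ones_like(u)], axis=-1).reshape(-1, 3).T  # 3 x N
    rays = np.linalg.inv(K_virt) @ pix   # virtual pixel -> viewing ray
    rays = R_rect.T @ rays               # rotate ray back into the real camera
    src = K_real @ rays                  # ray -> real pixel (homogeneous)
    src = src[:2] / src[2]               # dehomogenize
    return src.T.reshape(height, width, 2)
```

With an identity rotation and identical intrinsics, the map reproduces the pixel grid, which is a useful sanity check for the convention chosen here.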
[0106] In block 120, according to step b), overlap regions 4 of camera images acquired in step a) are ascertained.
[0107] In block 130, according to step c), a common optimization function is set up with the aid of overlap regions 4, which describes the alignments of each camera 1, 2 of the multiplicity of cameras 1, 2 as a target variable of the optimization.
[0108] For example, when setting up the optimization function for at least one camera 1 of the multiplicity of cameras 1, 2, absolute alignment information may be inserted in the common coordinate system.
[0109] As a further example, pixels in overlap region 4 of different (real or virtual) camera images are able to be allocated to one another when setting up the common optimization function in step c).
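As a minimal sketch of how pixels of different cameras might be allocated to one another (an assumption for illustration, not the claimed procedure), one can map a pixel of one camera into another camera via the infinite homography, which is exact for far-away scene points. The convention below, in which each rotation `R_i` rotates world directions into camera `i`, is likewise an assumption.

```python
import numpy as np

def allocate_pixel(p1, K1, K2, R1, R2):
    """Map pixel p1 = (u, v) of camera 1 into camera 2 via the infinite
    homography H = K2 R2 R1^T K1^{-1}.

    R1, R2: current rotation estimates of the two cameras in the common
    coordinate system (world direction -> camera direction).
    Exact only for distant scene points; an illustrative simplification.
    """
    H = K2 @ R2 @ R1.T @ np.linalg.inv(K1)
    q = H @ np.array([p1[0], p1[1], 1.0])
    return q[:2] / q[2]
```

For identical intrinsics and identical rotations the mapping is the identity, so a pixel in the overlap region is allocated to the same coordinates in the other image.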
[0110] In block 140, according to step d), the common optimization function set up in step c) is solved to ascertain the alignments of each camera 1, 2.
[0111] To solve the optimization function, an optimization of at least the second order may be carried out, purely by way of example. As an especially advantageous, but likewise merely exemplary, embodiment variant, a Gauss-Newton algorithm may be used to solve the optimization function.
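For illustration only, a generic Gauss-Newton iteration for a least-squares problem can be sketched as follows; the function names and the toy problem in the usage example are assumptions and do not reflect the actual optimization function of the method.

```python
import numpy as np

def gauss_newton(residual, jacobian, x0, iters=20):
    """Minimal Gauss-Newton iteration: repeatedly linearize the residual
    and take the least-squares step J * step = -r (illustrative sketch)."""
    x = np.asarray(x0, dtype=float)
    for _ in range(iters):
        r = residual(x)
        J = jacobian(x)
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        x = x + step
        if np.linalg.norm(step) < 1e-12:   # converged
            break
    return x

# Toy usage: minimize ||r(x)||^2 with r(x) = (x0^2 - 2, x1 - 1),
# whose minimizer is (sqrt(2), 1).
r = lambda x: np.array([x[0] ** 2 - 2.0, x[1] - 1.0])
J = lambda x: np.array([[2.0 * x[0], 0.0], [0.0, 1.0]])
x_opt = gauss_newton(r, J, [1.0, 0.0])
```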
[0112] For instance, the alignment of each individual camera 1, 2 is able to be described by a rotation matrix, which describes the alignment of camera 1, 2 in relation to the common coordinate system.
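As an illustrative sketch of such a rotation matrix (the yaw/pitch/roll parametrization is an assumption for the example; the document does not fix an angle convention):

```python
import numpy as np

def rotation_matrix(yaw, pitch, roll):
    """Rotation matrix R = Rz(yaw) @ Ry(pitch) @ Rx(roll) describing a
    camera's alignment relative to the common (e.g. vehicle) coordinate
    system.  Angles in radians; convention chosen for illustration."""
    cz, sz = np.cos(yaw), np.sin(yaw)
    cy, sy = np.cos(pitch), np.sin(pitch)
    cx, sx = np.cos(roll), np.sin(roll)
    Rz = np.array([[cz, -sz, 0.0], [sz, cz, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cy, 0.0, sy], [0.0, 1.0, 0.0], [-sy, 0.0, cy]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cx, -sx], [0.0, sx, cx]])
    return Rz @ Ry @ Rx
```

Any matrix produced this way is orthogonal with determinant one, as required of a rotation describing a camera alignment.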
[0113] By way of example, the optimization function may include a system matrix, which describes the relative alignment of cameras 1, 2 relative to one another in the form of variables that are able to be identified with a minimal error when the optimization function is solved in step d).
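Purely as an illustrative sketch of how such a system matrix could be set up and solved, the snippet below linearizes the problem with small-angle corrections (three unknowns per camera), stacks one block row per relative measurement from an overlap region, pins one camera with absolute alignment information (compare claim 9), and solves the result in the least-squares sense. The parametrization and all names are assumptions for the example, not the claimed formulation.

```python
import numpy as np

def solve_alignments(n_cams, rel_meas, anchor=0):
    """Build a system matrix A over per-camera angle corrections
    (3 unknowns per camera) from relative measurements m_ij ~ theta_i - theta_j
    obtained in the overlap regions, anchor one camera absolutely,
    and solve A x ~ b in the least-squares sense (linearized sketch)."""
    rows, rhs = [], []
    for i, j, m in rel_meas:                 # one block row per overlap
        for k in range(3):
            row = np.zeros(3 * n_cams)
            row[3 * i + k], row[3 * j + k] = 1.0, -1.0
            rows.append(row)
            rhs.append(m[k])
    for k in range(3):                       # absolute information: pin the anchor camera
        row = np.zeros(3 * n_cams)
        row[3 * anchor + k] = 1.0
        rows.append(row)
        rhs.append(0.0)
    A, b = np.array(rows), np.array(rhs)     # A plays the role of the system matrix
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(n_cams, 3)
```

Because every camera is tied to its neighbors through the overlap measurements, the joint solve distributes errors consistently instead of letting one decalibrated camera corrupt its neighbors' estimates.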
[0115] Following the rectification (below), the images are clearly more similar, which is better for the subsequent image processing. For example, this is better for calculating the optical flow. Since these images are slightly rotated during the rectification, this can also be taken into account in the modeling of the calibration problem. The new approach described here simplifies the estimation of the calibration considerably and makes it more robust insofar as it estimates the real camera calibration and operates only indirectly on the virtual cameras.
[0118] The described algorithm is preferably implemented in software. The advantage of the described method over a separate calibration of each camera can be recognized, for instance, once a complete online calibration has been performed (detectable by the diagnosis output). Subsequently, precisely one side camera is deliberately rotated by a few degrees (approximately 3 to 5 degrees) in different directions, so that this camera is now decalibrated. If conventional methods are then used for an individual calibration of the cameras, this can be detected in that not only the rotated camera itself but also adjacent cameras no longer exhibit the same values as during the first calibration process, because the error of 3 to 5 degrees propagates and is thus distributed to other cameras. With the method described here, in contrast, no such difference can be detected: through the fusion of multiple cameras, the method determines precisely which camera is decalibrated and automatically recalibrates it.