METHOD FOR HARMONIZING IMAGES ACQUIRED FROM NON-OVERLAPPING CAMERA VIEWS
20230064558 · 2023-03-02
Assignee
Inventors
- Mark Griffin (Troy, MI, US)
- Aidan Casey (County Galway, IE)
- Emre Turgay (County Galway, IE)
- Alex Perkins (Troy, MI, US)
CPC classification
B60R2300/202
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/303
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/8066
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/50
PERFORMING OPERATIONS; TRANSPORTING
G06V20/588
PHYSICS
G06V20/56
PHYSICS
B60R2300/20
PERFORMING OPERATIONS; TRANSPORTING
G06V10/25
PHYSICS
B60Y2200/147
PERFORMING OPERATIONS; TRANSPORTING
H04N9/646
ELECTRICITY
H04N7/181
ELECTRICITY
B60R1/00
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/307
PERFORMING OPERATIONS; TRANSPORTING
B62D13/06
PERFORMING OPERATIONS; TRANSPORTING
B60R2300/304
PERFORMING OPERATIONS; TRANSPORTING
B60R1/22
PERFORMING OPERATIONS; TRANSPORTING
H04N23/90
ELECTRICITY
International classification
G06V10/75
PHYSICS
G06V20/56
PHYSICS
H04N9/68
ELECTRICITY
Abstract
An image processing method for harmonizing images acquired by a first camera and a second camera connected to a vehicle and arranged in such a way that their fields of view cover the same road space at different times as the vehicle travels along a travel direction is disclosed. The method includes: acquiring, by a selected camera, a first image at a first time; selecting a first region of interest bounding a road portion from the first image; sampling the first region of interest; acquiring, by the other camera, a second image in such a way that the road portion is included in a second region of interest; sampling the second region of interest; and determining one or more correction parameters for harmonizing images acquired by the first and second cameras, based on a comparison between the image content of the first and second regions of interest.
Claims
1. An image processing method for harmonizing images acquired by a first camera and a second camera connected to a vehicle and arranged in such a way as their fields of view cover a same road space at different times as the vehicle travels along a travel direction, the method comprising: determining the travel direction of the vehicle; based on the travel direction, selecting one of the first and second cameras to acquire a first image; acquiring, by the selected camera, the first image at a first time; selecting at least one first region of interest of the first image, the first region of interest potentially bounding a road portion; sampling the at least one first region of interest from the first image; monitoring a distance travelled by the vehicle along the travel direction after the first time to determine a second time to acquire, by the other camera, a second image, in such a way that the potential road portion is included in a second region of interest of the second image; acquiring, by the other camera, the second image at the second time; sampling the second region of interest from the second image; verifying whether both the first and second regions of interest include a road portion; and responsive to a verification that both the first and second regions of interest include the road portion, determining one or more correction parameters for harmonizing images acquired by the first and second cameras, based on a comparison between the image content of the first and second regions of interest.
2. The method of claim 1, wherein the first camera is a rear camera of the vehicle and the second camera is a rear camera of a trailer connected to the vehicle via connecting means.
3. The method of claim 2, wherein, in response to determining that the vehicle is travelling in a forward direction, the first camera is selected to acquire the first image at the first time, and wherein the or each at least one first region of interest of the first image corresponds to a region known to comprise a road space between the trailer and the vehicle.
4. The method of claim 3, wherein the or each at least one first region of interest of the first image is beside a region bounding the connecting means.
5. The method of claim 4, wherein selecting the at least one first region of interest comprises: defining at least two first regions of interest in the first image, one at a first side of the region bounding the connecting means, and the other at a second side of the region bounding the connecting means, when the trailer is aligned to the vehicle along a longitudinal axis; determining whether any of the two first regions of interest includes a portion of at least one of the trailer and the connecting means; responsive to a determination that none of the two first regions of interest include a portion of at least one of the trailer and the connecting means, selecting both the two first regions of interest to be sampled from the first image; and responsive to a determination that one of the two first regions of interest includes a portion of at least one of the trailer and the connecting means, selecting the other first region of interest to be sampled from the first image.
6. The method of claim 5, wherein determining whether any of the two first regions of interest includes a portion of at least one of the trailer and the connecting means comprises: detecting whether a portion of at least one of the trailer and the connecting means is included in any of the two first regions of interest.
7. The method of claim 5, comprising: measuring at least one of: a steering angle of the vehicle; and a hitch angle between the vehicle and the trailer; and wherein determining whether any of the two first regions of interest includes a portion of at least one of the trailer and the connecting means is based on the at least one measured steering angle and hitch angle.
8. The method of claim 2, wherein in response to determining that the vehicle is travelling in a reverse direction, selecting the second camera to acquire the first image at the first time.
9. The method of claim 8, wherein the potential road portion corresponding to the at least one first region of interest of the first image can become viewable within the field of view of the first camera as the vehicle travels in the reverse direction.
10. The method of claim 9, wherein selecting the at least one first region of interest of the first image comprises: defining at least two first regions of interest, one to include a corresponding first road portion that can become viewable within the field of view of the first camera at a first side of the region bounding the connecting means, and the other at a second side of the region bounding the connecting means, when the trailer is aligned to the vehicle along a longitudinal axis; measuring at least one of: a steering angle of the vehicle; and a hitch angle between the vehicle and the trailer; in response to determining that the at least one measured steering angle and hitch angle is below a threshold, selecting both the two first regions of interest to be sampled from the first image; and in response to determining that the at least one measured steering angle and hitch angle exceeds the threshold, selecting one of the two regions of interest based on the at least one measured steering angle and hitch angle.
11. The method of claim 1, wherein the first camera and the second camera are a front camera and a rear camera of the vehicle.
12. The method of claim 1, comprising: converting the image data within the first and second regions of interest into a YUV format; wherein determining whether both the first and second regions of interest include the road portion comprises: estimating at least one first luma component from at least a portion of the image data within the first region of interest, and at least one second luma component from at least a portion of the image data within the second region of interest; comparing the difference between the estimated first and second luma components with a threshold; and responsive to a determination that the difference is below the threshold, verifying that both the first and second regions of interest include the road portion.
13. The method of claim 12, further comprising, responsive to the verification that both the first and second regions of interest include the road portion: determining one or more brightness correction parameters for harmonizing brightness of images acquired by the first and second cameras, based on the difference between the estimated first and second luma components; estimating first chrominance components from at least a portion of the image data within the first region of interest and second chrominance components from at least a portion of the image data within the second region of interest; and determining one or more chrominance correction parameters for harmonizing the colour of images acquired by the first and second cameras, based on a difference between the estimated first and second chrominance components.
14. The method of claim 1, further comprising: generating a combined view including merged images acquired by the first and second cameras, including applying the determined one or more harmonization correction parameters.
15. An automotive multi-camera vision system comprising: first and second cameras connected to a vehicle and arranged in such a way as their fields of view cover a same road space at different times as the vehicle travels along a travel direction; and one or more processing units configured to perform the method of claim 1.
16. A combination of a vehicle and a trailer, comprising: the multi-camera vision system of claim 15, wherein the first camera is the rear camera of the vehicle and the second camera is the rear camera of the trailer.
17. A computer program with instructions which, upon execution, cause an automotive multi-camera vision system according to claim 15 to perform the method of claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0019] Embodiments of the invention will now be described, by way of example, with reference to the accompanying drawings, in which:
DESCRIPTION OF THE EMBODIMENTS
[0027] Referring now to
[0028] The car 11 is provided with a hitch 13 allowing the car 11 to tow objects, such as the trailer 12 illustrated in
[0029] It is to be noted that the trailer 12 illustrated in
[0030] The multi-camera vision system includes a plurality of cameras disposed at the front (FV camera), rear (RV camera) and on the left and right side mirrors (ML and MR cameras) of the vehicle for capturing images of the environment surrounding the vehicle. The side cameras need not be located on the mirrors; they can be positioned at any location suitable for acquiring images of the environment to the sides of the vehicle.
[0031] The system further includes a trailer rear camera (TR camera) directed rearwardly of the trailer 12 (and in some cases, can also include side cameras pointing outwardly from respective sides of the trailer 12). As such, as illustrated in
[0032] The system further comprises a vehicle ECU 15 running an application configured to receive images acquired by the vehicle cameras FV, RV, MR, ML, and a controller 16 within the trailer 12 that is configured to collect the images acquired by the trailer camera TR (as well as by the trailer side cameras, if present). The images collected by the trailer controller 16 are streamed or otherwise provided, through either a wired or wireless connection, to the vehicle ECU 15. In some cases, any trailer camera can be connected directly to the vehicle ECU 15.
[0033] A processor of the vehicle ECU 15 is configured to process the images received from the vehicle and trailer cameras FV, RV, MR, ML, TR with the purpose of providing processed images to a display 22 or windscreen located within a cabin of the car 11. Such camera information can also be processed by the ECU 15 to perform autonomous or semi-autonomous driving, parking or braking of the vehicle as well as for example, storing streams of images captured by one or more of the cameras as dashcam or security footage for later retrieval.
[0034] The ECU 15 (or another processing unit within the car 11) can also estimate a distance travelled by the car 11 over time, by processing sensor data provided by odometry sensors (schematically represented and cumulatively indicated with numeral reference 17 in
[0035] With reference now to
[0036] At method step 101, the travel direction of the car 11 is determined, by using for example the odometry sensors 17 and/or GPS location tracking information.
[0037] With reference to
[0038] Responsive to a determination that the car 11 is travelling along the forward direction illustrated in
[0039] Then, two ROIs 201, 202 are selected to be sampled from the image 200 (step 103). In particular, the ROIs 201, 202 are positioned and dimensioned within the acquired image 200 in such a way as to correspond to the road portions 50 and 51, respectively, beside the drawbar 14.
[0040] One exemplary method to select the two ROIs 201, 202 is now disclosed.
[0041] When the vehicle ECU 15 receives the image 200 acquired at t.sub.1, the ECU 15 is configured to check two ROIs 201, 202 where road portions beside the sides 500, 510 of the drawbar 14 are expected to be included, assuming that the trailer 12 is substantially aligned to the car 11 along a longitudinal axis.
[0042] For example, the ECU 15 is configured to define these ROIs 201, 202 by knowing an image area occupied by the trailer 12 and drawbar 14 when the trailer 12 is substantially aligned to the car 11. In one implementation, the ECU 15 can learn this area by detecting the trailer 12 and drawbar 14 within a set of images acquired by the RV camera and including the trailer 12 aligned with the car 11. This provides a high degree of accuracy in ROI position and size; however, it will be appreciated that this approach adds implementation complexity. Alternatively, the ECU 15 can estimate this area by knowing dimensional parameters of the vehicle 11 and drawbar 14 (e.g., at least the width of the vehicle 11 and the length of the drawbar 14). This information can be provided to the ECU 15 in various ways, including: receiving it from a user's input, receiving a scan of the trailer 12 and drawbar 14, or obtaining vehicle CAD data, possibly through a network connection. In any case, a default ROI position can be determined based on a known position of the RV camera on the vehicle 11 from the vehicle CAD, as well as a known width for the vehicle 11 (which can also be obtained from the vehicle CAD). This in turn indicates a shortest length for a suitable drawbar, as drawbars are supposed to be at least half as long as the vehicle width. This allows a default position for the ROIs to be determined with minimal user input and processing power.
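A minimal sketch of this default ROI placement follows. It is illustrative only: the pixel-per-metre scale `px_per_m`, the margin factors, the assumption that the RV camera is horizontally centred on the vehicle, and the fallback of half the vehicle width for an unknown drawbar length are hypothetical choices, not details taken from the disclosure.

```python
def default_rois(image_w, image_h, vehicle_width_m, drawbar_len_m=None,
                 px_per_m=100.0):
    """Place two default ROIs beside the expected drawbar region.

    Assumes the rear camera is centred on the vehicle, so the drawbar
    projects around the horizontal centre of the image.  If the drawbar
    length is unknown, it defaults to half the vehicle width (the
    shortest suitable drawbar suggested in the text).
    """
    if drawbar_len_m is None:
        drawbar_len_m = vehicle_width_m / 2.0
    cx = image_w // 2
    half_bar_px = int(0.15 * vehicle_width_m * px_per_m)  # margin around drawbar
    roi_w = int(0.25 * vehicle_width_m * px_per_m)
    roi_h = int(0.5 * drawbar_len_m * px_per_m)
    top = image_h - roi_h  # the road appears at the bottom of a rear-view image
    left_roi = (cx - half_bar_px - roi_w, top, roi_w, roi_h)   # (x, y, w, h)
    right_roi = (cx + half_bar_px, top, roi_w, roi_h)
    return left_roi, right_roi
```

With a 1280x800 image and a 2 m wide vehicle, the two ROIs land symmetrically either side of the image centre, leaving a central band where the drawbar is expected.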
[0043] Then, the ECU 15 determines whether any of the checked ROIs 201, 202 includes a portion of the drawbar 14 or the trailer 12 (due to steering of the car 11 at the image acquisition time t.sub.1). In an embodiment, the ECU 15 applies image detection on the ROIs 201, 202 to detect whether any of these ROIs 201, 202 contains a portion of the drawbar 14 or the trailer 12. In another embodiment, the ECU 15 uses odometry data provided by the sensors 17 and/or GPS location information to measure a steering angle of the car 11 at the image acquisition time t.sub.1, and compares the measured angle with a threshold. Responsive to a determination that the measured steering angle has a value below the threshold (including a null value), the ECU 15 determines that none of the ROIs 201, 202 contains a portion of the drawbar 14 or the trailer 12. In addition or as an alternative, a similar determination can be performed by the ECU 15 using a measured hitch angle between the longitudinal axes of the car 11 and the trailer 12. This angle can be detected in any number of ways, for example using image information from the acquired image 200 to detect a rotation of the trailer 12 around a vertical axis passing through the hitch 13. Equally, image information from the vehicle mirror cameras ML, MR can detect features from the surface of the trailer moving laterally within their respective fields of view to estimate the relative angle of the vehicle and trailer. Other techniques for determining the relative angle of the vehicle and trailer include using information from rear-facing ultrasonic or radar sensors mounted to the rear of the vehicle 11 (where changing differences measured by the sensors signal changes in the relative angle of the car 11 and trailer 12).
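The angle-based branch of this determination can be sketched as follows. The threshold value and the sign convention (positive angles taken to swing the drawbar into the left-side ROI) are illustrative assumptions, not taken from the disclosure.

```python
def select_rois(left_roi, right_roi, steering_angle_deg=None,
                hitch_angle_deg=None, threshold_deg=5.0):
    """Select which default ROIs are safe to sample.

    If every available angle measurement is below the threshold, the
    trailer is assumed aligned with the vehicle and both ROIs contain
    road; otherwise the drawbar/trailer is assumed to occlude the ROI
    on the side it has rotated towards, and only the other ROI is kept.
    """
    angles = [a for a in (steering_angle_deg, hitch_angle_deg) if a is not None]
    if not angles or all(abs(a) < threshold_deg for a in angles):
        return [left_roi, right_roi]
    # The largest-magnitude measurement decides the occluded side.
    dominant = max(angles, key=abs)
    return [right_roi] if dominant > 0 else [left_roi]
```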
[0044] With reference back to the image 200 illustrated in
[0045] It is to be further noted from
[0046] With reference back to
[0047] In these cases, the method step 103 includes selecting only one of the ROIs 201, 202, corresponding to the road portion 50, 51 that can be captured by the RV camera according to the steering direction.
[0048] In other embodiments, when the ECU 15 receives an image acquired by the RV camera at t.sub.1, the ECU 15 can perform detection of the trailer 12 and drawbar 14 to determine the image area occupied by the trailer 12 and drawbar 14, and select accordingly one or more ROIs 201, 202 around the detected area that can include respective road portions 50, 51 beside the sides 500, 510 of the drawbar 14. In some other embodiments, the selection of the ROIs can be based on a detection of road portions within the captured scene, e.g., by using a texture-oriented method or by evaluating the pixel intensity.
[0049] Furthermore, although the above disclosed embodiments are based on sampling road portions 50, 51 beside the drawbar 14 from the image acquired by the RV camera at acquisition time t.sub.1, it will be appreciated that, in addition or as an alternative, road portions viewable within the field of view FOV.sub.1 of the RV camera beside the trailer 12 can also be sampled as references for image harmonization. In this case, the trailer's shadow projection on the road 18 is to be considered in the selection of the ROIs (as the trailer's shadow projection can cover one of the surrounding road portions depending on the orientation of the sun, as can be seen in
[0050] The description of method 100 now continues referring back to the case where the two ROIs 201, 202 are selected at method step 103 to be sampled from the image 200 illustrated in
[0051] The selected ROIs 201, 202 are sampled from the image 200 (step 104) and the respective image data stored within a memory of the system or other storage means accessible by the system (e.g. a database or server that can be accessed by the system via network connection).
[0052] Then, at method step 105, a distance travelled by the car 11 after the acquisition time t.sub.1 of image 200 is monitored to determine a second time t.sub.2 to acquire a second image by the TR camera of the trailer 12, in such a way that the same road portions 50, 51 corresponding to the ROIs 201, 202 sampled from the image 200 (acquired by the RV camera of the car 11) can be included in corresponding ROIs defined in the second image. The travelled distance can be monitored using the odometry data provided by the sensors 17 and/or GPS tracking information.
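The distance monitoring of step 105 amounts to accumulating odometry increments until the target displacement is reached. A minimal sketch follows; the (timestamp, distance increment) sample format is an assumption made for illustration, not the disclosed sensor interface.

```python
def monitor_distance(odometry_samples, dx):
    """Return the first timestamp at which the cumulative travelled
    distance reaches dx, e.g. the time t2 at which the second camera
    should acquire its image.

    odometry_samples: iterable of (timestamp, distance_increment_m)
    pairs, such as wheel-tick odometry.  Returns None if dx is never
    reached within the supplied samples.
    """
    travelled = 0.0
    for t, step in odometry_samples:
        travelled += step
        if travelled >= dx:
            return t
    return None
```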
[0053] For example,
[0054] A time t.sub.2 is determined, corresponding to travelled distance dx, and an image 300 is acquired by the TR camera at t.sub.2 (step 106). The acquired image 300 is illustrated in
[0055] With reference back to
[0056] In any case, the determined acquisition time t.sub.2 for the TR camera can correspond to a travelled distance either greater or less than dx, as long as the road portions 50, 51 can still be viewable within the field of view FOV.sub.2 of the TR camera.
[0057] After the acquisition of the image 300 at t.sub.2, the ROIs 301, 302 are sampled (step 107), and the respective image data stored within the memory of the system (or other storage means accessible by the system).
[0058] With reference back to the initial method step 101, an operation of the method 100 is now disclosed in response to determining that the direction of the car 11 is a reverse direction.
[0059] In particular, with reference to
[0060] Responsive to the determination that the car 11 is travelling along the reverse direction illustrated in
[0061] Then, two ROIs 401, 402 are selected to be sampled from the image 400 (step 109). In particular, the ROIs 401, 402 are positioned and dimensioned within the acquired image 400 in such a way as to include the road portions 60 and 61.
[0062] One exemplary method to select the two ROIs 401, 402 is now disclosed.
[0063] When the vehicle ECU 15 receives the image 400 acquired at t.sub.1, the ECU 15 is configured to check two ROIs 401, 402 corresponding to road portions that can be viewable by the RV camera beside the sides 500, 510 of the drawbar 14, assuming that the trailer 12 is substantially aligned to the car 11 along a longitudinal axis. For example, the ECU 15 is configured to define these ROIs 401, 402 by knowing the image area that is occupied by the trailer 12 and trailer drawbar 14, when the trailer 12 is substantially aligned with the car 11.
[0064] Then, the ECU 15 determines whether the car 11 is reversing along a substantially straight trajectory. For example, the ECU 15 uses the odometry data provided by the sensors 17 and/or GPS tracking information to measure a steering angle of the car 11 or a hitch angle between the car 11 and the trailer 12, at the image acquisition time t.sub.1, and compare the measured angle with a threshold. Responsive to a determination that the measured steering angle or hitch angle has a value below the threshold (including a null value), the ECU 15 determines that the car 11 is reversing along a straight direction. Responsive to this determination, the ECU selects the two ROIs 401, 402 to be sampled from the image 400.
[0065] With reference back to
[0066] In these cases, the method step 109 includes selecting only one of the ROIs 401, 402, corresponding to the road portion 60, 61 that can be captured also by the RV camera according to the steering direction.
[0067] The description of method 100 now continues referring back to the case where two ROIs 401, 402 are selected, at method step 109, to be sampled from the image 400 illustrated in
[0068] The ROIs 401, 402 are sampled from the image 400 (step 110) and the respective image data stored within the memory of the system (or other storage means accessible by the system).
[0069] Then, at method step 111, a distance travelled by the car 11 after the acquisition time t.sub.1 of image 400 is monitored to determine a second time t.sub.2 to acquire a second image by the RV camera of the car 11, such that the road portions 60, 61 corresponding to the ROIs 401, 402 sampled from the image 400 (acquired by the TR camera) can be included in corresponding regions of interest defined in the second image.
[0070] For example,
[0071] The acquired image 600 is illustrated in
[0072] With reference back to
[0073] The method 100 then proceeds by sampling the ROIs 601, 602 from the image 600 (step 113), and the respective image data are stored within the memory of the system (or other storage means accessible by the system).
[0074] A harmonization process according to the execution of the method 100 is now disclosed for simplicity only with reference to the ROIs 201, 202, 301, 302 sampled as per the operation of steps 102-107 of the method 100 (following the determination of a forward direction at initial step 101). It is to be noted that the principles of this disclosure equally apply to the operation of the harmonization process based on the ROIs 401, 402, 601, 602 sampled as per the operation of steps 108-113 of the method 100 (following the determination of a reverse direction at initial step 101).
[0075] The image data of the sampled ROIs 201, 202 (extracted from the image 200 acquired by the RV camera at t.sub.1) and the image data of the sampled ROIs 301, 302 (extracted from the image 300 acquired by the TR camera at t.sub.2) are retrieved from the memory of the system (or other storage means accessible by the system) and provided to a harmonisation network (that can be implemented by the vehicle ECU 15 or another processing unit of the system), where the retrieved image data are converted into a YUV format (step 114) if this has not been done already.
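A sketch of the YUV conversion of step 114 follows, using the BT.601 analogue coefficients; the disclosure does not specify a colour matrix, so this particular choice is an assumption.

```python
def rgb_to_yuv(pixels):
    """Convert a list of (R, G, B) tuples (0-255) to (Y, U, V) tuples
    using the BT.601 conversion (an assumed colour matrix; the patent
    text does not name one)."""
    out = []
    for r, g, b in pixels:
        y = 0.299 * r + 0.587 * g + 0.114 * b       # luma
        u = -0.14713 * r - 0.28886 * g + 0.436 * b  # blue-difference chroma
        v = 0.615 * r - 0.51499 * g - 0.10001 * b   # red-difference chroma
        out.append((y, u, v))
    return out
```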
[0076] Then, luminance components Y.sub.1 and Y.sub.2 are estimated from the pixel data of the ROIs 201, 202, and luminance components Y.sub.3 and Y.sub.4 are estimated from the pixel data of the ROIs 301, 302 (step 115). Various methods can be used to estimate Y.sub.1 to Y.sub.4. For example, some techniques to estimate Y.sub.1 to Y.sub.4, based on histograms generated to describe the luminance of the ROIs 201, 202, 301, 302, are disclosed in WO2018/087348 referenced above (including a non-segmentation based approach, a histogram-segmentation based approach, and a bi-modal histogram segmentation approach).
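As a minimal placeholder for step 115, an ROI's luma can be summarised by the mean of its Y channel; this mean-based estimator is a simplification, and the histogram-based estimators of WO2018/087348 cited above could be substituted for it.

```python
def estimate_luma(yuv_pixels):
    """Estimate a single luma value describing an ROI.

    yuv_pixels: list of (Y, U, V) tuples.  This sketch simply averages
    the Y channel; more robust histogram-based estimators (e.g. for
    bi-modal luminance distributions) are described in the cited
    reference and could replace this.
    """
    ys = [p[0] for p in yuv_pixels]
    return sum(ys) / len(ys)
```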
[0077] Based on the appreciation that luminance values of different imaged objects are significantly different, a difference between the estimated Y.sub.1 and Y.sub.3 of the ROIs 201, 301 is compared to a threshold for the purpose of verifying whether both these ROIs 201, 301 include the same reference road portion 50 (step 116).
[0078] Responsive to a determination that the absolute value of Y.sub.1−Y.sub.3 is below the threshold, it is assumed that this minor difference is due to a lack of brightness harmonization between the RV and TR cameras. As such, the image data within the ROIs 201, 301 is verified to belong to the same reference road portion 50.
[0079] Responsive to a determination that the absolute value of Y.sub.1−Y.sub.3 exceeds the threshold, the image data within the ROIs 201, 301 is determined to belong to different imaged objects. For example, this can correspond to the case where an object (such as another vehicle or person) moved into the road portion 50 between the acquisition times t.sub.1 and t.sub.2 of the images 200, 300 from which the ROIs 201, 301 are extracted. In another case, an object can cover the road portion 50 at t.sub.1, and move away from the portion 50 between the acquisition times t.sub.1-t.sub.2.
[0080] A similar verification is performed to verify whether both ROIs 202, 302 include the same reference road portion 51, by comparing a difference between Y.sub.2 and Y.sub.4 with the threshold (step 116).
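The verification of step 116 reduces to a threshold test on the luma difference; a small difference is attributed to brightness mismatch between the two cameras, a large one to a different object having entered (or left) the road portion between the two acquisitions. The threshold value below is illustrative only.

```python
def verify_same_road(y_first, y_second, threshold=20.0):
    """Verify that two ROIs plausibly image the same road patch.

    y_first, y_second: luma estimates of the corresponding ROIs from
    the first and second images (e.g. Y1 and Y3, or Y2 and Y4).
    Returns True when the absolute luma difference is small enough to
    be explained by camera mismatch alone.  The threshold is an
    assumed, illustrative value.
    """
    return abs(y_first - y_second) < threshold
```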
[0081] Responsive to a determination that the absolute value of at least one of the differences Y.sub.1−Y.sub.3 and Y.sub.2−Y.sub.4 is below the threshold, such a difference is used to determine correction parameters for harmonizing the brightness of images acquired by the RV camera of the car 11 and the TR camera of the trailer 12 (step 117). Various methods can be applied to determine the brightness correction parameters based on luminance difference values, such as the method disclosed in WO2018/087348. Once determined, the brightness correction parameters can be stored in the memory of the system (or any other storage means accessible by the system).
[0082] Furthermore, chrominance values U.sub.1, V.sub.1 and U.sub.2, V.sub.2 are estimated from the pixel data of the ROIs 201, 202, and chrominance values U.sub.3, V.sub.3 and U.sub.4, V.sub.4 are estimated from the pixel data of the ROIs 301, 302. The values of the differences U.sub.1−U.sub.3 and V.sub.1−V.sub.3 are used to determine correction parameters for harmonizing the colour of images acquired by the RV and TR cameras (step 117). Various methods can be applied to determine the colour correction parameters based on chrominance difference values, such as the method further disclosed in WO2018/087348. Once determined, the colour correction parameters can be stored in the memory of the system (or any other storage means accessible by the system).
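The determination and application of correction parameters in step 117 can be sketched with a simple additive offset model over the Y, U and V channels. The offset model and the (Y, U, V) mean-statistics interface are assumptions made for illustration; WO2018/087348, cited above, describes the actual estimation methods.

```python
def correction_parameters(first_stats, second_stats):
    """Compute additive (Y, U, V) offsets mapping the second camera's
    ROI statistics onto the first camera's.

    first_stats / second_stats: (Y, U, V) mean values over a verified
    ROI pair.  A pure offset model is used here for illustration;
    gain-based or mixed corrections are equally possible.
    """
    return tuple(f - s for f, s in zip(first_stats, second_stats))

def apply_correction(yuv_pixel, params):
    """Apply the (Y, U, V) offsets to one second-camera pixel."""
    return tuple(c + p for c, p in zip(yuv_pixel, params))
```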
[0083] In some embodiments, each of the differences U.sub.1−U.sub.3 and V.sub.1−V.sub.3 is used to calculate colour correction parameters only upon verification that its value is below a threshold.
[0084] Furthermore, although the above disclosed embodiment is based on a comparison between Y, U, V values estimated for describing the whole data content of the ROIs 201, 202, 301, 302, in other embodiments the ROIs 201, 202, 301, 302 can be divided into sub-regions for which respective Y, U, V values are estimated and compared to determine the harmonization parameters. The sub-regions can correspond to single pixels or groups of pixels within the ROIs 201, 202, 301, 302.
[0085] After calculation of the Y,U,V correction parameters, the method 100 can be re-executed at a later stage, starting again from step 101 to determine the direction of the car 11. For example, the system can be configured to initiate the method periodically (and/or triggered by a specific driving activity/environment condition). In this way the stored harmonization correction parameters are updated over time.
[0086] With reference back to method step 116, the method 100 is also re-executed after a determination that both the differences Y.sub.1−Y.sub.3 and Y.sub.2−Y.sub.4 have an absolute value exceeding the threshold (and this determination can trigger the re-execution of the method 100).
[0087] The determined harmonization correction parameters can then be retrieved by the system, when required to be applied (step 118) in the process of generating a combined view including merged images acquired by the RV and TR cameras, such as an invisible trailer view to be displayed on the main display 22 of the car 11 or on a windscreen that provides a digital rear mirror. In some embodiments, the harmonization correction parameters are applied to at least one of the images acquired by the RV and TR cameras before these images are merged into the combined view. In other embodiments, the harmonization correction parameters are applied to the combined view, particularly in the merging region between the images acquired by the RV and TR cameras.
[0088]
[0089] Other combined views can benefit from applying the harmonization correction parameters obtained by the operation of method 100, such as a top view of the environment surrounding the trailer 12, which can be displayed on the display 22 and used to perform autonomous or semi-autonomous operations, or provide footage that can be stored and retrieved at a later stage (e.g. for investigation after an accident, or theft of the trailer's contents).
[0090] With reference back to method step 116, if both the differences Y.sub.1−Y.sub.3 and Y.sub.2−Y.sub.4 are determined to have an absolute value exceeding the threshold, no updated harmonization parameters are available to the system to harmonize a combined view. Thus, the system can determine whether correction parameters previously generated and stored according to the operation of method 100 are available (step 119). Responsive to a positive determination, the system can apply the previous correction parameters to harmonize the combined view (step 120). Responsive to a negative determination (e.g., because only one iteration of the method 100 has been performed or the previous parameters are not retrievable), no harmonization is applied (step 121—and in this case, the negative determination can trigger the re-execution of the method 100).
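The fallback logic of steps 117 to 121 can be sketched as a simple parameter-selection routine; representing the parameters as optional (Y, U, V) offset tuples, with the zero tuple meaning "no correction", is an illustrative convention, not the disclosed data model.

```python
def harmonization_params(current, stored):
    """Choose harmonization parameters per steps 117-121.

    current: parameters freshly computed in this iteration, or None if
             the ROI verification failed (both luma differences
             exceeded the threshold).
    stored:  parameters from a previous successful iteration, or None
             if none are retrievable.
    Returns the parameters to apply; (0.0, 0.0, 0.0) denotes "no
    harmonization applied" (step 121) in this sketch.
    """
    if current is not None:
        return current          # step 117: use the new parameters
    if stored is not None:
        return stored           # step 120: fall back to stored ones
    return (0.0, 0.0, 0.0)      # step 121: no harmonization available
```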
[0091] Although the execution of the method 100 has been disclosed to harmonize the RV and TR cameras of the car 11 and the trailer 12, the same principles can be similarly applied to harmonize the front FV camera and rear camera RV of the car 11 (or other vehicle, with or without a trailer), based on sampling the same road portions by the FV and RV cameras as the car 11 travels along a travel direction, and using the sampled road portions as common reference for harmonization.