METHOD AND SYSTEM FOR DETERMINING LIDAR INTENSITY VALUES, AND TRAINING METHOD

20230162382 · 2023-05-25

Abstract

A computer-implemented method as well as a system for determining intensity values of pixels of distance data of the pixels generated by a simulation of a 3D scene, including an assignment of a first confidence value to each of the first intensity values of the pixels and/or a second confidence value to each of the second intensity values of the pixels, and including a calculation of third, in particular corrected, intensity values of the pixels, using the confidence values assigned to each of the first intensity values and/or second intensity values. The invention also relates to a computer-implemented method for providing a trained machine learning algorithm as well as to a computer program.

Claims

1. A computer-implemented method for determining intensity values of pixels of distance data of the pixels generated by a simulation of a 3D scene, the method comprising: providing the distance data of the pixels; applying a machine learning algorithm to the distance data, which outputs first intensity values of the pixels; applying a light beam tracking method to the distance data to determine second intensity values of the pixels using precaptured or calibrated material reflection values for a first plurality of pixels and/or using a statistical method for a second plurality of pixels; assigning a first confidence value to each of the first intensity values of the pixels and/or a second confidence value to each of the second intensity values of the pixels; and calculating third corrected intensity values of the pixels using the confidence values assigned to each of the first intensity values and/or the second intensity values.

2. The computer-implemented method according to claim 1, wherein the third corrected intensity values of the pixels are calculated by forming a weighted mean value made up of a sum product having a first product of the particular first intensity value and the assigned first confidence value and a second product of the particular second intensity value and the assigned second confidence value divided by a sum of the confidence values of the particular pixels.

3. The computer-implemented method according to claim 1, wherein a higher confidence value is assigned to the second intensity values determined for the first plurality of pixels using the precaptured, in particular calibrated, material reflection values, than is assigned to the second intensity values determined for the second plurality of pixels using the statistical method.

4. The computer-implemented method according to claim 1, wherein camera image data, in particular RGB image data, of the pixels are provided, the distance data of the pixels and the camera image data of the pixels being provided by the simulation of the 3D scene.

5. The computer-implemented method according to claim 1, wherein the simulation of the 3D scene generates raw distance data of the pixels as a 3D point cloud, which are transformed by an image processing method into 2D spherical coordinates and are provided as, in particular 2D, distance data of the pixels.

6. The computer-implemented method according to claim 1, wherein the machine learning algorithm and the light beam tracking method process the provided distance data of the pixels simultaneously.

7. The computer-implemented method according to claim 1, wherein the calculated third, in particular corrected, intensity values of the pixels are used in the simulation of the 3D scene, in particular in a traffic simulation.

8. The computer-implemented method according to claim 1, wherein precaptured or calibrated material reflection values for the first plurality of pixels are determined by a bidirectional reflection distribution function.

9. A computer-implemented method for providing a trained machine learning algorithm to determine intensity values of pixels of distance data of the pixels generated by a simulation of a 3D scene, the method comprising: receiving a first training data set of distance data of pixels; receiving a second training data set of intensity values of the pixels; and training the machine learning algorithm using an optimization algorithm, which calculates an extreme value of a loss function for determining the intensity values of the pixels.

10. The computer-implemented method according to claim 9, wherein the first training data set includes distance data of the pixels captured by a surroundings capturing sensor, in particular a LIDAR sensor, and the second training data set includes intensity values of the pixels captured by the surroundings capturing sensor, or the first training data set includes distance data of the pixels captured by a surroundings capturing sensor, in particular a LIDAR sensor, and generated by a simulation of a 3D scene, and the second training data set includes intensity values of the pixels captured by the surroundings capturing sensor and generated by a simulation of a 3D scene.

11. The computer-implemented method according to claim 9, wherein the first training data set includes camera image data, in particular RGB image data, of the pixels captured by a camera sensor.

12. The computer-implemented method according to claim 9, wherein the first training data set includes distance data of the pixels, and the second training data set includes intensity values of the pixels under different environmental conditions in each case, in particular different weather conditions, visibility conditions, and/or times of day.

13. The computer-implemented method according to claim 12, wherein an unsupervised domain adaptation is carried out, using non-annotated data of the distance data of the pixels and/or the intensity values of the pixels.

14. A system to determine intensity values of pixels of distance data of the pixels generated by a simulation of a 3D scene, the system comprising: a determinator to provide the distance data of the pixels; a first control unit configured to apply a machine learning algorithm, which outputs first intensity values of the pixels, to the distance data; a second control unit configured to apply a light beam tracking method to the distance data to determine second intensity values of the pixels using precaptured or calibrated material reflection values for a first plurality of pixels and/or using a statistical method for a second plurality of pixels; an assignor to assign a first confidence value to each of the first intensity values of the pixels and/or a second confidence value to each of the second intensity values of the pixels; and a processor to calculate third, in particular corrected, intensity values of the pixels using the confidence values assigned to each of the first and/or second intensity values.

15. A computer program including program code for carrying out the method according to claim 1 when the computer program is executed on a computer.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0045] The present invention will become more fully understood from the detailed description given hereinbelow and the accompanying drawings which are given by way of illustration only, and thus, are not limitive of the present invention, and wherein:

[0046] FIG. 1 shows a flowchart of a computer-implemented method for determining intensity values of pixels of distance data of the pixels generated by a simulation of a 3D scene according to one preferred specific embodiment of the invention;

[0047] FIG. 2 shows a schematic representation of a system for determining intensity values of pixels of distance data of the pixels generated by a simulation of a 3D scene according to the preferred specific embodiment of the invention; and

[0048] FIG. 3 shows a flowchart of the method for providing a trained machine learning algorithm for determining intensity values of pixels of distance data of the pixels generated by a simulation of a 3D scene according to the preferred specific embodiment of the invention.

DETAILED DESCRIPTION

[0049] The method shown in FIG. 1 for determining intensity values 10 of pixels 12 of distance data 16 of pixels 12 generated by a simulation 14 of a 3D scene comprises a provision S1 of distance data 16 of pixels 12 as well as an application S2 of a machine learning algorithm A to distance data 16, which outputs first intensity values 10a of pixels 12.

[0050] The method further comprises an application S3 of a light beam tracking method V to distance data 16 for determining second intensity values 10b of pixels 12, using precaptured, in particular calibrated, material reflection values 15 for a first plurality of pixels 12a and/or a statistical method 18 for a second plurality of pixels 12b.

[0051] The method also comprises an assignment S4 of a first confidence value K1 to each of first intensity values 10a of pixels 12 and/or a second confidence value K2 to each of second intensity values 10b of pixels 12, and a calculation S5 of third, in particular corrected, intensity values 10c of pixels 12, using confidence values K1, K2 assigned to each of first intensity values 10a and/or second intensity values 10b.

[0052] Third, in particular corrected, intensity values 10c of pixels 12 are calculated by forming a weighted mean value from a sum product having a first product of particular first intensity value 10a and assigned first confidence value K1, and a second product of particular second intensity value 10b and assigned second confidence value K2, divided by a sum of confidence values K1, K2 of particular pixels 12.
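
The weighted mean of paragraph [0052] can be sketched as follows. This is an illustrative example only, with hypothetical function and variable names, and is not part of the claimed subject matter:

```python
# Sketch of the weighted-mean fusion of first (network) and second
# (ray-traced) intensity values, each weighted by its confidence value.
# Names and the per-pixel list layout are illustrative assumptions.

def fuse_intensities(i1, k1, i2, k2):
    """Per pixel: sum product of intensity and confidence, divided
    by the sum of the confidence values of that pixel."""
    fused = []
    for a, ka, b, kb in zip(i1, k1, i2, k2):
        fused.append((a * ka + b * kb) / (ka + kb))
    return fused

# Example: a pixel where the calibrated ray-tracing result (k2 = 0.9)
# outweighs the network prediction (k1 = 0.3)
third = fuse_intensities([0.2], [0.3], [0.8], [0.9])  # → [0.65]
```

The same pairs of intensity and confidence values could instead feed any other statistical combination rule, as paragraph [0053] notes.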

[0053] Alternatively, the particular pairs made up of first intensity value 10a and assigned first confidence value K1 as well as second intensity value 10b and assigned second confidence value K2 may be calculated using an alternative statistical method for determining corrected intensity values 10c of pixels 12.

[0054] A higher confidence value K1, K2 is assigned to second intensity values 10b determined for the first plurality of pixels 12a, using precaptured, in particular calibrated, material reflection values 15, than is assigned to second intensity values 10b determined for the second plurality of pixels 12b using statistical method 18.

[0055] Camera image data 20, in particular RGB image data, of pixels 12 are also provided. Distance data 16 of pixels 12 and camera image data 20 of pixels 12 are provided using simulation 14 of the 3D scene.

[0056] Simulation 14 of the 3D scene generates raw distance data 16 of pixels 12 as a 3D point cloud, which are transformed into 2D spherical coordinates using an image processing method 22 and are provided as, in particular 2D, distance data 16 of pixels 12. Machine learning algorithm A and light beam tracking method V process provided distance data 16 of pixels 12 simultaneously.
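
The projection of a raw 3D point into 2D spherical coordinates described in paragraph [0056] may, for example, take the following form; the function name and angle conventions are illustrative assumptions, not taken from the description:

```python
import math

def point_to_spherical(x, y, z):
    """Map one 3D point-cloud point to (azimuth, elevation, range),
    the per-pixel coordinates of a 2D spherical LIDAR image."""
    r = math.sqrt(x * x + y * y + z * z)
    azimuth = math.atan2(y, x)    # horizontal angle around the sensor
    elevation = math.asin(z / r)  # vertical angle above the horizon
    return azimuth, elevation, r

az, el, rng = point_to_spherical(3.0, 4.0, 0.0)
```

Binning the resulting angles into a fixed grid of rows and columns yields the 2D distance image that both the machine learning algorithm and the light beam tracking method then process.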

[0057] Calculated third, in particular corrected, intensity values 10c of pixels 12 are used in simulation 14 of the 3D scene, in particular in a traffic simulation 14. Precaptured, in particular calibrated, material reflection values 15 for the first plurality of pixels 12a are determined by a bidirectional reflection distribution function.
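
As an illustrative sketch of how a bidirectional reflection distribution function yields a material reflection value, a Lambertian BRDF (f_r = albedo / π) may be evaluated as follows; the Lambertian model is one simple choice for illustration and is not mandated by the description:

```python
import math

def lambertian_intensity(albedo, incidence_angle_rad):
    """Expected return intensity under a Lambertian BRDF,
    f_r = albedo / pi, scaled by the cosine of the angle between
    the LIDAR beam and the surface normal."""
    f_r = albedo / math.pi
    return f_r * max(0.0, math.cos(incidence_angle_rad))

i = lambertian_intensity(0.8, 0.0)  # beam hits the surface head-on
```

More elaborate BRDF models would add a specular term, but the interface stays the same: material parameters and beam geometry in, reflection value out.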

[0058] FIG. 2 shows a schematic representation of a system for determining intensity values of pixels of distance data of the pixels generated by a simulation of a 3D scene according to the preferred specific embodiment of the invention.

[0059] The system comprises a determinator 30 for providing distance data 16 of pixels 12 as well as a first control unit 32, which is configured to apply a machine learning algorithm A, which outputs first intensity values 10a of pixels 12, to distance data 16.

[0060] The system further comprises a second control unit 34, which is configured to apply a light beam tracking method V to distance data 16 for determining second intensity values 10b of pixels 12, using precaptured, in particular calibrated, material reflection values 15 for a first plurality of pixels 12a and/or using a statistical method 18 for a second plurality of pixels 12b.

[0061] The system further comprises an assignor 36 for assigning a first confidence value K1 to each of first intensity values 10a of pixels 12 and/or a second confidence value K2 to each of second intensity values 10b of pixels 12, as well as a processor 38 for calculating third, in particular corrected, intensity values 10c of pixels 12, using confidence values K1, K2 assigned to each of first and/or second intensity values 10a, 10b.

[0062] FIG. 3 shows a flowchart of the method for providing a trained machine learning algorithm for determining intensity values of pixels of distance data of the pixels generated by a simulation of a 3D scene according to the preferred specific embodiment of the invention.

[0063] The method comprises a receipt S1′ of a first training data set TD1 of distance data 16 of pixels 12 as well as a receipt S2′ of a second training data set TD2 of intensity values 10 of pixels 12.

[0064] The method also comprises a training S3′ of machine learning algorithm A by an optimization algorithm 24, which calculates an extreme value of a loss function for determining intensity values 10 of pixels 12.
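
The training S3′ by an optimization algorithm that drives a loss function toward an extreme value can be illustrated by a toy gradient-descent loop on a scalar linear model; the model, data, and hyperparameters here are illustrative assumptions only and stand in for the actual machine learning algorithm A:

```python
# Toy sketch of step S3': gradient descent minimizing a
# mean-squared-error loss for intensity ≈ w * distance + b.

def train(distances, intensities, lr=0.1, epochs=500):
    w, b = 0.0, 0.0
    n = len(distances)
    for _ in range(epochs):
        grad_w = grad_b = 0.0
        for d, i in zip(distances, intensities):
            err = (w * d + b) - i      # residual of the prediction
            grad_w += 2 * err * d / n  # d(loss)/dw
            grad_b += 2 * err / n      # d(loss)/db
        w -= lr * grad_w               # step toward the loss minimum
        b -= lr * grad_b
    return w, b

# Synthetic pairs following intensity = 0.5 * distance + 0.1
w, b = train([0.0, 1.0, 2.0, 3.0], [0.1, 0.6, 1.1, 1.6])
```

In practice a deep network trained with a framework optimizer would replace this scalar model, but the loop structure (forward pass, loss gradient, parameter update) is the same.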

[0065] First training data set TD1 includes distance data 16 of pixels 12 captured by a surroundings capturing sensor 26, in particular a LIDAR sensor, and the second training data set includes intensity values 10 of pixels 12 captured by surroundings capturing sensor 26.

[0066] Alternatively, first training data set TD1 may include distance data 16 of pixels 12 captured by a surroundings capturing sensor 26, in particular, a LIDAR sensor and generated by a simulation 14 of a 3D scene. Second training data set TD2 further includes intensity values 10 of pixels 12 captured by surroundings capturing sensor 26 and generated by a simulation 14 of a 3D scene.

[0067] First training data set TD1 additionally includes camera image data 20, in particular RGB image data, of pixels 12 captured by a camera sensor 28.

[0068] First training data set TD1 includes distance data 16 of pixels 12, and second training data set TD2 includes intensity values 10 of pixels 12, under different environmental conditions in each case, in particular different weather conditions, visibility conditions, and/or times of day.

[0069] An unsupervised domain adaptation is also carried out, using non-annotated data of distance data 16 of pixels 12 and/or intensity values 10 of pixels 12.
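
One simple unsupervised technique of this kind is moment matching, which aligns non-annotated target data to the source statistics without labels; the following sketch is illustrative only and does not necessarily reflect the domain adaptation used here:

```python
def align_statistics(source, target):
    """Shift and scale non-annotated target values so their mean and
    standard deviation match the source distribution (1D moment
    matching, a minimal unsupervised domain adaptation step)."""
    def stats(xs):
        m = sum(xs) / len(xs)
        var = sum((x - m) ** 2 for x in xs) / len(xs)
        return m, var ** 0.5
    sm, ss = stats(source)
    tm, ts = stats(target)
    return [(x - tm) / ts * ss + sm for x in target]

aligned = align_statistics([0.0, 1.0], [10.0, 20.0])  # → [0.0, 1.0]
```

Applied per channel to distance or intensity data, such a step can reduce the gap between simulated and real sensor distributions before or during training.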

[0070] The invention being thus described, it will be obvious that the same may be varied in many ways. Such variations are not to be regarded as a departure from the spirit and scope of the invention, and all such modifications as would be obvious to one skilled in the art are to be included within the scope of the following claims.