METHOD FOR PROVIDING IMAGE DATA FROM A CAMERA SYSTEM, CAMERA SYSTEM AND MOTOR VEHICLE
20170308999 · 2017-10-26
CPC classification: H04N23/45 (Electricity); G06T2207/20016 (Physics)
Abstract
The invention relates to a method for providing image data (24) from a camera system (3) for a motor vehicle (1), wherein the camera system (3) includes at least one camera, in particular a plenoptic camera (4), with a lens (6) and a sensor array (7). Electromagnetic radiation (15, 17, 19, 21) is captured by means of the sensor array (7), image data (24) of an environmental region (11) of the motor vehicle (1) is provided based on the captured electromagnetic radiation (15, 17, 19, 21), and the image data (24) is evaluated by means of an evaluation device (5). A direction of incidence of the electromagnetic radiation (15, 17, 19, 21) on the sensor array (7) is determined by the evaluation device (5) based on the image data (24) provided by the sensor array (7), and the image data (24) is adapted by means of the evaluation device (5) depending on the determined direction of incidence.
Claims
1. A method for providing image data from a camera system for a motor vehicle, wherein the camera system includes at least one plenoptic camera with a lens and a sensor array, the method comprising: capturing electromagnetic radiation by the sensor array; providing image data of an environmental region of the motor vehicle based on the captured electromagnetic radiation; evaluating the image data by an evaluation device; determining, by the evaluation device, a direction of incidence of the electromagnetic radiation on the sensor array based on the image data provided by the sensor array; adapting the image data by the evaluation device based on the determined direction of incidence; and capturing further sensor data in the visible wavelength range of the electromagnetic radiation with at least one second sensor, wherein the image data is adapted depending on at least a first approximation image provided by low-pass filtering of first sensor data, and/or on at least a second approximation image provided by low-pass filtering of the further sensor data.
2. The method according to claim 1, wherein hazy and/or misty areas of the image data are adapted depending on the determined direction of incidence.
3. The method according to claim 1, wherein the sensor array includes a first sensor and the at least one second sensor, wherein first sensor data from an infrared wavelength range of the electromagnetic radiation is captured with the first sensor and the image data is additionally adapted depending on the first sensor data.
4. The method according to claim 3, wherein a near infrared range of the electromagnetic radiation is captured as the infrared wavelength range.
5. The method according to claim 1, wherein the camera system has at least two cameras, the method further comprising: determining a baseline describing a distance between the at least two cameras; and adapting the image data depending on the determined baseline.
6. The method according to claim 5, wherein an image depth value, which describes a distance to an object in the environmental region, is determined from the image data based on the baseline.
7. The method according to claim 6, wherein at least one selected from the group consisting of: a first contrast image describing a foreground of the first approximation image, a first contrast image describing a background of the first approximation image, a second contrast image describing a foreground of the second approximation image, and a second contrast image describing a background of the second approximation image, is provided based on the respective approximation image.
8. The method according to claim 7, wherein at least one selected from the group consisting of: the first approximation image, the second approximation image, the first contrast image, and the second contrast image is provided in different resolutions.
9. The method according to claim 1, wherein the image data is adapted depending on a current GNSS position of the camera system.
10. The method according to claim 1, further comprising determining a current distance between the camera system and an object in the environmental region by a distance sensor; and adapting the image data depending on the determined distance.
11. A camera system for a motor vehicle comprising: at least one plenoptic camera, which includes a lens and a sensor array for capturing electromagnetic radiation and for providing image data of an environmental region of the motor vehicle based on the captured electromagnetic radiation; and an evaluation device for evaluating the image data, wherein the evaluation device is configured to perform a method according to claim 1.
12. The camera system according to claim 11, wherein the lens is formed as a microlens array.
13. A motor vehicle with a camera system according to claim 11.
Description
[0022] Below, embodiments of the invention are explained in more detail based on schematic drawings.
[0023]-[0026] (Brief description of the drawings; the individual figure captions are missing from the source text.)
[0027] In
[0028] The plenoptic camera 4 includes a lens 6 and a sensor array 7. According to the embodiment, the lens 6 is a microlens array. The microlens array is an assembly of lenses, which can be rotationally symmetrical as well as cylindrical. The lenses are arranged with as little clearance as possible, or with no clearance at all. Depending on the application, the dimensions of the individual lenses are between 1 millimeter and a few millimeters, or below.
[0029] Since the plenoptic camera 4 is to provide a spatial image, the microlens array is employed in an image plane of the plenoptic camera 4. Thereby, besides the two spatial directions (x-y coordinates) in the image plane, the direction of an incident light beam can also be determined. Thus, based on the direction of incidence, the microlens array provides two angular coordinates in addition to the spatial directions in the image plane, which together constitute the basis for the calculation of a depth map. The plenoptic camera thus mimics the binocular vision of the human eye.
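The two spatial and two angular coordinates described above can be illustrated with a minimal sketch. The following function is an assumption for illustration only (it is not part of the patent): it presumes square microlenses covering `pitch` x `pitch` sensor pixels, so that the microlens index gives the spatial position and the pixel offset under the microlens encodes the direction of incidence.

```python
# Hypothetical sketch: recovering light-field coordinates from a raw
# plenoptic sensor readout, assuming square microlenses of
# `pitch` x `pitch` sensor pixels.

def ray_coordinates(px, py, pitch):
    """Map a raw sensor pixel (px, py) to light-field coordinates.

    Returns (x, y, u, v): the microlens index (x, y) gives the spatial
    position in the image plane; the pixel offset (u, v) under the
    microlens encodes the direction of incidence of the captured ray.
    """
    x, u = divmod(px, pitch)   # spatial index and angular offset, x-axis
    y, v = divmod(py, pitch)   # spatial index and angular offset, y-axis
    # Centre the angular coordinates so that (0, 0) means normal incidence.
    half = (pitch - 1) / 2.0
    return x, y, u - half, v - half
```

Together, (x, y) and (u, v) sample the 4D light field from which a depth map can be computed.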
[0030] In the embodiment according to
[0031] In the present embodiment, the evaluation device 5 is schematically shown. The evaluation device 5 can also be arbitrarily disposed in the motor vehicle 1. For example, the evaluation device 5 can be constituted by a controller (ECU, electronic control unit) of the motor vehicle 1.
[0032]
[0033] The sensor array 7 includes a first sensor 14 for capturing electromagnetic radiation in the infrared wavelength range 15. The first sensor 14 can for example be manufactured from a semiconductor material, preferably silicon. If the first sensor 14 is manufactured from silicon, it captures the infrared wavelength range 15 of the electromagnetic radiation up to 1.0 micrometer, thus preferably the near-infrared part of the infrared wavelength range 15. Additionally or alternatively, the first sensor 14 can also be formed as a micro-bolometer, thus a microtechnically manufactured bolometer. In this case, substantially the mid and far parts of the infrared wavelength range 15 are captured. The near infrared (NIR) for example extends from 0.78 micrometers to 3.0 micrometers of wavelength, the mid infrared (MIR) for example from 3 micrometers to 50 micrometers, and the far infrared (FIR) for example from 50 micrometers to 1,000 micrometers.
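The band edges above can be captured in a small helper. The band limits in micrometers are taken from the text; the function itself is an illustrative assumption, not part of the patent.

```python
# Classify a wavelength into the infrared sub-ranges named in the text
# (band edges in micrometres per paragraph [0033]; helper is illustrative).

def infrared_band(wavelength_um):
    """Return the infrared sub-range a wavelength falls into."""
    if 0.78 <= wavelength_um < 3.0:
        return "NIR"      # near infrared
    if 3.0 <= wavelength_um < 50.0:
        return "MIR"      # mid infrared
    if 50.0 <= wavelength_um <= 1000.0:
        return "FIR"      # far infrared
    return "outside infrared range"
```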
[0034] Furthermore, it is provided that a second sensor 16 is adapted for capturing the electromagnetic radiation in a blue visible wavelength range 17. A third sensor 18 is adapted to capture the electromagnetic radiation of a green visible wavelength range 19. And further, according to the embodiment, a fourth sensor 20 is adapted to capture the electromagnetic radiation in a red visible wavelength range 21. The sensors 14, 16, 18, 20 can for example be formed as CCD sensors or CMOS sensors. The arrangement of the first sensor 14 and/or the second sensor 16 and/or the third sensor 18 and/or the fourth sensor 20 according to the embodiment is to be understood merely as an example. The arrangement of the sensors 14, 16, 18, 20 is arbitrary, however preferably such that the environmental region 11 can be captured.
[0035] Furthermore, a first baseline 22 is known, which describes the distance from one plenoptic camera 4 to another plenoptic camera 4. In addition, a second baseline 23 is known, which likewise describes a distance from one plenoptic camera 4 to another plenoptic camera 4. Based on the first baseline 22 and/or the second baseline 23, the advantages of a stereo principle can for example be exploited with the camera system 3, which allows depth estimation, thus the determination of a depth value, in the image data 24.
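The stereo principle mentioned above reduces to the classical triangulation relation Z = f · B / d. A minimal sketch, assuming a known focal length in pixels and a measured pixel disparity (both names are assumptions for illustration):

```python
# Hypothetical sketch of depth estimation from a known baseline, as
# enabled by the baselines 22 and 23 between the plenoptic cameras.

def depth_from_baseline(baseline_m, focal_px, disparity_px):
    """Distance to an object from the stereo relation Z = f * B / d."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite depth")
    return focal_px * baseline_m / disparity_px
```

For example, a 0.5 m baseline, a focal length of 1000 px and a disparity of 10 px place the object at 50 m.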
[0036]
[0037] Thus, the image data 24 has sensor data from the infrared wavelength range 15 and/or the blue visible wavelength range 17 and/or the green visible wavelength range 19 and/or the red visible wavelength range 21. The portion of the different sensor data in the respective wavelength ranges 15, 17, 19, 21 can now be weighted depending on the characteristics of the environmental region 11. A characteristic of the environmental region 11 can for example be the brightness or the illumination of the environmental region 11. Thus, the image data 24 is composed or fused such that the brightness is taken into account.
[0038] Furthermore, it is provided that each of the sensors 14, 16, 18, 20 has its own filter in order to pass the wavelength range 15, 17, 19, 21 intended for this sensor 14, 16, 18, 20 and to exclude or suppress those wavelength ranges 15, 17, 19, 21 which are not desired. Customary off-the-shelf cameras, for example, have an infrared blocking filter, which prevents the electromagnetic radiation of the infrared wavelength range 15 from reaching the respective sensor 16, 18, 20.
[0039] Furthermore, it is provided that the portion of the infrared wavelength range 15 in the image data 24 is increased with decreasing brightness, whereby a higher quality of the image data 24 can be provided. Usually, with decreasing brightness, a sensor has the possibility to compensate for this via a gain of the signal. This can be effected by an automatic gain control (AGC) and/or an automatic exposure control (AEC). The aim is to provide an optimally exposed image, thus optimally exposed image data 24. To this end, either an exposure time of the sensor 14, 16, 18, 20 can be increased, or additionally or alternatively a signal of the sensor 14, 16, 18, 20 can be electronically amplified. Based on the automatic gain control, it can thus be determined whether the brightness in the environmental region 11 increases or decreases. Depending on that, the portion of the infrared wavelength range 15 in the image data 24 can be controlled as well. It is thus provided that this portion of the infrared wavelength range 15 in the image data 24 increases with low or decreasing brightness and decreases with increasing or high brightness. This can be mathematically described as follows:
Image data 24=f((g*IR)+(1−g)*C),
[0040] wherein f is a function for generating the image data 24, g is a parameter of the automatic gain control, IR is first sensor data of the first sensor 14 from the infrared wavelength range 15 and C is further sensor data of the second sensor 16 and/or of the third sensor 18 and/or of the fourth sensor 20 from the visible wavelength range 17, 19, 21.
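The weighting above can be sketched per pixel. In this illustrative assumption the output function f is taken as the identity and the images are flat lists of luminance values; neither choice is prescribed by the text.

```python
# Sketch of the weighting f((g*IR) + (1-g)*C) from paragraph [0039]:
# the infrared portion of the fused image grows as the AGC gain
# parameter g rises with decreasing brightness.

def fuse_ir_visible(ir, c, gain):
    """Blend per-pixel IR and visible data with AGC weight g in [0, 1]."""
    if not 0.0 <= gain <= 1.0:
        raise ValueError("gain must lie in [0, 1]")
    # f is assumed to be the identity here; the patent leaves f generic.
    return [gain * i + (1.0 - gain) * v for i, v in zip(ir, c)]
```

With g = 0 the output is purely visible data; with g = 1 it is purely infrared.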
[0041] In a further embodiment, based on the image data 24 including the information of the 4D light field from the plenoptic camera and/or the first sensor data IR and/or the further sensor data C, a topography of a road in the environmental region 11 can be inferred. The topography of the road, thus for example potholes, contaminations and/or other conditions of the surface of the road, can be used by the driver assistance system 2 to control a chassis regulation of the motor vehicle 1 and/or an evasive maneuver of the motor vehicle 1.
[0042] Hazy and/or misty areas in the image data 24 and/or Rayleigh scattering can be removed or suppressed by the following adaptation of the image data 24. The first sensor data IR from the infrared wavelength range 15 is fused with the further sensor data C from the visible wavelength range 17, 19, 21. To this end, the further sensor data C is transformed into a luminance-chrominance color space. Thus, a luminance image V.sub.0 of the visible wavelength range 17, 19, 21 can be provided. Furthermore, a NIR image N.sub.0 of the first sensor data IR can be provided. The luminance image V.sub.0 and the NIR image N.sub.0 are the input to the method for adapting the image data 24, thus for the fusion of the image data 24. The output of the method is a fused luminance image F.sub.0. The chrominance information of the further sensor data C is not used during the fusion, but is simply combined with the fused luminance image F.sub.0 after fusion.
[0043] The luminance image V.sub.0 and the NIR image N.sub.0 are provided in different resolutions. To this end, first, an approximation image V.sub.k+1.sup.a of the luminance image V.sub.0 is provided and an approximation image N.sub.k+1.sup.a of the NIR image N.sub.0 is provided:
V.sub.k+1.sup.a=W.sub.λ.sub.k(V.sub.0)
N.sub.k+1.sup.a=W.sub.λ.sub.k(N.sub.0), with λ.sub.k=c.sup.k·λ.sub.0
[0044] W corresponds to a WLS filter as it was presented by Z. Farbman, R. Fattal, D. Lischinski and R. Szeliski in the conference paper "Edge-preserving decompositions for multi-scale tone and detail manipulation" of the International Conference on Computer Graphics and Interactive Techniques, pages 1 to 10, 2008. The parameter λ.sub.k controls the coarseness of the respective approximation image V.sub.k+1.sup.a, N.sub.k+1.sup.a at the respective resolution step, thus the respective layer k+1. The parameter λ.sub.0 expresses the degree of coarseness of the first approximation image, while each further approximation image is coarser by a multiple of c. For example, λ.sub.0=0.1 and c=2, while the number of resolution steps, thus the overall number of layers, is set to n=6.
[0045] Finally, contrast images are determined: a contrast image V.sub.k.sup.d of the further sensor data and a contrast image N.sub.k.sup.d of the first sensor data. The contrast images are determined according to an approach of A. Toet, who describes a method for calculating the contrast images V.sub.k.sup.d, N.sub.k.sup.d in the article "Hierarchical Image Fusion", Machine Vision and Applications, volume 3, number 1, pages 1 to 11, 1990. This can be mathematically expressed as follows:
V.sub.k.sup.d=V.sub.k.sup.a/V.sub.k+1.sup.a
N.sub.k.sup.d=N.sub.k.sup.a/N.sub.k+1.sup.a, with V.sub.0.sup.a=V.sub.0 and N.sub.0.sup.a=N.sub.0
[0046] The contrast images V.sub.k.sup.d, N.sub.k.sup.d and the approximation images V.sub.k+1.sup.a, N.sub.k+1.sup.a are represented in different resolutions of the n layers. A basic criterion of the fused luminance image F.sub.0 is that the NIR image N.sub.0 has a higher contrast if mist and/or a hazy area and/or Rayleigh scattering are present. Therefore, the maximum of the respective contrast images V.sub.k.sup.d, N.sub.k.sup.d is used for the fused luminance image F.sub.0. Furthermore, the low-frequency luminance information or color information of the approximation image V.sub.n.sup.a of the visible wavelength range 17, 19, 21 is used. The fused luminance image F.sub.0 can now be determined as follows:
F.sub.0=V.sub.n.sup.a·max(V.sub.n−1.sup.d, N.sub.n−1.sup.d)· . . . ·max(V.sub.0.sup.d, N.sub.0.sup.d)
[0047] Thus, the fused luminance image F.sub.0 is now adapted such that the hazy areas and/or the misty areas and/or the Rayleigh scattering in the image data 24 are reduced.
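The multi-scale fusion of paragraphs [0043] to [0047] can be sketched in a few lines. This is a hedged illustration under explicit assumptions: a simple box blur stands in for the edge-preserving WLS filter W of Farbman et al., image rows are 1-D lists for brevity, and ratio-of-low-pass contrasts follow Toet's approach; all function names are invented for the example.

```python
# Illustrative sketch of the multi-scale luminance fusion: take the
# maximum of the visible and NIR ratio contrasts at every layer and
# rebuild from the coarsest visible approximation V_n^a.

def box_blur(signal, radius):
    """Crude low-pass stand-in for the WLS filter W (assumption)."""
    out = []
    for i in range(len(signal)):
        lo, hi = max(0, i - radius), min(len(signal), i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def fuse_luminance(v0, n0, layers=3):
    """Fuse visible luminance v0 and NIR n0 via ratio-of-low-pass contrasts."""
    va, na = [v0], [n0]
    for k in range(layers):
        radius = 2 ** k               # coarser by a multiple at each layer
        va.append(box_blur(va[-1], radius))
        na.append(box_blur(na[-1], radius))
    fused = va[layers][:]             # low-frequency visible approximation
    for k in range(layers - 1, -1, -1):
        for i in range(len(fused)):
            cv = va[k][i] / max(va[k + 1][i], 1e-9)   # visible contrast
            cn = na[k][i] / max(na[k + 1][i], 1e-9)   # NIR contrast
            fused[i] *= max(cv, cn)   # keep the higher contrast per layer
    return fused
```

Where the NIR image carries the higher contrast, e.g. in hazy areas, its detail dominates the fused result; in clear areas the visible contrast is retained.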
[0048] Furthermore, it is provided that the adaptation of the image data 24 is effected depending on a current distance between the camera system 3 and the object in the environmental region 11. The current distance is provided by means of a distance sensor of the motor vehicle 1. The distance sensor can for example be a radar sensor and/or an ultrasonic sensor and/or a lidar sensor and/or a laser scanner.
[0049] Additionally or alternatively, the fused luminance image F.sub.0, thus the adaptation of the image data 24, is determined depending on a current position of the camera system 3. The current position can for example be a position determined by means of a GNSS receiver. The GNSS receiver can for example be a GPS receiver and/or a GLONASS receiver and/or a Galileo receiver and/or a Beidou receiver. The current position can then be used to examine whether the environmental region 11 extends across a free open space, or whether the objects in the environmental region 11 are disposed near the camera system 3, thus for example closer than 100 meters to the camera system 3, such that occurrence of the hazy and/or misty areas and/or the areas with Rayleigh scattering is unlikely or can be excluded.
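The plausibility check described above can be sketched as a simple gate. The 100 m figure comes from the text; the function and constant names are assumptions for illustration.

```python
# Hypothetical sketch of the check in paragraph [0049]: dehazing can be
# skipped when every object lies in the near field of the camera system,
# since haze, mist or Rayleigh scattering is then unlikely.

NEAR_FIELD_LIMIT_M = 100.0  # threshold named in the text

def dehazing_required(object_distances_m):
    """Return True only if some object lies beyond the near-field limit."""
    return any(d >= NEAR_FIELD_LIMIT_M for d in object_distances_m)
```

The distances could come from the distance sensor of paragraph [0048] or from the depth values derived via the baselines.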