BRIGHTNESS AND COLOR CORRECTION OF IMAGE DATA OF A LINE CAMERA

20220343478 · 2022-10-27

    Abstract

    A method for brightness and color correction of image data of a line camera is disclosed, wherein, for detecting image data with at least two line arrays of the line camera while illuminating a detection area of the line camera with an illumination module, a gray-scale image and at least two single-color images are recorded, and the image data is corrected with the help of a brightness function of the illumination module which is dependent on a line position of the line array and/or a distance of recorded objects. The brightness function is determined for the illumination module in advance and independently of the line camera and is stored in the illumination module, and the brightness function is read out by the line camera and used for the respective correction of the gray-scale image and the single-color images.

    Claims

    1. A method for brightness and color correction of image data of a line camera, wherein, for detecting the image data with at least two line arrays of the line camera under illumination of a detection area of the line camera with an illumination module, a gray-scale image and at least two single-color images are recorded and the image data is corrected with the help of a brightness function of the illumination module which is dependent on a line position of the line array and/or a distance of recorded objects, wherein the brightness function is determined for the illumination module in advance and independent of the line camera and is stored in the illumination module, and the brightness function is read out by the line camera and is used for the respective correction of the gray-scale image and the single-color images.

    2. The method according to claim 1, wherein the line camera is a line camera for code reading.

    3. The method according to claim 1, wherein the gray-scale image is used for the reading of codes.

    4. The method according to claim 1, wherein a color image is generated from the single-color images.

    5. The method according to claim 4, wherein the color image is used in particular to identify code-carrying objects and/or code regions, to classify them and/or to differentiate them from the image background.

    6. The method according to claim 1, wherein the brightness function is respectively modified by a color normalization function for the color of the single-color image, so that the correction of the gray-scale image and the single-color images is respectively carried out with its own brightness function, wherein a color normalization function sets, for different line positions and distances, the brightness of the illumination module for its color in proportion to the brightness over the entire spectrum.

    7. The method according to claim 6, wherein the color normalization functions are determined in advance generally for the type of illumination module.

    8. The method according to claim 6, wherein the color normalization functions are determined in advance individually for the illumination module, in particular with the brightness function.

    9. The method according to claim 6, wherein the color normalization functions are determined in advance individually for the illumination module with the brightness function.

    10. The method according to claim 1, wherein the brightness function is refined based on optical parameters of the line camera.

    11. The method according to claim 1, wherein the gray-scale image and the single-color images are recorded with different analog and/or digital gains.

    12. The method according to claim 1, wherein two single-color images are recorded in two of three primary colors.

    13. The method according to claim 12, wherein the third primary color is reconstructed from the gray-scale image and the two single-color images.

    14. The method according to claim 12, wherein the two primary colors are red and blue.

    15. The method according to claim 1, wherein corrected color values of a color image are formed from linear combinations of respective gray values of the gray-scale image and single-color values of the single-color images with color correcting weighting factors.

    16. The method according to claim 15, wherein, from a gray-scale image with gray values W, a red image with red values R and a blue image with blue values B, corrected RGB values R′G′B′ are formed as
    R′ = x₁·R + x₂·(3·W − R − B) + x₃·B + x₄
    G′ = x₅·R + x₆·(3·W − R − B) + x₇·B + x₈
    B′ = x₉·R + x₁₀·(3·W − R − B) + x₁₁·B + x₁₂
    with weighting factors x₁ … x₁₂.

    17. The method according to claim 15, wherein the corrected color values are determined with a neural network, which is trained based on color images of at least one further color-sensitive sensor.

    18. A camera which comprises a line-shaped image sensor with at least two line arrays of light-receiving pixels for recording image data, and a control and evaluation unit for processing the image data, wherein the line arrays form a mono channel whose light-receiving pixels are sensitive to white light for recording a gray-scale image, and at least two color channels whose light-receiving pixels are respectively sensitive only to light in the color of its color channel, wherein the control and evaluation unit is configured so as to correct the image data in brightness and color according to the method of claim 1.

    19. The camera according to claim 18, wherein the camera is a code reader for reading an optical code.

    Description

    [0039] The invention is explained in more detail below, also with respect to further features and advantages, by way of example with reference to embodiments and with reference to the accompanying drawing. The figures of the drawing show:

    [0040] FIG. 1 a schematic sectional view of a line camera;

    [0041] FIG. 2 a three-dimensional view of an application of the line camera in fixed mounting above a conveyor belt with objects, in particular for code reading;

    [0042] FIG. 3 a schematic representation of a line-shaped image sensor with a red line, a blue line and a white line;

    [0043] FIG. 4 a schematic representation of a line-shaped image sensor with a red line, a blue line and two white lines;

    [0044] FIG. 5 a schematic representation of a line-shaped image sensor with an alternating red-blue line and a white line;

    [0045] FIG. 6 a schematic representation of a line-shaped image sensor with two alternating red-blue lines and two white lines;

    [0046] FIG. 7 an exemplary flow chart for the generation of normalized gray-scale and single-color images;

    [0047] FIG. 8 an exemplary color normalization matrix for red;

    [0048] FIG. 9 an exemplary color normalization matrix for blue;

    [0049] FIG. 10 example images at different distances before and after normalization for the mono channel of the gray-scale image;

    [0050] FIG. 11 example images at different distances before and after normalization for the red-color channel of the red image;

    [0051] FIG. 12 example images at different distances before and after normalization for the blue-color channel of the blue image;

    [0052] FIG. 13 an exemplary spectrum of an illumination module for different distances; and

    [0053] FIG. 14 an exemplary illustration of the quantum efficiency of light-receiving pixels for different colors.

    [0054] FIG. 1 shows a very simplified block diagram of a line camera 10, which is preferably configured as a code reader for reading one- or two-dimensional optical codes. The line camera 10 detects received light 12 from a detection area 14 through a photographic lens 16, represented here only by a simple lens. A line-shaped image sensor 18 generates image data of the detection area 14 and of any objects and code regions present there. The image sensor 18 has at least two lines 20a-b of light-sensitive receiving pixels 22, wherein several hundred, several thousand or even more receiving pixels 22 are preferably provided in the line direction.

    [0055] The image data of the image sensor 18 is read out by a control and evaluation unit 24. The control and evaluation unit 24 is implemented in one or more digital components, for example microprocessors, ASICs, FPGAs or the like, which may also be provided in whole or in part outside the line camera 10. A preferred part of the evaluation is to join the detected image lines together into an overall image. Otherwise, during the evaluation, the image data may be prepared by filtering, smoothing, cropping to specific areas, or binarizing. According to the invention, a brightness or color correction is provided, which will be explained in more detail below with reference to FIGS. 7 to 14. In a preferred embodiment of the line camera 10 as a code reader, a segmentation is typically performed in which individual objects and code regions are found. The codes in these code regions are then decoded, that is, the information contained in the codes is read out.

    [0056] In order to illuminate the detection area 14 sufficiently brightly with transmitted light 26, an illumination module 28 is provided, which has a light source 30, typically a plurality of light sources such as LEDs, as well as transmission optics 32. The illumination module 28 is shown in FIG. 1 within a housing 34 of the line camera 10. This is a possible embodiment in which the illumination module 28 is inserted into a suitable slot of the line camera 10 late in production or even into the finished device after production, for example at the site of operation. Alternatively, the illumination module 28 has its own housing or is an external device and is connected to the line camera 10 for operation.

    [0057] Data can be output at an interface 36 of the line camera 10, namely, read code information as well as other data in various processing stages, such as raw image data, pre-processed image data, identified objects or code image data not yet decoded. On the other hand, it is possible to parameterize the line camera 10 via the interface 36 or a further interface.

    [0058] FIG. 2 shows a possible application of the line camera 10 mounted above a conveyor belt 38 that conveys objects 40 in a conveying direction 42, as indicated by the arrow, through the detection area 14 of the line camera 10. The objects 40 may carry code regions 44 on their outer surfaces. The task of the line camera 10 in this example application as a code reader is to identify the code regions 44, read out the codes attached there, decode them and assign them to the respectively associated object 40. In order to also identify code regions 46 attached to the side, preferably several line cameras 10 with different perspectives are used. Additional sensors may be added, for example an upstream laser scanner for detecting the geometry of the objects 40 or an incremental encoder for detecting the speed of the conveyor belt 38. Stationary mounting of the line camera 10 above a conveyor belt 38 with objects 40 is also conceivable in image evaluation applications other than code reading.

    [0059] The detection area 14 of the line camera 10 is a plane with a line-shaped reading field corresponding to the line-shaped image sensor 18. Accordingly, the illumination module 28 generates a line-shaped illumination area that, apart from tolerances, corresponds to the reading field. In FIG. 2, the illumination module 28 is shown simply and purely schematically as a block within the line camera 10. As mentioned above, the illumination module 28 may be an external device. By recording the objects 40 line by line as they move in the conveying direction 42, an overall image of the objects 40 which have been conveyed past, together with the code regions 44, is gradually formed. The lines 20a-b lie so close to one another that they practically detect the same object section. Alternatively, an offset can also be computationally compensated for.

    [0060] The line camera 10 detects with its image sensor 18, on the one hand, a gray-scale image or a black-and-white image that is used for code reading. In addition, color information or a color image is also obtained. The color information may be used for a variety of additional functions. One example is the classification of objects 40, for example to find out whether an object is a package, an envelope or a bag. It can be determined whether a container on the conveyor belt, such as the tray of a tray conveyor or a box, is empty. Segmentation of the image data into objects 40 or code regions 44 can be performed based on, or supported by, the color information. Additional image recognition tasks may be solved, such as the recognition of specific imprints or labels, for example for hazardous goods labeling, or fonts can be read (OCR, Optical Character Recognition).

    [0061] FIGS. 3 to 6 show some examples of embodiments of the image sensor 18 for such detection of black-and-white images and color information. Common to these embodiments is that at least one of the lines 20a-d is a white line whose receiving pixels 22 detect light across the whole spectrum within the limits of the hardware. At least one other line 20a-d is a color line whose receiving pixels 22 are only sensitive to a particular color, in particular due to appropriate color filters. The distribution of colors over the respective receiving pixels 22 of the colored lines differs depending on the embodiment but deviates from the usual RGB and in particular from a Bayer pattern. Providing at least one complete white line is preferred because it allows a full resolution gray-scale image to be recorded. In addition, a separation into white and colored lines is clearer. In general, however, differing patterns of white and colored receiving pixels 22 mixed among the lines 20a-d are conceivable. The respective receiving pixels 22 of the same spectral sensitivity are combined in a mono channel for the gray-scale image or in a respective color channel for a single-color image, for example for red-sensitive receiving pixels 22 in a red-color channel for a red image and for blue-sensitive receiving pixels 22 in a blue-color channel for a blue image.

    [0062] FIG. 3 shows an embodiment with one red line 20a, one blue line 20b and one white line 20c. The lines 20a-c are therefore homogeneous, and the receiving pixels 22 within a line 20a-c are sensitive to the same optical spectrum. FIG. 4 shows a variation with an additional white line 20d.

    [0063] In the embodiment shown in FIG. 5, receiving pixels 22 sensitive to red and blue are alternately mixed within a color line 20a. Thus, in combination with a white line 20b, a structure with a total of only two lines is possible. FIG. 6 shows a variation in which both the color line 20a-b and the white line 20c-d are doubled.
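
    As an illustrative sketch outside the patent text, the channel extraction for the two-line layout of FIG. 5 can be pictured as follows. The pixel values and the assumption that even pixels carry red and odd pixels carry blue are purely hypothetical:

```python
import numpy as np

# Hypothetical two-line layout as in FIG. 5: one alternating red-blue
# line and one white line (the red/blue ordering is assumed here).
color_line = np.array([10, 50, 12, 52, 11, 51, 13, 53], dtype=float)
white_line = np.array([90, 92, 91, 93, 90, 92, 91, 93], dtype=float)

red = color_line[0::2]    # even pixels assumed red -> half resolution
blue = color_line[1::2]   # odd pixels assumed blue -> half resolution
mono = white_line         # white line keeps the full resolution
```

As the slicing shows, each color channel ends up at half the resolution of the mono channel, which is the resolution trade-off discussed in the next paragraph.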

    [0064] While the high resolution of the white line is desired for code reading, the color information is in many cases only needed at a lower resolution. Therefore, a certain loss of resolution in the colored lines, as in FIGS. 5 and 6, may in some circumstances not be disturbing at all. In some cases, it is even conceivable to artificially reduce the resolution by merging pixels (binning, down-sampling) and thus improve the signal-to-noise ratio.
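
    The binning just mentioned can be sketched briefly; the binning factor, signal level and noise level below are illustrative assumptions, not values from the description:

```python
import numpy as np

def bin_pixels(line, factor=2):
    """Merge neighboring pixels of a line by averaging (binning).

    Averaging `factor` independent noise samples reduces the noise
    standard deviation by roughly sqrt(factor), at the cost of
    resolution.
    """
    n = (len(line) // factor) * factor   # drop a trailing remainder
    return line[:n].reshape(-1, factor).mean(axis=1)

# Noisy color line: constant signal 100 plus Gaussian noise (toy data)
rng = np.random.default_rng(0)
raw = 100.0 + rng.normal(0.0, 8.0, size=2048)
binned = bin_pixels(raw, factor=2)   # half the resolution, less noise
```

With a factor of 2, the noise standard deviation drops by about sqrt(2), which is the signal-to-noise improvement referred to above.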

    [0065] These examples are only a selection based on the primary colors red and blue with white (RBW). Further embodiments use other color filters and colors. Thus, also the use of green with red or blue (RGW, BGW) or all three primary colors (RGBW) would be conceivable. Furthermore, the subtractive primary colors blue-green (cyan), purple (magenta) and yellow in analogous combinations may also be considered (CMW, CYW, MYW or CMYW).

    [0066] The raw image data of the differently colored receiving pixels 22 is in many respects too unbalanced to provide usable colors. This is firstly due to the spatial detection situation, since an object 40 at a great distance and at the edge of the lines 20a-d is exposed to a different illumination intensity than a close, central object 40. Accordingly, there is a spatial dependence in an X direction along the lines 20a-d and in a Z direction of the object distance. Moreover, the illumination module 28 has spectral characteristics in which the brightness levels in the different wavelength ranges differ significantly from one another, especially when semiconductor light sources such as LEDs are used. Furthermore, the spatial and spectral characteristics scatter across the individual illumination modules 28 due to, for example, batch differences of the light sources 30 and other tolerances. In the following, various advantageous embodiments describe a brightness and color correction that compensates for the individual fluctuations of the illumination module 28 and/or general spectral and spatial fluctuations.

    [0067] FIG. 7 shows an exemplary flow chart for the generation of corrected or normalized gray-scale and single-color images, whereby, without being limited to this example, a mono channel for the gray-scale image and two color channels for a red image and a blue image will be described.

    [0068] The illumination module 28 is calibrated independently of the line camera 10, for example during final production, in order to be able to flexibly take into account its individual characteristics due to tolerances, batch differences and the like. For example, the illumination module 28 is measured during production on a sliding table, where a number of light receivers or photodiodes distributed laterally, i.e. in the X direction, each provide a brightness value for the respective (X, Z) position of the photodiode while the table is moved to different distances from the illumination module 28. This results in a brightness matrix which, for example, has a resolution of 10×10, i.e. measurements were made at ten distances with ten laterally distributed photodiodes or, alternatively, with one photodiode shifted laterally ten times per distance. The resolution may of course differ; in particular, the same resolution in the X and Z directions is by no means necessary. However, too few values result in an incomplete compensation, while too many values unnecessarily increase the calibration effort.

    [0069] The brightness matrix 48 of the illumination module 28 obtained in advance in this way is stored in a preferably non-volatile memory of the illumination module 28 (EEPROM) and is the starting point of the flow chart in FIG. 7. For the actual application, the illumination module 28 is preferably connected to the line camera 10 only at the operating location. There is no need to determine beforehand which illumination module 28 will be used in which line camera 10, since the two devices can flexibly make themselves known to each other.

    [0070] As a first adjustment step, not shown in FIG. 7, different gain factors may be used in the line camera 10 for the mono channel and the two color channels, i.e. gain_color,i = kᵢ·gain_mono with kᵢ > 1. Differentiating the color channels among one another is optional, i.e. kᵢ = k may hold for all color channels i. In this way, the mono channel and the color channels already reach a similar dynamic range. If the image sensor 18 permits it on the hardware side, for example with separate white and color lines, the differing gain is already applied in the analog domain, which achieves better signal-to-noise characteristics. Alternatively or additionally, digital gains are possible. Purely digital gain factors may, in a simple manner, be multiplied into the correction matrices of the color channels presented below.
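
    The gain adjustment of this paragraph amounts to a per-channel scaling; as a minimal digital sketch (the factor k = 3 is an assumption taken from the example in FIGS. 10 to 12, and the function name is illustrative):

```python
import numpy as np

def apply_channel_gains(mono, red, blue, k_red=3.0, k_blue=3.0):
    """Pre-amplify the color channels relative to the mono channel,
    i.e. gain_color,i = k_i * gain_mono with k_i > 1, so that all
    channels reach a similar dynamic range. This sketch is purely
    digital; on suitable hardware the gain would be applied analog."""
    return mono, red * k_red, blue * k_blue

mono = np.array([90.0, 120.0, 150.0])
red = np.array([30.0, 40.0, 50.0])
blue = np.array([28.0, 41.0, 47.0])
mono2, red2, blue2 = apply_channel_gains(mono, red, blue)
```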

    [0071] For a brightness adjustment, the line camera 10 now reads out the brightness matrix 48 stored in the illumination module 28 in a mono channel refinement 50. Using optical parameters such as focal length, aperture and the like, a refined mono channel brightness matrix 52 is calculated which contains significantly more entries than the original brightness matrix 48. The mono channel brightness matrix 52 compensates for inhomogeneities in the illumination of this individual illumination module 28 along the line axis or X-axis and along the Z-axis, due to the decrease in intensity with increasing distance. In doing so, a white adjustment for the mono channel or the gray-scale image is achieved.
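
    The mono channel refinement is described only functionally above. One conceivable building block is a bilinear interpolation of the coarse calibration matrix onto a per-pixel grid; the following sketch assumes this approach, and the fall-off model of the toy matrix as well as all dimensions are illustrative, not taken from the description:

```python
import numpy as np

def refine_brightness_matrix(coarse, nx, nz):
    """Bilinearly interpolate a coarse (Z x X) brightness matrix of
    the illumination module to a finer nz x nx grid (one conceivable
    piece of the mono channel refinement; names are illustrative)."""
    zc, xc = coarse.shape
    # Fractional sample positions of the fine grid inside the coarse grid
    zs = np.linspace(0, zc - 1, nz)
    xs = np.linspace(0, xc - 1, nx)
    z0 = np.floor(zs).astype(int).clip(0, zc - 2)
    x0 = np.floor(xs).astype(int).clip(0, xc - 2)
    fz = (zs - z0)[:, None]
    fx = (xs - x0)[None, :]
    # Four neighboring corner values for every fine-grid cell
    c00 = coarse[np.ix_(z0, x0)]
    c01 = coarse[np.ix_(z0, x0 + 1)]
    c10 = coarse[np.ix_(z0 + 1, x0)]
    c11 = coarse[np.ix_(z0 + 1, x0 + 1)]
    return (c00 * (1 - fz) * (1 - fx) + c01 * (1 - fz) * fx
            + c10 * fz * (1 - fx) + c11 * fz * fx)

# Toy 10x10 calibration: brightness falls off toward the line edges
# (cosine in X) and with distance (1/Z-like in Z)
coarse = np.outer(1.0 / np.linspace(1.0, 2.0, 10),
                  np.cos(np.linspace(-0.6, 0.6, 10)))
fine = refine_brightness_matrix(coarse, nx=2048, nz=100)
```

In a real refinement, optical parameters such as focal length and aperture would additionally shape the result, which this sketch deliberately leaves out.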

    [0072] In the color channels, the spectral differences must also be taken into account. For this purpose, additional color normalization matrices 54, 56 are used. The color normalization matrices 54, 56 have the same dimensions X, Z as the brightness matrix 48 but can differ in their resolution, which is then compensated for, for example, by interpolation. FIG. 8 shows an example of a color normalization matrix 54 for red and FIG. 9 shows an example of a color normalization matrix 56 for blue. To obtain these color normalization matrices 54, 56, spectrometer measurements of the illumination module 28 are performed, and then, for each (X, Z) position, the ratio of the intensity in the respective color red or blue to the intensity over the entire spectrum is formed. Graphically, a color normalization matrix 54, 56 indicates, in a spatially resolved manner, what proportion of the total intensity the associated color contributes. Preferably, color normalization matrices 54, 56 are not determined individually for each illumination module 28, but rather once for a type or a series of illumination modules 28. They are then known independently of the individual production and may optionally be stored in the illumination module 28 or the line camera 10, for example as a table (LUT, look-up table).
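
    The band-to-total intensity ratio described here can be sketched as follows; the wavelength grid, band edges and the toy spectrum are assumptions for illustration only:

```python
import numpy as np

def color_normalization(spectra, wavelengths, band):
    """For each (Z, X) position, form the ratio of the intensity in
    one color band to the intensity over the entire spectrum, as a
    color normalization matrix would (band edges are illustrative).

    spectra: (nz, nx, n_wavelengths) spectrometer measurements of
             the illumination module at the sample positions.
    band:    (lo, hi) wavelength window of the color in nm.
    """
    lo, hi = band
    in_band = (wavelengths >= lo) & (wavelengths < hi)
    total = spectra.sum(axis=-1)
    return spectra[..., in_band].sum(axis=-1) / total

wl = np.linspace(400, 700, 31)   # nm, 10 nm steps (assumed grid)
# Toy spectrum at 2x3 (Z, X) positions: a blue and a red peak with a
# dip in between, qualitatively like FIG. 13
base = np.exp(-((wl - 460) / 25) ** 2) + np.exp(-((wl - 630) / 30) ** 2)
spectra = np.tile(base, (2, 3, 1))
red_norm = color_normalization(spectra, wl, band=(600, 700))
blue_norm = color_normalization(spectra, wl, band=(430, 500))
```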

    [0073] In a combination step 58, the color normalization matrices 54, 56 are mixed with the brightness matrix 48 for each color channel. In a simple, advantageous implementation, the individual entries can be multiplied with each other element-wise, provided that all the matrices 48, 54, 56 are or will be suitably normalized. Alternatively, a more complex combined calculation is performed, which may also include a resolution adjustment of the matrices 48, 54, 56.
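
    The simple element-wise variant of the combination step can be shown in two lines; the matrix entries are made-up example values and both matrices are assumed to be suitably normalized and to share one resolution:

```python
import numpy as np

# Combination step (illustrative): mix the stored brightness matrix
# of the illumination module element-wise with a color normalization
# matrix of matching resolution.
brightness = np.array([[1.00, 0.90, 0.80],
                       [0.85, 0.78, 0.70]])   # (Z, X), from calibration
red_share = np.array([[0.45, 0.44, 0.43],
                      [0.46, 0.45, 0.44]])    # red fraction per (Z, X)

red_compensation = brightness * red_share     # element-wise product
```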

    [0074] The respective resulting compensation matrices are then subjected to a color channel refinement 60. For this, the same algorithm may be used as in the mono channel refinement 50, or color-specific properties are taken into account which modify the algorithm for the color channels collectively or even for individual color channels. The result is refined color channel brightness matrices 62, 64 for the blue and red color channels, respectively. In this way, a white adjustment is now also achieved for the color channels and thus for the single-color images. The refined brightness matrices 52, 62, 64 only have to be calculated once, for example during commissioning or upon pairing of an illumination module 28 with a line camera 10.

    [0075] In the brightness correction in the mono channel and the color channels explained with reference to FIG. 7, the brightness matrix 48 is recorded independently of spectral characteristics, and color-specific adjustments are made by the color normalization matrices 54, 56. Alternatively, it is conceivable to record the brightness matrix 48 directly in different colors and to store it in this form in the illumination module 28. Then, separate brightness matrices 48 are generated for the mono channel and each color channel. The information of the color normalization matrices 54, 56 is already contained therein, and the combination step 58 may be omitted. For this purpose, in particular for the measurement of the illumination module 28, light receivers or photodiodes with appropriate color filters can be used instead of the photodiodes sensitive to white light as above. The color normalization is then actually performed individually for the illumination module 28 instead of, as before, generally for a type or a series.

    [0076] FIGS. 10 to 12 illustrate the result achieved so far of normalized white, red and blue values. The figures are respectively structured in the same way, wherein

    [0077] FIG. 10 illustrates the mono channel, FIG. 11 the red channel and FIG. 12 the blue channel. Here, compared to the mono channel, the red and blue channels were amplified beforehand by a gain factor of about three. The distance or Z direction varies across the columns. The upper row shows a raw image with the line position X on the X-axis and several consecutively recorded lines on the Y-axis. The second row shows the corresponding result in the form of a normalized image. The bottom row shows an average over the image lines of the raw image compared to an average over the image lines of the normalized image. The somewhat lighter line of the normalized image runs at least approximately flat; the normalization has thus leveled, as desired, the irregular progression of the darker line of the raw image.

    [0078] The image data normalized in this way may be used as input data for further color normalization and color reconstruction. FIG. 13 first shows an exemplary illumination spectrum of an illumination module 28. A peak 66 for blue and a peak 68 for red are clearly recognizable. The multiple lines result from the fact that the illumination spectrum is distance-dependent due to the dispersion of the optics. FIG. 14 shows, as a complement, exemplary quantum efficiencies of color filters for receiving pixels 22 with a white characteristic curve 70, a blue characteristic curve 72, a green characteristic curve 74 and a red characteristic curve 76.

    [0079] In a wavelength range of around 480 nm, a local minimum is found in the illumination spectrum of FIG. 13. According to the example in FIG. 14, this is where the transmission window of a green filter would typically be found. Therefore, a blue and a red color channel, which provide similar intensities on a white target, are preferably used rather than a green color channel. In this way, the dynamic range is better utilized and a better signal-to-noise ratio is achieved.

    [0080] When a blue and a red color channel are chosen, image data is determined in two primary colors only. If a representation of the color in RGB values is desired, the missing color green may be reconstructed by a function f(W, R, B), to a first approximation as G = 3·W − R − B. However, this alone is not yet sufficient for a good color reproduction, since the illumination spectrum is inhomogeneous and has a local minimum in the green wavelength range. A certain compensation has already been made by the normalizations described above. For a result that is as color-accurate as possible, correlations between R, B and W are now preferably determined and used. These are, for example, linear combinations of the form


    R′ = x₁·R + x₂·(3·W − R − B) + x₃·B + x₄

    G′ = x₅·R + x₆·(3·W − R − B) + x₇·B + x₈

    B′ = x₉·R + x₁₀·(3·W − R − B) + x₁₁·B + x₁₂

    [0081] with correlation or weighting factors x₁ … x₁₂.

    [0082] The weighting factors x₁ … x₁₂ are determined empirically and are static. For color channels other than blue and red without green, corresponding corrections are possible.
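
    The linear combinations of paragraphs [0080] to [0082] can be written out compactly as follows; the identity-like weighting factors in the example are placeholders only, since the real factors are determined empirically:

```python
import numpy as np

def reconstruct_rgb(W, R, B, x):
    """Corrected RGB values as linear combinations of the gray value W
    and the single-color values R and B, with the green estimate
    G = 3*W - R - B and weighting factors x[0] ... x[11]."""
    G = 3.0 * W - R - B
    Rp = x[0] * R + x[1] * G + x[2] * B + x[3]
    Gp = x[4] * R + x[5] * G + x[6] * B + x[7]
    Bp = x[8] * R + x[9] * G + x[10] * B + x[11]
    return Rp, Gp, Bp

# Placeholder factors: pass R and B through unchanged, take the green
# estimate as-is, and keep all offsets at zero.
x = [1, 0, 0, 0,  0, 1, 0, 0,  0, 0, 1, 0]
Rp, Gp, Bp = reconstruct_rgb(W=100.0, R=120.0, B=90.0, x=x)
```

With these placeholder factors, Gp reduces to 3·W − R − B; empirically determined factors would additionally compensate the spectral imbalance discussed above.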

    [0083] The weighting factors allow for a color reproduction despite the local minimum in the green spectrum shown in FIG. 13. As an illustration, imagine that the line camera 10 records a green target. In the blue channel, as in the red channel, almost no green light is transmitted, and the recorded intensity is close to zero. In the mono channel, see the exemplary white characteristic curve 70 in FIG. 14, the small amount of green light that is transmitted results in an intensity slightly above zero. A high value x₆ in combination with matched values x₅ and x₇ can then reconstruct the green value G′. With a black target, in contrast, no significant intensity would be detected in any channel; the factors x₅ … x₇ in the equation for G′ remain unchanged, so that, quite correctly, a green value close to zero is reconstructed. From this it can be seen that the offset values x₄, x₈, x₁₂ are reasonably chosen to be not too large, or even zero. With a gray target, both color channels deliver a certain signal and a certain green value G′ is reconstructed, which together results in the RGB color gray, as desired.

    [0084] Alternatively or in addition to the presented weighting factors, a neural network is used, in particular one with multiple hidden layers. A raw or pre-corrected color vector serves as input, and the neural network returns a corrected color vector. Such a neural network can be trained, for example, with an additional color sensor that provides the target colors for training images in supervised learning. In addition, algorithms or neural networks may be used to improve the signal-to-noise behavior by taking the color values of neighboring pixels into account.
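
    The forward pass of such a color-correcting network can be sketched in a few lines; the layer sizes are arbitrary and the random weights are placeholders, not learned parameters (real weights would come from supervised training against a reference color sensor):

```python
import numpy as np

def mlp_color_correction(wrb, W1, b1, W2, b2):
    """Tiny fully connected network with one hidden layer mapping a
    raw (W, R, B) color vector to a corrected (R', G', B') vector.
    The weights here are random placeholders for illustration."""
    h = np.maximum(0.0, wrb @ W1 + b1)   # ReLU hidden layer
    return h @ W2 + b2                   # linear output layer

rng = np.random.default_rng(1)
W1 = rng.normal(scale=0.1, size=(3, 8))   # 3 inputs -> 8 hidden units
b1 = np.zeros(8)
W2 = rng.normal(scale=0.1, size=(8, 3))   # 8 hidden units -> 3 outputs
b2 = np.zeros(3)

corrected = mlp_color_correction(np.array([100.0, 120.0, 90.0]),
                                 W1, b1, W2, b2)
```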