Method for reconstructing a colour image acquired by a sensor covered with a mosaic of colour filters

11057593 · 2021-07-06

Abstract

A method for reconstructing a colour image acquired by a photosensitive sensor covered with a mosaic of filters of different colours making up a base pattern. The method obtains the product of a demosaicing matrix with a matrix representation of a mosaic image coming from the sensor following acquisition of the colour image by the sensor, this product performing an interpolation of the colour of each pixel of the mosaic image as a function of a pixel neighbourhood of a base pattern corresponding to the base pattern of the mosaic of filters.

Claims

1. A method for reconstructing a colour image acquired by a photosensitive sensor of size H×W, covered with a mosaic, of size H×W, of P filters of different colours making up a base pattern of size h×w such that h<H and w<W, said base pattern of the mosaic of filters being repeated so as to cover the mosaic of filters without overlap between the base patterns, said method being carried out by computer and comprising at least the following steps: a first preliminary step of constructing a demosaicing matrix D starting from a simulation using a first database of N images I.sub.i, i∈[1, . . . , N], with P colour components, called reference images, for producing a second database of N images J.sub.i, i∈[1, . . . , N] simulating images produced originating from the sensor, after acquisition by said sensor of the images of the first database, said reference images I.sub.i and images produced J.sub.i being represented respectively in the form of matrices of HW/(hw) vectors y.sub.1, x.sub.1 such that: the components of y.sub.1 are the P components of a reduced neighbourhood of size n.sub.h×n.sub.w of one of the HW/(hw) base patterns applied to the reference image I.sub.i, the components of x.sub.1 are the components of a reduced neighbourhood of size n.sub.h×n.sub.w of one of the HW/(hw) base patterns applied to the image produced J.sub.i, with n.sub.h>h and n.sub.w>w; and a second step of reconstructing a colour image producing the product of the demosaicing matrix D with a matrix representation of a mosaic image originating from the sensor after acquisition of the colour image by said sensor, said product of the demosaicing matrix D with the matrix representation of the mosaic image using an interpolation of the colour of each pixel of the mosaic image as a function of a neighbourhood, of size n.sub.h×n.sub.w pixels, of a base pattern (43) of size h×w, corresponding to the base pattern of the mosaic of filters.

2. The method according to claim 1, characterized in that n.sub.h·n.sub.w=P·h·w.

3. The method according to claim 1, characterized in that a first demosaicing matrix D.sub.1 is expressed as an expected value E calculated on the N reference images of the first database: D.sub.1=E.sup.i=1 . . . N{yx.sub.1.sup.T(x.sub.1x.sub.1.sup.T).sup.-1}, where y is a matrix of HW/(hw) vectors of size P·h·w representing the colour image.

4. The method according to claim 3, characterized in that the first demosaicing matrix D.sub.1 is expressed thus: D.sub.1=S.sub.1RM.sub.1.sup.T(M.sub.1RM.sub.1.sup.T).sup.-1 with x.sub.1=M.sub.1y.sub.1, M.sub.1 being a matrix of projection of y.sub.1 onto x.sub.1, y=S.sub.1y.sub.1, S.sub.1 being a matrix of reduction of the neighbourhood of the vector y.sub.1 and of transformation of the vector y.sub.1 into y, and R=(1/(NPHW))E.sup.i=1 . . . N{y.sub.1y.sub.1.sup.T} being a correlation matrix of the resolved images of the first database expressed as a function of the reduced neighbourhoods of size n.sub.h×n.sub.w.

5. The method according to claim 1, characterized in that the demosaicing matrix is expressed as a function of the spectral sensitivity of the sensor and of the spectral functions of the P filters of different colours.

6. The method according to claim 5, characterized in that, with the first database comprising multi-spectral reflectance images, a second demosaicing matrix D.sub.2 is defined by the expression: D.sub.2=F.sup.TCLS.sub.1RL.sub.1.sup.TC.sub.1.sup.TF.sub.1M.sub.1.sup.T(M.sub.1F.sub.1.sup.TC.sub.1L.sub.1RL.sub.1.sup.TC.sub.1.sup.TF.sub.1M.sub.1.sup.T).sup.-1, with R=(1/(NHWP.sub.λ))E.sup.i=1 . . . N{z.sub.1z.sub.1.sup.T}, the second demosaicing matrix D.sub.2 being constructed according to the following steps: calculation of a vector matrix y.sub.0 representing an image with P components as a function of a vector matrix z.sub.0 representing a multi-spectral reflectance image obtained from a reflection of an object illuminated by a light source of spectral power density L(λ), λ being a spectral component, such that: y.sub.0=F.sub.0.sup.TC.sub.0L.sub.0z.sub.0, where F.sub.0 is a matrix of size P.sub.λ×P of spectral transmission functions of the filters of the mosaic, C.sub.0 is a diagonal matrix of size P.sub.λ×P.sub.λ of a spectral sensitivity of the sensor, L.sub.0 is a diagonal matrix of size P.sub.λ×P.sub.λ of the spectral power densities of the light source; construction of the vector matrix y starting from the multi-component image and a vector matrix z, constructed starting from the multi-spectral reflectance image, said vector matrices y and z comprising said base pattern, such that y=F.sup.TCLz with F=I.sub.hw⊗F.sub.0, I.sub.hw being an identity matrix of dimensions hw×hw, C=I.sub.hw⊗C.sub.0, and L=I.sub.hw⊗L.sub.0; construction of a vector matrix z.sub.1 composed of the reduced neighbourhoods of size n.sub.h×n.sub.w of the base pattern of the multi-spectral reflectance image such that y.sub.1=F.sub.1.sup.TC.sub.1L.sub.1z.sub.1, y.sub.1 being a matrix composed of the reduced neighbourhoods of size n.sub.h×n.sub.w of the base pattern of size h×w of the multi-spectral reflectance image, with F.sub.1=I.sub.n.sub.h.sub.n.sub.w⊗F.sub.0, I.sub.n.sub.h.sub.n.sub.w being an identity matrix of dimensions n.sub.hn.sub.w×n.sub.hn.sub.w, L.sub.1=I.sub.n.sub.h.sub.n.sub.w⊗L.sub.0 and C.sub.1=I.sub.n.sub.h.sub.n.sub.w⊗C.sub.0; and construction of the second demosaicing matrix D.sub.2 with M.sub.1 being a matrix of projection of y.sub.1 onto x.sub.1 and S.sub.1 being a matrix of reduction of the neighbourhood of the vector y.sub.1 and of transformation into y.

7. The method according to claim 6, characterized in that C.sub.0 is a product of the spectral sensitivities of the components of the camera's optical path and the spectral sensitivity of the sensor, said components comprising at least one of the following: an objective of the camera, an infrared filter, a low-pass spatial filter, a microlens system.

8. The method according to claim 1, characterized in that each row of the demosaicing matrix D, D.sub.1, D.sub.2 is expressed as a convolution filter.

9. A data processing system comprising means for carrying out the steps of the method according to claim 1.

10. A non-transitory computer program product comprising instructions which, when the program is executed by a computer, lead the computer to carry out the steps of the method according to claim 1.

11. A method for optimizing the spectral functions of filters and the arrangement of the filters on a mosaic of P filters of different colours, repeated so as to fill a base pattern of size h×w, said base pattern being repeated so as to cover the mosaic of filters, said mosaic of filters being applied to a photosensitive sensor, said method comprising: a multicriteria optimization by minimizing an error of colour rendering and by minimizing a mean square error between an image originating from an ideal sensor, acquired starting from an image of a database of reference images, and an image originating from a sensor covered with the mosaic of P different filters, acquired starting from the same image of the database of reference images, said image acquired by the sensor covered with the mosaic of filters being reconstructed by the method according to claim 1.

12. The method according to claim 11, characterized in that it further comprises minimizing an error of colour rendering between the image originating from the ideal sensor acquired starting from the image of the image database, and an image originating from a sensor covered with a series of P filters, each of the P filters covering the whole of the sensor, said image being acquired starting from the same image of the database.

13. The method according to claim 12, characterized in that it further comprises minimizing an error of colour rendering between a Macbeth test chart acquired by a perfect sensor and a Macbeth test chart acquired by a sensor provided with the mosaic of the P filters.

14. A data processing system comprising means for carrying out the steps of the method according to claim 11.

15. A non-transitory computer program product comprising instructions which, when the program is executed by a computer, cause the computer to carry out the steps of the method according to claim 11.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) Other advantages and features of the invention will become apparent on examining the detailed description of several embodiments, which are in no way limitative, and the attached drawings, in which:

(2) FIG. 1a shows a general principle of acquisition of an image by a sensor provided with a mosaic of filters;

(3) FIG. 1b shows an example of a base pattern for a mosaic of filters;

(4) FIG. 2a shows a conversion of an image from a sensor provided with a mosaic of filters, into a vector matrix;

(5) FIG. 2b shows a conversion of an image with several components into a vector matrix;

(6) FIG. 2c shows an example of a calculation matrix of an image at the output of a sensor provided with a Bayer filter mosaic;

(7) FIG. 2d shows an example of interpolation of a colour of a pixel using a sliding neighbourhood according to the state of the art;

(8) FIG. 3 shows an example of interpolation of a colour of a pixel using a neighbourhood according to the invention;

(9) FIG. 4a shows a conversion of a mosaic image at the output of a sensor, into a vector matrix;

(10) FIG. 4b shows a conversion of a multi-component image into a vector matrix;

(11) FIG. 5 shows an example of data acquisition for calculating a demosaicing matrix over a continuous spectral domain;

(12) FIG. 6a shows conversions of a multi-spectral image into several different vector matrices;

(13) FIG. 6b shows a calculation of a vector matrix representing an image produced after acquisition of a multi-spectral image by a sensor combined with a filter;

(14) FIG. 7 shows a principle of transformation of a demosaicing matrix into convolution filters;

(15) FIG. 8 shows an example of data acquisition for optimizing the arrangements and the spectral functions of the filters of a mosaic of filters;

(16) FIG. 9 shows calculations of different images used for optimizing the arrangements and the spectral functions of the filters of the mosaic of filters;

(17) FIG. 10 shows the demosaicing method and device according to the invention;

(18) FIG. 11 shows the method for constructing a demosaicing matrix according to the invention.

DETAILED DESCRIPTION

(19) The present invention relates in particular to the acquisition of a multi-chromatic or multi-component colour image of size H×W. In FIG. 1a, the image comprises P colour components. The image with P colour components 1 is acquired by a first photosensitive sensor 2 after the light from an object passes through a first mosaic of filters 3, or matrix, composed of filters 7 of P different colours. Said mosaic of filters 3 is of size H×W. The first sensor 2 is defined by a pixel matrix 4 of size H×W. Each filter 7 is of the size of one pixel 4 of the first sensor 2 and covers one pixel 4 of the first sensor 2. The mosaic 3 of colour filters 7 is composed starting from the first base pattern 6 of size h×w with h·w≥P. The first base pattern 6 comprises h·w colours, which may or may not be different. For example, in FIG. 1b the first base pattern 6 comprises two red filters F1, two orange filters F2, two green filters F3, one pink filter F4, one blue filter F5 and one yellow filter F6. The filters F1, F2, F3, F4, F5, F6 of the first base pattern 6 are arranged according to a particular arrangement. In the example shown in FIG. 1b, the first base pattern 6 is in the form of a 3×3 matrix of filters arranged as follows, from left to right and from top to bottom: F3, F1, F2, F4, F5, F3, F1, F6, F2. In the matrix or mosaic 3 of colour filters, the first base pattern 6 is repeated so as to cover the whole of the mosaic 3 without overlap between the different base patterns 6. The size of the first base pattern 6 is defined in such a way that H is a multiple of h and W is a multiple of w. Other base patterns may be used without departing from the scope of the invention.

(20) Once the image has been acquired by the first sensor 2, it is processed in order to reconstruct the multi-component image. The processing carried out on the raw image leaving the first sensor 2 is a demosaicing process using a demosaicing matrix D.sub.1. The demosaicing matrix D.sub.1 is obtained by the method of least squares applied to the vector matrices x.sub.1, y:
D.sub.1=E.sup.i=1 . . . N{yx.sub.1.sup.T(x.sub.1x.sub.1.sup.T).sup.-1}   (1003)

(21) In expression (1003), y is a vector matrix as shown in FIG. 2b. The vector matrix y is obtained from a resolved image I.sub.i with P colour components forming part of a database of reference images used for constructing the matrix D.sub.1 by training, as described in the state of the art. The resolved image I.sub.i is of dimensions H×W and comprises a third base pattern 21 of size h×w. The vector matrix y is constructed by taking the P components from a third base pattern 21, i.e. the P components of each pixel of the third base pattern 21. Each vector of y is therefore of size P·h·w and y comprises HW/(hw) vectors of size P·h·w for representing all the information contained in a multi-component image I originating from the database of reference images.
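By way of illustration only (this sketch is not part of the patent text), the unfolding of a reference image into the vector matrix y described above can be written with numpy; all sizes below are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical sizes for illustration: a 6x6 image, P=3 components,
# and a 2x2 base pattern (h=w=2).
H, W, P, h, w = 6, 6, 3, 2, 2
rng = np.random.default_rng(0)
I = rng.random((H, W, P))  # stand-in for a reference image I_i

def image_to_y(img, h, w):
    """Unfold an HxWxP image into HW/(hw) column vectors of size P*h*w.

    Each vector stacks the P components of one h x w base-pattern block."""
    H, W, P = img.shape
    # split into non-overlapping h x w blocks
    blocks = img.reshape(H // h, h, W // w, w, P).transpose(0, 2, 1, 3, 4)
    # one column per block, each of length P*h*w
    return blocks.reshape(-1, h * w * P).T

y = image_to_y(I, h, w)
print(y.shape)  # (P*h*w, HW/(hw)) -> (12, 9)
```

The column ordering inside each vector (row-major over the block, components last) is an arbitrary convention here; the patent does not fix one.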

(22) Throughout the description, H×W is used indiscriminately for the size of a matrix or of an image. In the case of a matrix, H×W represents the number of rows H by the number of columns W of the matrix. In the case of an image, H×W represents the dimensions of the image, corresponding to a matrix of size H×W. In the same way, the size of the first base pattern 6 is of h rows by w columns. In an image, by extension, a portion of the image is defined, the size of which corresponds to the size of the first base pattern 6 in a matrix of the same size as the image. It may thus be said that the base pattern 6 is applied to the image. By analogy, the image portion corresponding to the base pattern, such as the third base pattern 21 shown in FIG. 2b, will also be called a base pattern. The size of the base pattern of the image is of the same dimensions as the base pattern of the matrix. By extension, the size of the base pattern of the image is defined as being h×w.

(23) In expression (1003), x.sub.1 is constructed as shown in FIG. 4a. Starting from the database of reference images, a simulation makes it possible to obtain a new database of raw images 40 containing data from the first sensor 2, simulated after acquisition of the images of the database of reference images by the first sensor 2. The data from the first sensor 2 form a raw image 40 of size H×W which may be decomposed into pixels 41, each pixel 41 corresponding to a response of a photosite of the first sensor 2, for example, as shown in FIG. 4a.

(24) The raw image 40 may be represented as a matrix of size H×W comprising the first base pattern 6 corresponding to the base pattern of the mosaic 3 of filters 7 as shown in FIG. 1a. In the example shown in FIG. 4a, each pixel corresponds to one of the P colour components and therefore to one of the P filters 7. In FIG. 4a, the components of the first base pattern 6 are designated F.sub.1 to F.sub.P. The first base pattern 6 of the mosaic image 40 is therefore of size h×w. In FIG. 4a, the base pattern is of size 3×3 for the example. According to the invention, a fourth pixel neighbourhood 30 is defined with respect to the first base pattern 6. The fourth neighbourhood 30 is of size n.sub.h×n.sub.w. The fourth neighbourhood 30 according to the invention is called the reduced neighbourhood 30 hereinafter. The size of the reduced neighbourhood 30 is defined according to the invention such that n.sub.h>h and n.sub.w>w. It is possible for example to use a reduced neighbourhood such that n.sub.h·n.sub.w=P·h·w. Advantageously, the reduced neighbourhood thus defined comprises an optimum quantity and quality of colour information for describing the captured colour image 1.

(25) FIG. 3 shows a use of the reduced neighbourhood 30 according to the invention for carrying out the interpolation calculations of the colours of each pixel of the mosaic image 40. For example, the base pattern used is the fourth base pattern 25 as shown in FIG. 2d according to the state of the art. Thus, a reduced neighbourhood 30 of size 4×4 is defined for the example shown in FIG. 3 around the fourth base pattern 25. The reduced neighbourhood 30 has for example a size of one pixel more in width and in length than the fourth base pattern 25, on each side. The colour of each pixel of the fourth base pattern 25 is calculated by interpolating the colours of the pixels of the reduced neighbourhood 30. When passing from a fourth base pattern 25 to an adjacent fourth base pattern 25 on the mosaic image 40, the reduced neighbourhood 30 is translated on the mosaic image 40 by a number of pixels equivalent to the size of one side of the fourth base pattern 25, i.e. in the present example: two pixels. The neighbourhood 30 is called reduced because one and the same neighbourhood is used for all the pixels of one and the same base pattern: this same neighbourhood may be defined as the neighbourhood of the base pattern.

(26) In general, the reduced neighbourhood 30 is defined such that n.sub.h>h and n.sub.w>w. For example, n.sub.h·n.sub.w=P·h·w may be taken.

(27) The same reduced neighbourhood 30 of size n.sub.h×n.sub.w according to the invention is therefore used for interpolating the colour of each pixel of the first base pattern 6 during reconstruction of the captured colour image 1 according to the invention. Thus, to construct the demosaicing matrix, the vector matrix x.sub.1 is constructed in such a way that each vector comprises only the components of the reduced neighbourhood 30 of each first base pattern 6 making up the mosaic image 40. Each vector of x.sub.1 is therefore of size n.sub.h·n.sub.w and the matrix x.sub.1 comprises HW/(hw) vectors of size n.sub.h·n.sub.w. Thus, the demosaicing matrix D.sub.1 of expression (1003) is of size Phw×n.sub.hn.sub.w. Advantageously, such a matrix is of a smaller size than a matrix according to the state of the art while maintaining good quality in the reconstruction of a colour image. For example, it can be shown experimentally that an image reconstructed with a sliding neighbourhood of size n.sub.h×n.sub.w has the same performance as one reconstructed with a reduced neighbourhood of size (n.sub.h+h-1)×(n.sub.w+w-1). From a theoretical viewpoint, the two neighbourhoods cover the same domain of the mosaic image. For example, for a Bayer base pattern of size 2×2 and a sliding neighbourhood of size 3×3, the same results as with a reduced neighbourhood of size 4×4 are obtained in terms of performance.
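The construction of x.sub.1 from reduced neighbourhoods can likewise be sketched in numpy. The border handling (edge replication) and all sizes are assumptions made for this illustration; the patent does not specify them:

```python
import numpy as np

# Hypothetical example: 8x8 mosaic image, 2x2 base pattern (h=w=2),
# reduced neighbourhood of 4x4 (n_h=n_w=4).
H, W, h, w, nh, nw = 8, 8, 2, 2, 4, 4
rng = np.random.default_rng(1)
mosaic = rng.random((H, W))  # stand-in for the raw sensor image

def mosaic_to_x1(img, h, w, nh, nw):
    """One column of size nh*nw per base-pattern position.

    The same reduced neighbourhood serves every pixel of a given base
    pattern; it is translated by (h, w) between adjacent patterns.
    Borders are handled here by edge replication (an assumption)."""
    H, W = img.shape
    top, left = (nh - h) // 2, (nw - w) // 2
    padded = np.pad(img, ((top, nh - h - top), (left, nw - w - left)),
                    mode="edge")
    cols = []
    for i in range(0, H, h):          # step of h rows between patterns
        for j in range(0, W, w):      # step of w columns between patterns
            cols.append(padded[i:i + nh, j:j + nw].ravel())
    return np.array(cols).T

x1 = mosaic_to_x1(mosaic, h, w, nh, nw)
print(x1.shape)  # (nh*nw, HW/(hw)) -> (16, 16)
```

Note that the window advances by (h, w), not by one pixel: this is exactly the translation by "one side of the base pattern" described above, as opposed to a sliding neighbourhood.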

(28) It is also possible to define a vector matrix y.sub.1, as shown in FIG. 4b. The vector matrix y.sub.1 represents the resolved image with P components 42 in the form of a matrix of HW/(hw) vectors of size P·n.sub.h·n.sub.w, constructed like the matrix y but taking the reduced neighbourhood 30 of the first base pattern 6 instead of the first base pattern 6 for each of the P components of the resolved image 42. Thus, by defining the following operators, it is possible to obtain a simplified version of the demosaicing matrix D.sub.1.

(29) A matrix M.sub.1 of projection of the vector matrix y.sub.1 of the resolved image with P components provided with its reduced neighbourhood 30 on the vector matrix x.sub.1 of the mosaic image 40 provided with its reduced neighbourhood 30 is defined, such that:
x.sub.1=M.sub.1y.sub.1   (1004)

(30) A matrix S.sub.1 of reduction of the reduced neighbourhood 30 of the vector matrix y.sub.1 for transforming it into the vector matrix y is defined, such that:
y=S.sub.1y.sub.1   (1005)

(31) It is thus possible to define the demosaicing matrix D.sub.1 as follows:
D.sub.1=S.sub.1RM.sub.1.sup.T(M.sub.1RM.sub.1.sup.T).sup.-1   (1006)

(32) where R is the correlation matrix such that:

(33) R=(1/(NPHW))E.sup.i=1 . . . N{y.sub.1y.sub.1.sup.T}   (1007)

(34) Advantageously, using the formulation of D.sub.1 according to expression (1006), it is possible to calculate just once the correlation matrix R of the colour images of the database of reference images, or resolved images, said resolved images being provided with their reduced neighbourhood 30 of size n.sub.hn.sub.w. Thus, it is possible to recalculate in a simple way the demosaicing matrix D.sub.1 by modifying the operators M.sub.1 and S.sub.1 according to the mosaic of filters considered.
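As an illustrative sketch (not part of the patent text), expression (1006) can be evaluated with numpy using random stand-ins for the database and for the operators M.sub.1 and S.sub.1; only R depends on the image database, so it is computed once and reused:

```python
import numpy as np

# Toy, hypothetical dimensions: y1 vectors of size 12, x1 of size 6,
# y of size 4, N=500 training vectors.
rng = np.random.default_rng(2)
n_y1, n_x1, n_y, N = 12, 6, 4, 500

# R is estimated once from the reference database (random stand-ins here).
Y1 = rng.random((n_y1, N))
R = (Y1 @ Y1.T) / N  # correlation matrix of the neighbourhood vectors

# M1 projects y1 onto x1 (mosaic sampling); S1 reduces y1 to y.
# Random selection matrices stand in for the real operators.
M1 = np.eye(n_y1)[rng.choice(n_y1, n_x1, replace=False)]
S1 = np.eye(n_y1)[:n_y]

# D1 = S1 R M1^T (M1 R M1^T)^-1  -- expression (1006)
D1 = S1 @ R @ M1.T @ np.linalg.inv(M1 @ R @ M1.T)
print(D1.shape)  # (4, 6)
```

Testing a different mosaic amounts to rebuilding only M1 and S1 and redoing the last line; R is untouched, which is the computational saving described above.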

(35) An estimate of the error associated with the demosaicing method can be expressed thus:
e=E.sup.i=1 . . . N{tr(({tilde over (y)}-y)({tilde over (y)}-y).sup.T)}   (1008)
therefore
e=tr{D.sub.1M.sub.1RM.sub.1.sup.TD.sub.1.sup.T+S.sub.1RS.sub.1.sup.T-S.sub.1RM.sub.1.sup.TD.sub.1.sup.T-D.sub.1M.sub.1RS.sub.1.sup.T}   (1009)

(36) where tr is the trace operator.

(37) It is thus possible to evaluate a priori the performance of a particular mosaic of filters for encoding a database of reference images.
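The closed form of expression (1009) can be checked numerically against a direct evaluation of the reconstruction error. The dimensions and operators below are toy stand-ins, not the patent's data:

```python
import numpy as np

# Toy, hypothetical dimensions, as in the sketch of expression (1006).
rng = np.random.default_rng(3)
n_y1, n_x1, n_y, N = 12, 6, 4, 2000

Y1 = rng.random((n_y1, N))
R = (Y1 @ Y1.T) / N
M1 = np.eye(n_y1)[rng.choice(n_y1, n_x1, replace=False)]
S1 = np.eye(n_y1)[:n_y]
D1 = S1 @ R @ M1.T @ np.linalg.inv(M1 @ R @ M1.T)

# Closed-form error of expression (1009) (normalisation constant omitted):
e_closed = np.trace(D1 @ M1 @ R @ M1.T @ D1.T + S1 @ R @ S1.T
                    - S1 @ R @ M1.T @ D1.T - D1 @ M1 @ R @ S1.T)

# Direct evaluation on the same samples: y_tilde = D1 x1 = D1 M1 y1.
err = S1 @ Y1 - D1 @ (M1 @ Y1)
e_direct = np.trace(err @ err.T) / N
assert np.isclose(e_closed, e_direct)
```

Because R is exactly the sample correlation of Y1 here, the two values agree to numerical precision, which is what allows the mosaic to be scored without reconstructing any image.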

(38) The database of reference images makes it possible to calculate R for a given size of reduced neighbourhood n.sub.hn.sub.w.

(39) Defining the first base pattern 6 of the mosaic of filters makes it possible to calculate M.sub.1, S.sub.1 and D.sub.1.

(40) Based on these data, the mean error in the reconstruction of an image, associated with the use of a particular mosaic positioned on the first sensor 2, can be calculated directly from the database of reference images.

(41) In the same way, it is possible to calculate a mean value of a colour difference between the reference images and the resolved images in the manner described below.

(42) A matrix containing the spectral quantum efficiency of the filters is denoted F.sub.QE. The spectral quantum efficiency can be measured with a monochromator, or estimated by an appropriate transformation if the spectral transmission functions of the filters are not known a priori. Measurement of the spectral quantum efficiency is carried out by recording the data acquired by the first sensor 2 starting from the images corresponding to the quasi-monochromatic light produced by the monochromator for each pixel of the base pattern. The levels of the images are then arranged in such a way that the levels of sensitivities associated with the exposure times during the measurement correspond, i.e. the levels of the images are multiplied by a factor depending on the exposure time so as to harmonize the sensitivities in a given radiometric unit. To calculate a quantum efficiency of the P different pixels covered with the P different filters over a given wavelength range with a given wavelength spacing, an instrument is used for measuring the transparency of the filters, or of the whole optical path, on N.sub.λ intervals of wavelength. For example, for a range from 380 nm to 780 nm with a spacing of 1 nm, N.sub.λ=401 is obtained.

(43) A transform of the filter space to the standardized trichromatic CIE 1931-XYZ colour space can be defined as follows:
FtoXYZ=XYZ.sup.TF.sub.QE(F.sub.QE.sup.TF.sub.QE).sup.-1   (1010)

(44) where F.sub.QE is of size N.sub.λ×P, XYZ is a matrix of size N.sub.λ×3 containing the colour-matching functions defined for the CIE 1931-XYZ colour space, and FtoXYZ is of size 3×P. FtoXYZ is a transformation matrix allowing a colour image with P components to be converted into an image the colour coordinates of which are expressed in the CIE 1931-XYZ colour space. The CIE 1931-XYZ colour space was defined by the International Commission on Illumination (CIE) in 1931. By extending the size of the transformation matrix FtoXYZ to the size of the vector y, the transformation y.sub.XYZ is obtained, defined by:
y.sub.XYZ=(I.sub.hw⊗FtoXYZ)y   (1011)

(45) in which I.sub.hw is an identity matrix of size hw×hw and ⊗ represents a Kronecker product.
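Expression (1010) is an ordinary least-squares fit of the XYZ responses onto the filter responses. As a sketch (with random stand-ins for F.sub.QE and for the colour-matching functions, which are not reproduced here):

```python
import numpy as np

# Hypothetical stand-ins: N_lambda = 401 samples (380-780 nm, 1 nm step),
# P = 6 filters, and a random matrix in place of the real CIE 1931
# colour-matching functions.
rng = np.random.default_rng(4)
N_lam, P = 401, 6
F_QE = rng.random((N_lam, P))   # spectral quantum efficiencies of the filters
XYZ = rng.random((N_lam, 3))    # placeholder for the CIE matching functions

# FtoXYZ = XYZ^T F_QE (F_QE^T F_QE)^-1  -- expression (1010)
FtoXYZ = XYZ.T @ F_QE @ np.linalg.inv(F_QE.T @ F_QE)
print(FtoXYZ.shape)  # (3, 6)
```

With real spectral data one would substitute the measured quantum efficiencies and the tabulated CIE 1931 functions for the random arrays.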

(46) An approximation of the mean square colour difference E.sup.i=1 . . . N{ΔE.sup.2} on the database of reference images for a given mosaic of filters and a defined size of neighbourhood is given by:

(47) E.sup.i=1 . . . N{ΔE.sup.2}={overscore (ΔE.sup.2)}   (1012)
with:
{overscore (ΔE.sup.2)}=tr(K({tilde over (y)}-y)({tilde over (y)}-y).sup.TK.sup.T)=tr{KD.sub.1M.sub.1RM.sub.1.sup.TD.sub.1.sup.TK.sup.T+KS.sub.1RS.sub.1.sup.TK.sup.T-KS.sub.1RM.sub.1.sup.TD.sub.1.sup.TK.sup.T-KD.sub.1M.sub.1RS.sub.1.sup.TK.sup.T}   (1013)
K=I.sub.hw⊗J   (1014)
J=(1/3)[0 116 0; 500 -500 0; 0 200 -200]FtoXYZ   (1015)

(48) J being an approximation of the transform to the CIE L*a*b* colour space.

(49) A representation in the CIE 1931-XYZ colour space is a linear representation of the visual system. However, this representation is not satisfactory for predicting colour differences. For that purpose, therefore, the CIE L*a*b* colour space is used, which makes the colour space uniform so that it is closer to human perception. The CIE L*a*b* colour space was defined by the International Commission on Illumination (CIE) in 1976.

(50) It is thus possible to test and evaluate different mosaics at lower computation cost, in particular because the calculation of R is carried out just once with the reduced neighbourhood 30 for each image of the first reference database, whatever mosaic is tested. It is thus possible to calculate a mean error in the reconstruction of the images of the reference database with a given mosaic.

(51) Advantageously, it is possible to transform an image reconstructed by the demosaicing method according to the invention into any normalized space derived from the CIE 1931-XYZ colour space. For example, an sRGB space (standard Red Green Blue) can be selected, which is a trichromatic colour space defined by standard IEC 61966-2-1 (1999). For example, a transformation to the sRGB colour space may be carried out as follows:
y.sub.sRGB=A(I.sub.hw⊗FtoXYZ){tilde over (y)}   (1016)

(52) Using expression (1002) applied to D.sub.1 and x.sub.1, the following is obtained:

(53) y.sub.sRGB=A(I.sub.hw⊗FtoXYZ)D.sub.1x.sub.1   (1017)
with
A=I.sub.hw⊗[3.2406 -1.5372 -0.4986; -0.9689 1.8758 0.0415; 0.0557 -0.2040 1.0570]   (1018)

(54) A being a transformation matrix from the CIE 1931-XYZ colour space to the sRGB colour space.
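The 3×3 block of expression (1018) is the standard linear XYZ-to-sRGB conversion matrix. A quick numerical sanity check (illustrative, not part of the patent): the D65 white point should map close to (1, 1, 1) in linear sRGB.

```python
import numpy as np

# XYZ -> linear sRGB matrix of expression (1018) (standard values).
A0 = np.array([[ 3.2406, -1.5372, -0.4986],
               [-0.9689,  1.8758,  0.0415],
               [ 0.0557, -0.2040,  1.0570]])

# A = I_hw (x) A0 applies the conversion to every pixel of a base pattern.
hw = 4  # hypothetical 2x2 base pattern
A = np.kron(np.eye(hw), A0)

# Sanity check: the D65 white point (X, Y, Z) maps close to sRGB (1, 1, 1).
white_xyz = np.array([0.9505, 1.0, 1.089])
print(np.round(A0 @ white_xyz, 3))  # -> [1. 1. 1.]
```

The Kronecker product with the identity simply repeats A0 along the diagonal, so each pixel of the unfolded vector is converted independently.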

(55) FIG. 5 shows a generalization of the calculation of the demosaicing matrix according to the invention to the entire spectral domain, i.e. to a continuous spectral domain. FIG. 5 shows a light source 50 of spectral power density L(λ) defined as a function of the different spectral components λ, i.e. of the value of the energy between two given wavelengths. The light source 50 is reflected by a surface of an object 51 represented by a multi-spectral reflectance image R(x,y,λ) 52. The reflectance represents a modulation of the spectrum of the light source 50 by the object 51 on which it is reflected. The multi-spectral reflectance image 52 is composed of P.sub.λ images of size H×W defined in an orthogonal coordinate system (x,y,λ). The reflection of the light 50 on the reflecting object or surface 51 gives a radiance image L(λ)R(x,y,λ).

(56) It is possible to express a multi-component image 54 starting from the multi-spectral reflectance image, by multiplying the multi-spectral reflectance image 52 by the spectral power density L(λ) to determine a radiance image 53. Then a matrix representing the radiance image 53 is multiplied by the transmission functions of the filters F.sub.1(λ), F.sub.2(λ), . . . , F.sub.P(λ), as a function of the spectral component λ of the light, and by C(λ). This operation may be carried out in vector form as shown in FIGS. 6a and 6b.

(57) C(λ) is defined as the spectral sensitivity of the optical path of the camera without the spectral sensitivity of the mosaic of filters. On its optical path, the camera may comprise an objective, an infrared filter, a low-pass spatial filter, a microlens system, and a third sensor 55. C(λ) may then be defined as the product of the spectral transmission functions of each component of the optical path without the transmission functions of the filters of the mosaic. If only the sensor on the camera's optical path is considered, C(λ) is the sensitivity of the third sensor 55. Alternatively, one or more components may also be considered on the optical path, in addition to the sensor, including: an objective, an infrared filter, a low-pass spatial filter, a microlens system.

(58) The operation that is modelled by determining the demosaicing matrix is the reconstruction of a multi-component image starting from a mosaic image 56 produced by a third sensor 55 on which a second mosaic of filters 57, of spectral function F(λ), is positioned, after acquisition of the radiance image 53. The second mosaic of filters 57 is composed of P filters defined for P.sub.λ ranges of values in the spectral domain. In general, a spacing of 10 nm is considered for a spectral domain from 400 nm to 700 nm. The third sensor 55 is similar to the first sensor 2 shown in FIG. 1a and the second mosaic of filters 57 is similar to the first mosaic of filters 3 shown in FIG. 1a.

(59) FIG. 6a shows several vector transformations of the multi-spectral reflectance image 52, or several ways of unfolding said multi-spectral reflectance image 52.

(60) A first vector matrix z.sub.0 may be constructed, the vectors of which are spectral components of the multi-spectral reflectance image 52, of size P.sub.λ. The first vector matrix z.sub.0 is composed of HW different vectors.

(61) A second vector matrix z representing the multi-spectral reflectance image 52 may also be constructed. To construct the second vector matrix z, the pixels are grouped together in groups of size h×w, each group corresponding to the positions of the base patterns 6 on the second mosaic of filters 57, for all of the P.sub.λ spectral components. Thus, the second vector matrix z is composed of HW/(hw) vectors of size hwP.sub.λ.

(62) A third vector matrix z.sub.1 may be constructed starting from the multi-spectral reflectance image 52 using a neighbourhood of size n.sub.h×n.sub.w around each of the base patterns 6 on the second mosaic of filters 57. The cumulative vector of the spectral components of each of the pixels of the neighbourhood is of dimension n.sub.hn.sub.wP.sub.λ. The third matrix z.sub.1 is then composed of HW/(hw) different vectors.

(63) FIG. 6b shows a calculation of a multi-component vector matrix y.sub.0 starting from the first vector matrix z.sub.0. The first vector matrix z.sub.0, representing the multi-spectral reflectance image 52, is first multiplied by a diagonal matrix L.sub.0 of dimensions P.sub.P, the components of which are the P.sub. values of the spectral density L() of the light source, the other values of the matrix L.sub.0 being set at zero. Then secondly, the result is multiplied by a diagonal matrix C.sub.0 of dimensions P.sub.P.sub.. The diagonal matrix C.sub.0 comprises, on its diagonal, the components of the spectral sensitivity C() of the optical path of the camera without the spectral transmission function of the mosaic of filters 57. The result is then multiplied by a transpose of a matrix F.sub.0 of P.sub. spectral transmission functions of the P filters F.sub.1(), F.sub.2(), . . . , F.sub.P() of the mosaic of filters, said matrix F.sub.0 being of size P.sub.P. The result is therefore a multi-component vector matrix y.sub.0 of size PHW such that:
y.sub.0=F.sub.0.sup.T C.sub.0 L.sub.0 z.sub.0 (1019)
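As an illustration, expression (1019) can be sketched numerically with NumPy; the dimensions and spectra below are toy stand-ins, not values from the patent.

```python
import numpy as np

# Illustrative sketch of expression (1019): y0 = F0^T C0 L0 z0.
# P_lambda spectral bands, P filter colours, an H x W sensor; all toy values.
P_lambda, P, H, W = 8, 3, 4, 4
rng = np.random.default_rng(0)

z0 = rng.random((P_lambda, H * W))   # spectral reflectance, one column per pixel
L0 = np.diag(rng.random(P_lambda))   # spectral density L(lambda) of the light source
C0 = np.diag(rng.random(P_lambda))   # spectral sensitivity C(lambda) of the optical path
F0 = rng.random((P_lambda, P))       # transmission functions of the P filters

y0 = F0.T @ C0 @ L0 @ z0             # multi-component vector matrix, size P x HW
print(y0.shape)                      # (3, 16)
```

The diagonal structure of L.sub.0 and C.sub.0 means each spectral band is simply weighted before the projection onto the P filter responses.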

(64) In the same way as the vector matrix y is constructed starting from the multi-component resolved image I.sub.i, as shown in FIG. 2b, i.e. by incorporating the first base pattern 6, as shown in FIG. 4b, in the row vectors of said matrix, it is possible to construct the second vector matrix z starting from the multi-spectral reflectance image 52 by including the first base pattern 6 in the row vectors. In this case, z is of dimensions P.sub.λhw×HW/(hw). It is then possible to transpose expression (1019) to express y as a function of z:
y=F.sup.T C L z (1020)

(66) with the following expressions: F=I.sub.hw⊗F.sub.0, C=I.sub.hw⊗C.sub.0, and L=I.sub.hw⊗L.sub.0, in which I.sub.hw is an identity matrix of size hw×hw and ⊗ denotes the Kronecker product.
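The block-diagonal structure produced by these Kronecker lifts can be checked numerically; the following sketch uses toy dimensions and NumPy's `kron`, which are assumptions of this example rather than part of the patent.

```python
import numpy as np

# Toy check of paragraph (66): F = I_hw (x) F0 is block diagonal, with one
# copy of F0 per pixel of the h x w base pattern. All dimensions illustrative.
h, w = 2, 2
P_lambda, P = 4, 3
rng = np.random.default_rng(1)
F0 = rng.random((P_lambda, P))          # stand-in for the filter transmission matrix

F = np.kron(np.eye(h * w), F0)          # lifted matrix of paragraph (66)

print(F.shape)                          # (16, 12): (hw * P_lambda, hw * P)
print(np.allclose(F[P_lambda:2 * P_lambda, P:2 * P], F0))  # True: second diagonal block
```

Each pixel of the base pattern thus sees its own copy of F.sub.0, which is exactly what applying the per-pixel spectral model to the whole pattern requires.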

(67) The new demosaicing matrix D.sub.2 applied to a multi-spectral image may then be written as:
D.sub.2=F.sup.T C L S.sub.1 R L.sub.1.sup.T C.sub.1.sup.T F.sub.1 M.sub.1.sup.T (M.sub.1 F.sub.1.sup.T C.sub.1 L.sub.1 R L.sub.1.sup.T C.sub.1.sup.T F.sub.1 M.sub.1.sup.T).sup.−1 (1021)

(68) such that:

(69) R=(1/(NHWP.sub.λ))Σ.sub.i=1.sup.N E{z.sub.1z.sub.1.sup.T} (1022)
with y.sub.1=F.sub.1.sup.T C.sub.1 L.sub.1 z.sub.1 (1023)
F.sub.1=I.sub.n.sub.hn.sub.w⊗F.sub.0 (1024)
L.sub.1=I.sub.n.sub.hn.sub.w⊗L.sub.0 (1025)
C.sub.1=I.sub.n.sub.hn.sub.w⊗C.sub.0 (1026)
and x.sub.1=M.sub.1y.sub.1=M.sub.1F.sub.1.sup.T C.sub.1 L.sub.1 z.sub.1 and y=S.sub.1y.sub.1

(70) The new demosaicing matrix D.sub.2 thus defined additionally makes it possible to take into account any arrangement of the colours in the mosaic of filters, characterized by M.sub.1, as well as the transmission functions of the filters, F.sub.0, and the spectral density of the light source, L.sub.0. Advantageously, the formulation of D.sub.2 takes into account the definition of the reduced neighbourhood according to the invention.

(71) It is also possible to calculate a priori the performance of a particular mosaic of filters, defined by the spatial arrangement of its filters, the spectral transmission functions of its filters and the light source used. For this purpose, it is possible for the equations for calculating the error of estimation associated with the demosaicing (1008), (1009) and the equations for calculating the colour difference (1012), (1013), (1014), (1015) associated with the demosaicing to be adapted to the multi-spectral domain.

(72) The spectral quantum efficiency F.sub.QE of the filters is equivalent to the product of the transmission matrix of the filters F with C, C being the sensitivity of the sensor multiplied by the spectral transmission function of the components of the optical path of the camera without the mosaic of filters. By way of simplification, it is possible to consider only the sensitivity of the sensor when calculating C.

(73) Advantageously, in equation (1021), the multiplication on the left by the expression F.sup.TCLS.sub.1 projects the image reconstructed according to the P colour components. In other words, F.sup.TCLS.sub.1 projects the P.sub.λ components of the spectral domain onto the P colours of the filters of the mosaic.

(74) If the projection of the image reconstructed according to the P colour components is not carried out, then a reconstruction filter of the multi-spectral images is obtained. It is thus possible to estimate a colour spectrum for each pixel of the image reconstructed starting from the mosaic image.

(75) FIG. 7 shows a transformation of a demosaicing matrix, for example D.sub.1, into a convolution filter.

(76) A demosaicing matrix, such as the matrix D.sub.1, makes it possible to reconstruct the vector matrix {tilde over (y)} representing the image reconstructed starting from the vector matrix x.sub.1 representing the mosaic image: {tilde over (y)}=D.sub.1x.sub.1.

(77) Each row of the demosaicing matrix may be regarded as a convolution filter that makes it possible to reconstruct one of the colours of one of the pixels of a fifth base pattern 70, of size h×w, of the colour image with P components. The fifth base pattern 70 is encoded as a first column vector 71 of size P×h×w, starting from a reduced neighbourhood 73, of size n.sub.h×n.sub.w, of the base pattern h×w comprising said corresponding pixel in the mosaic image. The reduced neighbourhood 73 is encoded as a second column vector 74.

(78) Each row of the demosaicing matrix D.sub.1 can thus be converted into an equivalent convolution filter 72 that is applied to each pixel of the mosaic image to reconstruct the pixels of the colour image with P components; each pixel of the base pattern corresponds to one convolution filter per colour component. Advantageously, expressing a demosaicing matrix as a set of convolution filters, each applicable directly to the pixels of the mosaic image, makes it possible to avoid converting the mosaic image into a vector matrix, an operation that is expensive in terms of computation time, computation resources and memory. A reconstructed image is likewise obtained directly, rather than in the form of a vector matrix that would then have to be unfolded into the complete reconstructed image, which saves the cost of that transformation as well.
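The equivalence between a row of the demosaicing matrix and a convolution (correlation) kernel can be checked directly; this sketch uses a single-channel toy neighbourhood and random values standing in for a trained row of D.sub.1.

```python
import numpy as np

# Sketch of paragraphs (77)-(78): a row of the demosaicing matrix, reshaped to
# the n_h x n_w neighbourhood, acts as a correlation kernel. Applying it to a
# patch gives the same value as the matrix-vector product. Toy data only.
rng = np.random.default_rng(3)
n_h, n_w = 5, 5

row = rng.random(n_h * n_w)          # one row of D_1 (single-channel case)
patch = rng.random((n_h, n_w))       # mosaic-image neighbourhood of one pixel

via_matrix = row @ patch.ravel()             # matrix formulation
kernel = row.reshape(n_h, n_w)
via_filter = np.sum(kernel * patch)          # filter formulation
print(np.isclose(via_matrix, via_filter))    # True
```

In practice one kernel per (pixel of the base pattern, colour component) pair is slid over the mosaic image, which avoids building the vector matrix x.sub.1 at all.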

(79) Transformation into convolution filters may also apply to the demosaicing matrix D.sub.2 of a multi-spectral mosaic image.

(80) The invention also makes it possible to carry out a method of spectral optimization of the transmission functions of the filters and of the arrangement of the filters on a base pattern of a mosaic.

(81) For this purpose, training of the filter or demosaicing matrix is used, using the method of least squares with a reduced neighbourhood 30 according to the invention. As seen above, and in particular in expression (1021), the demosaicing matrix may be expressed by means of the spectral functions of the filters F. By means of the demosaicing matrix, it is possible to calculate the quality criteria MSE (Mean Square Error) and ΔE of the reconstruction of the image captured by the third sensor 55 into a colour image with P components. It is thus possible to optimize the parameters of the spectral functions of the filters in order to optimize the quality criteria of the image produced.

(82) As shown in FIG. 8, starting from an image acquired according to the principle shown in FIG. 1 and using the method according to the invention, it is possible to reconstruct an image with P colour components. It is also possible, using equation (1016) and those following, to reconstruct an image produced in a standardized trichromatic space such as the CIE 1931-XYZ space.

(83) In FIG. 8, a model of formation of an image proposes a reconstruction of an image, starting from a multi-spectral reflectance image 52, among: a three-component image 80 in the CIE 1931-XYZ trichromatic space, produced by the expression y.sub.XYZ=FtoXYZ F.sup.T C L z starting from the multi-spectral reflectance image 52, each of the three components corresponding to one of the filters X(λ), Y(λ), Z(λ); a multi-component image 54 with P components, each corresponding to the output of one of the filters F.sub.1(λ), F.sub.2(λ), . . . , F.sub.P(λ); a mosaic image 56 corresponding to the image acquired by the third sensor 55 directly via a third mosaic of filters 83 of spectral function F(λ), in the form of a vector matrix, the vectors of which are the spectral functions of the filters F.sub.1(λ), F.sub.2(λ), . . . , F.sub.P(λ).

(84) The radiance image 53 can be measured by a first theoretical ideal imaging system, which comprises a system for dividing the luminous flux into three components. Each of the components passes through one of the filters X(λ), Y(λ), Z(λ) and is then measured by a fourth ideal sensor 81. C(λ) is the spectral sensitivity of the optical path of the camera including the sensor, but without the mosaic of filters. Such an ideal imaging system directly captures the coordinates X, Y, Z of each pixel in the CIE 1931-XYZ colour space and directly produces a so-called ideal colour image 80, with three components in the CIE 1931-XYZ colour space.

(85) A second imaging system may be composed of P colour filters, the transmission functions of which are given by F.sub.i(λ), i=1 . . . P. The light passes through each of these P filters and is then measured by a third sensor 55. The third sensor 55 produces a multi-component image 54 with P colour components, each corresponding to one of the P filters F.sub.i(λ). The spectral sensitivity of the optical path of the camera including the third sensor 55, but without the mosaic of filters, is C(λ). Alternatively, it can be considered that C(λ) is solely the spectral sensitivity of the sensor.

(87) A third imaging system is the one that the model seeks to represent. It is composed of a second matrix, or mosaic, of filters 83 of P different colours arranged on the first base pattern 6 and duplicated to the size of the third sensor 55. The third imaging system also comprises the third sensor 55.

(88) Starting from the first vector matrix z.sub.0, a vector matrix y.sub.XYZ may be constructed corresponding to the ideal image of components XYZ such that:
y.sub.XYZ=XYZ.sup.T L.sub.0 z.sub.0 (1027)

(89) with, for example:

(90) XYZ=
[ X(380) Y(380) Z(380)
  X(381) Y(381) Z(381)
  . . .
  X(780) Y(780) Z(780) ] (1028)

L.sub.0=
[ L(380) 0 . . . 0
  0 L(381) . . . 0
  . . .
  0 . . . 0 L(780) ] (1029)

(91) There is also:

(92) C.sub.0=
[ C(380) 0 . . . 0
  0 C(381) . . . 0
  . . .
  0 . . . 0 C(780) ] (1030)

(93) The matrix y.sub.XYZ is of size 3×HW and can be transformed into an image of size H×W×3 comprising the three components XYZ for each pixel of the reconstructed image.
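This unfolding step can be sketched as a simple reshape; the dimensions here are toy values and the column-to-pixel ordering is one plausible convention, not one specified by the patent.

```python
import numpy as np

# Sketch of paragraph (93): unfolding a 3 x HW vector matrix into an
# H x W x 3 image, one XYZ triplet per pixel. Row-major pixel order assumed.
H, W = 4, 6
y_xyz = np.arange(3 * H * W).reshape(3, H * W)

image = y_xyz.T.reshape(H, W, 3)     # column k becomes pixel (k // W, k % W)
print(image.shape)                   # (4, 6, 3)
print(np.array_equal(image[0, 0], y_xyz[:, 0]))  # first pixel holds the first column
```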

(94) Starting from the third matrix z.sub.1 of size n.sub.h×n.sub.w×P.sub.λ×HW/(hw), the demosaicing matrix D.sub.2 can be written according to expression (1021).

(95) This manner of writing D.sub.2 makes it possible to calculate the correlation matrix R once for all the images of the database of reference images. This makes it possible to calculate the demosaicing matrix D.sub.2 for mosaics of filters defined a posteriori and encoded in the matrix F.sub.1 of the spectral functions of the filters. Thus, the model of D.sub.2 according to expression (1021) advantageously allows a direct expression of the demosaicing operator or matrix D.sub.2 as a function of the data of the problem that is to be solved, namely an optimization of the spectral functions of the filters of the mosaic of filters. It is thus possible to obtain F.sub.1 so as to optimize the quality of the reconstruction of the images acquired.

(96) The optimization criteria used are the quality criteria of image reconstruction: spatial MSE and colorimetric ΔE.

(97) The optimization criteria are calculated between an ideal image 80, produced by a fourth ideal sensor 81 starting from the multi-spectral reflectance image 52, and a mosaic image 56 acquired via the third mosaic of filters 83, demosaiced, i.e. dematrixed, and converted into a colour space allowing the calculation of the optimization criteria.

(98) To calculate the optimization criteria, it is possible to adopt a position in the sRGB space or any other related space such as AdobeRGB. To adopt a position in the sRGB space, the operations described above are carried out, i.e. conversion into the standardized trichromatic CIE 1931-XYZ colour space using equations (1010) and (1011) and then conversion into the sRGB space by means of equations (1016), (1017), (1018).
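The conversion into sRGB can be sketched as follows; the linear XYZ-to-sRGB matrix below is the standard IEC 61966-2-1 matrix and is used here as a stand-in for the matrix A of equations (1017) and (1018), with the standard sRGB gamma encoding.

```python
import numpy as np

# Sketch of the sRGB conversion step. The matrix below is the standard
# IEC 61966-2-1 linear XYZ -> sRGB matrix (a stand-in for matrix A);
# the piecewise gamma is the standard sRGB encoding.
A = np.array([[ 3.2406, -1.5372, -0.4986],
              [-0.9689,  1.8758,  0.0415],
              [ 0.0557, -0.2040,  1.0570]])

def xyz_to_srgb(xyz):
    rgb = A @ xyz                                    # linear sRGB
    rgb = np.clip(rgb, 0.0, 1.0)
    return np.where(rgb <= 0.0031308,
                    12.92 * rgb,
                    1.055 * rgb ** (1 / 2.4) - 0.055)

white = np.array([0.9505, 1.0, 1.089])               # D65 white point
print(np.round(xyz_to_srgb(white), 2))               # close to [1, 1, 1]
```

The D65 white point maps to (1, 1, 1), which is a quick sanity check that the matrix and encoding are consistent.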

(99) It is also possible to use another colour space. For example, a colour space that can be expressed on the basis of the XYZ space can be obtained by modifying the matrix A. Other spaces require replacing FtoXYZ: for example, if the destination space is called ABC, it is necessary to estimate FtoABC by replacing XYZ with ABC in expression (1010).

(100) It is also possible to calculate a criterion of the MSE type or a criterion of the PSNR type. PSNR (Peak Signal-to-Noise Ratio) is a measure of colour distortion of the image. For two multi-component images expressed in the sRGB space, I(x,y,c) and K(x,y,c), normalized between 0 and 1, where c is one of the three colour components, the criterion MSE is calculated as:

(101) MSE=(1/(HWP))Σ.sub.x=1.sup.HΣ.sub.y=1.sup.WΣ.sub.c=1.sup.P(I(x,y,c)−K(x,y,c)).sup.2 (1031)

(102) The criterion PSNR is calculated as follows:

(103) PSNR=10 log.sub.10(1/MSE) (1032)
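Criteria (1031) and (1032) can be sketched directly; the two images below are synthetic stand-ins normalized to [0, 1].

```python
import numpy as np

# Sketch of criteria (1031)-(1032) for two images normalized to [0, 1].
def mse(I, K):
    return np.mean((I - K) ** 2)

def psnr(I, K):
    return 10 * np.log10(1.0 / mse(I, K))

rng = np.random.default_rng(4)
I = rng.random((8, 8, 3))
K = np.clip(I + 0.01, 0.0, 1.0)      # small synthetic distortion

print(psnr(I, K) > 30)               # small error -> high PSNR
```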

(104) The calculation of ΔE is carried out by approximating the non-linear calculation by a piecewise linear function in the following way, for two images I and K expressed in the XYZ reference system:

(105) ΔE=(1/(HW))Σ.sub.x=1.sup.HΣ.sub.y=1.sup.W∥I(x,y)−K(x,y)∥ (1033)

(106) in which:

(107) K=(1/3)
[ 0 116 0
  500 −500 0
  0 200 −200 ] diag(XYZ, diag(XYZ.sup.T L)).sup.−1 (1034)

(108) where diag is a function that places a vector in the diagonal of a matrix.

(109) FIG. 9 shows the different operations carried out on the reflectance image R(x,y,λ).

(110) As shown in FIG. 9, three different images are produced: a first three-component image 103 in the CIE 1931-XYZ colour space such as the ideal colour image 80; a second three-component image 104 originating from the third sensor 55 on which three colour filters are positioned, such as the multi-component image 54; a third three-component image 105 originating from the third sensor 55 on which a third mosaic of filters 83 is positioned, corresponding to the mosaic image 56.

(111) Each image results from one of three different processing routes 100, 101, 102 at the output of the third and fourth sensors 55, 81.

(112) A first route 100 and a second processing route 101 share a first processing step that consists of transforming the multispectral reflectance image R(x,y,λ) into a matrix z.sub.0 of HW vectors of size P.sub.λ as shown in FIG. 6a. It is also possible to start from the radiance image L(λ)R(x,y,λ).

(113) The processing specific to the first processing route 100 is then a projection of the matrix z.sub.0 into the XYZ space as described by equation (1027). A vector matrix z.sub.XYZ of size 3×HW is obtained.

(114) Then the matrix z.sub.XYZ is projected into the sRGB space by applying to it the matrix A as described in equations (1017) and (1018). A new matrix z.sub.sRGB of size 3×HW is obtained. The matrix z.sub.sRGB may then be unfolded to reconstitute the first three-component image 103 of size H×W in the sRGB colour space.

(115) The second route 101 carries out, on the vector matrix z.sub.0, the operation shown in FIG. 6b, to obtain the vector matrix y.sub.0 according to equation (1019). The vector matrix y.sub.0 is then transformed, as shown in FIG. 2b, into a matrix of HW/(hw) vectors of dimension P×hw. Then the vector matrix y.sub.0 is projected into the XYZ space by the function FtoXYZ as explained by relationship (1011). A matrix y.sub.0XYZ of three vectors of size HW is thus obtained. Then, the matrix y.sub.0XYZ is projected into the sRGB space by applying to it the matrix A as described in equations (1017) and (1018). A new matrix y.sub.0sRGB of size 3×HW is obtained. The matrix y.sub.0sRGB may then be unfolded to reconstitute the second three-component image 104 of size H×W in the sRGB colour space.

(116) A third route 102, starting from the multispectral reflectance image R(x,y,λ) acquired by the third sensor 55 covered with the third mosaic of filters 83, gives a matrix z.sub.1, as shown in FIG. 6a, of size P.sub.λ×n.sub.h×n.sub.w×HW/(hw). Starting from the matrix z.sub.1, a matrix x.sub.1 of size n.sub.h×n.sub.w×HW/(hw) can be obtained such that x.sub.1=M.sub.1F.sub.1.sup.TC.sub.1L.sub.1z.sub.1 according to relationship (1023). Then, by applying a demosaicing matrix D.sub.0 in the form of the demosaicing matrix D.sub.2 described in expression (1021), {tilde over (y)}.sub.1 of size P×hw×HW/(hw) such that {tilde over (y)}.sub.1=D.sub.0x.sub.1 is obtained. The transformation FtoXYZ is then applied to {tilde over (y)}.sub.1 to obtain the resultant image in the XYZ space according to equation (1011): {tilde over (y)}.sub.XYZ=(I.sub.hw⊗FtoXYZ){tilde over (y)}.sub.1, of size 3×HW. Finally, by applying the matrix A as defined by expression (1018), {tilde over (y)}.sub.sRGB is obtained such that {tilde over (y)}.sub.sRGB=(I.sub.hw⊗A){tilde over (y)}.sub.XYZ, of size 3×HW. The matrix {tilde over (y)}.sub.sRGB obtained is then unfolded in order to reconstruct the third three-component image 105 in the sRGB colour space.

(117) Starting from the different vector matrices shown in FIG. 9, several different ΔE can be calculated: ΔE.sub.1, ΔE.sub.2, ΔE.sub.3, as well as an MSE, which are to be minimized in order to optimize the spectral functions and their distribution on the third mosaic of filters 83. The calculations of the different ΔE are carried out according to formulae (1012), (1013), (1014) and (1015).

(119) A first ΔE.sub.1 is calculated between the image contained in the matrix z.sub.XYZ, produced directly starting from the multi-spectral reflectance image R(x,y,λ), and the image acquired directly via the filters F.sub.1, F.sub.2, F.sub.3 and then converted to the XYZ colour space: y.sub.0XYZ.

(120) A second ΔE.sub.2 is calculated between the image contained in the matrix z.sub.XYZ and the image expressed in the XYZ colour space after acquisition by means of the third mosaic of filters 83: {tilde over (y)}.sub.XYZ.

(121) A third ΔE.sub.3 is a purely spectral criterion, not using reconstruction of an image, for testing the colour rendering ability of the filters. To calculate the third ΔE.sub.3, the spectra of a Macbeth test chart, or colour chart, comprising 24 colours are used. Alternatively, any other spectral reference base can be used. Starting from the multi-spectral reflectance data of the 24 squares of the Macbeth test chart, on the one hand the transformation into the XYZ colour space is applied directly to obtain a fourth image; on the other hand the filters F.sub.1, F.sub.2, F.sub.3 and the sensitivity of the optical path of the camera are applied, and the image at the output of the third sensor 55 is then converted into the XYZ colour space to obtain a fifth image. The fourth and fifth images are used for calculating the third ΔE.sub.3.

(122) In their turn, the MSE and PSNR are calculated between the first image 103, produced directly starting from the multi-spectral reflectance image, and the third image, produced starting from the multi-spectral reflectance image after it passes through the third mosaic of filters 83 and the third sensor 55.

(123) FIG. 10 shows the demosaicing device and method according to the invention.

(124) A first step 100 is a step of acquisition of a colour image 101 by an image acquisition device 102. The image acquisition device 102 comprises a sensor 2, 55 as shown in FIGS. 1a, 5 and 8. The image acquisition device 102 further comprises a mosaic filter 3, 57, 83, or mosaic of filters, in the form of a matrix of H×W filters as shown in FIGS. 1a, 5 and 8. The mosaic of filters 3, 57, 83 is applied to the sensor. The device also comprises an optical device that causes the light captured by the camera to converge towards the sensor 2, 55 equipped with the mosaic filter 3, 57, 83. Thus, before reaching a cell of the sensor 2, 55, the light passes through one of the colour filters 7 of the mosaic of filters 3, 57, 83. A raw signal 103 originating from the sensor 2, 55 can be represented in the form of a mosaic image 40 of size H×W as shown in FIG. 4a. The mosaic image 40 is then transmitted to a first calculator, or computer, 104, which carries out a demosaicing operation 105 according to the invention. The demosaicing operation consists of multiplying the mosaic image 40, represented in matrix form x.sub.1 as shown in FIG. 4a, by a demosaicing matrix D.sub.0, D.sub.1, D.sub.2. The result of the demosaicing operation 105 is a vector matrix, which is transformed into an image with several colour components according to the inverse of the process shown in FIG. 2b, i.e. by unfolding the vector matrix to represent it in the form of an image with several components. The reconstructed image 106 can then be transmitted to a suitable means for exploitation thereof, such as a display 107.
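The demosaicing operation 105 can be sketched with toy dimensions; here a random matrix stands in for a trained D.sub.0, D.sub.1, D.sub.2, and the final unfolding ordering is illustrative rather than specified by the patent.

```python
import numpy as np

# Sketch of demosaicing operation 105: the mosaic image, represented as a
# vector matrix x1 with one neighbourhood column per base pattern, is
# multiplied by a demosaicing matrix D; the result is then unfolded into a
# P-component image. All shapes are toy values; D is random, not trained.
rng = np.random.default_rng(5)
h, w, P = 2, 2, 3                    # base pattern and number of colours
n_h, n_w = 4, 4                      # reduced neighbourhood
H, W = 8, 8                          # sensor size
n_patterns = (H * W) // (h * w)

x1 = rng.random((n_h * n_w, n_patterns))   # one column per base pattern
D = rng.random((P * h * w, n_h * n_w))     # demosaicing matrix (random stand-in)

y_tilde = D @ x1                           # reconstructed vector matrix
print(y_tilde.shape)                       # (12, 16): P*h*w rows, one column per pattern

# Illustrative unfolding: one (P, h, w) colour block per base-pattern position.
image = y_tilde.reshape(P, h, w, H // h, W // w)
```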

(126) FIG. 11 shows a method and its device for constructing a demosaicing matrix.

(127) The device for constructing a demosaicing matrix comprises a first database 110. The first database 110 comprises so-called reference images. The reference images may be multi-component images, or multi-spectral images. The reference images may also be images outside of the visible spectrum. Starting from the first database 110, a first step of constructing a demosaicing matrix is a step of modelling mosaic images from a sensor covered with a mosaic of filters. The modelling is carried out by a simulation application 111 that implements a model of the sensor 2, 55. The simulation application 111 may be executed by one or more processors of a second calculator, or computer 112. Alternatively, the simulation application 111 may be executed by the first calculator 104.

(128) Starting from each reference image, the simulation application 111 produces a mosaic image, which will populate a second database 114 of so-called mosaic images.

(129) Once the second database 114 has been constructed, the second step 115 of the method for constructing the demosaicing matrix can be implemented. The method for constructing the demosaicing matrix can be carried out by a computer program, which is executed on one or more processors of the second calculator 112. Alternatively, the method for constructing the demosaicing matrix can be carried out on a third calculator, or computer (not shown), or else on the first calculator 104. The method for constructing a demosaicing matrix uses the images of the two databases 110, 114. A reconstructed image is produced starting from a mosaic image of the second database 114 by applying to it a demosaicing matrix under test, i.e. using the demosaicing method according to the invention. The image thus reconstructed is then compared with the corresponding reference image in the first database 110. This comparison consists of calculating an error between the reference image and the image reconstructed by application of the demosaicing matrix under test. The method for constructing the demosaicing matrix is an iterative method: if the error of reconstruction is below a threshold, the method stops. Otherwise, a new demosaicing matrix is determined and tested.

(130) The different embodiments of the present invention comprise various steps. These steps may be carried out by machine instructions executable by a microprocessor, for example.

(131) Alternatively, these steps may be carried out by specific integrated circuits comprising hard-wired logic for executing the steps, or by any combination of programmable components and personalized components.

(132) The present invention may also be supplied in the form of a computer program product, which may comprise a non-transitory computer storage medium containing instructions executable on a data processing machine, these instructions being usable for programming a computer (or any other electronic device) for executing the methods.