Method for increasing the spatial resolution of a multispectral image from a panchromatic image
11416968 · 2022-08-16
CPC / International classification: G06T3/40 (Physics)
Abstract
A method for increasing the spatial resolution of an MS image using a PAN image. For a portion of the scene, values of parameters of a scene model are obtained according to a resemblance between a simulated MS reflectance and the MS reflectance. A relative variation in the simulated MS reflectance is determined with respect to a simulated PAN reflectance near the values of parameters obtained. A difference between the PAN reflectance and a reflectance of a PAN image with reduced spatial resolution is estimated. An MS image with increased spatial resolution is determined by adding to the MS reflectance a correction corresponding to the product of this difference and this relative variation. A corresponding image-processing system is also provided.
Claims
1. A method for increasing a spatial resolution of a multispectral (MS) image using a panchromatic (PAN) image having a spatial resolution greater than the spatial resolution of the MS image, the MS image comprising pixels representative of an MS reflectance of a scene, the PAN image comprising pixels representative of a PAN reflectance of said scene, comprising, for at least a portion of said scene: obtaining a set of values of parameters of a scene model, said scene model simulating a reflectance of the portion of said scene in bands of wavelengths corresponding to the MS image and to the PAN image to respectively provide simulated MS reflectance and simulated PAN reflectance, according to hypotheses associated with parameters on the portion of said scene, said set of values of parameters being obtained according to a resemblance between the simulated MS reflectance and the MS reflectance for the portion of said scene; determining a relative variation in the simulated MS reflectance with respect to the simulated PAN reflectance near said set of values of parameters; estimating a difference between the PAN reflectance and a reflectance of a PAN image with reduced spatial resolution, the difference being referred to as a high spatial resolution PAN modulation; and determining an MS image with increased spatial resolution, by adding to the MS reflectance a correction corresponding to a product of said high spatial resolution PAN modulation and said relative variation.
2. The method of claim 1, wherein in estimating said high spatial resolution PAN modulation, the PAN image with reduced spatial resolution has a spatial resolution corresponding to the spatial resolution of the MS image.
3. The method of claim 1, wherein, for the portion of said scene, obtaining said set of values of parameters comprises an optimized parameterisation of said scene model with respect to the resemblance between the simulated MS reflectance and the MS reflectance.
4. The method of claim 1, wherein the determination of the MS image with increased spatial resolution comprises: a spatial oversampling of the MS image so as to obtain an oversampled MS image; low-pass filtering of the PAN image so as to obtain a PAN image with reduced spatial resolution; and correction of the oversampled MS image in the portion of said scene according to said relative variation and said high spatial resolution PAN modulation.
5. The method of claim 1, wherein the determination of said relative variation in the simulated MS reflectance with respect to the simulated PAN reflectance comprises: determining a gradient of values of parameters of said scene model with respect to the simulated PAN reflectance near a reference simulated PAN reflectance corresponding to said set of values of parameters; determining a variation in the simulated MS reflectance according to values of parameters near said set of values of parameters; and composing said variation in the simulated MS reflectance and said gradient of values of parameters, providing said relative variation in the simulated MS reflectance with respect to the simulated PAN reflectance.
6. The method of claim 5, wherein, for the portion of said scene, the determination of said gradient comprises an optimized parameterisation of said scene model with respect to a resemblance between the simulated PAN reflectance and a vicinity of the reference simulated PAN reflectance.
7. The method of claim 6, wherein said optimized parameterisation comprises, for the portion of said scene considered, an optimization of a cost function comprising: a reflectance function representative of a resemblance between the vicinity of the reference simulated PAN reflectance and the simulated PAN reflectance for the values of parameters considered; and a function of a priori knowledge representative of a resemblance between the values of parameters considered and an a priori knowledge of parameters of the scene model.
8. The method of claim 7, wherein the a priori knowledge of the parameters of the scene model is a function of said set of values of parameters.
9. The method of claim 1, wherein the portion of said scene corresponds to a pixel and said relative variation is determined for each pixel.
10. The method of claim 1, wherein, the pixels being classified into groups of pixels and the portion of said scene corresponding to one of said groups of pixels, said relative variation is determined for each group of pixels.
11. The method of claim 10, wherein said relative variation of a group of pixels is determined according to a median value or an average value of the MS reflectances of the group of pixels considered.
12. The method of claim 1, wherein the scene model comprises a model of reflectance on the ground and an atmospheric model.
13. The method of claim 1, further comprising a previous conversion of the values of the pixels of the MS image and the values of the PAN image respectively into values of MS reflectance and values of PAN reflectance on the ground or at a top of the atmosphere.
14. A computer program product recorded on a non-transitory medium executable by a processor, comprising a set of program code instructions to implement the method for increasing spatial resolution of claim 1.
15. An image processing system to process images by increasing a spatial resolution of a multispectral (MS) image using a panchromatic (PAN) image having a spatial resolution greater than the spatial resolution of the MS image, the MS image comprising pixels representative of an MS reflectance of a scene, the PAN image comprising pixels representative of a PAN reflectance of said scene, the image processing system comprising at least one processor configured, for at least a portion of said scene, to: obtain a set of values of parameters of a scene model, said scene model simulating a reflectance of the portion of said scene in bands of wavelengths corresponding to the MS image and to the PAN image to respectively provide simulated MS reflectance and simulated PAN reflectance, according to hypotheses associated with parameters on the portion of said scene, said set of values of parameters being obtained according to a resemblance between the simulated MS reflectance and the MS reflectance for the portion of said scene; determine a relative variation in the simulated MS reflectance with respect to the simulated PAN reflectance near said set of values of parameters; estimate a difference between the PAN reflectance and a reflectance of a PAN image with reduced spatial resolution, the difference being referred to as a high spatial resolution PAN modulation; and determine an MS image with increased spatial resolution, by adding to the MS reflectance a correction corresponding to a product of said high spatial resolution PAN modulation and said relative variation, wherein said at least one processor is configured to implement the method for increasing the spatial resolution of claim 1.
Description
PRESENTATION OF THE DRAWINGS
(1) The invention will be better understood upon reading the following description, given as an example that is in no way limiting, and made in reference to the drawings.
(9) In these drawings, references identical from one drawing to another designate identical or analogous elements. For reasons of clarity, the elements shown are not to scale, unless otherwise mentioned.
DETAILED DESCRIPTION OF EMBODIMENTS
(10) In the rest of the description, in a non-limiting manner, the case considered is that of processing images acquired from a spacecraft of the satellite type. It should be specified, however, that the invention also applies to images acquired by an optical observation instrument on board an aircraft (airplane, balloon, drone, etc.), for example a high-altitude aircraft (altitude greater than 10 kilometres).
(11) Moreover, in the rest of the description, in a non-limiting manner, the case is considered in which the optical observation instrument is adapted to simultaneously acquire a multispectral image, called “MS image”, and a panchromatic image, called “PAN image”, of the same scene over which said satellite flies. It should be specified, however, that the invention also applies to MS and PAN images acquired by two different optical observation instruments, which can for example be on board the same satellite or different satellites (or even in different craft, respectively spacecraft and aircraft). Consequently, the invention also applies to the case of MS and PAN images acquired at different times, as long as said images are comparable in that, in particular, they represent substantially the same scene observed with substantially the same line of sight, and preferably with substantially the same sunshine conditions.
(13) Conventionally, the satellite 10 carries an optical observation instrument 11 that makes it possible to acquire an MS image and a PAN image of the scene observed.
(14) The MS image is in practice itself formed by a number N_j of elementary images (N_j ≥ 2), corresponding to the radiation received in different respective bands of wavelengths. For example, an MS image can consist of four elementary images (N_j = 4): an elementary image in the band of the red wavelengths, called “R band” (for example [625-695] nanometres), an elementary image in the band of the green wavelengths, called “G band” (for example [530-590] nanometres), an elementary image in the band of the blue wavelengths, called “B band” (for example [450-520] nanometres), and an elementary image in the band of the near-infrared wavelengths, called “NIR band” (for example [760-890] nanometres).
(15) The PAN image corresponds to the radiation received in a band of wavelengths that is typically wider than those of the elementary images of the MS image and that covers, for example, all the visible wavelengths. For example, the PAN image corresponds to the radiation received in a band of wavelengths of [450-745] nanometres.
(16) The PAN image has a spatial resolution higher than that of each of the elementary images of the MS image, as well as a spatial sampling distance smaller than that of each of said elementary images, so that a pixel of the PAN image represents a smaller surface area of the scene than a pixel of an elementary image of the MS image. Conventionally, the spatial resolution of an image corresponds to the size, for example expressed in metres, of the smallest object that can be detected in the scene represented by this image. The smaller the size of the smallest detectable object, the greater the spatial resolution of the image. The spatial sampling distance corresponds to the distance on the ground, for example expressed in metres, separating two adjacent pixels of the image.
(17) Once the MS image and the PAN image have been acquired by the optical observation instrument 11 of the satellite 10, said MS and PAN images are stored to be transmitted to a ground station 20 when the satellite 10 flies over it. Once transmitted, the MS image and the PAN image are subjected to various processing operations considered to be known to a person skilled in the art. This processing includes in particular a prior geometric correction of said MS and PAN images, for example to make them consistent with the same predetermined referencing system. The MS image and the PAN image are then provided to a processing device (not shown in the drawings) that can carry out the time-delayed processing aiming to increase the spatial resolution of the MS image using the PAN image, by implementing a method 50 for increasing spatial resolution.
(18) The processing device includes for example a processing circuit including one or more processors and storage means (magnetic hard disk, electronic memory, optical disk, etc.) in which data and a computer program product, in the form of a set of program code instructions to be executed to implement all or a part of the steps of the method 50 for increasing spatial resolution, are stored. Alternatively or in addition, the processing circuit includes one or more programmable logic circuits (FPGA, PLD, etc.), and/or one or more application-specific integrated circuits (ASIC), and/or a set of discrete electronic components, etc. adapted to implement all or a part of the steps of the method 50 for increasing spatial resolution.
(19) In other words, the processing circuit corresponds to a set of means configured in a software (specific computer program product) and/or hardware (FPGA, PLD, ASIC, etc.) manner to implement the various steps of the method 50 for increasing spatial resolution.
(20) The functionalities of the processing device can be included in a single apparatus or distributed over several apparatuses acting in cooperation. Processing on the ground, onboard processing, or a combination of the two processing modes can further be provided for.
(21) The MS image consists of pixels representative of the multispectral reflectance of the scene observed in each of the N_j bands of wavelengths considered, called “MS reflectance”. The PAN image consists of pixels representative of the panchromatic reflectance of said scene, called “PAN reflectance”.
(22) The MS and PAN reflectances are preferably reflectances on the ground (corrected for the effects of the atmosphere, at least for their predictable part, via Rayleigh correction) or at the top of the atmosphere. For this purpose, the method 50 for increasing spatial resolution can include, in preferred embodiments, a prior step (not shown in the drawings) of converting the values of the pixels of the MS image and of the PAN image into values of MS reflectance and PAN reflectance on the ground or at the top of the atmosphere, if this conversion has not already been carried out by other means. Such a conversion is considered to be known to a person skilled in the art.
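As a purely illustrative sketch of such a conversion to top-of-atmosphere reflectance in Python, the calibration convention (gain/offset) and the solar irradiance value being sensor-specific and hypothetical here:

```python
import numpy as np

def dn_to_toa_reflectance(dn, gain, offset, esun, sun_elevation_deg, d_au=1.0):
    """Convert raw digital numbers (DN) to top-of-atmosphere reflectance.

    Assumes the common calibration convention L = dn / gain + offset
    (sensor-specific), then applies the standard TOA formula
    rho = pi * L * d^2 / (E_sun * cos(theta_s)), with d the Earth-Sun
    distance in astronomical units and theta_s the solar zenith angle.
    """
    radiance = dn / gain + offset
    theta_s = np.deg2rad(90.0 - sun_elevation_deg)  # zenith angle from elevation
    return np.pi * radiance * d_au ** 2 / (esun * np.cos(theta_s))
```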
(24) The above steps (51 determining the optimal MS values, 52 determining an optimisation gradient and 53 determining an injection vector) are executed for a portion of the scene, and are repeated for each scene portion if the spatial resolution of the MS image must be increased in several portions of the observed scene.
(25) According to a first example, the scene portion corresponds to a pixel in high spatial resolution (that is to say the spatial resolution of the PAN image), so that the above steps are executed for each pixel in high spatial resolution considered. In other words, in such a case, an injection vector is calculated for each pixel in high spatial resolution considered, and preferably for all the pixels in high spatial resolution of the PAN image. Alternatively, the scene portion can correspond to a pixel in low spatial resolution (that is to say the spatial resolution of the MS image), so that the above steps are executed for each pixel in low spatial resolution considered. In other words, in such a case, an injection vector is calculated for each pixel in low spatial resolution considered, and preferably for all the pixels in low spatial resolution of the MS image, and an injection vector for each pixel in high spatial resolution can be obtained by oversampling the injection vectors obtained for the pixels in low spatial resolution.
(26) According to a second example, the scene portion corresponds to a group of pixels in high spatial resolution (that is to say at the spatial resolution of the PAN image), so that the above steps are executed for each group of pixels in high spatial resolution considered. In other words, in such a case, an injection vector is calculated for each group of pixels in high spatial resolution considered, and preferably so as to cover all the pixels in high spatial resolution of the PAN image. Alternatively, the scene portion can correspond to a group of pixels in low spatial resolution (that is to say at the spatial resolution of the MS image), so that the above steps are executed for each group of pixels in low spatial resolution considered. In other words, in such a case, an injection vector is calculated for each group of pixels in low spatial resolution considered, and preferably so as to cover all the pixels in low spatial resolution of the MS image, and an injection vector for each pixel in high spatial resolution can be obtained by oversampling the injection vectors obtained for the pixels in low spatial resolution.
(27) The method 50 for increasing spatial resolution of the MS image further includes a step 54 of calculating an MS image with increased spatial resolution according to the MS image, the PAN image and the injection vector(s).
(28) As indicated above, the method 50 for increasing spatial resolution uses a scene model, and possible examples of scene models are described in more detail below. Moreover, detailed embodiments of the steps of the method 50 are described below.
A) Scene Model
(29) As indicated above, the method 50 for increasing spatial resolution uses a scene model. Such scene models are considered to be known to a person skilled in the art, and the choice of a particular scene model constitutes an alternative embodiment.
(30) The scene model advantageously includes at least one model of ground reflectance that models the intrinsic reflectance of the scene observed. In preferred embodiments, the scene model further includes an atmospheric model that models the transfer function of the atmosphere between the scene and the satellite 10, and more particularly between the scene and the top of the atmosphere.
(31) The reader can for example refer to the document WO 2018/210647 (inventor Hervé Poilvé), which describes various types of models of ground reflectance and of parameterised atmospheric models.
(32) In the rest of the description, in a non-limiting manner, the case is considered in which the scene model includes both a ground reflectance model and an atmospheric model.
A.1) Model of Ground Reflectance
(34) The model of ground reflectance is for example based on a SAIL/PROSPECT model, which models in particular the reflectance of scenes corresponding to plant cover, the plants being the main contributor in the majority of the scenes observed from a satellite 10.
(35) The SAIL/PROSPECT model, also known by the name of PROSAIL model in the scientific literature, takes into account, conventionally, the direction of illumination of the scene by the sun as well as the look direction of the optical observation instrument (BRDF model, for Bidirectional Reflectance Distribution Function).
(36) The SAIL/PROSPECT model has been exhaustively validated and is routinely used by the scientific community. Examples include the scientific publication: “PROSPECT+SAIL Models: A Review of Use for Vegetation Characterization” by S. Jacquemoud, W. Verhoef, F. Baret, C. Bacour, P. J. Zarco-Tejada, G. P. Asner, C. Francois and S. L. Ustin, Remote Sensing of Environment 113, pp. S56-S66.
(37) It is also possible to enrich the model of ground reflectance, for example via:
- an a priori knowledge of the cover observed and of a predetermined range of plant density (ranging for example from tropical forest to semi-arid region),
- a water component, which implements for example a model of radiative transfer using the same formalism as the SAIL model and the optical properties of water as characterised in potentially shallow and turbid waters, called waters of the CASE II type (see for example “Variations in the Light Absorption Coefficients of Phytoplankton, Nonalgal Particles, and Dissolved Organic Matter in Coastal Waters Around Europe”, Babin et al., Journal of Geophysical Research, 108, 3211), if it is known a priori that a strong presence of water is possible in the scene observed (for example coastal zone, lakes, etc.),
- a predetermined spectral response of the ground, selected according to an a priori knowledge of the type of ground observed in the scene, when the contribution of the ground is liable to be significant with respect to the contribution of the plant cover (mountain, desert, etc.),
- a modelling of other characteristics liable to influence the reflectance of the scene observed, for example a significant presence of burned zones, snow-covered zones, artificial surfaces having a predetermined spectral signature, etc.
A.2) Atmospheric Model
(38) If necessary, the atmospheric model includes for example a model of the LOWTRAN type (see for example “Users Guide to LOWTRAN 7”, F. X. Kneisys et al., 1988, Air Force Geophysics Lab Hanscom AFB MA) and, preferably, a cloud model.
(39) For a model of the LOWTRAN type, the guiding parameter is generally the visibility distance, in relation to the load of aerosols. The optical properties of aerosols can be deduced from the call of the LOWTRAN model, by comparison of the results provided by said LOWTRAN model while considering on the one hand an absence of aerosols (maximum visibility) and, on the other hand, a particular type of aerosol and a reference value of the visibility distance. Thus, it is possible to establish relationships (look-up tables) between the visibility-distance parameter of the LOWTRAN model and the optical thickness of the layer of aerosols, and to use said visibility distance as a parameter of said aerosol model.
(40) Clouds are for example modelled as a layer of turbid medium with a Henyey-Greenstein phase function and an asymmetry parameter adapted to the respective behaviours of aerosols and of clouds. For the cloud model, the transfer functions are for example expressed according to the 4-flux formalism as developed in the SAIL model.
(41) The optical properties of clouds are well known and described in the literature, and can be used to parameterise the cloud model, and to establish a relationship between the optical thickness of a cloud and the apparent reflectance of said cloud, for example to use the cloud optical thickness as a parameter of the cloud model.
A.3) Selection of the Scene Model
(43) In order to be able to process images of scenes located at different locations on the surface of the Earth, it is possible, in specific embodiments, to store in a database a library of scene models. Each scene model stored in this database corresponds to a particular combination of a model of ground reflectance and an atmospheric model, adapted to a particular combination of type of landscape and climatic conditions.
(44) By classifying various zones on the surface of the Earth according to their type of landscape and their climatic conditions, it is possible to establish one or more geographic maps making it possible to select, for each zone on the surface of the Earth, the scene model most adapted to the type of landscape and to the climatic conditions encountered in this zone.
(45) Thus, the selection mainly involves, in the case in which a global library of scene models has been previously formed in a database, identifying the zone in which the scene observed is located and obtaining in the database the scene model associated with said zone.
(46) The scene model considered is thus controlled by a set of parameters v = (v_k), k = 1…N_k, which describe both the nature of the elements present in the scene portion considered and the atmospheric conditions. Moreover, it should be noted that the scene model considered can optionally vary from one scene portion to another in the case of scene portions of very different natures.
(47) The scene model preferably covers the entire optical range, from 0.4 micrometres to 2.5 micrometres, with a fine spectral resolution of approximately a few nanometres to a few tens of nanometres. This makes it possible to simulate the reflectance of the scene both in the bands of wavelengths of the MS image (hereinafter “MS bands”) and in the band of wavelengths of the PAN image (hereinafter “PAN band”), according to their respective spectral responses. Thus, for a set of parameters v, ρ_model-MS(v) and ρ_model-PAN(v) hereinafter designate the simulated reflectances provided by the scene model in the MS bands and in the PAN band, respectively.
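As an illustration of this last point, the simulated band reflectances ρ_model-MS(v) and ρ_model-PAN(v) can be obtained by integrating the finely sampled spectrum produced by the scene model against the spectral response of each band. A minimal Python sketch, in which all names are illustrative:

```python
import numpy as np

def band_reflectances(wavelengths_nm, spectrum, responses):
    """Integrate a finely sampled simulated spectrum over spectral responses.

    wavelengths_nm : (N,) sampling grid covering e.g. 400-2500 nm
    spectrum       : (N,) simulated scene reflectance rho_model(lambda)
    responses      : (B, N) relative spectral responses of the B bands
                     (the N_j MS bands and/or the PAN band)
    Returns the response-weighted mean reflectance of each band, shape (B,).
    """
    dw = np.gradient(wavelengths_nm)                 # width of each spectral sample
    num = np.sum(responses * spectrum * dw, axis=1)  # response-weighted spectrum
    den = np.sum(responses * dw, axis=1)             # response normalisation
    return num / den
```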
B) Example of Implementation on Pixels in High Spatial Resolution
(49) In the example described in this section, the step 54 of calculating the MS image with increased spatial resolution includes a step 540 of spatially oversampling the MS image and a step 541 of low-pass filtering the PAN image.
(50) The step 54 of calculating the MS image with increased spatial resolution further includes a step 542 of correcting the oversampled MS image that is executed for each scene portion considered, that is to say in the present example for each pixel of the oversampled MS image. The correction of the oversampled MS image in a given pixel (scene portion) is carried out according to the injection vector calculated for said pixel, and the respective PAN reflectances of the PAN image and of the PAN image with reduced spatial resolution for said pixel.
B.1) Spatial Oversampling of the MS Image
(52) In the example considered, the step 540 consists of spatially oversampling the MS image to the spatial sampling distance of the PAN image, so as to obtain an oversampled MS image having the same spatial sampling distance as the PAN image.
(53) At the end of the step 540 of spatial oversampling, the number of pixels in the MS image has been increased so that a pixel of the oversampled MS image represents substantially the same surface area of the scene as a pixel of the PAN image. The spatial resolution of the oversampled MS image, despite having the same spatial sampling distance as the PAN image, is not comparable to that of the PAN image, and is still limited by the spatial resolution of the initial acquisition of the MS image. The following steps of the method 50 for increasing spatial resolution aim precisely to correct, according to the PAN image, the oversampled MS image, to obtain an MS image with increased spatial resolution closer to what an MS image acquired directly at the spatial resolution of the PAN image would have been.
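A minimal sketch of this oversampling step in Python, assuming the MS image is held as a reflectance cube; the use of bicubic spline interpolation is an illustrative choice, not imposed by the method 50:

```python
from scipy.ndimage import zoom

def oversample_ms(ms_image, factor):
    """Spatially oversample each MS band to the PAN sampling grid.

    ms_image : (rows, cols, n_bands) MS reflectance cube
    factor   : ratio of the MS to PAN spatial sampling distances (e.g. 4)
    Bicubic interpolation (order=3) reduces the sampling distance only;
    the effective spatial resolution remains that of the original MS image.
    """
    return zoom(ms_image, (factor, factor, 1), order=3)
```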
B.2) Determining the Optimal MS Values
(55) The method 50 includes a step 51 of determining, for each pixel considered, optimal values of the parameters of the scene model, called “optimal MS values”, according to a resemblance between the simulated MS reflectance provided by the scene model and the MS reflectance of the oversampled MS image.
(56) It should be noted that during the step 51 of determining optimal MS values, the parameterisation of the scene model is optimised at least with respect to the oversampled MS image. It is possible, however, according to other examples, to also consider the PAN image. In other words, it is possible to optimise the parameterisation of the scene model with respect to both the MS reflectance of the oversampled MS image and the PAN reflectance of the PAN image.
(57) In the rest of the description, in a non-limiting manner, the case is considered in which only the oversampled MS image is taken into account for determining the optimal MS values.
(58) As indicated above, a particular set of values of the parameters of the scene model makes it possible to calculate, in each pixel considered, a simulated MS reflectance, which can be compared to the MS reflectance of the oversampled MS image.
(59) For example, the optimisation aims to maximise the resemblance between the simulated MS reflectance, provided by the scene model, and the MS reflectance of the oversampled MS image, that is to say that it aims to determine the “optimal” values of said parameters that yield, for the pixel considered, a maximum resemblance between the simulated MS reflectance and the MS reflectance of the oversampled MS image. However, other types of optimisation can be considered, and an optimisation process generally includes the prior definition of a cost function to be optimised, that is to say to be minimised or maximised according to the type of cost function. The choice of a particular cost function is merely an alternative embodiment.
(60) As indicated above, the cost function preferably includes a first term, called “reflectance function”, which calculates a resemblance, for the pixel considered, between the simulated MS reflectance and the MS reflectance of the oversampled MS image.
(61) In specific embodiments, the cost function can further include a second term, called “function of a priori knowledge”, which calculates a resemblance, for the pixel considered, between the values of parameters considered and an a priori knowledge of the parameters of the scene model. Such arrangements make it possible to improve the determination of the optimal MS values, by using any a priori knowledge of the statistical distribution of the parameters of the scene model.
(62) The cost function C used to determine the optimal MS values of the scene model can be expressed in the following form:
C(v, \rho_{MS}(p)) = C_1\left(\rho_{model-MS}(p)(v), \rho_{MS}(p)\right) + C_2(v, v_{prior})
an expression in which:
- C_1 corresponds to the reflectance function,
- C_2 corresponds to the optional function of a priori knowledge,
- ρ_MS(p) corresponds to the MS reflectance for the pixel p considered, provided by the oversampled MS image,
- ρ_model-MS(p)(v) corresponds to the simulated MS reflectance for the pixel p considered, provided by the scene model for the values of parameters v,
- v_prior corresponds to the a priori knowledge of the parameters of the scene model.
(63) According to a first example, the resemblance calculated by the reflectance function C_1 corresponds to a quadratic deviation, which can be expressed in the following form:
(64) C_1\left(\rho_{model-MS}(p)(v), \rho_{MS}(p)\right) = \sum_{j=1}^{N_j} \left(\rho_{model-MS,j}(p)(v) - \rho_{MS,j}(p)\right)^2
an expression in which:
- ρ_MS,j(p) corresponds to the MS reflectance for the pixel p considered, provided by the oversampled MS image for the j-th band of wavelengths out of the N_j MS bands (if N_j = 4, the four bands of wavelengths are for example the bands R, G, B and NIR),
- ρ_model-MS,j(p)(v) corresponds to the simulated MS reflectance for the pixel p considered, provided by the scene model for the values of parameters v and for the j-th band of wavelengths out of the N_j MS bands.
(65) According to a second example, the resemblance calculated by the reflectance function C_1 corresponds to a normalised quadratic deviation, which can be expressed in the following form:
(66) C_1\left(\rho_{model-MS}(p)(v), \rho_{MS}(p)\right) = \sum_{j=1}^{N_j} \frac{\left(\rho_{model-MS,j}(p)(v) - \rho_{MS,j}(p)\right)^2}{E\left(\left(\rho_{model-MS,j} - \rho_{MS,j}\right)^2\right)}
an expression in which E((ρ_model-MS,j − ρ_MS,j)²) is an estimation of the level of precision that can be targeted in the adjustment between the scene model and the oversampled MS image. This level of precision is for example estimated while taking into account the radiometric noise, the precision of the radiometric calibration, etc. It is also possible to take into account an intrinsic level of precision of the scene model, which can be a predetermined fixed value.
(67) It should be noted that, in the case in which the PAN image is also used to determine the optimal MS values, the PAN reflectance of said image can be included in the above expressions of the reflectance function C_1 by considering that there are (N_j + 1) different bands of wavelengths, and that the PAN reflectance ρ_PAN(p) of the PAN image corresponds to the reflectance measured in the (N_j + 1)-th band of wavelengths, which is compared to the simulated PAN reflectance ρ_model-PAN(p)(v) for the pixel p considered, provided by the scene model for the values of parameters v considered.
(68) The optional function of a priori knowledge C_2 can for example be expressed in the following form:
(69) C_2(v, v_{prior}) = \sum_{k=1}^{N_k} \frac{\left(v_k - E(v_k)\right)^2}{\sigma(v_k)^2}
an expression in which E(v_k) and σ(v_k) constitute the a priori knowledge v_prior and respectively correspond to the average and to the standard deviation of the parameter v_k (1 ≤ k ≤ N_k), for example assumed to be a random variable with a Gaussian distribution.
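For a single pixel, the cost function C = C_1 + C_2 in its normalised form could be sketched as follows in Python; the scene model is abstracted as a callable, and all names are illustrative:

```python
import numpy as np

def cost(v, rho_ms, model_ms, rho_ms_err2, v_mean, v_sigma):
    """Cost C(v) = C1 + C2 for one pixel, in the normalised form above.

    model_ms(v)     : callable returning simulated MS reflectances, shape (N_j,)
    rho_ms          : measured MS reflectances of the pixel, shape (N_j,)
    rho_ms_err2     : targeted precision E((rho_model-MS,j - rho_MS,j)^2) per band
    v_mean, v_sigma : a priori mean E(v_k) and standard deviation sigma(v_k)
    """
    c1 = np.sum((model_ms(v) - rho_ms) ** 2 / rho_ms_err2)  # reflectance term
    c2 = np.sum(((v - v_mean) / v_sigma) ** 2)              # a priori term
    return c1 + c2
```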
(70) In the case of a cost function C as described above, the optimisation corresponds to a minimisation of said cost function, and the optimal MS values v̂_MS for the pixel considered are those that minimise said cost function:
(71) \hat{v}_{MS}(p) = \arg\min_v C\left(v, \rho_{MS}(p)\right)
(72) In general, any optimisation method can be implemented, and the choice of a particular method is merely an alternative embodiment. In preferred embodiments, the optimisation is carried out by using a Gauss-Newton algorithm.
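A generic Gauss-Newton loop of the kind referred to above could be sketched as follows; the residuals and Jacobian are abstracted as callables, the whitened residuals being assumed to stack the reflectance and a priori terms so that their sum of squares equals the cost function:

```python
import numpy as np

def gauss_newton(v0, residuals, jacobian, n_iter=20, tol=1e-8):
    """Minimal Gauss-Newton minimisation of sum(residuals(v)**2).

    residuals(v) -> stacked, whitened residuals (reflectance + a priori)
    jacobian(v)  -> Jacobian of those residuals with respect to v
    """
    v = np.asarray(v0, dtype=float)
    for _ in range(n_iter):
        r = residuals(v)
        J = jacobian(v)
        # Gauss-Newton step: solve J @ step ~= -r in the least-squares sense
        step, *_ = np.linalg.lstsq(J, -r, rcond=None)
        v = v + step
        if np.linalg.norm(step) < tol:
            break
    return v
```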
B.3) Determining the Optimisation Gradient
(74) The method 50 then includes a step 52 of determining an optimisation gradient for each pixel considered.
(75) The optimisation gradient is representative, near the simulated PAN reflectance provided by the scene model for the optimal MS values, of the variation in the values of the parameters with respect to the variation in PAN reflectance, for a predetermined cost function and for the pixel in high spatial resolution considered. This step 52 of determining an optimisation gradient is executed, in the example illustrated by
(76) In general, any cost function can be implemented and the choice of a cost function is merely an alternative embodiment.
(77) Everything that has been described above in section B.2 with regard to the cost function used to determine the optimal MS values also applies for the cost function to be used to determine the optimisation gradient, with the notable exception that only the PAN band is taken into account to determine the optimisation gradient (the MS bands are not taken into account to determine the optimisation gradient).
(78) As indicated above, the cost function preferably includes a reflectance function that calculates a resemblance between PAN reflectances. To determine the optimisation gradient, a simulated PAN reflectance, provided by the scene model for the values of parameters considered, is compared to a simulated PAN reflectance determined according to the scene model and the optimal MS values.
(79) In specific embodiments, the cost function can further include a function of a priori knowledge which calculates a resemblance, for the pixel considered, between the values of parameters considered and an a priori knowledge of the parameters of the scene model.
(80) By designating by ρ̂_PAN(p) = ρ_model-PAN(p)(v̂_MS(p)) the simulated PAN reflectance provided by the scene model for the optimal MS values, and by δρ_PAN a predetermined deviation for the PAN reflectance, the cost function C′ used to determine the optimisation gradient can be expressed in the following form:
C'(v, \hat{\rho}_{PAN}(p) + \delta\rho_{PAN}) = C'_1\left(\rho_{model-PAN}(p)(v), \hat{\rho}_{PAN}(p) + \delta\rho_{PAN}\right) + C'_2(v, v_{prior})
an expression in which:
- C′_1 corresponds to the reflectance function,
- C′_2 corresponds to the optional function of a priori knowledge,
- ρ_model-PAN(p)(v) corresponds to the simulated PAN reflectance for the pixel p considered, provided by the scene model for the values of parameters v.
(81) According to a first example, the reflectance function C′_1 can be expressed in the following form:
C'_1\left(\rho_{model-PAN}(p)(v), \hat{\rho}_{PAN}(p) + \delta\rho_{PAN}\right) = \left(\rho_{model-PAN}(p)(v) - \left(\hat{\rho}_{PAN}(p) + \delta\rho_{PAN}\right)\right)^2
(82) According to a second example, the reflectance function C′_1 can be expressed in the following form:
(83) C'_1\left(\rho_{model-PAN}(p)(v), \hat{\rho}_{PAN}(p) + \delta\rho_{PAN}\right) = \frac{\left(\rho_{model-PAN}(p)(v) - \left(\hat{\rho}_{PAN}(p) + \delta\rho_{PAN}\right)\right)^2}{E\left(\left(\rho_{model-PAN} - \left(\hat{\rho}_{PAN} + \delta\rho_{PAN}\right)\right)^2\right)}
an expression in which E((ρ_model-PAN − (ρ̂_PAN + δρ_PAN))²) is an estimation of the level of precision that can be targeted in the adjustment between the scene model and the simulated PAN reflectance provided by the scene model for the optimal MS values, which can be estimated as described above in reference to the step 51 of determining the optimal MS values.
(84) The optional function of a priori knowledge C′_2 can for example be expressed, as above, in the following form:
(85) C'_2(v, v_{prior}) = \sum_{k=1}^{N_k} \frac{\left(v_k - E(v_k)\right)^2}{\sigma(v_k)^2}
(86) In preferred embodiments, the a priori knowledge of the parameters of the scene model, used during the step 52 of determining the optimisation gradient, is calculated according to the optimal MS values. In other words, the average value E(v_k) and the standard deviation σ(v_k) are estimated according to the optimal MS values. By designating by v̂_MS,k(p) the optimal MS value for the k-th parameter (1 ≤ k ≤ N_k) of the scene model, the average value E(v_k) and the standard deviation σ(v_k) are for example estimated as follows for the pixel considered:
(87) E(v_k) = \frac{1}{N_p} \sum_{p=1}^{N_p} \hat{v}_{MS,k}(p), \qquad \sigma(v_k) = K_\sigma \sqrt{\frac{1}{N_p} \sum_{p=1}^{N_p} \left(\hat{v}_{MS,k}(p) - E(v_k)\right)^2}
an expression in which:
- N_p corresponds to the number of pixels of the oversampled MS image,
- K_σ is a predetermined real number, chosen for example from the interval [1, 5].
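A minimal sketch, assuming (as in the expressions above) that the a priori statistics are the image-wide mean and K_σ-scaled standard deviation of the optimal MS values:

```python
import numpy as np

def prior_from_ms_values(v_ms_hat, k_sigma=2.0):
    """A priori mean/std for step 52, derived from the optimal MS values.

    v_ms_hat : (N_p, N_k) optimal MS values of all pixels
    k_sigma  : predetermined real number, e.g. chosen in [1, 5]
    """
    v_mean = v_ms_hat.mean(axis=0)
    v_sigma = k_sigma * v_ms_hat.std(axis=0)
    return v_mean, v_sigma
```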
(88) The optimisation gradient is determined by optimisation of the cost function, which aims to invert the scene model, near the simulated PAN reflectance provided by the scene model for the optimal MS values. The optimisation gradient can be determined according to any method for optimising a cost function known to a person skilled in the art, and the choice of a particular method is merely an alternative embodiment. In preferred embodiments, the optimisation gradient is determined by minimising the cost function by using a Gauss-Newton algorithm.
(89) In the case of a cost function C′ as described above, the optimisation corresponds to a minimisation of said cost function, and the determination of the optimisation gradient involves for example the determination of optimal PAN values v̂_PAN according to the following expression:
(90) \hat{v}_{PAN}(p) = \arg\min_v C'\left(v, \hat{\rho}_{PAN}(p) + \delta\rho_{PAN}\right)
(91) In such a case, for the pixel considered, the optimisation gradient near the simulated PAN reflectance provided by the scene model for the optimal MS values, designated by (∂v̂/∂ρ_PAN)_{ρ=ρ̂_PAN(p)}, is for example calculated according to the following expression:
(92) \left(\frac{\partial \hat{v}}{\partial \rho_{PAN}}\right)_{\rho = \hat{\rho}_{PAN}(p)} = \frac{\hat{v}_{PAN}(p) - \hat{v}_{MS}(p)}{\delta\rho_{PAN}}
(93) Such an optimisation gradient is thus calculated, in the example considered, in each pixel in high spatial resolution.
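Read as a finite difference between the optimal PAN values and the optimal MS values, the optimisation gradient for one pixel could be sketched as follows; fit_pan is a hypothetical helper that minimises C′ for a given target PAN reflectance:

```python
def optimisation_gradient(v_ms_hat, fit_pan, rho_pan_hat, delta_rho=1e-3):
    """Gradient of the optimal parameters with respect to PAN reflectance.

    v_ms_hat    : optimal MS values of the pixel, shape (N_k,)
    fit_pan     : callable (target_rho, v_init) -> parameters minimising C'
    rho_pan_hat : simulated PAN reflectance at the optimal MS values
    """
    v_pan_hat = fit_pan(rho_pan_hat + delta_rho, v_ms_hat)
    return (v_pan_hat - v_ms_hat) / delta_rho  # shape (N_k,)
```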
B.4) Determining the Injection Vector
(95) The method 50 then includes a step 53 of determining an injection vector for each pixel considered.
(96) The injection vector for a pixel is, in preferred embodiments, calculated according to the optimisation gradient calculated for this pixel and according to a matrix of variation of the scene model. This variation matrix corresponds to the Jacobian matrix of the scene model at the optimal MS values calculated for the pixel considered, and is representative of the variation in MS reflectance with respect to the variation in the values of the parameters near the optimal MS values. Such a variation matrix is provided directly by the scene model, or can be determined on the basis of the scene model according to known methods.
(97) The injection vector for a pixel p is for example calculated according to the following expression:
(98) \left(\frac{\partial \rho_{MS}}{\partial \rho_{PAN}}\right)_{\rho = \hat{\rho}_{PAN}(p)} = \left(\frac{\partial \rho_{model-MS}}{\partial v}\right)_{v = \hat{v}_{MS}(p)} \cdot \left(\frac{\partial \hat{v}}{\partial \rho_{PAN}}\right)_{\rho = \hat{\rho}_{PAN}(p)}
an expression in which:
- (∂ρ_MS/∂ρ_PAN)_{ρ=ρ̂_PAN(p)} corresponds to the injection vector for the pixel p considered,
- (∂ρ_model-MS/∂v)_{v=v̂_MS(p)} corresponds to the variation matrix (Jacobian matrix) of the scene model at the optimal MS values,
- (∂v̂/∂ρ_PAN)_{ρ=ρ̂_PAN(p)} corresponds to the optimisation gradient calculated for the pixel p considered.
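The composition of the Jacobian with the optimisation gradient then reduces to a matrix-vector product, as in the following sketch (names are illustrative):

```python
def injection_vector(jac_ms, grad_v):
    """Injection vector for one pixel.

    jac_ms : (N_j, N_k) Jacobian of the scene model, d rho_model-MS / d v,
             evaluated at v = v_ms_hat
    grad_v : (N_k,) optimisation gradient d v_hat / d rho_PAN
    Returns d rho_MS / d rho_PAN: one injection coefficient per MS band.
    """
    return jac_ms @ grad_v
```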
B.5) Low-Pass Filtering of the PAN Image
(99) The method 50 further includes the step 541 of low-pass filtering the PAN image.
(100) The low-pass filtering of the PAN image makes it possible to obtain a PAN image with reduced spatial resolution, that is to say an estimation of what the PAN image would have been if it had been acquired with the same spatial resolution as the MS image and then oversampled. Thus, the PAN image with reduced spatial resolution and the oversampled MS image have substantially the same spatial resolution.
(101) The low-pass filtering is thus a spatial filtering of the PAN image. Preferably, the low-pass filter used is a Gaussian convolution filter representative of the effective spatial resolution of the MS bands, according to the Modulation Transfer Function (or MTF) of the optical observation instrument 11 in the MS bands.
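A minimal sketch of this filtering in Python, assuming a Gaussian kernel whose width (in PAN pixels) has been derived beforehand from the MTF of the instrument in the MS bands; the value of sigma is a placeholder:

```python
from scipy.ndimage import gaussian_filter

def pan_reduced_resolution(pan_image, sigma_pixels=1.7):
    """Low-pass filter the PAN image down to the effective MS resolution.

    sigma_pixels would in practice be derived from the MTF of the optical
    observation instrument in the MS bands; 1.7 is only illustrative.
    """
    return gaussian_filter(pan_image, sigma=sigma_pixels)
```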
B.6) Correcting the Oversampled MS Image
(103) The method 50 finally includes the step 542 of correcting the oversampled MS image, executed for each pixel considered.
(104) The step 542 of correction aims to transfer to the oversampled MS image the high spatial resolution modulations observed in the PAN band with the PAN image. For this purpose, the step 542 of correction includes for example an estimation of the high spatial resolution modulation of the PAN reflectance, then a conversion of this modulation into a high spatial resolution modulation of the MS reflectance. The high spatial resolution modulation of the MS reflectance can then be added to the oversampled MS image to obtain the MS image with increased spatial resolution at the same spatial resolution as the PAN image, corresponding to the reality of the measurement and thus close to what it would have been if it had been acquired directly at the spatial resolution of the PAN image.
(105) The high spatial resolution modulation of the PAN reflectance is for example estimated by comparing the PAN image to the PAN image with reduced spatial resolution. By designating by ρ_PAN(p) the PAN reflectance of the pixel p of the PAN image, and by ρ_PAN-low(p) the PAN reflectance of the pixel p of the PAN image with reduced spatial resolution, the high spatial resolution modulation mod_PAN(p) of the PAN reflectance for the pixel p is for example calculated according to the following expression:
mod_{PAN}(p) = \rho_{PAN}(p) - \rho_{PAN-low}(p)
(106) The high spatial resolution modulation of the MS reflectance is for example estimated according to the high spatial resolution modulation of the PAN reflectance and according to the injection vector. The high spatial resolution modulation mod_MS(p) of the MS reflectance for the pixel p is for example calculated according to the following expression:
(107) mod_{MS}(p) = \left(\frac{\partial \rho_{MS}}{\partial \rho_{PAN}}\right)_{\rho = \hat{\rho}_{PAN}(p)} \cdot mod_{PAN}(p)
(108) By designating by ρ_MS-high(p) the MS reflectance of the pixel p of the oversampled MS image, and by ρ̂_MS-high(p) the MS reflectance of the pixel p of the MS image with increased spatial resolution obtained after correction of the oversampled MS image, the correction of the oversampled MS image is thus carried out, in each pixel (1 ≤ p ≤ N_p), according to the following expression:
\hat{\rho}_{MS-high}(p) = \rho_{MS-high}(p) + mod_{MS}(p)
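Putting the last three expressions together, the whole correction step could be sketched as follows; the arrays are assumed to be already co-registered on the PAN sampling grid:

```python
def sharpen_ms(ms_high, pan, pan_low, injection_vectors):
    """Inject the high spatial resolution PAN modulation into the MS image.

    ms_high           : (rows, cols, N_j) oversampled MS image
    pan, pan_low      : (rows, cols) PAN image and its low-pass version
    injection_vectors : (rows, cols, N_j) injection vector of each pixel
    """
    mod_pan = pan - pan_low                          # high-resolution modulation
    mod_ms = injection_vectors * mod_pan[..., None]  # converted to the MS bands
    return ms_high + mod_ms                          # corrected MS image
```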
C) Embodiment on Pixels in Low Spatial Resolution
(110) In this embodiment, the method 50 can include a step 55 of spatially undersampling the PAN image to the spatial sampling distance of the MS image.
(111) The step 55 of spatial undersampling of the PAN image is optional, and is present, in particular, if the PAN reflectance of the PAN image is used to optimise the scene model during the step 51 of determining the optimal MS values.
(112) The method 50 for increasing spatial resolution then includes the steps 51 of determining the optimal MS values, 52 of determining the optimisation gradient and 53 of determining the injection vector, which are executed for each pixel of the MS image. Everything that was described above in sections B.2, B.3 and B.4 also applies, the only difference being that the pixels considered here are pixels in low spatial resolution (at the spatial sampling distance of the MS image). By designating by N_m the number of pixels in the MS image (N_m < N_p), N_m injection vectors are thus calculated at first.
(113) Each injection vector consists of N_j injection coefficients respectively associated with the various MS bands (for example with the bands R, G, B and NIR). In each MS band, an elementary injection image is therefore available, the N_m pixels of which correspond to the N_m injection coefficients calculated for this band of wavelengths. These N_j elementary injection images together form an image of injection vectors.
(114) The method 50 then includes a step 56 of spatially oversampling the image of injection vectors, so as to obtain an injection vector for each pixel of the oversampled MS image.
(115) The steps 540 of spatially oversampling the MS image, 541 of low-pass filtering the PAN image and 542 of correcting the oversampled MS image are then executed as described above in reference to sections B.1, B.5 and B.6.
D) Embodiment on Groups of Pixels in High Spatial Resolution
(117) The method 50 for increasing spatial resolution of this embodiment differs from the previous embodiments mainly in that an injection vector is determined for each group of pixels in high spatial resolution, rather than for each pixel.
(118) To that end, the method 50 includes a step 57 of classifying the pixels into groups of pixels according to a predetermined classification criterion.
(119) In general, any classification criterion known to a person skilled in the art can be implemented, and the choice of a particular classification criterion is merely an alternative embodiment.
(120) In particular, it is possible to use a vegetation criterion of the NDVI type (Normalized Difference Vegetation Index) by calculating, in each pixel, the NDVI index according to the expression (ρ_NIR − ρ_R)/(ρ_NIR + ρ_R), an expression in which ρ_NIR and ρ_R correspond to the reflectances measured in the NIR band and the R band, respectively.
(121) Alternatively or in addition, it is possible to use a criterion of average brightness level by calculating, in each pixel, the expression (1/N_j)·Σ_{j=1}^{N_j} ρ_MS,j(p), that is to say the average of the MS reflectances of the pixel over the N_j MS bands.
(122) It is thus possible to group together the pixels that have close NDVI indices and/or that have close average brightness levels.
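A purely illustrative sketch of such a grouping, quantising NDVI and average brightness into bins and grouping the pixels that fall in the same bin; the bin counts are illustrative, not prescribed by the method 50:

```python
import numpy as np

def classify_pixels(rho_nir, rho_r, brightness, n_ndvi_bins=8, n_bright_bins=8):
    """Group pixels by quantised NDVI and average brightness level.

    Returns one integer label per pixel; pixels sharing a label form a
    group for which a single injection vector is computed.
    """
    ndvi = (rho_nir - rho_r) / (rho_nir + rho_r + 1e-12)  # avoid division by zero
    ndvi_edges = np.linspace(-1.0, 1.0, n_ndvi_bins + 1)[1:-1]
    bright_edges = np.quantile(brightness,
                               np.linspace(0, 1, n_bright_bins + 1)[1:-1])
    ndvi_bin = np.digitize(ndvi, ndvi_edges)            # 0 .. n_ndvi_bins-1
    bright_bin = np.digitize(brightness, bright_edges)  # 0 .. n_bright_bins-1
    return ndvi_bin * n_bright_bins + bright_bin
```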
(123) At the end of the step 57 of classification, a number N_g of groups of pixels (N_g ≥ 1) is therefore available. In general, an injection vector is then calculated for each group of pixels, and the injection vector calculated for a group of pixels is used for all the pixels of this group. Consequently, the steps 51 of determining the optimal MS values, 52 of determining an optimisation gradient and 53 of determining the injection vector are no longer executed for each of the N_p pixels, but for each of the N_g groups of pixels, which in principle greatly reduces the quantity of calculations to be carried out. During the step 51 of determining the optimal MS values for a group of pixels, the parameterisation of the scene model is for example optimised with respect to a reference MS reflectance, representative of the MS reflectances of the various pixels of the group considered. For example, the reference MS reflectance of a group of pixels corresponds to a median value or to an average value of the MS reflectances of the group of pixels considered. In the case in which the PAN band is also used during the step 51 of determining the optimal MS values, it is possible in the same way to consider a reference PAN reflectance, which can be a median value or an average value of the PAN reflectances of the group of pixels considered.
(124) Once an injection vector has been determined for each group of pixels, given that the injection vector determined for a group of pixels is used for all the pixels of this group of pixels, an injection vector is available for each pixel of the oversampled MS image, to be used during the step 542 of correction.
E) Embodiment on Groups of Pixels in Low Spatial Resolution
(126) The method 50 for increasing spatial resolution of this embodiment differs from the previous one mainly in that the groups of pixels are formed in low spatial resolution, at the spatial sampling distance of the MS image.
(127) In this embodiment, the method 50 can include the step 55 of spatially undersampling the PAN image, and includes the step 57 of classifying the pixels of the MS image into groups of pixels.
(128) As indicated above, the step 55 of spatial undersampling of the PAN image is optional, and is present, in particular, if the PAN reflectance of the PAN image is used to optimise the scene model during the step 51 of determining the optimal MS values, or in the step 57 of classification.
(129) At the end of the step 57 of classification, a number N_g of groups of pixels (N_g ≥ 1), which can vary from one image to another according to the scene observed, is thus available. Everything that was described above in reference to the determination of an injection vector for each group of pixels also applies here, the only difference being that the groups considered are groups of pixels in low spatial resolution.
(130) An image of injection vectors at the spatial sampling distance of the MS image is thus available; it is oversampled during the step 56 to obtain an injection vector for each pixel of the oversampled MS image, to be used during the step 542 of correction.
(131) In an example of a system for implementing the method 50, the processing device includes one or more processing units, storage resources and viewing means.
(132) In alternative embodiments, at least a part of the processing units, of the storage resources and/or of the viewing means are outsourced.
F) Other Alternatives of the Method for Increasing Spatial Resolution
(133) More generally, it should be noted that the embodiments considered above have been described as non-limiting examples, and that other alternatives are therefore possible.
(134) In particular, the method has been described while considering that the increase in spatial resolution aimed to obtain an MS image with increased spatial resolution at the same spatial resolution as the PAN image. Nothing excludes, according to other examples, considering for the MS image with increased spatial resolution a spatial resolution lower than that of the PAN image. For example, in the case of an optical observation instrument of the SPOT 6/SPOT 7 type, the spatial resolution of the MS image is approximately 6 metres, whereas the spatial resolution of the PAN image is approximately 1.5 metres. In such a case, it is also possible to increase the spatial resolution of the MS image to an intermediate spatial resolution, for example of approximately 2.5 metres. If necessary, it is for example possible, in a manner that is in no way limiting, to first undersample the PAN image to obtain an undersampled PAN image brought to the spatial resolution desired for the MS image, that is to say 2.5 metres.
(135) The present method 50 for increasing spatial resolution can be executed in an automated manner, without the intervention of an operator at any step whatsoever. The present method 50 can be implemented, in a non-limiting manner and according to the operational context, in a ground station 20 for direct reception of satellite images, in an independent software suite dedicated to the processing of satellite or aerial images, or can be integrated into a distributed processing chain for image-processing services of the cloud-services type. The present method 50 for increasing spatial resolution, according to any one of its embodiments, can thus be executed by a processing system consisting of a processing device as described above, or by a processing system including several processing devices connected to each other.