Method and System for Generating High Resolution Worldview-3 Images
20180182068 · 2018-06-28
Inventors
CPC Classification: G06V20/194 (PHYSICS); G06V10/454 (PHYSICS)
International Classification: G06T3/40 (PHYSICS)
Abstract
The present invention presents four fusion approaches that can be directly applied to Worldview-3 (WV-3) images; they can also be applied to other current or future satellite images that have characteristics similar to those of WV-3. The present invention also presents data processing methods, including an image fusion method, an anomaly detection method, a material classification method, and a concentration estimation method, that utilize the high-resolution images generated by the fusion methods. The four fusion approaches disclosed in the present invention are: a parallel one-step fusion approach; sequential fusion of various bands; sequential-parallel fusion; and parallel-sequential fusion.
Claims
1. A system for generating high resolution super-spectral images, comprising: a panchromatic (PAN) module; a Visible Near Infrared (VNIR) module; a Short-Wave Infrared (SWIR) module; a first Super-Resolution Algorithm Module (SRAM); a second SRAM; and a merger having a Hybrid Color Mapping (HCM) module to process the outputs from the SRAMs.
2. A system for generating high resolution super-spectral images in accordance with claim 1, further comprising: an anomaly detection module for detecting regions in the high spatial resolution images that differ from their neighbors; and a sparsity-based classification module for classification of surface materials.
3. A system for generating high resolution super-spectral images in accordance with claim 1, wherein: the PAN module generates a panchromatic band of 0.31 m resolution; the VNIR module generates eight VNIR bands of 1.2 m resolution; and the SWIR module generates eight SWIR bands of 7.5 m resolution.
4. A system for generating high resolution super-spectral images in accordance with claim 3, wherein: the first SRAM combines the panchromatic band and the eight VNIR bands to generate eight pan-sharpened VNIR bands; the second SRAM combines the panchromatic band and the eight SWIR bands to generate eight pan-sharpened SWIR bands; and the merger combines the eight pan-sharpened VNIR bands and the eight pan-sharpened SWIR bands to generate sixteen High-Resolution (HR) bands.
5. A system for generating high resolution super-spectral images in accordance with claim 4, wherein: the two SRAMs are executed in a parallel manner.
6. A system for generating high resolution super-spectral images in accordance with claim 4, wherein: the eight pan-sharpened VNIR bands are of 0.31 m resolution; the eight pan-sharpened SWIR bands are of 0.31 m resolution; and the sixteen HR bands are of 0.31 m resolution.
7. A system for generating high resolution super-spectral images in accordance with claim 3, wherein: the first SRAM combines the panchromatic band and the eight VNIR bands to generate eight pan-sharpened VNIR bands; the second SRAM combines the eight pan-sharpened VNIR bands and the eight SWIR bands to generate eight pan-sharpened SWIR bands; and the merger combines the eight pan-sharpened VNIR bands and the eight pan-sharpened SWIR bands to generate sixteen High-Resolution (HR) bands.
8. A system for generating high resolution super-spectral images in accordance with claim 7, wherein: the two SRAMs are executed in a sequential manner.
9. A system for generating high resolution super-spectral images in accordance with claim 3, further comprising: a third SRAM for receiving the eight VNIR bands and the eight SWIR bands from the VNIR and SWIR modules to generate eight pan-sharpened SWIR bands of 1.2 m resolution.
10. A system for generating high resolution super-spectral images in accordance with claim 9, wherein: the first SRAM combines the panchromatic band and the eight VNIR bands to generate eight pan-sharpened VNIR bands; the second SRAM combines the panchromatic band and the eight pan-sharpened SWIR bands from the third SRAM to generate eight pan-sharpened SWIR bands of 0.31 m resolution; and the merger combines the eight pan-sharpened VNIR bands from the first SRAM and the eight pan-sharpened SWIR bands from the second SRAM to generate sixteen HR bands.
11. A system for generating high resolution super-spectral images in accordance with claim 10, wherein: the three SRAMs are executed in a sequential-parallel manner.
12. A system for generating high resolution super-spectral images in accordance with claim 9, wherein: the first SRAM combines the panchromatic band and the eight VNIR bands to generate eight pan-sharpened VNIR bands; the second SRAM combines the eight pan-sharpened VNIR bands from the first SRAM and the eight pan-sharpened SWIR bands from the third SRAM to generate eight pan-sharpened SWIR bands of 0.31 m resolution; and the merger combines the eight pan-sharpened VNIR bands from the first SRAM and the eight pan-sharpened SWIR bands from the second SRAM to generate sixteen High-Resolution (HR) bands.
13. A system for generating high resolution super-spectral images in accordance with claim 12, wherein: the three SRAMs are executed in a parallel-sequential manner.
14. A method for generating high resolution super-spectral images, comprising the steps of: a. generating a panchromatic (PAN) band; b. generating eight Visible Near Infrared (VNIR) bands; c. generating eight Short-Wave Infrared (SWIR) bands; d. generating eight pan-sharpened VNIR bands and eight pan-sharpened SWIR bands from the PAN band, VNIR bands, and the SWIR bands; and e. merging the eight pan-sharpened VNIR bands and the eight pan-sharpened SWIR bands to generate sixteen HR bands.
15. A method for generating high resolution super-spectral images in accordance with claim 14, wherein: the eight pan-sharpened VNIR bands are generated by combining the PAN band and the eight VNIR bands; and the eight pan-sharpened SWIR bands are generated by combining the PAN band and the eight SWIR bands.
16. A method for generating high resolution super-spectral images in accordance with claim 14, wherein: the eight pan-sharpened VNIR bands are generated by combining the PAN band and the eight VNIR bands; and the eight pan-sharpened SWIR bands are generated by combining the eight pan-sharpened VNIR bands and the eight SWIR bands.
17. A method for generating high resolution super-spectral images, comprising the steps of: a. generating a panchromatic (PAN) band; b. generating eight Visible Near Infrared (VNIR) bands; c. generating eight Short-Wave Infrared (SWIR) bands; d. combining the PAN band and the eight VNIR bands to generate eight pan-sharpened VNIR bands; e. combining the eight VNIR bands and the eight SWIR bands to generate eight pan-sharpened SWIR bands; f. combining the PAN band and the eight pan-sharpened SWIR bands to generate eight enhanced SWIR bands of higher resolution; and g. merging the eight pan-sharpened VNIR bands and the enhanced SWIR bands of higher resolution to generate sixteen High-Resolution (HR) bands.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
[0035] The present invention presents four fusion approaches that can be directly applied to Worldview-3 images; they can also be applied to other current or future satellite images that have characteristics similar to those of Worldview-3. The present invention also presents data processing methods, including an image fusion method, an anomaly detection method, a material classification method, and a concentration estimation method, that utilize the high-resolution images generated by the fusion methods.
Approach 1: Parallel One-Step Fusion
[0036] In the parallel one-step fusion approach, a first SRAM combines the panchromatic (PAN) band with the eight VNIR bands to generate eight pan-sharpened VNIR bands while, in parallel, a second SRAM combines the PAN band with the eight SWIR bands to generate eight pan-sharpened SWIR bands; the merger then combines the two sets into sixteen High-Resolution (HR) bands.
Approach 2: Sequential Fusion
[0037] In the sequential fusion approach, the first SRAM combines the PAN band with the eight VNIR bands to generate eight pan-sharpened VNIR bands; the second SRAM then combines these pan-sharpened VNIR bands with the eight SWIR bands to generate eight pan-sharpened SWIR bands, and the merger combines both sets into sixteen HR bands.
Approach 3: Sequential-Parallel Fusion
[0038] In the sequential-parallel fusion approach, a third SRAM first combines the eight VNIR bands with the eight SWIR bands to generate eight pan-sharpened SWIR bands of 1.2 m resolution; the first SRAM (combining the PAN band and the VNIR bands) and the second SRAM (combining the PAN band and the 1.2 m SWIR bands) then execute in parallel, and the merger combines their outputs into sixteen HR bands.
Approach 4: Parallel-Sequential Fusion
[0039] In the parallel-sequential fusion approach, the first SRAM (combining the PAN band and the VNIR bands) and the third SRAM (combining the VNIR bands and the SWIR bands) execute in parallel; the second SRAM then combines the pan-sharpened VNIR bands from the first SRAM with the 1.2 m pan-sharpened SWIR bands from the third SRAM to generate eight SWIR bands of 0.31 m resolution, and the merger combines the sixteen bands.
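As a rough illustration only (not the patent's HCM algorithm), the parallel one-step data flow can be sketched with a toy Brovey-style pan-sharpener standing in for each SRAM; the band counts mirror WV-3 but the image sizes and resolution ratios are shrunk for the demo, and `upsample`/`pan_sharpen` are hypothetical helper names:

```python
import numpy as np

def upsample(band, factor):
    # Nearest-neighbour upsampling as a stand-in for the SRAM's
    # super-resolution step (illustrative simplification).
    return np.kron(band, np.ones((factor, factor)))

def pan_sharpen(pan, bands, factor):
    # Toy pan-sharpening: upsample each band to the PAN grid and
    # modulate it by the PAN intensity ratio (Brovey-style sketch).
    up = np.stack([upsample(b, factor) for b in bands])
    mean = up.mean(axis=0) + 1e-9
    return up * (pan / mean)

# Toy inputs mirroring the WV-3 band structure (sizes shrunk for the demo):
pan = np.random.rand(24, 24)      # PAN band (finest grid)
vnir = np.random.rand(8, 6, 6)    # eight VNIR bands, 4x coarser
swir = np.random.rand(8, 3, 3)    # eight SWIR bands, 8x coarser

# Parallel one-step fusion: both SRAMs consume the PAN band independently,
# then the merger stacks the results into sixteen HR bands.
hr_vnir = pan_sharpen(pan, vnir, 4)        # first SRAM
hr_swir = pan_sharpen(pan, swir, 8)        # second SRAM
hr16 = np.concatenate([hr_vnir, hr_swir])  # merger
print(hr16.shape)                          # (16, 24, 24)
```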
Color Mapping
[0040] The idea of color mapping is as the name suggests: mapping a multispectral pixel to a hyperspectral pixel. Here, multispectral images encompass color R-G-B images. The mapping is based on a transformation matrix T, i.e.
X=Tx,
where X is one (or more) hyperspectral pixels and x is one (or more) multispectral pixels. To get the transformation matrix, the present invention simulates a low resolution multispectral image and uses the low resolution hyperspectral image to train T.
[0041] Training is done by minimizing the mean square error:
min.sub.T∥X−TC∥.sup.2,
where X is the matrix whose columns are hyperspectral pixels and C is the matrix whose columns are the corresponding multispectral pixels. With enough pixels, the optimal T can be determined in closed form:
T=XC.sup.T(CC.sup.T).sup.−1.
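The closed-form least-squares solution above can be sketched directly in NumPy; the band counts and the synthetic data are illustrative only:

```python
import numpy as np

rng = np.random.default_rng(0)
T_true = rng.standard_normal((100, 4))  # hidden mapping used to make the demo data
C = rng.standard_normal((4, 5000))      # columns: 4-band multispectral pixels
X = T_true @ C                          # columns: 100-band hyperspectral pixels

# Closed-form solution of min_T ||X - TC||^2:  T = X C^T (C C^T)^-1
T = X @ C.T @ np.linalg.inv(C @ C.T)

# Applying the learned mapping to a new multispectral pixel:
x_new = rng.standard_normal(4)
X_pred = T @ x_new                      # predicted hyperspectral pixel

print(np.allclose(T, T_true))           # True
```

With enough (and sufficiently diverse) pixels, CC^T is well conditioned and the recovered T matches the generating matrix, which is what the assertion checks.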
Hybrid Color Mapping
[0042] The present invention proposes a fusion algorithm, known as Hybrid Color Mapping (HCM), to perform the fusion in all four fusion approaches mentioned above. HCM is simple to implement, efficient, parallelizable, and fast. The details can be found in the pending patents and papers by the present inventor mentioned above. For completeness, the HCM algorithm is included in the following few paragraphs.
[0043] Extensive studies and results show that the method used in the present invention generates more accurate, higher-resolution reconstructions than simple bicubic scaling and other state-of-the-art methods. In addition, an extensive classification study using the reconstructed images shows that the method used in the present invention performs much better than other methods.
[0044] For many hyperspectral images, the band wavelengths range from 0.4 to 2.5 μm. For color/multispectral images, the bands may include R-G-B and some additional spectral bands.
Local Color Mapping
[0045] The present invention further enhances the method by applying color mapping patch by patch: a separate transformation matrix is trained for each local patch, so the mapping can adapt to the local material content.
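A minimal sketch of patch-wise (local) color mapping, assuming one least-squares transformation matrix per non-overlapping patch; `local_color_mapping` and the patch size are hypothetical names for illustration, not taken from the patent:

```python
import numpy as np

def local_color_mapping(X, C, patch=8):
    """Train one transformation matrix per spatial patch (sketch).

    X: (Bh, H, W) hyperspectral cube; C: (Bm, H, W) co-registered
    multispectral cube. Returns the patch-wise reconstruction T_p @ c.
    """
    Bh, H, W = X.shape
    out = np.zeros_like(X)
    for i in range(0, H, patch):
        for j in range(0, W, patch):
            Xs = X[:, i:i+patch, j:j+patch]
            Cs = C[:, i:i+patch, j:j+patch]
            h, w = Xs.shape[1:]
            Xp = Xs.reshape(Bh, -1)           # (Bh, N) pixels in this patch
            Cp = Cs.reshape(C.shape[0], -1)   # (Bm, N)
            # Per-patch least squares; lstsq avoids an explicit inverse.
            Tp, *_ = np.linalg.lstsq(Cp.T, Xp.T, rcond=None)
            out[:, i:i+patch, j:j+patch] = (Tp.T @ Cp).reshape(Bh, h, w)
    return out

rng = np.random.default_rng(0)
X = rng.random((10, 16, 16))   # toy 10-band hyperspectral cube
C = rng.random((3, 16, 16))    # toy 3-band (R-G-B) cube
out = local_color_mapping(X, C, patch=8)
print(out.shape)               # (10, 16, 16)
```

Each patch solves the same normal equations as the global mapping, just restricted to its own pixels, which is why local mapping can track spatially varying material content.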
Experiment
[0046] The present invention used the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral data in this study. In each experiment, the image was downscaled by a factor of three using the Bicubic Interpolation (BI) method, and the downscaled image was used as the low resolution hyperspectral image. The R-G-B bands were picked from the original high resolution hyperspectral image for color mapping. The bicubic method in the following plots was implemented by upscaling the low-resolution image using BI, and its results were used as a baseline for the comparison study.
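The downscale-then-upscale bicubic baseline described above can be sketched with `scipy.ndimage.zoom` (order=3 gives cubic interpolation); the synthetic band here is a stand-in for a real AVIRIS band:

```python
import numpy as np
from scipy.ndimage import zoom

rng = np.random.default_rng(1)
hires = rng.random((30, 30))        # stand-in for one hyperspectral band

low = zoom(hires, 1/3, order=3)     # downscale by a factor of three
bicubic = zoom(low, 3, order=3)     # bicubic upscaling back to full size

# RMSE of the bicubic result against the original serves as the baseline
# that any fusion method should beat.
rmse = np.sqrt(np.mean((bicubic - hires) ** 2))
print(bicubic.shape)                # (30, 30)
```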
Material Classification Algorithm
[0047] The present invention proposes to apply the latest developments in sparsity-based classification algorithms to rock type classification. As in the other methods mentioned, this approach requires some spectral signatures to be available.
[0048] The present invention implemented a sparsity-driven recognition method from the articles and papers mentioned. In the sparsity-driven face recognition approach, the assumption is that a face image of subject i lies in the linear span of the existing face images for that same subject i in the training set. Suppose {v.sub.i1, v.sub.i2, . . . , v.sub.iD} are the vectorized D face images of subject i in the training set, and y is a new vectorized face image of subject i, which is not in the training set. Based on this assumption, y can be expressed as:
y=a.sub.i1v.sub.i1+a.sub.i2v.sub.i2+ . . . +a.sub.iDv.sub.iD,  (1)
where the a.sub.ij are scalar coefficients.
[0049] Suppose there are C human subjects; the above expression can then be expanded as:
y=[v.sub.11 . . . v.sub.CD]x.sub.0=Ax.sub.0,  (2)
and this expression indicates that y is a sparse linear combination of the face images in the training set.
[0050] The sparse representation, x.sub.0=[0 . . . 0 a.sub.i.sup.T 0 . . . 0], thus yields the membership of y to subject i. The above framework can be easily extended to small contact detection: each contact image is vectorized and put into the dictionary.
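A simplified sketch of the sparsity-driven idea: assign a test vector to the class whose training images best reconstruct it. A per-class least-squares residual stands in here for a full L1 sparse solver, and the dictionaries and dimensions are toy values:

```python
import numpy as np

def src_classify(y, dictionaries):
    """Sparsity-inspired classification sketch: pick the class whose
    dictionary reconstructs y with the smallest residual."""
    residuals = []
    for V in dictionaries:                     # V: (dim, D) images of one class
        a, *_ = np.linalg.lstsq(V, y, rcond=None)
        residuals.append(np.linalg.norm(y - V @ a))
    return int(np.argmin(residuals))

rng = np.random.default_rng(2)
dicts = [rng.standard_normal((50, 5)) for _ in range(3)]  # C = 3 subjects
y = dicts[1] @ rng.standard_normal(5)          # new image of subject 1
print(src_classify(y, dicts))                  # 1
```

Because y lies exactly in the span of subject 1's dictionary, its residual is essentially zero while the other classes' residuals are not, which recovers the membership exactly as equation (2) suggests.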
Concentration Estimation Algorithm
[0054] The present invention proposes to apply Deep Neural Network (DNN) techniques to further improve chemical element classification and composition estimation performance in surface monitoring, such as volcano monitoring. Possible applications include ash detection, composition estimation, and SO.sub.2 concentration estimation. The present invention adapts two DNN techniques, the Deep Belief Network (DBN) and the Convolutional Neural Network (CNN), to the element classification and chemical composition estimation problems, respectively.
[0055] As described at https://github.com/rasmusbergpalm/DeepLearnToolbox, DNN techniques have the following advantages: [0056] i. Better capture of hierarchical feature representations; [0057] ii. Ability to learn more complex behaviors; [0058] iii. Better performance than conventional methods; [0059] iv. Use of distributed representations to learn the interactions of many different factors on different levels; [0060] v. Ability to learn from unlabeled data, such as with the Restricted Boltzmann Machine (RBM) pretraining method; and [0061] vi. Performance that can scale up with the number of hidden layers and hidden nodes on fast GPUs.
[0062] One application in which DNN techniques have proved themselves is handwritten digit recognition. Based on a preliminary investigation, the present invention applied the Deep Belief Network (DBN) technique to a Laser Induced Breakdown Spectroscopy (LIBS) spectrum database (sixty-six samples). The total number of oxides is nine, and these nine oxide compounds are: [0063] 1) SiO.sub.2; [0064] 2) TiO.sub.2; [0065] 3) Al.sub.2O.sub.3; [0066] 4) Fe.sub.2O.sub.3; [0067] 5) MnO; [0068] 6) MgO; [0069] 7) CaO; [0070] 8) Na.sub.2O; and [0071] 9) K.sub.2O.
[0072] A Leave-One-Out (LOO) testing framework is applied to the LIBS dataset of sixty-six samples to estimate oxide compositions. Two performance measures are computed: a) ERRORSUM, the sum of the absolute errors between the sample estimates and their ground truth; and b) RMSEP, the Root Mean Square Error of Prediction, which assesses the estimation accuracy for each of the nine oxide compounds. The initial results were quite encouraging for a DBN with a three-level architecture (Level 1: RBM with 50 hidden units; Level 2: RBM with 5050 hidden units; Level 3: connection to the output with an NN trained for 1000 epochs). Results comparable to the Partial Least Squares (PLS) technique were observed for the DBN; the resultant performance measures for the PLS and DBN techniques are shown in the accompanying figure.
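The LOO/RMSEP evaluation loop can be sketched as follows; a ridge-regularized linear model stands in for the PLS and DBN estimators, all data are synthetic, and `loo_rmsep` is a hypothetical name:

```python
import numpy as np

def loo_rmsep(spectra, compositions, ridge=1e-3):
    """Leave-one-out evaluation sketch: hold out each sample in turn,
    fit on the rest, and accumulate prediction errors per oxide."""
    n, d = spectra.shape
    errs = []
    for i in range(n):
        mask = np.arange(n) != i
        A, b = spectra[mask], compositions[mask]
        # Ridge solution W = (A^T A + lambda*I)^-1 A^T b
        W = np.linalg.solve(A.T @ A + ridge * np.eye(d), A.T @ b)
        errs.append(spectra[i] @ W - compositions[i])
    errs = np.array(errs)
    # One RMSEP value per oxide compound (column-wise RMS error).
    return np.sqrt(np.mean(errs ** 2, axis=0))

rng = np.random.default_rng(3)
spectra = rng.random((66, 20))        # 66 samples, toy LIBS-like spectra
truth = rng.random((20, 9))
compositions = spectra @ truth        # nine oxide composition values each
rmsep = loo_rmsep(spectra, compositions)
print(rmsep.shape)                    # (9,)
```

ERRORSUM would be computed from the same held-out errors by summing absolute values per sample instead of taking the column-wise RMS.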
[0073] It will be apparent to those skilled in the art that various modifications and variations can be made to the system and method of the present disclosure without departing from the scope or spirit of the disclosure. It should be understood that the illustrated embodiments are only preferred examples describing the invention and should not be taken as limiting its scope.