Method and system for generating high resolution worldview-3 images
10192288 · 2019-01-29
CPC classification
G06V20/194
PHYSICS
G06V10/454
PHYSICS
International classification
G06T3/40
PHYSICS
Abstract
The present invention presents four fusion approaches, which can be directly applied to Worldview-3 (WV-3) images. Moreover, they can also be applied to other current or future satellite images that have characteristics similar to those of WV-3. The present invention also presents data processing methods, including an image fusion method, an anomaly detection method, a material classification method, and a concentration estimation method, that utilize the high-resolution images generated by the mentioned fusion methods. The four fusion approaches disclosed in the present invention are: a parallel one-step fusion approach; sequential fusion of various bands; sequential-parallel fusion; and parallel-sequential fusion.
Claims
1. A system for generating high resolution super-spectral images, comprising: a panchromatic (PAN) band module having an output generating a panchromatic band of 0.31 m resolution; a Visible Near Infrared (VNIR) band module having outputs generating eight VNIR bands of 1.2 m resolution; a Short-Wave Infrared (SWIR) band module having outputs generating eight SWIR bands of 7.5 m resolution; a first Super-Resolution Algorithm Module (SRAM) having an output generating eight pan-sharpened VNIR bands of 0.31 m resolution by combining the outputs of the PAN band module and the VNIR band module; a second Super-Resolution Algorithm Module (SRAM) having an output generating eight pan-sharpened SWIR bands of 0.31 m resolution by combining the outputs of the PAN band module and the SWIR band module; and a merger module having a Hybrid Color Mapping (HCM) algorithm to merge the outputs from the first and second SRAMs in a parallel one-step approach and generate sixteen High-Resolution (HR) bands of 0.31 m resolution.
2. A system for generating high resolution super-spectral images in accordance with claim 1, further comprising: an anomaly detection module for detecting regions in high spatial resolution images that differ from their neighbors; and a sparsity based classification module for classification of surface materials.
3. A method for generating high resolution super-spectral images comprising the steps of: generating a Panchromatic (PAN) band of 0.31 m resolution; generating eight Visible Near Infrared (VNIR) bands of 1.2 m resolution; generating eight Short-Wave Infrared (SWIR) bands of 7.5 m resolution; generating eight pan-sharpened VNIR bands of 0.31 m resolution by combining the PAN band and the eight VNIR bands; generating eight pan-sharpened SWIR bands of 0.31 m resolution by combining the PAN band and the eight SWIR bands; and merging, in a parallel one-step approach, the eight pan-sharpened VNIR bands and the eight pan-sharpened SWIR bands to generate sixteen High-Resolution (HR) bands of 0.31 m resolution.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
(14) The present invention presents four fusion approaches, which can be directly applied to Worldview-3 images. Moreover, they can also be applied to other current or future satellite images that have characteristics similar to those of Worldview-3. The present invention also presents data processing methods, including an image fusion method, an anomaly detection method, a material classification method, and a concentration estimation method, that utilize the high-resolution images generated by the fusion methods.
(15) Approach 1: Parallel One-Step Fusion
(16) As shown in
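The parallel one-step data flow recited in the claims can be sketched as follows. This is a minimal illustration only: the nearest-neighbor `upsample` is a stand-in for the HCM-based Super-Resolution Algorithm Modules, and the 4× (VNIR-to-PAN) and 24× (SWIR-to-PAN) ratios are nominal approximations of the 0.31 m / 1.2 m / 7.5 m resolutions.

```python
import numpy as np

def upsample(band, factor):
    # Nearest-neighbor stand-in for a Super-Resolution Algorithm Module (SRAM);
    # the HCM-based pan-sharpening of the present invention would replace this step.
    return np.kron(band, np.ones((factor, factor)))

def parallel_one_step_fusion(pan, vnir, swir):
    """Fuse a PAN band (H x W), eight VNIR bands, and eight SWIR bands into
    sixteen bands at the PAN resolution, in a parallel one-step manner."""
    h, w = pan.shape
    # First SRAM: bring the eight VNIR bands to PAN resolution.
    vnir_hr = np.stack([upsample(b, h // b.shape[0]) for b in vnir])
    # Second SRAM: bring the eight SWIR bands to PAN resolution.
    swir_hr = np.stack([upsample(b, h // b.shape[0]) for b in swir])
    # Merger module: one-step merge into sixteen high-resolution (HR) bands.
    return np.concatenate([vnir_hr, swir_hr], axis=0)
```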
(17) Approach 2: Sequential Fusion
(18)
(19) Approach 3: Sequential-Parallel Fusion
(20)
(21) Approach 4: Parallel-Sequential Fusion
(22)
(23) Color Mapping
(24) The idea of color mapping is as the name suggests: mapping a multispectral pixel to a hyperspectral pixel. Here, multispectral images encompass color R-G-B images. This mapping is based on a transformation matrix T, i.e.
X=Tx,
where X is one (or more) hyperspectral pixels and x is one (or more) multispectral pixels. To obtain the transformation matrix, the present invention simulates a low resolution multispectral image and uses the low resolution hyperspectral image to train T.
(25) Training is done by minimizing the mean square error:
(26) min.sub.T Σ.sub.X∈H,x∈C∥X−Tx∥.sup.2,
where H is the set of hyperspectral pixels and C is the set of multi-spectral pixels. With enough pixels, the optimal T can be determined with:
T=XC.sup.T(CC.sup.T).sup.−1.
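The closed-form solution above is an ordinary linear least-squares fit and can be verified numerically. In this sketch, X and C hold the training hyperspectral and multispectral pixels as columns:

```python
import numpy as np

def train_color_map(X, C):
    """Solve T = X C^T (C C^T)^{-1}, the least-squares transformation that maps
    multispectral pixels (columns of C) to hyperspectral pixels (columns of X)."""
    return X @ C.T @ np.linalg.inv(C @ C.T)
```

If X was generated exactly as T·C, training recovers T up to numerical precision, which is the property the mean-square-error minimization relies on.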
Hybrid Color Mapping
(27) The present invention proposes a fusion algorithm, known as Hybrid Color Mapping (HCM) to perform the fusion in all four fusion approaches mentioned above. HCM is simple to implement, efficient, parallelizable and fast. The details can be found in the mentioned pending patents and papers by the present inventor. For completeness, the HCM algorithm is included in the following few paragraphs.
(28) Extensive studies and results show that the method used in the present invention can generate more accurate, higher resolution reconstructions than normal simple bicubic scaling and other state-of-the-art methods. In addition, the present invention conducted an extensive classification study using reconstructed images. Results show that the method used in the present invention performs much better than other methods.
(29) For many hyperspectral images, the band wavelengths range from 0.4 to 2.5 μm. For color/multispectral images, the bands may include R-G-B and some additional spectral bands. As shown in
(30) Local Color Mapping
(31) The present invention further enhances the method by applying color mapping patch by patch as shown in
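A minimal sketch of patch-by-patch (local) color mapping follows, assuming square non-overlapping patches whose size divides the image dimensions. For simplicity the per-patch T is trained and applied on the same cube; in the full method, T would be trained on the low-resolution pair and then applied to the corresponding high-resolution multispectral patch.

```python
import numpy as np

def local_color_map(hyper, multi, patch=8):
    """Fit one transformation matrix T per spatial patch (local color mapping)
    instead of a single global T.  `hyper` is (B_h, H, W); `multi` is (B_m, H, W);
    `patch` is assumed to divide H and W evenly."""
    Bh, H, W = hyper.shape
    out = np.empty_like(hyper)
    for r in range(0, H, patch):
        for c in range(0, W, patch):
            X = hyper[:, r:r+patch, c:c+patch].reshape(Bh, -1)
            Cm = multi[:, r:r+patch, c:c+patch].reshape(multi.shape[0], -1)
            # Per-patch least squares: T = X Cm^T (Cm Cm^T)^{-1}.
            T = X @ Cm.T @ np.linalg.pinv(Cm @ Cm.T)
            out[:, r:r+patch, c:c+patch] = (T @ Cm).reshape(Bh, patch, patch)
    return out
```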
(32) Experiment
(33) The present invention used the NASA Airborne Visible/Infrared Imaging Spectrometer (AVIRIS) hyperspectral data in this study. In each experiment, the image was downscaled by a factor of three using the Bicubic Interpolation (BI) method. The downscaled image was used as the low resolution hyperspectral image. The R-G-B bands were picked from the original high resolution hyperspectral image for color mapping. The bicubic method in the following plots was implemented by upscaling the low-resolution image using BI. The results of the bicubic method were used as a baseline for the comparison study. As shown in
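The experimental protocol above can be sketched as follows. Block averaging and nearest-neighbor replication stand in for the bicubic operations, and the R-G-B band indices are illustrative choices, not the ones used in the study:

```python
import numpy as np

def downscale(cube, f=3):
    # Block-average stand-in for the study's bicubic (BI) downscaling.
    B, H, W = cube.shape
    return cube.reshape(B, H // f, f, W // f, f).mean(axis=(2, 4))

def upscale(cube, f=3):
    # Nearest-neighbor stand-in for the BI upscaling used as the baseline.
    return cube.repeat(f, axis=1).repeat(f, axis=2)

def make_experiment_inputs(cube, rgb_bands=(30, 20, 10), f=3):
    """From a high-resolution cube (B, H, W), build the study's inputs:
    a 3x-downscaled low-resolution cube, high-resolution R-G-B bands picked
    from the original cube, and an upscaled baseline for comparison."""
    low = downscale(cube, f)
    rgb = cube[list(rgb_bands)]
    baseline = upscale(low, f)
    return low, rgb, baseline
```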
(34) Material Classification Algorithm
(35) The present invention proposes to apply the latest developments in sparsity based classification algorithms to rock type classification. Similarly to the other methods mentioned, the approach of the present invention requires some spectral signatures to be available.
(36) The present invention implemented a sparsity-driven recognition method in the articles and papers mentioned. In the sparsity-driven face recognition approach, the assumption is that a face image of subject i lies in the linear span of the existing face images for that same subject i in the training set. Suppose {v.sub.i1, v.sub.i2, . . . , v.sub.iD} are the vectorized D face images of subject i in the training set, and y is a new vectorized face image of subject i, which is not in the training set. Based on this assumption, y can be expressed as:
(37) y=α.sub.i1v.sub.i1+α.sub.i2v.sub.i2+ . . . +α.sub.iDv.sub.iD  (1)
(38) Suppose there are C human subjects; the above expression can then be expanded as in (2). This expression indicates that y is a sparse linear combination of the face images in the training set.
(39) y=[v.sub.11 . . . v.sub.1D . . . v.sub.C1 . . . v.sub.CD][0 . . . 0 α.sub.i.sup.T 0 . . . 0].sup.T=Ax.sub.0  (2)
(40) The sparse representation, x.sub.0=[0 . . . 0 α.sub.i.sup.T 0 . . . 0].sup.T, thus yields the membership of y to subject i. The above framework can be easily extended to small contact detection. Each contact image will be vectorized and put into the dictionary.
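A minimal sketch of the sparsity-driven classification rule follows. A greedy orthogonal-matching-pursuit step is used here in place of a full sparse-recovery solver (an assumption; the papers mentioned may use a different l.sub.1 method), followed by the per-class residual test that yields the membership of y:

```python
import numpy as np

def src_classify(A, labels, y, sparsity=2):
    """Sparse-representation classification sketch.  A has vectorized training
    images as columns, labels[j] gives the class of column j.  Greedily select
    `sparsity` atoms (orthogonal matching pursuit), then assign y to the class
    whose coefficients leave the smallest residual."""
    resid, idx = y.copy(), []
    for _ in range(sparsity):
        idx.append(int(np.argmax(np.abs(A.T @ resid))))   # most correlated atom
        sub = A[:, idx]
        coef, *_ = np.linalg.lstsq(sub, y, rcond=None)    # refit on chosen atoms
        resid = y - sub @ coef
    x = np.zeros(A.shape[1])
    x[idx] = coef
    # Residual per class, keeping only that class's coefficients of x.
    classes = np.unique(labels)
    errs = [np.linalg.norm(y - A[:, labels == c] @ x[labels == c]) for c in classes]
    return classes[int(np.argmin(errs))]
```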
(41) Referring to
(42) As shown in
(43) As shown in
(44) Concentration Estimation Algorithm
(45) The present invention proposes to apply Deep Neural Network (DNN) techniques to further improve the chemical element classification and composition estimation performance in surface monitoring such as volcano monitoring. Possible applications include ash detection, composition estimation, and SO.sub.2 concentration estimation. The present invention adapts two of the DNN techniques, the Deep Belief Network (DBN) and Convolutional Neural Network (CNN), respectively, to the element classification and chemical composition estimation problem.
(46) DNN techniques have the following advantages: i. Better capture of hierarchical feature representations; ii. Ability to learn more complex behaviors; iii. Better performance than conventional methods; iv. Use distributed representations to learn the interactions of many different factors on different levels; v. Can learn from unlabeled data such as using the Restricted Boltzmann Machines (RBM) pretraining method; and vi. Performance can scale up with the number of hidden layers and hidden nodes on fast GPUs.
(47) One of the applications in which DNN techniques have proved themselves is handwritten digit recognition. The present invention applied the Deep Belief Network (DBN) technique to the Laser Induced Breakdown Spectroscopy (LIBS) spectrum database (sixty-six samples) based on a preliminary investigation in the past. The total number of oxides is nine, and these nine oxide compounds are: 1) SiO.sub.2; 2) TiO.sub.2; 3) Al.sub.2O.sub.3; 4) Fe.sub.2O.sub.3; 5) MnO; 6) MgO; 7) CaO; 8) Na.sub.2O; and 9) K.sub.2O.
(48) A Leave-One-Out (LOO) testing framework is applied to the LIBS dataset of sixty-six samples to estimate oxide compositions. Two performance measures are computed: a) ERRORSUM, the sum of absolute errors between the sample estimates and their ground truth; and b) RMSEP, which assesses the estimation accuracy for each of the nine oxide compounds. The initial results were quite encouraging for a DBN with a 3-level architecture. Level-1: RBM with 50 hidden units; Level-2: RBM with 5050 hidden units; and Level-3: connection to the output with a NN trained for 1000 epochs. Results for the DBN comparable to the Partial Least Square (PLS) technique were observed. The resultant performance measures with the PLS and DBN techniques are shown in
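The LOO evaluation with the ERRORSUM and RMSEP measures can be sketched as follows. A ridge regression is used here purely as an illustrative stand-in for the DBN or PLS estimator:

```python
import numpy as np

def loo_evaluate(S, Y):
    """Leave-one-out evaluation sketch for oxide-composition estimation.
    S: (n_samples, n_channels) spectra; Y: (n_samples, 9) oxide compositions.
    Returns ERRORSUM (summed absolute error over all held-out samples) and
    RMSEP per oxide compound."""
    n = len(S)
    preds = np.empty_like(Y)
    for i in range(n):
        mask = np.arange(n) != i                  # hold out sample i
        A, b = S[mask], Y[mask]
        # Ridge solution W = (A^T A + lam I)^{-1} A^T b; small lam for stability.
        W = np.linalg.solve(A.T @ A + 1e-6 * np.eye(S.shape[1]), A.T @ b)
        preds[i] = S[i] @ W
    errorsum = np.abs(preds - Y).sum()            # a) ERRORSUM
    rmsep = np.sqrt(((preds - Y) ** 2).mean(axis=0))  # b) RMSEP per oxide
    return errorsum, rmsep
```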
It will be apparent to those skilled in the art that various modifications and variations can be made to the system and method of the present disclosure without departing from the scope or spirit of the disclosure. It should be understood that the illustrated embodiments are only preferred examples of describing the invention and should not be taken as limiting the scope of the invention.