Gabor cube feature selection-based classification method and system for hyperspectral remote sensing images

10783371 · 2020-09-22


Abstract

The present invention provides a Gabor cube feature selection-based classification method for hyperspectral remote sensing images, comprising the following steps: generating three-dimensional Gabor filters according to set frequency and direction parameter values; convoluting hyperspectral remote sensing images with the three-dimensional Gabor filters to obtain three-dimensional Gabor features; selecting, from the three-dimensional Gabor features, those whose classification contribution degrees to the various classes meet preset requirements; and classifying the hyperspectral remote sensing images by a multi-task joint sparse representation-based classification means using the selected three-dimensional Gabor features. The present invention is based on three-dimensional Gabor features, which contain rich local change information of a signal and have a strong feature-characterizing capacity. Using a Fisher discriminant criterion not only makes full use of the high-level semantics hidden among the features, but also eliminates redundant information and reduces the classification time complexity.

Claims

1. A Gabor cube feature selection-based classification method for hyperspectral remote sensing images, the classification method for hyperspectral remote sensing images comprising the following steps: A, generating a number of three-dimensional Gabor filters according to set frequency and direction parameter values; B, convoluting hyperspectral remote sensing images with the generated three-dimensional Gabor filters to obtain a number of three-dimensional Gabor features; C, selecting, from the obtained three-dimensional Gabor features, a number of three-dimensional Gabor features whose classification contribution degrees to various classes meet preset requirements; and D, classifying the hyperspectral remote sensing images by a multi-task joint sparse representation-based classification means by using the selected three-dimensional Gabor features; wherein R represents the hyperspectral remote sensing images, the 52 three-dimensional Gabor features obtained by the hyperspectral remote sensing images R participating in the convolution operation are represented by {M_t, t=1, . . . , 52}, wherein M_t represents the t-th three-dimensional Gabor feature; the step C selects three-dimensional Gabor features by utilizing a Fisher discriminant criterion, particularly comprising: step C1, dividing each three-dimensional Gabor feature into a training set and a test set; A^t represents the training set of the t-th three-dimensional Gabor feature M_t, G^t represents the test set of the t-th three-dimensional Gabor feature M_t, t ∈ {1, . . . , 52}, whereby all training sets {A^t, t=1, . . . , 52} and test sets {G^t, t=1, . . . , 52} of the 52 three-dimensional Gabor features are obtained; step C2, respectively calculating an intra-class spacing and an inter-class spacing of each three-dimensional Gabor feature by using the training sets {A^t, t=1, . . . , 52}; S_W(A_p^t) represents the intra-class spacing of the p-th class in the t-th three-dimensional Gabor feature, and S_B(A_p^t) represents the inter-class spacing of the p-th class in the t-th three-dimensional Gabor feature, then: S_W(A_p^t) = Σ_{x_i ∈ A_p^t} (x_i − m_p^t)^T (x_i − m_p^t) and S_B(A_p^t) = n_p (m_p^t − m^t)^T (m_p^t − m^t); wherein A_p^t represents the training set of the p-th class within the training set A^t, A^t = [A_1^t, . . . , A_P^t], x_i represents one training sample in A_p^t; n_p represents the number of samples in the p-th class, p ∈ {1, . . . , P}, and P represents the number of classes; m^t and m_p^t respectively represent the mean vectors m^t = (1/n) Σ_{x_i ∈ A^t} x_i and m_p^t = (1/n_p) Σ_{x_i ∈ A_p^t} x_i of A^t and A_p^t; step C3, calculating Fisher scores of the classes in the 52 three-dimensional Gabor features by using the intra-class spacings and inter-class spacings calculated in the step C2; the Fisher scores of the classes in the 52 three-dimensional Gabor features are stored in a matrix F (F ∈ ℝ^{52×P}), where F_p^t represents the Fisher score of the p-th class in the t-th three-dimensional Gabor feature, then: F_p^t = S_B(A_p^t)/S_W(A_p^t); wherein F ∈ ℝ^{52×P}, P represents the number of classes, and F_p^t represents the element in the t-th row and p-th column of the matrix F; and step C4, selecting the V elements with the highest Fisher scores from each column of the matrix F, storing the indexes of the selected elements in a preliminary index matrix, removing repeated indexes in the preliminary index matrix to obtain a final index matrix, and determining the three-dimensional Gabor features to be selected according to the final index matrix; S represents the preliminary index matrix and I represents the final index matrix; the V elements with the highest Fisher scores are selected from each column of the matrix F, and the indexes of the selected elements are stored in the preliminary index matrix S, S ∈ ℝ^{V×P}; the repeated indexes in the preliminary index matrix S are removed to obtain the finally-selected indexes, which are stored in the index matrix I, I ∈ ℝ^{K×1}; no repeated indexes exist in the index matrix I, and each index points to a specific three-dimensional Gabor feature.

2. The classification method for hyperspectral remote sensing images according to claim 1, wherein the step A particularly comprises: step A1, generating 64 three-dimensional Gabor filters according to the set frequency and direction parameter values; f represents a frequency and {φ_k, θ_j} represents a direction; the frequencies are set as f ∈ {0.5, 0.25, 0.125, 0.0625}, 16 directions are set by taking φ_k, θ_j ∈ {0, π/4, π/2, 3π/4}, and the 64 three-dimensional Gabor filters are then generated according to the formula Ψ_{f,φ,θ}(x, y, λ) = (1/((2π)^{3/2} σ³)) exp(j2π(xu + yv + λw)) exp(−(x² + y² + λ²)/(2σ²)), wherein u = f sin φ cos θ, v = f sin φ sin θ, w = f cos φ; φ represents an included angle between the filter and the w axis, θ represents an included angle between the filter and the uv plane; φ_k represents the k-th value of φ, θ_j represents the j-th value of θ; (x, y, λ) respectively represent an x coordinate, a y coordinate and a spectrum coordinate of a pixel; σ represents the width of the Gaussian envelope; and step A2, removing the 12 repeated three-dimensional Gabor filters among the 64 three-dimensional Gabor filters, wherein the remaining 52 three-dimensional Gabor filters serve as the three-dimensional Gabor filters participating in the subsequent convolution operation and are represented by {Φ_i, i=1, . . . , 52}.

3. The classification method for hyperspectral remote sensing images according to claim 1, wherein there are 52 generated three-dimensional Gabor filters, which are represented by {Φ_i, i=1, . . . , 52}; and the step B particularly comprises: step B1, convoluting the hyperspectral remote sensing images with the 52 three-dimensional Gabor filters; and step B2, calculating amplitudes from the result of the convolution operation to obtain the 52 three-dimensional Gabor features of the hyperspectral remote sensing images; R represents the hyperspectral remote sensing images, and M_t represents the generated t-th three-dimensional Gabor feature; the amplitude calculation is represented as M_t(x, y, λ) = |(R ∗ Φ_t)(x, y, λ)|, wherein ∗ denotes three-dimensional convolution; the generated 52 three-dimensional Gabor features are represented as {M_t, t=1, . . . , 52}, wherein (x, y, λ) respectively represent an x coordinate, a y coordinate and a spectrum coordinate.

4. The classification method for hyperspectral remote sensing images according to claim 1, wherein the indexes of the K three-dimensional Gabor features selected in the step C are stored in the index matrix I; and the step D particularly comprises: step D1, dividing each selected three-dimensional Gabor feature into a training set and a test set; {A^{I(k)}, k=1, . . . , K} represents the training sets corresponding to the K three-dimensional Gabor features, and {G^{I(k)}, k=1, . . . , K} represents the corresponding test sets; step D2, calculating sparse coding coefficients of the three-dimensional Gabor features of test samples on the training sets; g represents one test sample in the test sets, its K three-dimensional Gabor features are represented as {g^{I(k)}, k=1, . . . , K}, and if the sparse coding coefficients of the K three-dimensional Gabor features {g^{I(k)}, k=1, . . . , K} of the test sample on the training sets {A^{I(k)}, k=1, . . . , K} are respectively represented as {α^k, k=1, . . . , K}, then: {α^k} = argmin Σ_{k=1}^{K} { ‖g^{I(k)} − A^{I(k)} β^k‖₂² + γ‖β^k‖₁ }; wherein β^k represents any vector with the same dimension as α^k, and γ represents the coefficient of the L₁ norm; and step D3, respectively reconstructing {g^{I(k)}, k=1, . . . , K} according to the preliminary index matrix S by using the sparse coding coefficients {α^k, k=1, . . . , K} and the training sets {A^{I(k)}, k=1, . . . , K} to obtain K reconstruction errors and accumulating them, wherein the class with the minimal accumulated reconstruction error serves as the class of the test sample g; ĉ represents the class predicted for the test sample g, then: ĉ = argmin_{p ∈ {1, . . . , P}} Σ_{i=S(1,p)}^{S(V,p)} ‖g^i − A_p^i α_p^i‖₂²; wherein α_p^k represents the coefficient corresponding to the p-th class in the sparse coding coefficients α^k corresponding to the selected k-th three-dimensional Gabor feature g^{I(k)}; and the index matrix I is obtained by removing the repeated indexes from the preliminary index matrix S, so that i ∈ {I(k), k=1, . . . , K}.

5. A Gabor cube feature selection-based classification system for hyperspectral remote sensing images, wherein the classification system for hyperspectral remote sensing images comprises: a generation unit, which is configured to generate a number of three-dimensional Gabor filters according to set frequency and direction parameter values; a calculation unit, which is configured to convolute hyperspectral remote sensing images with the generated three-dimensional Gabor filters to obtain a number of three-dimensional Gabor features; a selection unit, which is configured to select, from the obtained three-dimensional Gabor features, a number of three-dimensional Gabor features whose classification contribution degrees to various classes meet preset requirements; and a classification unit, which is configured to classify the hyperspectral remote sensing images by a multi-task joint sparse representation-based classification means by using the selected three-dimensional Gabor features; wherein the generation unit is particularly configured to: firstly, set frequency and direction parameter values, and then generate 64 three-dimensional Gabor filters according to the set values; f represents a frequency and {φ_k, θ_j} represents a direction; the frequencies are set as f ∈ {0.5, 0.25, 0.125, 0.0625}, 16 directions are set by taking φ_k, θ_j ∈ {0, π/4, π/2, 3π/4}, and the 64 three-dimensional Gabor filters are generated according to the formula Ψ_{f,φ,θ}(x, y, λ) = (1/((2π)^{3/2} σ³)) exp(j2π(xu + yv + λw)) exp(−(x² + y² + λ²)/(2σ²)), wherein u = f sin φ cos θ, v = f sin φ sin θ, w = f cos φ; φ represents an included angle between the filter and the w axis, θ represents an included angle between the filter and the uv plane; φ_k represents the k-th value of φ, θ_j represents the j-th value of θ; (x, y, λ) respectively represent an x coordinate, a y coordinate and a spectrum coordinate of a pixel; σ represents the width of the Gaussian envelope; and finally, remove the 12 repeated three-dimensional Gabor filters among the 64 three-dimensional Gabor filters, wherein the remaining 52 three-dimensional Gabor filters serve as the three-dimensional Gabor filters participating in the subsequent convolution operation and are represented by {Φ_i, i=1, . . . , 52}.

6. The classification system for hyperspectral remote sensing images according to claim 5, wherein there are 52 three-dimensional Gabor filters, which are represented by {Φ_i, i=1, . . . , 52}; the calculation unit is particularly configured to: firstly, convolute the hyperspectral remote sensing images with the generated 52 three-dimensional Gabor filters; and then calculate amplitudes from the result of the convolution operation to obtain 52 three-dimensional Gabor features of the hyperspectral remote sensing images; R represents the hyperspectral remote sensing images, and M_t represents the generated t-th three-dimensional Gabor feature; the amplitude calculation is represented as M_t(x, y, λ) = |(R ∗ Φ_t)(x, y, λ)|, wherein ∗ denotes three-dimensional convolution; the generated 52 three-dimensional Gabor features are represented as {M_t, t=1, . . . , 52}, wherein (x, y, λ) respectively represent an x coordinate, a y coordinate and a spectrum coordinate.

7. The classification system for hyperspectral remote sensing images according to claim 5, wherein R represents the hyperspectral remote sensing images, the 52 three-dimensional Gabor features obtained by the hyperspectral remote sensing images R participating in the convolution operation are represented by {M_t, t=1, . . . , 52}, wherein M_t represents the t-th three-dimensional Gabor feature; the selection unit selects three-dimensional Gabor features by utilizing a Fisher discriminant criterion, with particular steps as follows: firstly, dividing each three-dimensional Gabor feature into a training set and a test set; A^t represents the training set of the t-th three-dimensional Gabor feature M_t, G^t represents the test set of the t-th three-dimensional Gabor feature M_t, t ∈ {1, . . . , 52}, whereby all training sets {A^t, t=1, . . . , 52} and test sets {G^t, t=1, . . . , 52} of the 52 three-dimensional Gabor features are obtained; then, respectively calculating an intra-class spacing and an inter-class spacing of each three-dimensional Gabor feature by using the training sets {A^t, t=1, . . . , 52}; S_W(A_p^t) represents the intra-class spacing of the p-th class in the t-th three-dimensional Gabor feature, and S_B(A_p^t) represents the inter-class spacing of the p-th class in the t-th three-dimensional Gabor feature, then: S_W(A_p^t) = Σ_{x_i ∈ A_p^t} (x_i − m_p^t)^T (x_i − m_p^t) and S_B(A_p^t) = n_p (m_p^t − m^t)^T (m_p^t − m^t); wherein A_p^t represents the training set of the p-th class within the training set A^t, A^t = [A_1^t, . . . , A_P^t], x_i represents one training sample in A_p^t; n_p represents the number of samples in the p-th class, p ∈ {1, . . . , P}, and P represents the number of classes; m^t and m_p^t respectively represent the mean vectors m^t = (1/n) Σ_{x_i ∈ A^t} x_i and m_p^t = (1/n_p) Σ_{x_i ∈ A_p^t} x_i of A^t and A_p^t; then, calculating Fisher scores of the classes in the 52 three-dimensional Gabor features by using the calculated intra-class spacings and inter-class spacings; the Fisher scores of the classes in the 52 three-dimensional Gabor features are stored in a matrix F (F ∈ ℝ^{52×P}), where F_p^t represents the Fisher score of the p-th class in the t-th three-dimensional Gabor feature, then: F_p^t = S_B(A_p^t)/S_W(A_p^t); wherein F ∈ ℝ^{52×P}, P represents the number of classes, and F_p^t represents the element in the t-th row and p-th column of the matrix F; finally, selecting the V elements with the highest Fisher scores from each column of the matrix F, storing the indexes of the selected elements in a preliminary index matrix, removing repeated indexes in the preliminary index matrix to obtain a final index matrix, and determining the three-dimensional Gabor features to be selected according to the final index matrix; S represents the preliminary index matrix and I represents the final index matrix; the V elements with the highest Fisher scores are selected from each column of the matrix F, and the indexes of the selected elements are stored in the preliminary index matrix S, S ∈ ℝ^{V×P}; the repeated indexes in the preliminary index matrix S are removed to obtain the finally-selected indexes, which are stored in the index matrix I, I ∈ ℝ^{K×1}; and no repeated indexes exist in the index matrix I, and each index points to a specific three-dimensional Gabor feature.

8. The classification system for hyperspectral remote sensing images according to claim 7, wherein the classification unit is particularly configured to: firstly, divide each selected three-dimensional Gabor feature into a training set and a test set; {A^{I(k)}, k=1, . . . , K} represents the training sets corresponding to the K three-dimensional Gabor features, and {G^{I(k)}, k=1, . . . , K} represents the corresponding test sets; then, calculate sparse coding coefficients of the three-dimensional Gabor features of test samples on the training sets; g represents one test sample, its K three-dimensional Gabor features are represented as {g^{I(k)}, k=1, . . . , K}, and if the sparse coding coefficients of the K three-dimensional Gabor features {g^{I(k)}, k=1, . . . , K} of the test sample on the training sets {A^{I(k)}, k=1, . . . , K} are respectively represented as {α^k, k=1, . . . , K}, then: {α^k} = argmin Σ_{k=1}^{K} { ‖g^{I(k)} − A^{I(k)} β^k‖₂² + γ‖β^k‖₁ }; wherein β^k represents any vector with the same dimension as α^k, and γ represents the coefficient of the L₁ norm; finally, respectively reconstruct {g^{I(k)}, k=1, . . . , K} according to the preliminary index matrix S by using the sparse coding coefficients {α^k, k=1, . . . , K} and the training sets {A^{I(k)}, k=1, . . . , K} to obtain K reconstruction errors and accumulate them, wherein the class with the minimal accumulated reconstruction error serves as the class of the test sample g; ĉ represents the class predicted for the test sample g, then: ĉ = argmin_{p ∈ {1, . . . , P}} Σ_{i=S(1,p)}^{S(V,p)} ‖g^i − A_p^i α_p^i‖₂²; wherein α_p^k represents the coefficient corresponding to the p-th class in the sparse coding coefficients α^k corresponding to the selected k-th three-dimensional Gabor feature g^{I(k)}; and the index matrix I is obtained by removing the repeated indexes from the preliminary index matrix S, so that i ∈ {I(k), k=1, . . . , K}.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a flow chart of a Gabor cube feature selection-based classification method for hyperspectral remote sensing images provided by an embodiment of the present invention.

(2) FIG. 2 shows a three-dimensional Gabor filter provided by an embodiment of the present invention, viewed from three different perspectives.

(3) FIG. 3 shows three three-dimensional Gabor features with different frequency and direction parameter values provided by an embodiment of the present invention.

(4) FIG. 4 illustrates a detailed process for classifying test samples provided by an embodiment of the present invention.

(5) FIG. 5 is a schematic diagram of a Gabor cube feature selection-based classification system for hyperspectral remote sensing images provided by an embodiment of the present invention.

DESCRIPTION OF THE EMBODIMENTS

(6) In order to make objects, technical solution and advantages of the present invention be more apparent, the present invention will be described in further detail with reference to accompanying drawings and embodiments. It should be understood that specific embodiments described herein are merely illustrative of the present invention and are not intended to limit the present invention.

(7) The present invention relates to a technique for classifying ground objects by utilizing hyperspectral remote sensing images. The hyperspectral remote sensing images are multi-spectral image data obtained from a ground object of interest by remote sensors within visible, near-infrared, mid-infrared and thermal infrared bands of electromagnetic spectrums. The hyperspectral remote sensing images contain rich spatial, radiation and spectral triplex information, which exhibits a better effect in classifying the ground objects.

(8) Recently, sparse representation-based classification (SRC) methods have shown good performance in classifying hyperspectral remote sensing images. The SRC performs sparse coding on test samples by using training samples, and then reconstructs the test samples with the resulting coding coefficients to obtain a residual error for each class; the class with the smallest residual error serves as the class to which the test sample belongs. The core concept of the SRC is that only samples of the same class can represent each other. Since the SRC discriminates classes by the reconstruction errors of the test samples, and the reconstruction errors on multiple features may be superposed, a multi-task classification means using SRC on multiple features was proposed, called the multi-task joint sparse representation-based classification (MTJSRC) method for short. The method performs sparse representation of the test samples by using multiple features, reconstructs the original test samples with the resulting sparse codes, accumulates the reconstruction errors over the features, and finally selects the class with the minimal accumulated reconstruction error as the class of the test sample. The method combines multiple features well at the stage of accumulating the reconstruction errors, and discriminates the class attribute by the accumulated reconstruction errors of the multiple features, thereby improving the robustness of classification based on reconstruction errors and obtaining a better classification effect. The MTJSRC has been successfully applied in face recognition, video tracking, disease diagnosis and other fields. Inspired by the MTJSRC, a multi-task joint collaborative representation-based classification (MTJCRC) method was proposed, which performs collaborative representation (CRC) of the test samples by using the training samples, and superposes the obtained collaborative coding coefficients for classification in a manner similar to the sparse coding coefficients in the MTJSRC. The core point of collaborative representation is that the entire training sample set has a characterizing capacity for the test samples. The method uses multiple features and improves the classification accuracy.

(9) The key to the multi-task classification means is how to efficiently and adequately use the rich classification information provided by the multiple features. One approach is to first fuse the multiple features and extract a group of features with a stronger characterizing capacity; feature fusion may be performed during the classification process or before it. The other approach is to select the more representative features from the original features. Whether by feature fusion or feature selection, the redundant information among the multiple features should be eliminated and the factors unfavorable to classification removed, so as to make the features more characterizing.

(10) Based on the above-mentioned theory, the present invention proposes a Gabor cube feature selection-based classification (GS-MTJSRC) method for hyperspectral remote sensing images as shown in FIG. 1, comprising the following steps:

(11) S1, generating a number of three-dimensional Gabor filters according to set frequency and direction parameter values;

(12) S2, convoluting hyperspectral remote sensing images to be classified with the generated three-dimensional Gabor filters to obtain a number of three-dimensional Gabor features;

(13) S3, selecting a number of three-dimensional Gabor features, classification contribution degrees to various classes of which meet preset requirements, from the obtained three-dimensional Gabor features; and

(14) S4, classifying the hyperspectral remote sensing images by a multi-task joint sparse representation-based classification means by using the selected three-dimensional Gabor features.

(15) The present invention will be further described below with reference to FIG. 2 to FIG. 5.

(16) The step S1 particularly comprises:

(17) S11, setting frequency and direction parameter values, and then generating 64 three-dimensional Gabor filters according to the set parameter values;

(18) f represents a frequency and {φ_k, θ_j} represents a direction; the frequencies are set as f ∈ {0.5, 0.25, 0.125, 0.0625}, and 16 directions are obtained by taking φ_k, θ_j ∈ {0, π/4, π/2, 3π/4}; then the 64 three-dimensional Gabor filters are generated according to the formula

(19) Ψ_{f,φ,θ}(x, y, λ) = (1/((2π)^{3/2} σ³)) exp(j2π(xu + yv + λw)) exp(−(x² + y² + λ²)/(2σ²)),
wherein u = f sin φ cos θ, v = f sin φ sin θ, w = f cos φ; φ represents an included angle between the filter and the w axis, θ represents an included angle between the filter and the uv plane; φ_k represents the k-th value of φ, and θ_j represents the j-th value of θ; (x, y, λ) respectively represent an x coordinate, a y coordinate and a spectrum coordinate of a pixel; σ represents the width of the Gaussian envelope;

(20) S12, removing the 12 repeated filters among the 64 three-dimensional Gabor filters, wherein the remaining 52 filters serve as the final three-dimensional Gabor filters and are represented by {Φ_i, i=1, . . . , 52}.

(21) As shown in FIG. 2, one three-dimensional Gabor filter with [f=1, φ=0, θ=0] is shown in the spatial and spectral domains from three different perspectives.
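The filter-bank construction of steps S11–S12 can be sketched as follows in Python with NumPy. This is an illustrative sketch, not the patent's implementation: the window size and σ are assumed values, and the de-duplication rule (φ = 0 makes θ irrelevant) is inferred from the 64 → 52 count.

```python
import numpy as np

def gabor_3d(f, phi, theta, sigma=1.0, size=7):
    """One three-dimensional Gabor filter sampled on a size^3 grid.
    sigma and size are illustrative choices; the patent does not fix them."""
    r = np.arange(size) - size // 2
    x, y, lam = np.meshgrid(r, r, r, indexing="ij")
    # Direction of the complex sinusoid in (x, y, lambda) space
    u = f * np.sin(phi) * np.cos(theta)
    v = f * np.sin(phi) * np.sin(theta)
    w = f * np.cos(phi)
    gauss = np.exp(-(x**2 + y**2 + lam**2) / (2 * sigma**2))
    wave = np.exp(2j * np.pi * (x * u + y * v + lam * w))
    return gauss * wave / ((2 * np.pi) ** 1.5 * sigma**3)

freqs = [0.5, 0.25, 0.125, 0.0625]
angles = [0.0, np.pi / 4, np.pi / 2, 3 * np.pi / 4]

bank = []
for f in freqs:
    for phi in angles:
        for theta in angles:
            # phi = 0 gives u = v = 0, so all four theta values
            # produce the same filter: keep only theta = 0
            if phi == 0.0 and theta != 0.0:
                continue
            bank.append(gabor_3d(f, phi, theta))

print(len(bank))  # 52 distinct filters out of the 64 candidates
```

Each frequency contributes 13 distinct filters (one for φ = 0 plus 3 × 4 for φ ≠ 0), so the 4 frequencies yield the 52 filters the patent retains.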

(22) The step S2 particularly comprises:

(23) S21, convoluting the hyperspectral remote sensing images with the 52 three-dimensional Gabor filters;

(24) S22, calculating amplitude from a result of convolution operation to obtain 52 three-dimensional Gabor features;

(25) R represents the hyperspectral remote sensing images, and M_t represents the generated t-th three-dimensional Gabor feature; the amplitude calculation is represented as M_t(x, y, λ) = |(R ∗ Φ_t)(x, y, λ)|, wherein ∗ denotes three-dimensional convolution;

(26) the 52 three-dimensional Gabor features are represented as {M_t, t=1, . . . , 52}. FIG. 3 shows three three-dimensional Gabor features M_8, M_17 and M_21 with different frequencies and directions.
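Steps S21–S22 (3-D convolution followed by taking the amplitude) can be sketched with SciPy. The random cube standing in for a real hyperspectral image R and the single hand-built filter are both hypothetical stand-ins, not data from the patent:

```python
import numpy as np
from scipy.signal import fftconvolve

rng = np.random.default_rng(0)
R = rng.random((32, 32, 16))  # stand-in hyperspectral cube: (rows, cols, bands)

# Stand-in 3-D Gabor filter: Gaussian-windowed complex sinusoid along the bands
r = np.arange(7) - 3
x, y, lam = np.meshgrid(r, r, r, indexing="ij")
filt = np.exp(-(x**2 + y**2 + lam**2) / 2) * np.exp(2j * np.pi * 0.25 * lam)

# Step S21: 3-D convolution of the cube with the filter;
# step S22: the amplitude (modulus) gives one Gabor feature cube
M = np.abs(fftconvolve(R, filt, mode="same"))
print(M.shape)  # same size as the input cube
```

With `mode="same"` the feature cube M has the same shape as R, so the 52 filters produce 52 feature cubes aligned pixel-for-pixel with the image.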

(27) The step S3 selects three-dimensional Gabor features by utilizing a Fisher discriminant criterion, particularly comprising:

(28) S31, dividing each of the three-dimensional Gabor features into a training set and a test set;

(29) A^t represents the training set of the t-th three-dimensional Gabor feature M_t, and G^t represents the test set of the t-th three-dimensional Gabor feature M_t, where t ∈ {1, . . . , 52}; therefore all training sets {A^t, t=1, . . . , 52} and test sets {G^t, t=1, . . . , 52} of the 52 three-dimensional Gabor features may be obtained;

(30) S32, respectively calculating an intra-class spacing and an inter-class spacing of each three-dimensional Gabor feature by using the training sets {A^t, t=1, . . . , 52};

(31) this will be illustrated by taking the t-th three-dimensional Gabor feature as an example. S_W(A_p^t) represents the intra-class spacing of the p-th class in the t-th three-dimensional Gabor feature, and S_B(A_p^t) represents the inter-class spacing of the p-th class in the t-th three-dimensional Gabor feature, then:

(32) S_W(A_p^t) = Σ_{x_i ∈ A_p^t} (x_i − m_p^t)^T (x_i − m_p^t),  S_B(A_p^t) = n_p (m_p^t − m^t)^T (m_p^t − m^t)

(33) wherein A_p^t represents the training set of the p-th class within the training set A^t; more particularly, A^t = [A_1^t, . . . , A_P^t], x_i represents one training sample in A_p^t; n_p represents the number of samples in the p-th class, p ∈ {1, . . . , P}, and P represents the number of classes; m^t and m_p^t respectively represent the mean vectors

(34) m^t = (1/n) Σ_{x_i ∈ A^t} x_i and m_p^t = (1/n_p) Σ_{x_i ∈ A_p^t} x_i
of A^t and A_p^t;
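The intra-class and inter-class spacings of step S32 reduce to sums of squared distances around the class mean and the global mean. A minimal sketch (the feature matrix and labels below are hypothetical toy data, not from the patent):

```python
import numpy as np

def class_spacings(A, labels, p):
    """S_W(A_p^t) and S_B(A_p^t) for class p of one Gabor feature.
    A: (n, d) training samples of feature t; labels: (n,) class ids."""
    Ap = A[labels == p]
    m = A.mean(axis=0)            # m^t: mean of the whole training set
    mp = Ap.mean(axis=0)          # m_p^t: mean of class p
    s_w = ((Ap - mp) ** 2).sum()  # sum_i (x_i - m_p^t)^T (x_i - m_p^t)
    s_b = len(Ap) * ((mp - m) ** 2).sum()  # n_p (m_p^t - m^t)^T (m_p^t - m^t)
    return s_w, s_b

# Hypothetical toy data: two well-separated classes in 3 dimensions
A = np.array([[0.0, 0, 0], [1, 0, 0], [4, 4, 4], [5, 4, 4]])
labels = np.array([0, 0, 1, 1])
print(class_spacings(A, labels, 0))  # (0.5, 24.0)
```

A tight, well-separated class gives a small S_W and a large S_B, which is exactly what the Fisher score of step S33 rewards.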

(35) S33, calculating Fisher scores of the classes in the 52 three-dimensional Gabor features by using the intra-class spacing and the inter-class spacing calculated in the last step;

(36) the Fisher scores of the classes in the 52 three-dimensional Gabor features are stored in a matrix F (F ∈ ℝ^{52×P}), where F_p^t represents the Fisher score of the p-th class in the t-th three-dimensional Gabor feature, then:
F_p^t = S_B(A_p^t) / S_W(A_p^t)

(37) wherein F ∈ ℝ^{52×P}, P represents the number of classes; specifically, F_p^t represents the element in the t-th row and p-th column of the matrix F;

(38) S34, selecting the V elements with the highest Fisher scores from each column of the matrix F, and storing the indexes of the selected elements in a preliminary index matrix S, wherein S ∈ ℝ^{V×P}; the repeated indexes in the preliminary index matrix are removed to obtain the final indexes, which are stored in an index matrix I, I ∈ ℝ^{K×1}; no repeated indexes exist in the index matrix I, and each index points to a specific three-dimensional Gabor feature. It can thus be seen that K three-dimensional Gabor features are selected from the 52 three-dimensional Gabor features.
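Step S34 is a per-column top-V selection followed by de-duplication. A sketch on a hypothetical score matrix F (the scores, P = 6, and V = 3 are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
F = rng.random((52, 6))  # hypothetical Fisher scores: 52 features x P = 6 classes
V = 3

# Preliminary index matrix S (V x P): row indexes of the V best scores per column
S = np.argsort(-F, axis=0)[:V, :]

# Final index matrix I: duplicates removed; each index names one Gabor feature
I = np.unique(S)
K = len(I)
print(S.shape, K)  # K <= V * P features survive de-duplication
```

A feature that scores highly for several classes appears in several columns of S but only once in I, which is how the selection removes redundancy across classes.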

(39) The step S4 classifies the hyperspectral remote sensing images by a multi-task joint sparse representation-based classification means by using the Gabor features selected in the above step, particularly comprising the following steps:

(40) S41, dividing each selected three-dimensional Gabor feature into a training set and a test set;

(41) {A.sup.I(k), k=1, . . . , K} represents training sets corresponding to the K three-dimensional Gabor features, and {G.sup.I(k), k=1, . . . , K} represents corresponding test sets;

(42) S42, calculating sparse coding coefficients of three-dimensional Gabor features of test samples in the test sets on the training sets;

(43) g represents one test sample in the test sets, and its K three-dimensional Gabor features are represented as {g^{I(k)}, k=1, . . . , K}; if the sparse coding coefficients of the K three-dimensional Gabor features {g^{I(k)}, k=1, . . . , K} of the test sample on the training sets {A^{I(k)}, k=1, . . . , K} are respectively represented as {α^k, k=1, . . . , K}, then:

(44) {α^k} = argmin Σ_{k=1}^{K} { ‖g^{I(k)} − A^{I(k)} β^k‖₂² + γ ‖β^k‖₁ };

(45) wherein β^k represents any vector with the same dimension as α^k, and γ represents the coefficient of the L₁ norm;
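Each per-feature subproblem in step S42 is an L1-regularized least-squares (lasso) problem. A minimal ISTA (iterative soft-thresholding) sketch for one feature k; the patent does not prescribe a particular solver, and γ, the iteration count, and the toy data are illustrative assumptions:

```python
import numpy as np

def sparse_code(A, g, gamma=0.1, n_iter=500):
    """ISTA for  min_beta ||g - A beta||_2^2 + gamma * ||beta||_1."""
    step = 1.0 / (2 * np.linalg.norm(A, 2) ** 2)  # 1 / Lipschitz constant of the gradient
    beta = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = beta - step * 2 * A.T @ (A @ beta - g)                     # gradient step
        beta = np.sign(z) * np.maximum(np.abs(z) - gamma * step, 0.0)  # soft-threshold
    return beta

# Toy dictionary: the test sample is built from training atom 3 only
rng = np.random.default_rng(2)
A = rng.standard_normal((20, 10))
truth = np.zeros(10)
truth[3] = 1.5
g = A @ truth
alpha = sparse_code(A, g)
print(np.argmax(np.abs(alpha)))  # the single active atom dominates the code
```

The soft-thresholding step is what produces sparsity: coefficients whose gradient update stays below γ·step are set exactly to zero, leaving only the few training atoms that genuinely explain the test feature.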

(46) S43, respectively reconstructing {g^{I(k)}, k=1, . . . , K} according to the preliminary index matrix S by using the sparse coding coefficients {α^k, k=1, . . . , K} and the training sets {A^{I(k)}, k=1, . . . , K} to obtain K reconstruction errors and accumulating them, wherein the class with the minimal accumulated reconstruction error serves as the class of the test sample g; ĉ represents the class predicted for the test sample g, then:

(47) ĉ = argmin_{p ∈ {1, . . . , P}} Σ_{i=S(1,p)}^{S(V,p)} ‖g^i − A_p^i α_p^i‖₂²;

(48) wherein α_p^k represents the coefficient corresponding to the p-th class in the sparse coding coefficients α^k corresponding to the selected k-th three-dimensional Gabor feature g^{I(k)}; and the final index matrix I is obtained by removing the repeated indexes from the preliminary index matrix S, thus in the above formula i ∈ {I(k), k=1, . . . , K}. FIG. 4 shows the detailed process for classifying the test sample g.
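The decision rule of step S43 accumulates per-class reconstruction errors over the selected features. A compact sketch with hypothetical toy data (in practice the dictionaries and codes would come from steps S34 and S42):

```python
import numpy as np

def classify(g_feats, A_feats, alphas, labels, P):
    """Accumulate per-class reconstruction errors over the K selected features
    and return the class with the smallest total error."""
    errs = np.zeros(P)
    for gk, Ak, ak in zip(g_feats, A_feats, alphas):
        for p in range(P):
            sel = labels == p  # columns (training samples) belonging to class p
            errs[p] += np.sum((gk - Ak[:, sel] @ ak[sel]) ** 2)
    return int(np.argmin(errs))

# Toy setup: K = 2 features, P = 2 classes, 4 training samples per feature
rng = np.random.default_rng(3)
labels = np.array([0, 0, 1, 1])
A_feats = [rng.standard_normal((5, 4)) for _ in range(2)]
# Codes that load only on class-1 atoms, so class 1 reconstructs g perfectly
alphas = [np.array([0.0, 0.0, 1.0, 0.5]) for _ in range(2)]
g_feats = [A @ a for A, a in zip(A_feats, alphas)]
print(classify(g_feats, A_feats, alphas, labels, P=2))  # 1
```

Because the errors are summed over all K features before the argmin, a class only wins if it reconstructs the test sample well consistently across the selected Gabor features, which is the robustness argument made for MTJSRC above.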

(49) The present invention further provides a Gabor cube feature selection-based classification system for hyperspectral remote sensing images as shown in FIG. 5, comprising:

(50) a generation unit 1, which is configured to generate a number of three-dimensional Gabor filters according to set frequency and direction parameter values;

(51) a calculation unit 2, which is configured to convolute hyperspectral remote sensing images with the generated three-dimensional Gabor filters to obtain a number of three-dimensional Gabor features;

(52) a selection unit 3, which is configured to select a number of three-dimensional Gabor features, classification contribution degrees to various classes of which meet preset requirements, from the obtained three-dimensional Gabor features; and

(53) a classification unit 4, which is configured to classify the hyperspectral remote sensing images by a multi-task joint sparse representation-based classification means by using the selected three-dimensional Gabor features.

(54) Further, the generation unit 1 is particularly configured to:

(55) firstly, set frequency and direction parameter values, and then generate 64 three-dimensional Gabor filters according to the set parameter values;

(56) f.sub.t represents a frequency and {φ.sub.k, θ.sub.j} represents a direction; the four frequencies are set as f.sub.t∈[0.5, 0.25, 0.125, 0.0625], and the 16 directions are obtained by letting φ.sub.k and θ.sub.j each take values in [0, π/4, π/2, 3π/4];

(57) then the 64 three-dimensional Gabor filters are generated according to the formula

(58) \Psi_{f,\varphi,\theta}(x,y,b)=\frac{1}{(2\pi)^{3/2}\sigma^{3}}\exp\!\left(j2\pi(xu+yv+bw)\right)\exp\!\left(-\frac{x^{2}+y^{2}+b^{2}}{2\sigma^{2}}\right),

wherein u=f sin φ cos θ, v=f sin φ sin θ, w=f cos φ; u, v and w correspond to the three coordinate axes in a three-dimensional space; φ represents the included angle between the filter orientation and the w axis, and θ represents the included angle between the filter orientation and the u-v plane; φ.sub.k represents the k.sup.th value of φ, and θ.sub.j represents the j.sup.th value of θ; (x, y, b) respectively represent the x coordinate, the y coordinate and the spectral band coordinate of a pixel; and σ represents the width of the Gaussian envelope;

(59) finally, remove the 12 repeated filters among the 64 three-dimensional Gabor filters (when φ=0, all values of θ yield the same filter), wherein the remaining 52 filters serve as the final three-dimensional Gabor filters and are represented by {Ψ.sub.i, i=1, . . . , 52}.
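The count of 52 distinct filters can be checked with a short sketch: enumerating all 4×4×4=64 (f, φ, θ) parameter triples and comparing the resulting center-frequency vectors (u, v, w) shows that exactly 12 triples are duplicates, leaving 52. The helper name `direction` is illustrative only.

```python
import math

# The patent's parameter grid: 4 frequencies, and 4 values each for
# phi and theta, giving 4 * 4 * 4 = 64 (f, phi, theta) triples.
freqs = [0.5, 0.25, 0.125, 0.0625]
angles = [0.0, math.pi / 4, math.pi / 2, 3 * math.pi / 4]

def direction(f, phi, theta):
    """Centre-frequency vector of the filter on the (u, v, w) axes."""
    u = f * math.sin(phi) * math.cos(theta)
    v = f * math.sin(phi) * math.sin(theta)
    w = f * math.cos(phi)
    # Round so that numerically identical directions compare equal.
    return (round(u, 10), round(v, 10), round(w, 10))

all_triples = [(f, phi, theta) for f in freqs for phi in angles for theta in angles]
unique_dirs = {direction(f, phi, theta) for f, phi, theta in all_triples}

print(len(all_triples))   # 64 parameter combinations
print(len(unique_dirs))   # 52 distinct filters
```

Per frequency, φ=0 collapses the four θ values into a single filter (3 duplicates), so 4 frequencies contribute 4×3=12 duplicates in total.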

(60) The calculation unit 2 is particularly configured to:

(61) firstly, convolute hyperspectral remote sensing images with the generated 52 three-dimensional Gabor filters; and

(62) then calculate the amplitude of the convolution result to obtain 52 three-dimensional Gabor features;

(63) R represents the hyperspectral remote sensing images and Ψ.sub.t represents the generated t.sup.th three-dimensional Gabor filter; the amplitude calculation is then represented as M.sub.t(x,y,b)=|(R⊛Ψ.sub.t)(x,y,b)|, here, t∈{1, . . . , 52}, and thus, 52 three-dimensional Gabor features may be obtained, represented by {M.sub.t, t=1, . . . , 52}.
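The convolution-then-amplitude step can be sketched as follows. This is a naive direct convolution for clarity (an FFT-based one would be used in practice); the normalization constant, the kernel radius, and the "edge" boundary padding are assumptions, not details given in the patent.

```python
import numpy as np

rng = np.random.default_rng(0)

def gabor_kernel_3d(f, phi, theta, sigma=1.0, radius=2):
    # Complex 3-D Gabor kernel sampled on a (2r+1)^3 grid, following the
    # formula above with assumed normalisation 1 / ((2*pi)^(3/2) * sigma^3).
    u = f * np.sin(phi) * np.cos(theta)
    v = f * np.sin(phi) * np.sin(theta)
    w = f * np.cos(phi)
    ax = np.arange(-radius, radius + 1)
    x, y, b = np.meshgrid(ax, ax, ax, indexing="ij")
    norm = 1.0 / ((2 * np.pi) ** 1.5 * sigma ** 3)
    carrier = np.exp(2j * np.pi * (x * u + y * v + b * w))
    envelope = np.exp(-(x**2 + y**2 + b**2) / (2 * sigma**2))
    return norm * carrier * envelope

def gabor_feature(cube, kernel):
    # Magnitude of the 'same'-size convolution of a hyperspectral cube
    # with one 3-D Gabor kernel, computed naively for clarity.
    r = kernel.shape[0] // 2
    padded = np.pad(cube, r, mode="edge").astype(complex)
    out = np.empty(cube.shape, dtype=complex)
    for i, j, k in np.ndindex(cube.shape):
        patch = padded[i:i + 2*r + 1, j:j + 2*r + 1, k:k + 2*r + 1]
        out[i, j, k] = np.sum(patch * kernel[::-1, ::-1, ::-1])
    return np.abs(out)

cube = rng.random((6, 6, 8))             # tiny stand-in for the image R
K = gabor_kernel_3d(0.25, np.pi / 4, 0)
M = gabor_feature(cube, K)
print(M.shape)                           # one Gabor feature cube, same size as R
```

Each of the 52 kernels applied this way yields one feature cube M.sub.t of the same spatial and spectral size as R.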

(64) The selection unit 3 selects three-dimensional Gabor features by utilizing a Fisher discriminant criterion, with particular steps as follows:

(65) firstly, dividing each of the three-dimensional Gabor features into a training set and a test set;

(66) A.sup.t represents the training set of the t.sup.th three-dimensional Gabor feature M.sub.t, and G.sup.t represents the test set of the t.sup.th three-dimensional Gabor feature M.sub.t, here t∈{1, . . . , 52}; therefore all training sets {A.sup.t, t=1, . . . , 52} and test sets {G.sup.t, t=1, . . . , 52} of the 52 three-dimensional Gabor features may be obtained;

(67) then, respectively calculating an intra-class spacing and an inter-class spacing of each three-dimensional Gabor feature by using the training set {A.sup.t, t=1, . . . , 52};

(68) taking the t.sup.th three-dimensional Gabor feature as an example, σ.sub.W(A.sub.p.sup.t) represents the intra-class spacing of the p.sup.th class in the t.sup.th three-dimensional Gabor feature, and σ.sub.B(A.sub.p.sup.t) represents the inter-class spacing of the p.sup.th class in the t.sup.th three-dimensional Gabor feature, then:

(69) \sigma_{W}(A_{p}^{t})=\sum_{a_{i}\in A_{p}^{t}}(a_{i}-m_{p}^{t})^{T}(a_{i}-m_{p}^{t}),\qquad\sigma_{B}(A_{p}^{t})=n_{p}\,(m_{p}^{t}-m^{t})^{T}(m_{p}^{t}-m^{t})

(70) wherein A.sub.p.sup.t represents the training set of the p.sup.th class within the training set A.sup.t, more particularly, A.sup.t=[A.sub.1.sup.t, . . . , A.sub.P.sup.t]; a.sub.i represents one training sample in A.sub.p.sup.t; n.sub.p represents the number of samples in the p.sup.th class, p∈{1, . . . , P}, and P represents the number of classes; m.sup.t and m.sub.p.sup.t respectively represent the mean vectors of A.sup.t and A.sub.p.sup.t:

(71) m^{t}=\frac{1}{n}\sum_{a_{i}\in A^{t}}a_{i}\qquad\text{and}\qquad m_{p}^{t}=\frac{1}{n_{p}}\sum_{a_{i}\in A_{p}^{t}}a_{i};

(72) next, calculating the Fisher scores of the classes in the 52 three-dimensional Gabor features by using the intra-class spacings and the inter-class spacings calculated in the previous step;

(73) the Fisher scores of the classes in the 52 three-dimensional Gabor features are stored in a matrix F (F∈ℝ.sup.52×P); F.sub.p.sup.t represents the Fisher score of the p.sup.th class in the t.sup.th three-dimensional Gabor feature, then:

F.sub.p.sup.t=σ.sub.B(A.sub.p.sup.t)/σ.sub.W(A.sub.p.sup.t);

(74) wherein F∈ℝ.sup.52×P and P represents the number of classes; in fact, F.sub.p.sup.t is the element in the t.sup.th row and the p.sup.th column of the matrix F;

(75) finally, selecting the V elements with the highest scores from each column of the matrix F, and storing the indexes of the selected elements in a preliminary index matrix S, wherein S∈ℝ.sup.V×P; the repeated indexes in the preliminary index matrix are removed to obtain the finally-selected indexes, and the finally-selected indexes are stored in an index matrix I, I∈ℝ.sup.K×1; no repeated index exists in the index matrix I, and each index points to a specific three-dimensional Gabor feature. It can thus be seen that K three-dimensional Gabor features are selected from the 52 three-dimensional Gabor features.
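The Fisher-score computation and top-V selection above can be sketched as follows. This is an illustrative sketch on random toy data; the function names (`fisher_scores`, `select_features`) and the toy sizes are hypothetical, and each "feature" is flattened to a 2-D array whose rows are training samples.

```python
import numpy as np

rng = np.random.default_rng(1)

def fisher_scores(feats, labels):
    # F[t, p] = sigma_B / sigma_W of class p under feature t, following
    # the spacing definitions above; feats is a list of 2-D arrays.
    classes = np.unique(labels)
    F = np.zeros((len(feats), len(classes)))
    for t, A in enumerate(feats):
        m = A.mean(axis=0)                            # overall mean m^t
        for p, c in enumerate(classes):
            Ap = A[labels == c]
            mp = Ap.mean(axis=0)                      # class mean m_p^t
            sw = np.sum((Ap - mp) ** 2)               # intra-class spacing
            sb = len(Ap) * np.sum((mp - m) ** 2)      # inter-class spacing
            F[t, p] = sb / sw
    return F

def select_features(F, V):
    # Top-V rows per column -> preliminary index matrix S; the final
    # index set I is S with duplicate indexes removed.
    S = np.argsort(-F, axis=0)[:V, :]
    I = np.unique(S)
    return S, I

feats = [rng.random((20, 5)) for _ in range(10)]   # 10 toy "Gabor features"
labels = np.repeat([0, 1], 10)                     # 2 classes, 10 samples each
F = fisher_scores(feats, labels)
S, I = select_features(F, V=3)
print(F.shape, S.shape, len(I))    # score matrix, S, and K selected features
```

Here K = len(I) can be anywhere between V and V×P, depending on how much the per-class top-V lists overlap.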

(76) The classification unit 4 performs classification by a multi-task joint sparse representation-based classification means by using the three-dimensional Gabor features selected in the above steps, with particular steps as follows:

(77) firstly, dividing each selected three-dimensional Gabor feature into a training set and a test set;

(78) {A.sup.I(k), k=1, . . . , K} represents training sets corresponding to the K three-dimensional Gabor features, and {G.sup.I(k), k=1, . . . , K} represents corresponding test sets;

(79) then, calculating sparse coding coefficients of three-dimensional Gabor features of test samples in the test sets on the training sets;

(80) g represents one test sample, and the K three-dimensional Gabor features corresponding to it are represented as {g.sup.I(k), k=1, . . . , K}; if the sparse coding coefficients of the K three-dimensional Gabor features {g.sup.I(k), k=1, . . . , K} of the test sample on the training sets {A.sup.I(k), k=1, . . . , K} are respectively represented as {α.sup.k, k=1, . . . , K}, then:

(81) \{\hat{\alpha}^{k}\}=\arg\min_{\{\alpha^{k}\}}\sum_{k=1}^{K}\left(\left\|g^{I(k)}-A^{I(k)}\alpha^{k}\right\|_{2}^{2}+\lambda\left\|\alpha^{k}\right\|_{1}\right);

(82) wherein α.sup.k represents any vector of the same dimension as the solution α̂.sup.k, and λ represents the weighting coefficient of the L.sub.1 norm;

(83) finally, respectively reconstructing {g.sup.I(k), k=1, . . . , K} according to the preliminary index matrix S by using the sparse coding coefficients {α̂.sup.k, k=1, . . . , K} and the training sets {A.sup.I(k), k=1, . . . , K} to obtain K reconstruction errors and accumulating them, wherein the class with the minimal accumulated reconstruction error serves as the class of the test sample g; ĉ represents the class predicted for the test sample g, then:

(84) \hat{c}=\arg\min_{p\in\{1,\ldots,P\}}\sum_{i=S(1,p)}^{S(V,p)}\left\|g^{i}-A_{p}^{i}\alpha_{p}^{i}\right\|_{2}^{2}

(85) wherein α.sub.p.sup.k represents the coefficients corresponding to the p.sup.th class in the sparse coding coefficients α.sup.k of the selected k.sup.th three-dimensional Gabor feature g.sup.I(k); and the final index matrix I is obtained by removing the repeated indexes in the preliminary index matrix S, and thus in the above formula, i∈{I(k), k=1, . . . , K}.

(86) Compared with the prior art, the present invention can bring the following beneficial effects:

(87) 1, in view of the shortcoming that the prior art is not suitable for linear fusion due to the large differences in the properties of color, shape and textural features, the present invention adopts three-dimensional Gabor features that are suitable for fusion; the used three-dimensional Gabor features contain rich local change information of a signal and are competent in feature characterization;

(88) 2, in view of the shortcoming that information which is not conducive to classification is produced in the feature fusion process, the present invention selects fewer, more discriminative three-dimensional Gabor features for each class by using the Fisher discriminant criterion, which not only makes full use of the high-level semantics hidden among the features, but also eliminates redundant information and reduces the classification time complexity;

(89) 3, in view of the shortcoming that the weighting coefficients for fusing the reconstruction errors of collaborative representation are difficult to determine, the three-dimensional Gabor features selected by the Fisher discriminant criterion in the present invention are sufficiently discriminative that no fusion weights need to be learned;

(90) 4, in view of the shortcoming that the sample covariance matrix trained on small samples by a collaborative representation method is irreversible, the present invention uses sparse coding, so that the classification accuracy on small samples is not reduced by unstable coding coefficients.

(91) Experiments showed that the present invention may obtain higher classification accuracy than conventional multi-task classification means. The experiments used three real hyperspectral datasets. The first dataset was obtained by a visible/infrared imaging spectrometer sensor at the Kennedy Space Center (KSC), Florida, on Mar. 23, 1996. The dataset originally has 224 bands; 48 bands were removed due to water absorption and low signal-to-noise ratio, and the remaining 176 bands were retained. Each band has an image size of 421×444. The KSC dataset consists of 13 classes of ground objects, containing 5,211 labeled samples. The second dataset was captured by a ROSIS-03 sensor at the University of Pavia (PaviaU), Italy. The dataset retained 103 bands after 12 noise bands were removed. Each band has a size of 610×340 and a spatial resolution of 1.3 m/pixel. There are 42,776 labeled samples in the dataset, for a total of 9 classes of ground objects. The third dataset was captured over the University of Houston campus and its surrounding region, Houston, on Jun. 23, 2012, by the National Science Foundation-funded Center for Airborne Laser Mapping. The dataset has 144 bands; each band has an image size of 349×1905 and a spatial resolution of 2.5 m/pixel. There are 15,029 labeled samples in the dataset, for a total of 15 classes of ground objects.

(92) In the experiment process, the three-dimensional Gabor features of each dataset were extracted first, and then the three-dimensional Gabor features with stronger characterization ability were selected from these features. The selected three-dimensional Gabor features were divided into training sets with 3, 5, 10, 15 or 20 samples per class, and the remaining samples formed the corresponding test sets. In order to demonstrate the validity of the present invention, the classification accuracy of the present invention is compared with the accuracy of the conventional methods. Experimental results show that the classification accuracy of the present invention is much higher than that of the conventional methods. Taking the KSC dataset as an example, the classification accuracy of the MTJSRC-GS reaches 92.48% when each class has 3 training samples, while the classification accuracy of the conventional MTJCRC is 83.70% and the classification accuracy of the MTJSRC is 89.34%.

(93) Particularly, in an actual use process:

(94) the types of the filters may be adjusted in the step S1, for instance, to three-dimensional Log-Gabor filters; and the frequency and angle parameters of the three-dimensional Gabor filters can also be adjusted;

(95) a three-dimensional Gabor feature selection method in the step S3 may be replaced by other methods, and the number of the selected three-dimensional Gabor features may also be adjusted; and

(96) the sparse representation-based classification (SRC) method in the step S4 may be replaced by other methods, for instance, collaborative representation-based classification (CRC).

(97) The foregoing is merely illustrative of preferred embodiments of the present invention and is not intended to limit the present invention, and any modifications, equivalent substitutions and improvements within the spirit and principles of the present invention are intended to be encompassed within a protective scope of the present invention.