Image quality objective evaluation method based on manifold feature similarity

20170177975 · 2017-06-22

Abstract

An image quality objective evaluation method based on manifold feature similarity is disclosed. The method first applies visual saliency and a visual threshold to discard image blocks that are unimportant to visual perception, using a rough selection followed by a fine selection. A best mapping matrix is then used to extract manifold feature vectors from the selected image blocks of the original undistorted natural scene image and of the distorted image to be evaluated. The structural distortion of the distorted image is measured from the manifold feature similarity; the effect of luminance change on the human eye is accounted for by computing a luminance distortion from the mean values of the image blocks; and a quality score is finally obtained from the structural distortion and the luminance distortion. This gives the method a higher evaluation accuracy and extends its evaluation capability to a wide variety of distortion types.

Claims

1. An image quality objective evaluation method based on manifold feature similarity, comprising the steps of:

(1) selecting a plurality of undistorted natural scene images; dividing every undistorted natural scene image into non-overlapping image blocks, each of which has a size of 8×8; randomly selecting N image blocks from all image blocks of all undistorted natural scene images and taking every selected image block as a training sample, recording an i-th training sample as x_i, wherein 5000 ≤ N ≤ 20000, 1 ≤ i ≤ N; arranging the color values of the R, G and B channels of all pixel points in every training sample into a color vector, recording the color vector formed from x_i as x_i^col, wherein a dimension of x_i^col is 192×1, values from the 1st element to the 64th element in x_i^col correspond to the color values of the R channel of every pixel point in x_i obtained by progressive scanning, values from the 65th element to the 128th element correspond to the color values of the G channel of every pixel point in x_i obtained by progressive scanning, and values from the 129th element to the 192nd element correspond to the color values of the B channel of every pixel point in x_i obtained by progressive scanning; subtracting the average value of all elements of the corresponding color vector from the value of every element of that color vector, so as to centralize the color vector of every training sample, recording the centralized color vector of x_i^col as x̂_i^col; and finally recording the matrix formed by all centralized color vectors as X, here X=[x̂_1^col, x̂_2^col, …, x̂_N^col], wherein a dimension of X is 192×N, x̂_1^col, x̂_2^col, …, x̂_N^col respectively represent the centralized color vectors of the R, G and B color values of all pixel points in the 1st, the 2nd, …, and the N-th training sample, and the symbol [ ] is a vector representation symbol;

(2) reducing the dimension of X and whitening X by principal component analysis (PCA), recording the dimension-reduced and whitened matrix as X^W, wherein a dimension of X^W is M×N, M is a preset low-dimensional dimension, 1<M<192;

(3) training the N column vectors in X^W by the existing orthogonal locality preserving projection (OLPP) algorithm for obtaining a best mapping matrix J^W of 8 orthogonal bases in X^W, wherein a dimension of J^W is 8×M; and then calculating the best mapping matrix of the original sample space according to J^W and the whitening matrix, recording the best mapping matrix of the original sample space as J, J = J^W W, wherein a dimension of J is 8×192, W represents the whitening matrix, and a dimension of W is M×192;

(4) regarding I_org as an original undistorted natural scene image, regarding I_dis as a distorted image of I_org, and taking I_dis as the distorted image to be evaluated; respectively dividing I_org and I_dis into non-overlapping image blocks, each of which has a size of 8×8, recording a j-th image block in I_org as x_j^ref and a j-th image block in I_dis as x_j^dis, wherein 1 ≤ j ≤ N′, N′ represents the amount of image blocks in I_org and also the amount of image blocks in I_dis; arranging the color values of the R, G and B channels of all pixel points of every image block in I_org into a color vector, recording the color vector formed from x_j^ref as x_j^ref,col, and likewise arranging the color values of the R, G and B channels of all pixel points of every image block in I_dis into a color vector, recording the color vector formed from x_j^dis as x_j^dis,col, wherein a dimension of x_j^ref,col and x_j^dis,col is 192×1, and the elements of each vector are arranged from the R, G and B channels in progressive-scan order as in step (1); centralizing the color vector of every image block in I_org and in I_dis by subtracting from every element the average value of all elements of that vector, recording the centralized color vector of x_j^ref,col as x̂_j^ref,col and the centralized color vector of x_j^dis,col as x̂_j^dis,col; and finally recording the matrix formed by all centralized color vectors in I_org as X^ref, here X^ref=[x̂_1^ref,col, x̂_2^ref,col, …, x̂_{N′}^ref,col], and the matrix formed by all centralized color vectors in I_dis as X^dis, here X^dis=[x̂_1^dis,col, x̂_2^dis,col, …, x̂_{N′}^dis,col], wherein a dimension of X^ref and X^dis is 192×N′, and the symbol [ ] is a vector representation symbol;

(5) calculating the structural difference between every column vector in X^ref and the corresponding column vector in X^dis, recording the structural difference between x̂_j^ref,col and x̂_j^dis,col as AVE(x̂_j^ref,col, x̂_j^dis,col); arranging the obtained N′ structural differences in sequence into a vector v with a dimension of 1×N′, wherein the value of a j-th element is v_j = AVE(x̂_j^ref,col, x̂_j^dis,col); then obtaining a rough-selection undistorted image block set and a rough-selection distorted image block set, which specifically comprises the steps of: (A) designing an image block rough-selection threshold TH_1; (B) extracting the elements whose values are larger than or equal to TH_1 from v; and (C) taking the set of image blocks in I_org corresponding to the extracted elements as the rough-selection undistorted image block set Y^ref = {x_j^ref | AVE(x̂_j^ref,col, x̂_j^dis,col) ≥ TH_1, 1 ≤ j ≤ N′}, and the set of image blocks in I_dis corresponding to the extracted elements as the rough-selection distorted image block set Y^dis = {x_j^dis | AVE(x̂_j^ref,col, x̂_j^dis,col) ≥ TH_1, 1 ≤ j ≤ N′}; then obtaining a fine-selection undistorted image block set and a fine-selection distorted image block set, which specifically comprises the steps of: (a) respectively calculating the saliency maps of I_org and I_dis using saliency detection based on simple priors (SDSP), recorded as f^ref and f^dis; (b) respectively dividing f^ref and f^dis into non-overlapping image blocks, each of which has a size of 8×8; (c) calculating the average of the pixel values of all pixel points of every image block in f^ref, recording the average for a j-th image block in f^ref as vs_j^ref, and likewise recording the average for a j-th image block in f^dis as vs_j^dis, wherein 1 ≤ j ≤ N′; (d) obtaining the maximum between vs_j^ref and vs_j^dis, vs_{j,max} = max(vs_j^ref, vs_j^dis), wherein max( ) is the maximum value function; and (e) finely selecting part of the blocks of the rough-selection undistorted image block set as the fine-selection undistorted image block set Ỹ^ref = {x_j^ref | AVE(x̂_j^ref,col, x̂_j^dis,col) ≥ TH_1 and vs_{j,max} ≥ TH_2, 1 ≤ j ≤ N′}, and part of the blocks of the rough-selection distorted image block set as the fine-selection distorted image block set Ỹ^dis = {x_j^dis | AVE(x̂_j^ref,col, x̂_j^dis,col) ≥ TH_1 and vs_{j,max} ≥ TH_2, 1 ≤ j ≤ N′}, wherein TH_2 is a designed image block fine-selection threshold;

(6) calculating the manifold feature vector of every image block in the fine-selection undistorted image block set, recording a t-th manifold feature vector of the fine-selection undistorted set as r_t, here r_t = J x̂_t^ref,col; calculating the manifold feature vector of every image block in the fine-selection distorted image block set, recording a t-th manifold feature vector of the fine-selection distorted set as d_t, here d_t = J x̂_t^dis,col, wherein 1 ≤ t ≤ K, K represents the amount of image blocks in the fine-selection undistorted image block set and also in the fine-selection distorted image block set, a dimension of r_t and d_t is 8×1, x̂_t^ref,col represents the centralized color vector of the t-th image block of the fine-selection undistorted set, and x̂_t^dis,col represents the centralized color vector of the t-th image block of the fine-selection distorted set; defining the matrix R of the manifold feature vectors of all blocks of the fine-selection undistorted set and the matrix D of the manifold feature vectors of all blocks of the fine-selection distorted set, wherein a dimension of R and D is 8×K, a t-th column vector in R is r_t and a t-th column vector in D is d_t; and then calculating the manifold feature similarity of I_org and I_dis, recorded as MFS_1, here

MFS_1 = (1/(8K)) Σ_{m=1}^{8} Σ_{t=1}^{K} (2 R_{m,t} D_{m,t} + C_1) / ((R_{m,t})² + (D_{m,t})² + C_1),

wherein R_{m,t} represents the value at the m-th row and t-th column in R, D_{m,t} represents the value at the m-th row and t-th column in D, and C_1 is a very small constant ensuring result stability;

(7) calculating the luminance similarity of I_org and I_dis, recorded as MFS_2, here

MFS_2 = | Σ_{t=1}^{K} (μ_t^ref − μ̄^ref)(μ_t^dis − μ̄^dis) + C_2 | / ( √(Σ_{t=1}^{K} (μ_t^ref − μ̄^ref)²) · √(Σ_{t=1}^{K} (μ_t^dis − μ̄^dis)²) + C_2 ),

wherein μ_t^ref represents the average luminance of all pixel points in the t-th image block of the fine-selection undistorted set, μ̄^ref = (Σ_{t=1}^{K} μ_t^ref)/K; μ_t^dis represents the average luminance of all pixel points in the t-th image block of the fine-selection distorted set, μ̄^dis = (Σ_{t=1}^{K} μ_t^dis)/K; and C_2 is a very small constant; and

(8) linearly weighting MFS_1 and MFS_2 to obtain the quality score of I_dis, recorded as MFS, here MFS = ω·MFS_2 + (1−ω)·MFS_1, wherein ω is adapted for adjusting the relative importance of MFS_1 and MFS_2, 0 < ω < 1.

2. The image quality objective evaluation method based on manifold feature similarity, as recited in claim 1, wherein in step (2), an acquisition method of X^W comprises the steps of: (2A) calculating a covariance matrix of X and recording the covariance matrix as C, C = (1/N)(X X^T), wherein a dimension of C is 192×192, X^T is the transposed matrix of X; (2B) eigenvalue-decomposing C for obtaining an eigenvalue diagonal matrix and an eigenvector matrix, respectively recorded as Λ and E, wherein a dimension of Λ is 192×192, Λ = diag(λ_1, λ_2, …, λ_192), λ_1, λ_2, …, λ_192 respectively represent the 1st, the 2nd, …, and the 192nd eigenvalue after decomposition; a dimension of E is 192×192, E=[e_1 e_2 … e_192], e_1, e_2, …, e_192 respectively represent the 1st, the 2nd, …, and the 192nd eigenvector after decomposition, and a dimension of each of e_1, e_2, …, e_192 is 192×1; (2C) calculating a whitening matrix and recording the whitening matrix as W, W = Λ_{M×192}^{−1/2} E^T, wherein a dimension of W is M×192, Λ_{M×192}^{−1/2} is the M×192 matrix whose only non-zero entries are 1/√λ_1, 1/√λ_2, …, 1/√λ_M on the main diagonal, λ_M represents the M-th eigenvalue after decomposition, M is a preset low-dimensional dimension, 1<M<192, and E^T is the transposed matrix of E; and (2D) calculating the dimension-reduced and whitened matrix X^W, wherein X^W = W X.

3. The image quality objective evaluation method based on manifold feature similarity, as recited in claim 1, wherein in the step (5), AVE(x̂_j^ref,col, x̂_j^dis,col) = | √(Σ_{g=1}^{192} (x̂_j^ref,col(g))²) − √(Σ_{g=1}^{192} (x̂_j^dis,col(g))²) |, here the symbol | | is an absolute value symbol, x̂_j^ref,col(g) represents the value of a g-th element in x̂_j^ref,col, and x̂_j^dis,col(g) represents the value of a g-th element in x̂_j^dis,col.

4. The image quality objective evaluation method based on manifold feature similarity, as recited in claim 2, wherein in the step (5), AVE(x̂_j^ref,col, x̂_j^dis,col) = | √(Σ_{g=1}^{192} (x̂_j^ref,col(g))²) − √(Σ_{g=1}^{192} (x̂_j^dis,col(g))²) |, here the symbol | | is an absolute value symbol, x̂_j^ref,col(g) represents the value of a g-th element in x̂_j^ref,col, and x̂_j^dis,col(g) represents the value of a g-th element in x̂_j^dis,col.

5. The image quality objective evaluation method based on manifold feature similarity, as recited in claim 3, wherein in the step (A) of the step (5), TH_1 = median(v), here median( ) is a median selection function, and median(v) represents the median of the values of all elements in v.

6. The image quality objective evaluation method based on manifold feature similarity, as recited in claim 4, wherein in the step (A) of the step (5), TH_1 = median(v), here median( ) is a median selection function, and median(v) represents the median of the values of all elements in v.

7. The image quality objective evaluation method based on manifold feature similarity, as recited in claim 3, wherein in the step (e) of the step (5), the value of TH_2 is the maximum value ranked at the former 60% position after arranging all maximum values obtained in the step (d) from big to small.

8. The image quality objective evaluation method based on manifold feature similarity, as recited in claim 4, wherein in the step (e) of the step (5), the value of TH_2 is the maximum value ranked at the former 60% position after arranging all maximum values obtained in the step (d) from big to small.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0032] The drawing is an overall implementation block diagram of an image quality objective evaluation method based on manifold feature similarity of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

[0033] The present invention is further described in detail below with reference to the drawing and embodiments.

[0034] An excellent image quality evaluation method should reflect human visual perception characteristics well. Studies of visual perception show that the manifold is the basis of perception: human perception rests on cognitive manifolds and topological continuity, i.e., perception is confined to low-dimensional manifolds, and the brain perceives things in a manifold way. In general, the activity of a neuronal population in the brain can be described as the aggregate of neural discharge rates, and can therefore be represented by a point in an abstract space whose dimension equals the number of neurons. Studies have found that the discharge rate of every neuron in a population can be represented by a smooth function of a few variables, which shows that neuronal population activity is confined to low-dimensional manifolds. Applying image manifold characteristics to visual quality evaluation therefore yields results that agree more closely with subjectively perceived quality. Moreover, manifold learning helps uncover the intrinsic geometric structure of images in low-dimensional manifolds, and thereby represents the nonlinear manifold essence of things.

[0035] According to the human visual characteristic of perceiving in a manifold way and the manifold learning theory, the present invention provides an image quality objective evaluation method based on manifold feature similarity (MFS). During the training stage, the method uses the orthogonal locality preserving projection manifold learning algorithm to obtain the best mapping matrix for extracting manifold features of images. During the quality prediction stage, the undistorted natural scene image and the distorted image are divided into image blocks; the mean value of every image block is removed so that the color vectors of all image blocks have zero mean, and the manifold feature similarity is calculated on this basis, while the removed block averages are used to calculate the luminance similarity. The manifold feature similarity represents the structural difference between the two images, and the luminance similarity measures the brightness distortion of the distorted image. Finally, the two similarities are balanced to obtain the overall visual quality of the distorted image.

[0036] The drawing is an overall implementation block diagram of an image quality objective evaluation method based on manifold feature similarity of the present invention. The image quality objective evaluation method comprises steps of:

[0037] (1) selecting a plurality of undistorted natural scene images; and then dividing every undistorted natural scene image into non-overlapping image blocks, each of which has a size of 8×8; and then randomly selecting N image blocks from all image blocks of all undistorted natural scene images, taking every selected image block as a training sample, recording an i-th training sample as x_i, wherein 5000 ≤ N ≤ 20000, 1 ≤ i ≤ N; and then arranging the color values of the R, G and B channels of all pixel points in every training sample into a color vector, recording the color vector formed from x_i as x_i^col, wherein a dimension of x_i^col is 192×1; values from the 1st element to the 64th element in x_i^col correspond to the color values of the R channel of every pixel point in x_i obtained by progressive scanning, that is to say, the value of the 1st element in x_i^col corresponds to the color value of the R channel of the pixel point at the 1st row, 1st column in x_i, the value of the 2nd element corresponds to the color value of the R channel of the pixel point at the 1st row, 2nd column in x_i, and so on; values from the 65th element to the 128th element in x_i^col correspond to the color values of the G channel of every pixel point in x_i obtained by progressive scanning, that is to say, the value of the 65th element corresponds to the color value of the G channel of the pixel point at the 1st row, 1st column in x_i, the value of the 66th element corresponds to the color value of the G channel of the pixel point at the 1st row, 2nd column in x_i, and so on; values from the 129th element to the 192nd element in x_i^col correspond to the color values of the B channel of every pixel point in x_i obtained by progressive scanning, that is to say, the value of the 129th element corresponds to the color value of the B channel of the pixel point at the 1st row, 1st column in x_i, the value of the 130th element corresponds to the color value of the B channel of the pixel point at the 1st row, 2nd column in x_i, and so on; and then subtracting the average value of all elements of the corresponding color vector from the value of every element of that color vector, so as to centralize the color vector of every training sample, recording the centralized color vector of x_i^col as x̂_i^col, wherein the value of every element in x̂_i^col equals the value of the element at the corresponding position in x_i^col minus the average value of all elements in x_i^col; and finally recording the matrix formed by all centralized color vectors as X, here X=[x̂_1^col, x̂_2^col, …, x̂_N^col], wherein a dimension of X is 192×N, x̂_1^col, x̂_2^col, …, x̂_N^col respectively represent the centralized color vectors of the 1st, the 2nd, …, and the N-th training sample, and the symbol [ ] is a vector representation symbol;
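
The block extraction and centralization of step (1) can be sketched in NumPy. This is an illustrative sketch, not the patented implementation; the 8×8 block size and the R-then-G-then-B raster ordering follow the step above.

```python
import numpy as np

def block_color_vectors(img):
    """Split an H x W x 3 RGB image into non-overlapping 8x8 blocks and
    return a 192 x N matrix of zero-mean color vectors: for each block,
    the R channel in raster-scan order, then G, then B, as in step (1)."""
    h, w, _ = img.shape
    cols = []
    for y in range(0, h - h % 8, 8):
        for x in range(0, w - w % 8, 8):
            block = img[y:y + 8, x:x + 8, :].astype(np.float64)
            # raster-scan each channel, then concatenate R | G | B -> 192 elements
            v = np.concatenate([block[:, :, c].ravel() for c in range(3)])
            cols.append(v - v.mean())   # centralize: subtract the vector's own mean
    return np.stack(cols, axis=1)       # 192 x N

# a toy 16 x 16 image yields four 8x8 blocks
img = np.arange(16 * 16 * 3, dtype=np.float64).reshape(16, 16, 3)
X = block_color_vectors(img)
print(X.shape)                          # (192, 4)
print(np.allclose(X.mean(axis=0), 0))   # True: each column has zero mean
```

In the training stage the columns of X would be the N randomly selected training samples; in the prediction stage the same routine yields X^ref and X^dis.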

[0038] herein, the sizes of the plurality of undistorted natural scene images may be all the same, all different, or partly the same; while specifically implementing, ten undistorted natural scene images are selected; the value range of N is determined through a lot of experiments: if the value of N is too small (such as smaller than 5000), namely the amount of image blocks is too few, the training accuracy will be greatly affected; if the value of N is too big (such as bigger than 20000), namely the amount of image blocks is large, the training accuracy improves only slightly while the computational complexity increases considerably; therefore, the value range of N is limited to 5000 ≤ N ≤ 20000, and while specifically implementing, N=20000; because a color image has R, G and B channels, the color vector of every training sample has a length of 8×8×3=192;

[0039] (2) reducing the dimension of X and whitening X by principal component analysis (PCA), recording the dimension-reduced and whitened matrix as X^W, wherein a dimension of X^W is M×N, M is a preset low-dimensional dimension, 1<M<192; in this embodiment, M=8; wherein an acquisition method of X^W comprises steps of:

[0040] (2A) calculating a covariance matrix of X and recording the covariance matrix as C,

[00009] C = (1/N)(X X^T),

wherein a dimension of C is 192×192, X^T is the transposed matrix of X;

[0041] (2B) eigenvalue-decomposing C by a standard eigenvalue decomposition for obtaining an eigenvalue diagonal matrix and an eigenvector matrix, respectively recording them as Λ and E, wherein a dimension of Λ is 192×192,

[00010] Λ = diag(λ_1, λ_2, …, λ_192),

λ_1, λ_2, …, λ_192 respectively represent the 1st, the 2nd, …, and the 192nd eigenvalue after decomposition; a dimension of E is 192×192, E=[e_1 e_2 … e_192], e_1, e_2, …, e_192 respectively represent the 1st, the 2nd, …, and the 192nd eigenvector after decomposition, and a dimension of each of e_1, e_2, …, e_192 is 192×1;

[0042] (2C) calculating a whitening matrix and recording the whitening matrix as W, W = Λ_{M×192}^{−1/2} E^T, wherein a dimension of W is M×192,

[00011] Λ_{M×192}^{−1/2} = [ 1/√λ_1 0 … 0 … 0 ; 0 1/√λ_2 … 0 … 0 ; ⋮ ; 0 0 … 1/√λ_M … 0 ],

i.e. the M×192 matrix whose only non-zero entries are 1/√λ_1, …, 1/√λ_M on the main diagonal, wherein λ_M represents the M-th eigenvalue after decomposition, and Λ_{M×192} is the matrix formed by the former M rows of Λ, namely,

[00012] Λ_{M×192} = [ λ_1 0 … 0 … 0 ; 0 λ_2 … 0 … 0 ; ⋮ ; 0 0 … λ_M … 0 ];

M is a preset low-dimensional dimension, 1<M<192; in this embodiment, M=8; in the experiment, the former 8 rows of Λ^{−1/2}, namely the former 8 principal components, are adopted for training, that is to say, the dimension of X after dimension reduction and whitening is reduced from 192 to 8; E^T is the transposed matrix of E; and

[0043] (2D) calculating the dimension-reduced and whitened matrix X.sup.W wherein X.sup.W=WX;
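
Steps (2A) through (2D) above can be sketched as follows. This is a minimal NumPy sketch under the assumption of zero-mean input columns; the eigenvalue regularizer 1e-12 is an implementation detail added here for numerical safety, not part of the described method.

```python
import numpy as np

def pca_whiten(X, M=8):
    """Steps (2A)-(2D): covariance C, eigendecomposition of C, whitening
    matrix W = diag(1/sqrt(lambda_1..M)) E_M^T, and X_W = W X.
    X is 192 x N; returns X_W (M x N) and W (M x 192)."""
    N = X.shape[1]
    C = (X @ X.T) / N                       # (2A) covariance, 192 x 192
    lam, E = np.linalg.eigh(C)              # (2B) eigenvalues ascending
    order = np.argsort(lam)[::-1]           # sort eigenpairs descending
    lam, E = lam[order], E[:, order]
    W = np.diag(1.0 / np.sqrt(lam[:M] + 1e-12)) @ E[:, :M].T  # (2C) M x 192
    return W @ X, W                         # (2D)

rng = np.random.default_rng(0)
X = rng.standard_normal((192, 5000))
X_W, W = pca_whiten(X, M=8)
# after whitening, the M retained components have identity covariance
cov = (X_W @ X_W.T) / X.shape[1]
print(np.allclose(cov, np.eye(8), atol=1e-6))   # True
```

The identity covariance of X_W is exactly the property the OLPP training of step (3) relies on.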

[0044] (3) training the N column vectors in X^W by the existing orthogonal locality preserving projection (OLPP) algorithm for obtaining a best mapping matrix J^W of 8 orthogonal bases in X^W, wherein a dimension of J^W is 8×M; after learning, transforming the best mapping matrix back from the whitened sample space to the original sample space, that is, calculating the best mapping matrix of the original sample space according to J^W and the whitening matrix, recording the best mapping matrix of the original sample space as J, J = J^W W, wherein a dimension of J is 8×192, W represents the whitening matrix, and a dimension of W is M×192; in the present invention, J is regarded as a model of how the brain perceives in a manifold way, and is adopted for extracting the manifold features of image blocks;
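
Once J is available, extracting manifold features and scoring their similarity (steps (3) and (6) of claim 1) reduce to a matrix product and an SSIM-style ratio. In this sketch a random matrix stands in for the trained OLPP mapping J, and the constant C1 = 1e-3 is an illustrative choice; neither is prescribed by the method.

```python
import numpy as np

def manifold_features(J, X_cols):
    """Apply the best mapping matrix J (8 x 192) to centralized block
    color vectors (192 x K), giving 8-D manifold feature vectors."""
    return J @ X_cols                       # 8 x K

def mfs1(R, D, C1=1e-3):
    """Manifold feature similarity of step (6):
    MFS_1 = (1/(8K)) * sum_{m,t} (2 R_mt D_mt + C1) / (R_mt^2 + D_mt^2 + C1)."""
    K = R.shape[1]
    s = (2.0 * R * D + C1) / (R ** 2 + D ** 2 + C1)
    return s.sum() / (8 * K)

rng = np.random.default_rng(1)
J = rng.standard_normal((8, 192))           # stand-in for the trained mapping
X_ref = rng.standard_normal((192, 10))      # stand-in centralized block vectors
R = manifold_features(J, X_ref)
D = manifold_features(J, X_ref)             # identical blocks
print(round(mfs1(R, D), 6))                 # 1.0: identical images score 1
```

For distorted inputs D diverges from R and MFS_1 drops below 1, which is what lets the score rank distortion severity.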

[0045] (4) regarding I.sub.org as an original undistorted natural scene image, and regarding I.sub.dis, a distorted image of I.sub.org, as the distorted image to be evaluated; and then respectively dividing I.sub.org and I.sub.dis into non-overlapping image blocks, each of which having a size of 8×8, recording a j.sup.th image block in I.sub.org as x.sub.j.sup.ref, recording a j.sup.th image block in I.sub.dis as x.sub.j.sup.dis, wherein 1≤j≤N, N represents an amount of the image blocks in I.sub.org, and also represents an amount of the image blocks in I.sub.dis; and then arranging color values of R, G and B channels of all pixel points of every image block in I.sub.org for forming a color vector, recording the color vector formed by the color values of the R, G and B channels of all pixel points in x.sub.j.sup.ref as x.sub.j.sup.ref,col, arranging color values of R, G and B channels of all pixel points of every image block in I.sub.dis for forming a color vector, recording the color vector formed by the color values of the R, G and B channels of all pixel points in x.sub.j.sup.dis as x.sub.j.sup.dis,col, wherein a dimension of x.sub.j.sup.ref,col and x.sub.j.sup.dis,col is 192×1, values from a 1.sup.st element to a 64.sup.th element in x.sub.j.sup.ref,col are respectively corresponding to color values of the R channel of every pixel point in x.sub.j.sup.ref obtained by a way of progressive scanning, values from a 65.sup.th element to a 128.sup.th element in x.sub.j.sup.ref,col are respectively corresponding to color values of the G channel of every pixel point in x.sub.j.sup.ref obtained by a way of progressive scanning, values from a 129.sup.th element to a 192.sup.nd element in x.sub.j.sup.ref,col are respectively corresponding to color values of the B channel of every pixel point in x.sub.j.sup.ref obtained by a way of progressive scanning; values from a 1.sup.st element to a 64.sup.th element in x.sub.j.sup.dis,col are respectively corresponding to color values of the R channel of every pixel point in x.sub.j.sup.dis obtained by a way of progressive scanning, values from a 65.sup.th element to a 128.sup.th element in x.sub.j.sup.dis,col are respectively corresponding to color values of the G channel of every pixel point in x.sub.j.sup.dis obtained by a way of progressive scanning, values from a 129.sup.th element to a 192.sup.nd element in x.sub.j.sup.dis,col are respectively corresponding to color values of the B channel of every pixel point in x.sub.j.sup.dis obtained by a way of progressive scanning; and then subtracting an average value of the values of all elements in a corresponding color vector from a value of every element in the corresponding color vector of every image block in I.sub.org, so as to centralizedly treat the corresponding color vector of every image block in I.sub.org, recording the centralizedly treated color vector in x.sub.j.sup.ref,col as {circumflex over (x)}.sub.j.sup.ref,col, subtracting an average value of the values of all elements in a corresponding color vector from a value of every element in the corresponding color vector of every image block in I.sub.dis, so as to centralizedly treat the corresponding color vector of every image block in I.sub.dis, recording the centralizedly treated color vector in x.sub.j.sup.dis,col as {circumflex over (x)}.sub.j.sup.dis,col; and finally recording a matrix formed by all centralizedly treated color vectors in I.sub.org as X.sup.ref, here X.sup.ref=[{circumflex over (x)}.sub.1.sup.ref,col, {circumflex over (x)}.sub.2.sup.ref,col, . . . , {circumflex over (x)}.sub.N.sup.ref,col], recording a matrix formed by all centralizedly treated color vectors in I.sub.dis as X.sup.dis, here X.sup.dis=[{circumflex over (x)}.sub.1.sup.dis,col, {circumflex over (x)}.sub.2.sup.dis,col, . . . , {circumflex over (x)}.sub.N.sup.dis,col], wherein a dimension of X.sup.ref and X.sup.dis is 192×N, {circumflex over (x)}.sub.1.sup.ref,col, {circumflex over (x)}.sub.2.sup.ref,col, . . . , {circumflex over (x)}.sub.N.sup.ref,col respectively represent a centralizedly treated color vector of color values of R, G and B channels of all pixel points of a 1.sup.st image block in I.sub.org, a centralizedly treated color vector of color values of R, G and B channels of all pixel points in a 2.sup.nd image block in I.sub.org, . . . , and a centralizedly treated color vector of color values of R, G and B channels of all pixel points in an N.sup.th image block in I.sub.org; {circumflex over (x)}.sub.1.sup.dis,col, {circumflex over (x)}.sub.2.sup.dis,col, . . . , {circumflex over (x)}.sub.N.sup.dis,col respectively represent a centralizedly treated color vector of color values of R, G and B channels of all pixel points of a 1.sup.st image block in I.sub.dis, a centralizedly treated color vector of color values of R, G and B channels of all pixel points in a 2.sup.nd image block in I.sub.dis, . . . , and a centralizedly treated color vector of color values of R, G and B channels of all pixel points in an N.sup.th image block in I.sub.dis; and a symbol [ ] is a vector representation symbol;
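As a minimal illustrative sketch of the block partitioning and centralization of step (4), assuming a numpy environment (the function name `block_color_vectors` and the toy random image are not from the patent):

```python
import numpy as np

def block_color_vectors(img):
    """Split an RGB image (H x W x 3) into non-overlapping 8x8 blocks,
    arrange each block's R, then G, then B values in raster
    (progressive-scan) order into a 192-d vector, and subtract each
    vector's own mean (centralization). Returns a 192 x N matrix,
    one column per block."""
    h, w, _ = img.shape
    cols = []
    for y in range(0, h - h % 8, 8):
        for x in range(0, w - w % 8, 8):
            blk = img[y:y + 8, x:x + 8, :].astype(np.float64)
            # raster scan per channel: R first, then G, then B
            v = np.concatenate([blk[:, :, c].ravel() for c in range(3)])
            cols.append(v - v.mean())
    return np.stack(cols, axis=1)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 24, 3))   # toy 16 x 24 RGB "image"
X_ref = block_color_vectors(img)               # here: 192 x 6 (2 x 3 blocks)
```

Every column sums to zero after centralization, which is what makes the later sum-of-squares comparison a contrast/structure measure rather than a brightness measure.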

[0046] (5) obtaining, for every image block, the block left after the average value is subtracted from the value of every element in the corresponding color vector; because this block contains contrast and structure information, regarding it as a structural block; calculating the structural difference between every column vector in X.sup.ref and the corresponding column vector in X.sup.dis by the absolute variance error (AVE), recording the structural difference between {circumflex over (x)}.sub.j.sup.ref,col and {circumflex over (x)}.sub.j.sup.dis,col as AVE({circumflex over (x)}.sub.j.sup.ref,col, {circumflex over (x)}.sub.j.sup.dis,col), here,

[00013] $AVE(\hat{x}_j^{ref,col}, \hat{x}_j^{dis,col}) = \left| \sum_{g=1}^{192} \left( \hat{x}_j^{ref,col}(g) \right)^2 - \sum_{g=1}^{192} \left( \hat{x}_j^{dis,col}(g) \right)^2 \right|$

wherein a symbol | | is an absolute value symbol, {circumflex over (x)}.sub.j.sup.ref,col(g) represents a value of a g.sup.th element in {circumflex over (x)}.sub.j.sup.ref,col, {circumflex over (x)}.sub.j.sup.dis,col(g) represents a value of a g.sup.th element in {circumflex over (x)}.sub.j.sup.dis,col;

[0047] and then arranging the obtained N structural differences in sequence for forming a vector with a dimension of 1×N, recording the vector as v, wherein a value of a j.sup.th element in v is v.sub.j, v.sub.j=AVE({circumflex over (x)}.sub.j.sup.ref,col, {circumflex over (x)}.sub.j.sup.dis,col);
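The AVE computation of step (5) and the assembly of the difference vector v can be sketched as follows; the 2-d toy vectors stand in for the 192-d centralized color vectors and are illustrative only:

```python
import numpy as np

def ave(x_ref, x_dis):
    """Absolute variance error (AVE) of step (5): the absolute difference
    between the sums of squared elements of two centralized vectors."""
    return abs(float(np.sum(x_ref ** 2) - np.sum(x_dis ** 2)))

# toy 2-d stand-ins for the 192-d centralized color vectors (2 block pairs)
X_ref = np.array([[1.0, 2.0],
                  [-1.0, -2.0]])
X_dis = np.array([[1.0, 0.0],
                  [-1.0, 0.0]])

# v holds one structural difference per column (i.e., per block pair)
v = np.array([ave(X_ref[:, j], X_dis[:, j]) for j in range(X_ref.shape[1])])
```

The first block pair is identical (difference 0), while the second loses all its energy in the distorted version (difference 8).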

[0048] and then obtaining a roughing selection undistorted image block set and a roughing selection distorted image block set, which specifically comprises steps of: (A) designing an image block roughing selection threshold TH.sub.1, here, TH.sub.1=median(v), wherein median( ) is a median selection function, median(v) represents selecting a mid-value of values of all elements in v; (B) extracting elements whose values are larger than or equal to TH.sub.1 from v; and (C) taking a set formed by image blocks corresponding to the extracted elements in I.sub.org as the roughing selection undistorted image block set, recording the roughing selection undistorted image block set as Y.sup.ref, here, Y.sup.ref={x.sub.j.sup.ref|AVE({circumflex over (x)}.sub.j.sup.ref,col, {circumflex over (x)}.sub.j.sup.dis,col)≥TH.sub.1, 1≤j≤N}; and taking a set formed by image blocks corresponding to the extracted elements in I.sub.dis as the roughing selection distorted image block set, recording the roughing selection distorted image block set as Y.sup.dis, here, Y.sup.dis={x.sub.j.sup.dis|AVE({circumflex over (x)}.sub.j.sup.ref,col, {circumflex over (x)}.sub.j.sup.dis,col)≥TH.sub.1, 1≤j≤N};
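The roughing selection of steps (A)–(C) reduces to a median threshold on v; a sketch with hypothetical AVE values:

```python
import numpy as np

v = np.array([0.5, 3.0, 1.2, 4.7, 2.1, 0.9])  # toy AVE values, one per block pair
TH1 = np.median(v)                             # roughing-selection threshold, step (A)
keep = np.flatnonzero(v >= TH1)                # indices of blocks kept in Y^ref / Y^dis
```

By construction the median threshold always keeps about half of the blocks, so the rough selection halves the work of the later stages regardless of the distortion level.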

[0049] wherein when structural differences are used to select blocks, only areas with large structural differences are considered; these areas generally correspond to areas with low quality in the distorted image, but not necessarily to the areas about which people are concerned most, and therefore a fine selection is needed, namely, a fine selection undistorted image block set and a fine selection distorted image block set are obtained, which specifically comprises steps of: (a) respectively calculating saliency maps of I.sub.org and I.sub.dis using saliency detection based on simple priors (SDSP), and recording the saliency maps as f.sup.ref and f.sup.dis; (b) respectively dividing f.sup.ref and f.sup.dis into non-overlapping image blocks, each of which having a size of 8×8; (c) calculating an average value of pixel values of all pixel points of every image block in f.sup.ref, recording the average value of pixel values of all pixel points of a j.sup.th image block in f.sup.ref as vs.sub.j.sup.ref; calculating an average value of pixel values of all pixel points of every image block in f.sup.dis, recording the average value of pixel values of all pixel points of a j.sup.th image block in f.sup.dis as vs.sub.j.sup.dis, wherein 1≤j≤N; (d) obtaining a maximum value between the average value of pixel values of all pixel points of every image block in f.sup.ref and the average value of pixel values of all pixel points of every image block in f.sup.dis, recording the maximum value between vs.sub.j.sup.ref and vs.sub.j.sup.dis as vs.sub.j,max, here, vs.sub.j,max=max(vs.sub.j.sup.ref, vs.sub.j.sup.dis), wherein max( ) is a maximum value function, the average value of pixel values of all pixel points of an image block is able to represent the visual importance of the image block, and an image block with a higher average value in f.sup.ref and f.sup.dis has a larger effect on the evaluated similarity; and (e) finely selecting partial image blocks from the roughing selection undistorted image block set as fine selection undistorted image blocks for forming a fine selection undistorted image block set, recording the fine selection undistorted image block set as {tilde over (Y)}.sup.ref, here, {tilde over (Y)}.sup.ref={x.sub.j.sup.ref|AVE({circumflex over (x)}.sub.j.sup.ref,col, {circumflex over (x)}.sub.j.sup.dis,col)≥TH.sub.1 and vs.sub.j,max≥TH.sub.2, 1≤j≤N}; finely selecting partial image blocks from the roughing selection distorted image block set as fine selection distorted image blocks for forming a fine selection distorted image block set, recording the fine selection distorted image block set as {tilde over (Y)}.sup.dis, here, {tilde over (Y)}.sup.dis={x.sub.j.sup.dis|AVE({circumflex over (x)}.sub.j.sup.ref,col, {circumflex over (x)}.sub.j.sup.dis,col)≥TH.sub.1 and vs.sub.j,max≥TH.sub.2, 1≤j≤N}, wherein TH.sub.2 is a designed image block fine selection threshold, and a value of TH.sub.2 is the maximum value at the former 60% position after arranging all maximum values obtained in step (d) from big to small;
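The fine-selection threshold TH.sub.2 of step (e) can be sketched as below; the saliency values are stand-ins (the real values come from the SDSP maps of step (c)), and the ceil-based indexing of the "former 60% position" is an assumption:

```python
import numpy as np

# per-block mean saliency of the two SDSP saliency maps (stand-in values)
vs_ref = np.array([0.9, 0.2, 0.6, 0.4, 0.8])
vs_dis = np.array([0.7, 0.3, 0.5, 0.6, 0.1])
vs_max = np.maximum(vs_ref, vs_dis)            # step (d)

# TH2: value at the 60% position of vs_max sorted in descending order;
# the ceil-based index is an interpretation of "former 60% position"
idx = int(np.ceil(0.6 * vs_max.size)) - 1
TH2 = np.sort(vs_max)[::-1][idx]

fine = np.flatnonzero(vs_max >= TH2)           # blocks surviving fine selection
```

Taking the element-wise maximum of the two saliency maps means a block is kept if it is visually important in *either* the reference or the distorted image, which protects regions whose saliency is created (or destroyed) by the distortion itself.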

[0050] (6) calculating a manifold feature vector of every image block in the fine selection undistorted image block set, recording the t.sup.th manifold feature vector in the fine selection undistorted image block set as r.sub.t, here, r.sub.t=J{circumflex over (x)}.sub.t.sup.ref,col; calculating a manifold feature vector of every image block in the fine selection distorted image block set, recording the t.sup.th manifold feature vector in the fine selection distorted image block set as d.sub.t, here, d.sub.t=J{circumflex over (x)}.sub.t.sup.dis,col, wherein 1≤t≤K, K represents an amount of image blocks in the fine selection undistorted image block set and also represents an amount of image blocks in the fine selection distorted image block set, a dimension of r.sub.t and d.sub.t is 8×1, {circumflex over (x)}.sub.t.sup.ref,col represents a centralizedly treated color vector of color values of R, G and B channels of all pixel points in a t.sup.th image block of the fine selection undistorted image block set, and {circumflex over (x)}.sub.t.sup.dis,col represents a centralizedly treated color vector of color values of R, G and B channels of all pixel points in a t.sup.th image block of the fine selection distorted image block set;

[0051] and then defining the manifold feature vectors of all image blocks in the fine selection undistorted image block set as a matrix, recording the matrix as R; defining the manifold feature vectors of all image blocks in the fine selection distorted image block set as a matrix, recording the matrix as D, wherein a dimension of R and D is 8×K, a t.sup.th column vector in R is r.sub.t, and a t.sup.th column vector in D is d.sub.t;

[0052] and then calculating the manifold feature similarity of I.sub.org and I.sub.dis, recording the manifold feature similarity as MFS.sub.1, here,

[00014] $MFS_1 = \frac{1}{8K} \sum_{m=1}^{8} \sum_{t=1}^{K} \frac{2 R_{m,t} D_{m,t} + C_1}{(R_{m,t})^2 + (D_{m,t})^2 + C_1}$

wherein R.sub.m,t represents a value of an m.sup.th row and a t.sup.th column in R, D.sub.m,t represents a value of an m.sup.th row and a t.sup.th column in D, C.sub.1 is a very small constant for ensuring a result stability, in this embodiment, C.sub.1=0.09;
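A sketch of the MFS.sub.1 computation of step (6) over the 8×K feature matrices R and D, with toy matrices and C.sub.1=0.09 as in the embodiment:

```python
import numpy as np

def mfs1(R, D, C1=0.09):
    """Manifold feature similarity of step (6): the mean over all 8*K
    entries of (2*R*D + C1) / (R^2 + D^2 + C1)."""
    return float(np.mean((2.0 * R * D + C1) / (R ** 2 + D ** 2 + C1)))

R = np.ones((8, 5))          # toy 8 x K manifold feature matrices (K = 5)
score_same = mfs1(R, R)      # identical features give the maximum value 1.0
score_diff = mfs1(R, -R)     # opposed features give a much lower value
```

Each per-entry term has the same form as the SSIM comparison function, so it equals 1 exactly when the two feature values coincide and decreases as they diverge; C.sub.1 keeps the ratio stable when both values are near zero.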

[0053] (7) calculating the brightness similarity of I.sub.org and I.sub.dis, recording the brightness similarity as MFS.sub.2, here,

[00015] $MFS_2 = \frac{\sum_{t=1}^{K} (\mu_t^{ref} - \bar{\mu}^{ref})(\mu_t^{dis} - \bar{\mu}^{dis}) + C_2}{\sqrt{\sum_{t=1}^{K} (\mu_t^{ref} - \bar{\mu}^{ref})^2} \sqrt{\sum_{t=1}^{K} (\mu_t^{dis} - \bar{\mu}^{dis})^2} + C_2}$

wherein .sub.t.sup.ref represents an average value of brightness values of all pixel points in a t.sup.th image block in the fine selection undistorted image block set,

[00016] $\bar{\mu}^{ref} = \frac{\sum_{t=1}^{K} \mu_t^{ref}}{K};$

μ.sub.t.sup.dis represents an average value of brightness values of all pixel points in a t.sup.th image block in the fine selection distorted image block set,

[00017] $\bar{\mu}^{dis} = \frac{\sum_{t=1}^{K} \mu_t^{dis}}{K},$

C.sub.2 is a very small constant, in this embodiment, C.sub.2=0.001; and
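The brightness similarity of step (7) can be sketched as follows; the square-root normalization in the denominator is an assumption about the garbled formula (it matches the Pearson-correlation form), and the per-block luminance values are stand-ins:

```python
import numpy as np

def mfs2(mu_ref, mu_dis, C2=0.001):
    """Brightness similarity of step (7): a Pearson-correlation-style
    measure over the per-block mean luminances of the finely selected
    blocks (square-root normalization is an assumption)."""
    dr = mu_ref - mu_ref.mean()
    dd = mu_dis - mu_dis.mean()
    num = np.sum(dr * dd) + C2
    den = np.sqrt(np.sum(dr ** 2)) * np.sqrt(np.sum(dd ** 2)) + C2
    return float(num / den)

mu = np.array([10.0, 20.0, 30.0, 40.0])   # toy per-block mean luminances
s_same = mfs2(mu, mu)                     # identical brightness -> ~1.0
```

Because the block means are re-centered inside the measure, MFS.sub.2 responds to the *pattern* of brightness change across blocks rather than to a uniform global offset.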

[0054] (8) linearly weighting MFS.sub.1 and MFS.sub.2 for obtaining a quality score of I.sub.dis, recording the quality score as MFS, here, MFS=ω·MFS.sub.2+(1−ω)·MFS.sub.1, wherein ω is adapted for adjusting a relative importance of MFS.sub.1 and MFS.sub.2, 0<ω<1, in this embodiment, ω=0.8.
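The final weighting of step (8), written with the weight as `omega` since the Greek symbol was lost in extraction; the two similarity values are stand-ins:

```python
# step (8): linear weighting of the two similarity terms; `omega` stands
# in for the weight symbol dropped from the text (0.8 in the embodiment)
omega = 0.8
MFS1, MFS2 = 0.95, 0.90                   # stand-in similarity values
MFS = omega * MFS2 + (1 - omega) * MFS1   # 0.8*0.90 + 0.2*0.95
```

With the embodiment's weight of 0.8, the brightness term dominates the final score while the manifold (structure) term acts as a correction.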

[0055] To further show the feasibility and effectiveness of the present invention, experiments are done aiming at the method disclosed by the present invention.

[0056] Experiment 1: Verify Performance Indexes of the Method Disclosed by the Present Invention

[0057] To verify the effectiveness of manifold feature similarity (MFS), the method disclosed by the present invention is tested on four public test image libraries, and the evaluation results are compared with each other. The four public test image libraries are respectively the LIVE test image library, the CSIQ test image library, the TID2008 test image library and the TID2013 test image library. Every test image library contains hundreds or thousands of distorted images and covers a variety of distortion types. A subjective score, such as a mean opinion score (MOS) or a differential mean opinion score (DMOS), is given to every distorted image. Table 1 shows the amount of reference images, the amount of distorted images and the amount of distortion types of every test image library, and the amount of people involved in the subjective experiments. During the experiments, only distorted images are evaluated and the original images are removed. The final performance verification of the present invention is made based on the comparison between subjective scores and objective evaluation results.

TABLE-US-00001
TABLE 1  Four test image libraries applied to image quality evaluation methods

  Test image   Amount of          Amount of          Amount of         Amount of people involved
  library      reference images   distorted images   distortion types  in subjective experiments
  TID2013      25                 3000               25                971
  TID2008      25                 1700               17                838
  CSIQ         30                 866                6                 35
  LIVE         29                 779                5                 161

[0058] According to the standard verification method provided by the Video Quality Experts Group (VQEG) Phase I/II, four general evaluation indexes are adopted to obtain the evaluation performances of image quality evaluation methods. The Spearman rank-order correlation coefficient (SROCC) and the Kendall rank-order correlation coefficient (KROCC) are adapted for evaluating the prediction monotonicity of image quality evaluation methods; these two indexes are computed on ranked data and ignore the relative distances between data points. To obtain the other two indexes, namely the Pearson linear correlation coefficient (PLCC) and the root mean squared error (RMSE), a nonlinear mapping between the objective evaluation values and the mean opinion scores (MOS) is needed, so as to remove nonlinear effects from the objective scores. The five-parameter nonlinear mapping function

[00018] $Q(q) = \beta_1 \left( \frac{1}{2} - \frac{1}{1 + \exp(\beta_2 (q - \beta_3))} \right) + \beta_4 q + \beta_5$

is adopted to make the nonlinear fitting, wherein q represents an original objective quality evaluation score, Q represents a nonlinearly mapped score, the five adjusting parameters β.sub.1, β.sub.2, β.sub.3, β.sub.4 and β.sub.5 are determined by minimizing the sum of squared errors between the mapped objective scores and the mean opinion scores, and exp( ) is an exponential function taking the natural base e as a base. Higher PLCC, SROCC and KROCC, together with lower RMSE, show a better correlation between the mean opinion scores and the evaluation results of the method disclosed by the present invention.
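The five-parameter mapping can be sketched as below; in practice the parameters are obtained by a least-squares fit against the MOS values, which is omitted here — the example only evaluates the function with hand-picked parameters:

```python
import numpy as np

def logistic5(q, b1, b2, b3, b4, b5):
    """Five-parameter nonlinear mapping applied before computing PLCC and
    RMSE: Q(q) = b1*(1/2 - 1/(1 + exp(b2*(q - b3)))) + b4*q + b5."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (q - b3)))) + b4 * q + b5

# with b1 = 0 the logistic term vanishes and the mapping is purely linear
q = np.array([1.0, 2.0, 3.0])
Q = logistic5(q, 0.0, 1.0, 0.0, 2.0, 1.0)
```

Since SROCC and KROCC depend only on ranks and the mapping is monotonic for typical fitted parameters, only PLCC and RMSE are affected by this step.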

[0059] The method disclosed by the present invention is compared with ten representative image quality evaluation methods, which are respectively SSIM, MS-SSIM, IFC, VIF, VSNR, MAD, GSM, RFSIM, FSIMc and VSI.

[0060] In this embodiment, 10 undistorted images in the TOY image data library are adopted, and 20000 image blocks are randomly selected for training to obtain the best mapping matrix J, which is then adopted for the subsequent image quality evaluation. Table 2 shows the four prediction performance indexes, respectively SROCC, KROCC, PLCC and RMSE, of every image quality evaluation method on the four test image libraries. In Table 2, the indexes of the two image quality evaluation methods with the best index performance among all image quality evaluation methods are labeled in boldface. It can be seen from Table 2 that the performances of the method disclosed by the present invention on all test image libraries are good. Firstly, on the CSIQ test image library, its performance is the best, better than that of all other image quality evaluation methods. Secondly, on the two largest image libraries, TID2008 and TID2013, the method disclosed by the present invention has better performance than the other algorithms and approximate performance to the VSI algorithm. Although the performance of the present invention on the LIVE test image library is not the best, the difference between its performance and that of the best image quality evaluation method is slight. In contrast, an existing image quality evaluation method may have good effects on some test image libraries but only passable effects on others; for example, the VIF algorithm and the MAD algorithm have better evaluation effects on the LIVE test image library, but bad evaluation effects on the TID2008 test image library and the TID2013 test image library. Therefore, as a whole, compared with existing image quality evaluation methods, the quality prediction results of the method disclosed by the present invention are closer to subjective evaluations.

[0061] To more comprehensively evaluate the capacity of every image quality evaluation method to predict image quality reduction caused by specific distortions, the evaluation performances of the method disclosed by the present invention and existing image quality evaluation methods under specific distortions are tested. SROCC is suitable for conditions with fewer data points and is not affected by the nonlinear mapping, so SROCC is selected as the performance index; of course, other performance indexes such as KROCC, PLCC and RMSE lead to similar conclusions. In Table 3, the three image quality evaluation methods with the top three SROCC values for every distortion type of every test image library are labeled in boldface. It can be seen from Table 3 that the VSI algorithm is ranked in the top three 31 times, the method disclosed by the present invention is ranked in the top three 25 times, followed by the FSIMc algorithm and the GSM algorithm. Therefore, the conclusion can be drawn that, under specific distortion types, the VSI algorithm is the best, followed by the method disclosed by the present invention, the FSIMc algorithm and the GSM algorithm in sequence. Most importantly, the VSI algorithm, the MFS algorithm, the FSIMc algorithm and the GSM algorithm are better than the other methods. Furthermore, on the two largest test image libraries, TID2008 and TID2013, the method disclosed by the present invention has better evaluation performances for the AGN, SCN, MN, HFN, IN, JP2K and J2TE distortions than existing image quality evaluation methods, and has the best evaluation performances for the AGWN and GB distortions on the LIVE and CSIQ test image libraries.

TABLE-US-00002
TABLE 2  Overall performance contrasts of 11 image quality evaluation methods on 4 test image libraries

Library  Index  SSIM    MS-SSIM IFC     VIF     VSNR    MAD     GSM     RFSM    FSIMc   VSI     MFS
TID2013  SROCC  0.7471  0.7859  0.5389  0.6769  0.6812  0.7808  0.7946  0.7744  0.8510  0.8965  0.8741
         KROCC  0.5588  0.6407  0.3939  0.5147  0.5084  0.6035  0.6255  0.5951  0.6665  0.7183  0.6862
         PLCC   0.7895  0.8329  0.5538  0.7720  0.7402  0.8267  0.8464  0.8333  0.8769  0.9000  0.8856
         RMSE   0.7608  0.6861  1.0322  0.7880  0.8392  0.6975  0.6603  0.6852  0.5959  0.5404  0.5757
TID2008  SROCC  0.7749  0.8542  0.5675  0.7491  0.7046  0.8340  0.8504  0.8680  0.8840  0.8979  0.8893
         KROCC  0.5768  0.6568  0.4236  0.5860  0.5340  0.6445  0.6596  0.6780  0.6991  0.7123  0.7055
         PLCC   0.7732  0.8451  0.7340  0.8084  0.6820  0.8308  0.8422  0.8645  0.8762  0.8762  0.8865
         RMSE   0.8511  0.7173  0.9113  0.7899  0.9815  0.7468  0.7235  0.6746  0.6468  0.6466  0.6211
CSIQ     SROCC  0.8756  0.9133  0.7671  0.9195  0.8106  0.9466  0.9108  0.9295  0.9310  0.9423  0.9615
         KROCC  0.6907  0.7393  0.5897  0.7537  0.6247  0.7970  0.7374  0.7645  0.7690  0.7857  0.8260
         PLCC   0.8613  0.8991  0.8384  0.9277  0.8002  0.9502  0.8964  0.9179  0.9192  0.9279  0.9614
         RMSE   0.1344  0.1149  0.1431  0.0980  0.1575  0.0818  0.1164  0.1042  0.1034  0.0979  0.0722
LIVE     SROCC  0.9479  0.9513  0.9259  0.9636  0.9274  0.9669  0.9561  0.9401  0.9645  0.9524  0.9578
         KROCC  0.7963  0.8045  0.7579  0.8282  0.7616  0.8421  0.8150  0.7816  0.8363  0.8058  0.8199
         PLCC   0.9449  0.9489  0.9268  0.9604  0.9231  0.9675  0.9512  0.9354  0.9613  0.9482  0.9543
         RMSE   8.9455  8.6188  10.264  7.6137  10.506  6.9073  8.4327  9.6642  7.5296  8.6816  8.1691

TABLE-US-00003
TABLE 3  SROCC evaluation values of 11 image quality evaluation methods on special distortions

Library  Type   SSIM    MS-SSIM IFC     VIF     VSNR    MAD     GSM     RFSM    FSIMc   VSI     MFS
TID2013  AGN    0.8671  0.8646  0.6612  0.8994  0.8271  0.8843  0.9064  0.8878  0.9101  0.9460  0.9053
         ANC    0.7726  0.7730  0.5352  0.8299  0.7305  0.8019  0.8175  0.8476  0.8537  0.8705  0.8273
         SCN    0.8515  0.8544  0.6601  0.8835  0.8013  0.8911  0.9158  0.8825  0.8900  0.9367  0.9001
         MN     0.7767  0.8073  0.6932  0.8450  0.7072  0.7380  0.7293  0.8368  0.8094  0.7697  0.8186
         HFN    0.8634  0.8604  0.7406  0.8972  0.8455  0.8876  0.8869  0.9145  0.9040  0.9200  0.9063
         IN     0.7503  0.7629  0.6408  0.8537  0.7363  0.2769  0.7965  0.9062  0.8251  0.8741  0.8313
         QN     0.8657  0.8706  0.6282  0.7854  0.8357  0.8514  0.8841  0.8968  0.8807  0.8748  0.8421
         GB     0.9668  0.9673  0.8907  0.9650  0.9470  0.9319  0.9689  0.9698  0.9551  0.9612  0.9553
         DEN    0.9254  0.9268  0.7779  0.8911  0.9081  0.9252  0.9432  0.9359  0.9330  0.9484  0.9178
         JPEG   0.9200  0.9265  0.8357  0.9192  0.9008  0.9217  0.9284  0.9398  0.9339  0.9541  0.9377
         JP2K   0.9468  0.9504  0.9078  0.9516  0.9273  0.9511  0.9602  0.9518  0.9589  0.9706  0.9633
         JGTE   0.8493  0.8475  0.7425  0.8409  0.7908  0.8283  0.8512  0.8312  0.8610  0.9216  0.8885
         J2TE   0.8828  0.8889  0.7769  0.8761  0.8407  0.8788  0.9182  0.9061  0.8919  0.9228  0.9081
         NEPN   0.7821  0.7968  0.5737  0.7720  0.6653  0.8315  0.8130  0.7705  0.7937  0.8060  0.7727
         Block  0.5720  0.4801  0.2414  0.5306  0.1771  0.2812  0.6418  0.0339  0.5532  0.1713  0.1755
         MS     0.7752  0.7906  0.5522  0.6276  0.4871  0.6450  0.7875  0.5547  0.7487  0.7700  0.6285
         CTC    0.3775  0.4634  0.1798  0.8386  0.3320  0.1972  0.4857  0.3989  0.4679  0.4754  0.4598
         CCS    0.4141  0.4099  0.4029  0.3099  0.3677  0.0575  0.3578  0.0204  0.8359  0.8100  0.8102
         MGN    0.7803  0.7786  0.6143  0.8468  0.7644  0.8409  0.8348  0.8464  0.8569  0.9117  0.8630
         CN     0.8566  0.8528  0.8160  0.8946  0.8683  0.9064  0.9124  0.8917  0.9135  0.9243  0.9052
         LCNI   0.9057  0.9068  0.8180  0.9204  0.8821  0.9443  0.9563  0.9010  0.9485  0.9564  0.9290
         ICQD   0.8542  0.8555  0.6006  0.8414  0.8667  0.8745  0.8973  0.8959  0.8815  0.8839  0.9072
         CHA    0.8775  0.8784  0.8210  0.8848  0.8645  0.8310  0.8823  0.8990  0.8925  0.8906  0.8798
         SSR    0.9461  0.9483  0.8885  0.9353  0.9339  0.9567  0.9668  0.9326  0.9576  0.9628  0.9478
TID2008  AGN    0.8107  0.8086  0.5806  0.8797  0.7728  0.8386  0.8606  0.8415  0.8758  0.9229  0.8887
         ANC    0.8029  0.8054  0.5460  0.8757  0.7793  0.8255  0.8091  0.8613  0.8931  0.9118  0.8789
         SCN    0.8144  0.8209  0.5958  0.8698  0.7665  0.8678  0.8941  0.8468  0.8711  0.9296  0.8951
         MN     0.7795  0.8107  0.6732  0.8683  0.7295  0.7336  0.7452  0.8534  0.8264  0.7734  0.8375
         HFN    0.8729  0.8694  0.7318  0.9075  0.8811  0.8864  0.8945  0.9182  0.9156  0.9253  0.9225
         IN     0.6732  0.6907  0.5345  0.8327  0.6471  0.0650  0.7235  0.8806  0.7719  0.8298  0.7919
         QN     0.8531  0.8589  0.5857  0.7970  0.8270  0.8160  0.8800  0.8880  0.8726  0.8731  0.8500
         GB     0.9544  0.9563  0.8559  0.9540  0.9330  0.9196  0.9600  0.9409  0.9472  0.9529  0.9501
         DEN    0.9530  0.9582  0.7973  0.9161  0.9286  0.9433  0.9725  0.9400  0.9618  0.9693  0.9488
         JPEG   0.9252  0.9322  0.8180  0.9168  0.9174  0.9275  0.9393  0.9385  0.9294  0.9616  0.9416
         JP2K   0.9625  0.9700  0.9437  0.9709  0.9515  0.9707  0.9758  0.9488  0.9780  0.9848  0.9825
         JGTE   0.8678  0.8681  0.7909  0.8585  0.8055  0.8661  0.8790  0.8503  0.8756  0.9160  0.8706
         J2TE   0.8577  0.8606  0.7301  0.8501  0.7909  0.8394  0.8936  0.8592  0.8555  0.8942  0.8947
         NEPN   0.7107  0.7377  0.8418  0.7619  0.5716  0.8287  0.7386  0.7274  0.7514  0.7699  0.7094
         Block  0.8462  0.7546  0.6770  0.8324  0.1926  0.7970  0.8862  0.6258  0.8464  0.6295  0.4698
         MS     0.7231  0.7336  0.4250  0.5096  0.3715  0.5163  0.7190  0.4178  0.6554  0.6714  0.4810
         CTC    0.5246  0.6381  0.1713  0.8188  0.4239  0.2723  0.6691  0.5823  0.6510  0.6557  0.6348
CSIQ     AGWN   0.8974  0.9471  0.8431  0.9575  0.9241  0.9541  0.9440  0.9441  0.9359  0.9636  0.9647
         JPEG   0.9546  0.9634  0.9412  0.9705  0.9036  0.9615  0.9632  0.9502  0.9664  0.9618  0.9548
         JP2K   0.9606  0.9683  0.9252  0.9672  0.9480  0.9752  0.9648  0.9643  0.9704  0.9694  0.9750
         AGPN   0.8922  0.9331  0.8261  0.9511  0.9084  0.9570  0.9387  0.9357  0.9370  0.9638  0.9607
         GB     0.9609  0.9711  0.9527  0.9745  0.9446  0.9602  0.9589  0.9634  0.9729  0.9679  0.9758
         GCD    0.7922  0.9526  0.4873  0.9345  0.8700  0.9207  0.9354  0.9527  0.9438  0.9504  0.9485
LIVE     JP2K   0.9614  0.9627  0.9113  0.9696  0.9551  0.9676  0.9700  0.9323  0.9724  0.9604  0.9645
         JPEG   0.9764  0.9815  0.9468  0.9846  0.9657  0.9764  0.9778  0.9584  0.9840  0.9761  0.9759
         AGWN   0.9694  0.9733  0.9382  0.9858  0.9785  0.9844  0.9774  0.9799  0.9716  0.9835  0.9868
         GB     0.9517  0.9542  0.9584  0.9728  0.9413  0.9465  0.9518  0.9066  0.9708  0.9527  0.9622
         FF     0.9556  0.9471  0.9629  0.9650  0.9027  0.9569  0.9402  0.9237  0.9519  0.9430  0.9418

[0062] Experiment 2: Verify Time Complexity of the Method Disclosed by the Present Invention

[0063] Table 4 shows the operation times while the 11 image quality evaluation methods process a pair of 384×512 color images (selected from the TID2013 image library). The experiment is done on a LENOVO desktop computer, wherein the processor is an Intel(R) Core i5-4590 CPU at 3.3 GHz, the memory is 8 GB, and the software platform is Matlab R2014b. It can be seen from Table 4 that the method disclosed by the present invention has a moderate time complexity; in particular, the method disclosed by the present invention has a faster running speed than the IFC algorithm, the VIF algorithm, the MAD algorithm and the FSIMc algorithm, while obtaining approximate or even better evaluation effects.

TABLE-US-00004
TABLE 4  Time complexities of 11 image quality evaluation methods

  Image quality evaluation algorithm   Time complexity (ms)
  SSIM                                 17.3
  MS-SSIM                              71.2
  IFC                                  538.0
  VIF                                  546.4
  VSNR                                 23.9
  MAD                                  702.3
  GSM                                  17.7
  RFSM                                 49.8
  FSIMc                                142.5
  VSI                                  105.2
  MFS                                  140.7

[0064] One skilled in the art will understand that the embodiment of the present invention as shown in the drawings and described above is exemplary only and not intended to be limiting.

[0065] It will thus be seen that the objects of the present invention have been fully and effectively accomplished. Its embodiments have been shown and described for the purposes of illustrating the functional and structural principles of the present invention and are subject to change without departure from such principles. Therefore, this invention includes all modifications encompassed within the spirit and scope of the following claims.