Objective assessment method for color image quality based on online manifold learning

20170286798 · 2017-10-05

Abstract

An objective assessment method for color image quality based on online manifold learning considers the relationship between saliency and objective image quality assessment. Through a visual saliency detection algorithm, saliency maps of a reference image and a distorted image are obtained, from which a maximum fusion saliency map is further obtained. Based on the maximum saliencies of the image blocks in the maximum fusion saliency map, the saliency difference between each reference image block and the corresponding distorted image block is measured through an absolute difference, and thus reference visual important image blocks and distorted visual important image blocks are screened and extracted. Through the manifold eigenvectors of the reference visual important image blocks and the distorted visual important image blocks, an objective quality assessment value of the distorted image is calculated. The method achieves an improved assessment effect and a higher correlation between the objective assessment result and subjective perception.

Claims

1. An objective assessment method for a color image quality based on online manifold learning, comprising steps of:

① representing an undistorted reference image having a width W and a height H by I^R; and representing a distorted image to be assessed, which corresponds to the I^R, by I^D;

② through a visual saliency detection algorithm, respectively obtaining saliency maps of the I^R and the I^D, correspondingly denoted as M^R and M^D; then, according to the M^R and the M^D, calculating a maximum fusion saliency map, denoted as M^F; and denoting a pixel value of a pixel having coordinates of (x, y) in the M^F as M^F(x, y), M^F(x, y) = max(M^R(x, y), M^D(x, y)), wherein: 1 ≤ x ≤ W, 1 ≤ y ≤ H; the max( ) is a function to find a maximum; the M^R(x, y) represents a pixel value of the pixel having coordinates of (x, y) in the M^R; and the M^D(x, y) represents a pixel value of the pixel having coordinates of (x, y) in the M^D;

③ respectively dividing the I^R, the I^D, the M^R, the M^D, and the M^F into (W×H)/(8×8) mutually non-overlapping image blocks having a size of 8×8; vectorizing color values of the R, G, and B channels of all pixels in each image block of the I^R and the I^D; denoting the color vector obtained through vectorizing the color values of the R, G, and B channels of all the pixels in a j-th image block of the I^R as X_j^R, and the corresponding color vector of the j-th image block of the I^D as X_j^D; wherein: the j has an initial value of 1, 1 ≤ j ≤ (W×H)/(8×8); both the X_j^R and the X_j^D have a dimensionality of 192×1; the values of the 1st to 64th, the 65th to 128th, and the 129th to 192nd elements in the X_j^R respectively correspond to the color values of the R, G, and B channels of each pixel in the j-th image block of the I^R in a line-by-line scanning manner; and the values of the elements in the X_j^D are defined in the same manner from the j-th image block of the I^D; and vectorizing pixel values of all pixels in each image block of the M^R, the M^D, and the M^F; denoting the pixel value vectors obtained through vectorizing the pixel values of all the pixels in the j-th image blocks of the M^R, the M^D, and the M^F as S_j^R, S_j^D, and S_j^F, respectively; wherein: the S_j^R, the S_j^D, and the S_j^F all have a dimensionality of 64×1, and the values of the 1st to 64th elements in each of them respectively correspond to the pixel value of each pixel in the j-th image block of the corresponding map in the line-by-line scanning manner;

④ calculating a saliency of each image block in the M^F; and denoting the saliency of the j-th image block in the M^F as d_j, d_j = Σ_{i=1}^{64} S_j^F(i), wherein: 1 ≤ i ≤ 64, and the S_j^F(i) represents the value of the i-th element in the S_j^F; arranging the saliencies of all the image blocks in the M^F in order, from the biggest to the smallest; after arranging, determining the sequence numbers of the image blocks corresponding to the first t1 saliencies, wherein: t1 = λ1 × (W×H)/(8×8), the λ1 represents an image block selection proportionality coefficient, and λ1 ∈ (0, 1]; and finding the image blocks in the I^R which correspond to the determined t1 sequence numbers, and defining them as reference image blocks; finding the image blocks in the I^D which correspond to the determined t1 sequence numbers, and defining them as distorted image blocks; finding the image blocks in the M^R which correspond to the determined t1 sequence numbers, and defining them as reference saliency image blocks; and finding the image blocks in the M^D which correspond to the determined t1 sequence numbers, and defining them as distorted saliency image blocks;

⑤ measuring a saliency difference between each reference image block in the I^R and the corresponding distorted image block in the I^D through an absolute difference; and denoting the saliency difference between a t′-th reference image block in the I^R and a t′-th distorted image block in the I^D as e_t′, e_t′ = (1/64) × Σ_{i=1}^{64} |S̃_t′^R(i) − S̃_t′^D(i)|, wherein: the t′ has an initial value of 1, 1 ≤ t′ ≤ t1; the symbol "| |" is an absolute value symbol; the S̃_t′^R(i) represents the value of the i-th element in the pixel value vector S̃_t′^R corresponding to the t′-th reference saliency image block in the M^R; and the S̃_t′^D(i) represents the value of the i-th element in the pixel value vector S̃_t′^D corresponding to the t′-th distorted saliency image block in the M^D; arranging the measured t1 saliency differences in order, from the biggest to the smallest; after arranging, determining the reference image blocks and the distorted image blocks corresponding to the first t2 saliency differences; defining the determined t2 reference image blocks as reference visual important image blocks, and adopting the matrix formed by the color vectors corresponding to all the reference visual important image blocks as a reference visual important image block matrix, denoted as Y^R; defining the determined t2 distorted image blocks as distorted visual important image blocks, and adopting the matrix formed by the color vectors corresponding to all the distorted visual important image blocks as a distorted visual important image block matrix, denoted as Y^D; wherein: t2 = λ2 × t1, the λ2 represents a selection proportionality coefficient of the reference image blocks and the distorted image blocks, and λ2 ∈ (0, 1]; the Y^R and the Y^D have a dimensionality of 192×t2; a t″-th column vector in the Y^R is the color vector corresponding to the determined t″-th reference visual important image block; a t″-th column vector in the Y^D is the color vector corresponding to the determined t″-th distorted visual important image block; and the t″ has an initial value of 1, 1 ≤ t″ ≤ t2;

⑥ centralizing the Y^R through subtracting the mean value of all elements in each column vector from the value of each element in the same column vector of the Y^R; denoting the obtained centralized matrix as Y, wherein the Y has a dimensionality of 192×t2; processing the Y with dimensionality reduction and whitening through a principal component analysis; and denoting the obtained matrix after the dimensionality reduction and the whitening as Y^w, Y^w = W × Y, wherein: the Y^w has a dimensionality of M×t2; the W represents a whitening matrix and has a dimensionality of M×192, 1 < M << 192; and the symbol "<<" is a much-less-than symbol;

⑦ online training the Y^w through an orthogonal locality preserving projection algorithm, and obtaining a characteristic basis matrix of the Y^w, denoted as D, wherein the D has a dimensionality of M×192;

⑧ according to the Y^R and the D, calculating a manifold eigenvector of each reference visual important image block; denoting the manifold eigenvector of the t″-th reference visual important image block as u_t″, u_t″ = D × y_t″^R, wherein: the u_t″ has a dimensionality of M×1, and the y_t″^R is the t″-th column vector in the Y^R; according to the Y^D and the D, calculating a manifold eigenvector of each distorted visual important image block; and denoting the manifold eigenvector of the t″-th distorted visual important image block as v_t″, v_t″ = D × y_t″^D, wherein: the v_t″ has a dimensionality of M×1, and the y_t″^D is the t″-th column vector in the Y^D; and

⑨ according to the manifold eigenvectors of all the reference visual important image blocks and the manifold eigenvectors of all the distorted visual important image blocks, calculating an objective quality assessment value of the I^D, denoted as Score, Score = (1/(t2 × M)) × Σ_{t″=1}^{t2} Σ_{m=1}^{M} (2 × u_t″(m) × v_t″(m) + c) / ((u_t″(m))² + (v_t″(m))² + c), wherein: 1 ≤ m ≤ M; the u_t″(m) represents the value of the m-th element in the u_t″; the v_t″(m) represents the value of the m-th element in the v_t″; and the c is a small constant for guaranteeing result stability.

2. The objective assessment method for the color image quality based on the online manifold learning, as recited in claim 1, wherein the Y^w in the step ⑥ is obtained through following steps of: ⑥_1, representing the covariance matrix of the Y by C, C = (1/t2) × (Y × Y^T), wherein: the C has a dimensionality of 192×192, and the Y^T is the transposition of the Y; ⑥_2, processing the C with eigenvalue decomposition, and obtaining all eigenvalues and corresponding eigenvectors, wherein the eigenvectors have a dimensionality of 192×1; ⑥_3, choosing the M largest eigenvalues and the corresponding M eigenvectors; ⑥_4, according to the chosen M largest eigenvalues and the corresponding M eigenvectors, calculating the whitening matrix W, W = Ψ^(−1/2) × E^T, wherein: the Ψ has a dimensionality of M×M, Ψ = diag(ψ1, …, ψM), Ψ^(−1/2) = diag(1/√ψ1, …, 1/√ψM); the E has a dimensionality of 192×M, E = [e1, …, eM]; the diag( ) is a main-diagonal matrix representation; the ψ1, …, ψM correspondingly represent the 1st to the M-th chosen largest eigenvalue; and the e1, …, eM correspondingly represent the 1st to the M-th chosen eigenvector; and ⑥_5, according to the W, processing the Y with the whitening, and obtaining the Y^w after the dimensionality reduction and the whitening, Y^w = W × Y.

3. The objective assessment method for the color image quality based on the online manifold learning, as recited in claim 1, wherein: in the step ④, λ1 = 0.7.

4. The objective assessment method for the color image quality based on the online manifold learning, as recited in claim 1, wherein: in the step ⑤, λ2 = 0.6.

5. The objective assessment method for the color image quality based on the online manifold learning, as recited in claim 1, wherein: in the step ⑨, c = 0.04.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

[0039] FIG. 1 is an implementation block diagram of an objective assessment method for a color image quality based on online manifold learning according to a preferred embodiment of the present invention.

[0040] FIG. 2a shows the scattered points and the fitting curve of the objective assessment method on the LIVE image database according to the preferred embodiment of the present invention.

[0041] FIG. 2b shows the scattered points and the fitting curve of the objective assessment method on the CSIQ image database according to the preferred embodiment of the present invention.

[0042] FIG. 2c shows the scattered points and the fitting curve of the objective assessment method on the TID2008 image database according to the preferred embodiment of the present invention.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENT

[0043] The present invention is further described with the accompanying drawings and the preferred embodiment.

[0044] According to a preferred embodiment of the present invention, the present invention provides an objective assessment method for a color image quality based on online manifold learning, wherein an implementation block diagram thereof is shown in FIG. 1, and the method comprises steps of:

[0045] ① representing an undistorted reference image having a width W and a height H by I^R; and representing a distorted image to be assessed, which corresponds to the I^R, by I^D;

[0046] ② through a conventional visual saliency detection algorithm, named Saliency Detection based on Simple Priors (SDSP) herein, respectively obtaining saliency maps of the I^R and the I^D, correspondingly denoted as M^R and M^D; then, according to the M^R and the M^D, calculating a maximum fusion saliency map, denoted as M^F; and denoting a pixel value of a pixel having coordinates of (x, y) in the M^F as M^F(x, y), M^F(x, y) = max(M^R(x, y), M^D(x, y)), wherein: 1 ≤ x ≤ W, 1 ≤ y ≤ H; the max( ) is a function to find a maximum; the M^R(x, y) represents a pixel value of the pixel having coordinates of (x, y) in the M^R; and the M^D(x, y) represents a pixel value of the pixel having coordinates of (x, y) in the M^D;
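The maximum fusion of step ② is a pixel-wise maximum of the two saliency maps. A minimal sketch, assuming the SDSP saliency maps are already available as equally sized NumPy arrays (the function name is illustrative, not from the patent):

```python
import numpy as np

def max_fusion_saliency(m_ref, m_dist):
    """Step 2: M_F(x, y) = max(M_R(x, y), M_D(x, y)), pixel by pixel."""
    return np.maximum(m_ref, m_dist)

# toy 2x2 saliency maps standing in for M_R and M_D
m_r = np.array([[0.2, 0.8], [0.5, 0.1]])
m_d = np.array([[0.3, 0.4], [0.5, 0.9]])
m_f = max_fusion_saliency(m_r, m_d)  # [[0.3, 0.8], [0.5, 0.9]]
```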

[0047] ③ respectively dividing the I^R, the I^D, the M^R, the M^D, and the M^F into (W×H)/(8×8) mutually non-overlapping image blocks having a size of 8×8; if the width or the height of the I^R, the I^D, the M^R, the M^D, and the M^F is not divisible by 8, the redundant border pixels are not processed;

[0048] vectorizing color values of the R, G, and B channels of all pixels in each image block of the I^R and the I^D; denoting the color vector obtained through vectorizing the color values of the R, G, and B channels of all the pixels in a j-th image block of the I^R as X_j^R, and the corresponding color vector of the j-th image block of the I^D as X_j^D; wherein: the j has an initial value of 1, 1 ≤ j ≤ (W×H)/(8×8); both the X_j^R and the X_j^D have a dimensionality of 192×1; the values of the 1st to 64th elements in the X_j^R respectively correspond to the color value of the R channel of each pixel in the j-th image block of the I^R in a line-by-line scanning manner, namely the value of the 1st element in the X_j^R is the color value of the R channel of the pixel in the 1st row and the 1st column of the j-th image block of the I^R, the value of the 2nd element is the color value of the R channel of the pixel in the 1st row and the 2nd column, and so on; the values of the 65th to 128th elements and of the 129th to 192nd elements in the X_j^R respectively correspond, in the same line-by-line scanning manner, to the color values of the G channel and of the B channel of each pixel in the j-th image block of the I^R; and the values of the elements in the X_j^D are defined in the same manner from the color values of the R, G, and B channels of the pixels in the j-th image block of the I^D; and

[0049] vectorizing pixel values of all pixels in each image block of the M^R, the M^D, and the M^F; denoting the pixel value vectors obtained through vectorizing the pixel values of all the pixels in the j-th image blocks of the M^R, the M^D, and the M^F as S_j^R, S_j^D, and S_j^F, respectively; wherein: the S_j^R, the S_j^D, and the S_j^F all have a dimensionality of 64×1; and the values of the 1st to 64th elements in each of the S_j^R, the S_j^D, and the S_j^F respectively correspond to the pixel value of each pixel in the j-th image block of the corresponding map in the line-by-line scanning manner, namely the value of the 1st element is the pixel value of the pixel in the 1st row and the 1st column of the j-th image block, the value of the 2nd element is the pixel value of the pixel in the 1st row and the 2nd column, and so on;
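The block division and vectorization of step ③ can be sketched as follows, assuming blocks and pixels are enumerated in row-major (line-by-line) order; the helper names are illustrative, not from the patent:

```python
import numpy as np

def color_block_vectors(img):
    """Step 3: divide an H x W x 3 color image into non-overlapping 8x8
    blocks and vectorize each block into a 192-element column: elements
    1-64 are the R channel, 65-128 the G channel, 129-192 the B channel,
    each scanned line by line.  Redundant border pixels are ignored."""
    h, w = img.shape[0] // 8 * 8, img.shape[1] // 8 * 8
    cols = []
    for y in range(0, h, 8):
        for x in range(0, w, 8):
            block = img[y:y + 8, x:x + 8, :]
            # stack the R, G, B planes, each flattened row by row
            cols.append(np.concatenate([block[:, :, c].ravel() for c in range(3)]))
    return np.stack(cols, axis=1)  # 192 x (number of blocks)

def saliency_block_vectors(sal):
    """Same blocking for a saliency map; each block becomes a 64-element column."""
    h, w = sal.shape[0] // 8 * 8, sal.shape[1] // 8 * 8
    cols = [sal[y:y + 8, x:x + 8].ravel()
            for y in range(0, h, 8) for x in range(0, w, 8)]
    return np.stack(cols, axis=1)  # 64 x (number of blocks)
```

Each column of the first result is one X_j vector, and each column of the second is one S_j vector.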

[0050] ④ calculating a saliency of each image block in the M^F; and denoting the saliency of the j-th image block in the M^F as d_j,

d_j = Σ_{i=1}^{64} S_j^F(i),

wherein: 1 ≤ i ≤ 64; and the S_j^F(i) represents the value of the i-th element in the S_j^F, namely the pixel value of the i-th pixel in the j-th image block of the M^F;

[0051] arranging the saliencies of all the image blocks in the M^F in order, from the biggest to the smallest; and, after arranging, determining the sequence numbers of the image blocks corresponding to the first t1 saliencies (namely the maximum t1 saliencies), wherein:

t1 = λ1 × (W×H)/(8×8);

the λ1 represents an image block selection proportionality coefficient, λ1 ∈ (0, 1]; and it is embodied that λ1 = 0.7 herein; and

[0052] finding the image blocks in the I^R which correspond to the determined t1 sequence numbers, and defining them as reference image blocks; finding the image blocks in the I^D which correspond to the determined t1 sequence numbers, and defining them as distorted image blocks; finding the image blocks in the M^R which correspond to the determined t1 sequence numbers, and defining them as reference saliency image blocks; and finding the image blocks in the M^D which correspond to the determined t1 sequence numbers, and defining them as distorted saliency image blocks;
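Step ④ can be sketched as a column sum followed by a descending sort, assuming the saliency vectors S_j^F are stored as columns of one array; the rounding of t1 to an integer is an assumption, since the patent leaves it implicit:

```python
import numpy as np

def select_salient_blocks(s_f, lam1=0.7):
    """Step 4: d_j is the sum of the 64 entries of column j of S^F; the
    indices of the first t1 blocks, after sorting d_j from the biggest to
    the smallest, are returned (lam1 = 0.7 in the embodiment)."""
    d = s_f.sum(axis=0)                 # d_j = sum_i S_j^F(i)
    t1 = int(lam1 * s_f.shape[1])       # t1 = lam1 * number of blocks
    return np.argsort(d)[::-1][:t1]    # sequence numbers, biggest first
```

The returned sequence numbers index the reference, distorted, and saliency image blocks alike.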

[0053] ⑤ measuring a saliency difference between each reference image block in the I^R and the corresponding distorted image block in the I^D through an absolute difference; and denoting the saliency difference between the t′-th reference image block in the I^R and the t′-th distorted image block in the I^D as e_t′,

e_t′ = (1/64) × Σ_{i=1}^{64} |S̃_t′^R(i) − S̃_t′^D(i)|,

wherein: the t′ has an initial value of 1, 1 ≤ t′ ≤ t1; the symbol "| |" is an absolute value symbol; the S̃_t′^R(i) represents the value of the i-th element in the pixel value vector S̃_t′^R corresponding to the t′-th reference saliency image block in the M^R, namely the pixel value of the i-th pixel in the t′-th reference saliency image block of the M^R; and the S̃_t′^D(i) represents the value of the i-th element in the pixel value vector S̃_t′^D corresponding to the t′-th distorted saliency image block in the M^D, namely the pixel value of the i-th pixel in the t′-th distorted saliency image block of the M^D; and

[0054] arranging the measured t1 saliency differences in order, from the biggest to the smallest; after arranging, determining the reference image blocks and the distorted image blocks corresponding to the first t2 saliency differences (namely the maximum t2 saliency differences); defining the determined t2 reference image blocks as reference visual important image blocks, and adopting the matrix formed by the color vectors corresponding to all the reference visual important image blocks as a reference visual important image block matrix, denoted as Y^R; defining the determined t2 distorted image blocks as distorted visual important image blocks, and adopting the matrix formed by the color vectors corresponding to all the distorted visual important image blocks as a distorted visual important image block matrix, denoted as Y^D; wherein: t2 = λ2 × t1; the λ2 represents a selection proportionality coefficient of the reference image blocks and the distorted image blocks, λ2 ∈ (0, 1]; it is embodied that λ2 = 0.6 herein; the Y^R and the Y^D have a dimensionality of 192×t2; a t″-th column vector in the Y^R is the color vector corresponding to the determined t″-th reference visual important image block; a t″-th column vector in the Y^D is the color vector corresponding to the determined t″-th distorted visual important image block; and the t″ has an initial value of 1, 1 ≤ t″ ≤ t2;
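Step ⑤ then keeps the λ2 fraction of the selected blocks with the largest saliency differences. A sketch, assuming the color vectors and saliency vectors are stored column-wise as in the earlier steps and that t2 is rounded to an integer (the function and argument names are illustrative):

```python
import numpy as np

def select_visual_important(x_ref, x_dist, s_ref, s_dist, idx_t1, lam2=0.6):
    """Step 5: e_t' = (1/64) * sum_i |S~_t'^R(i) - S~_t'^D(i)| over the t1
    selected blocks; the lam2 fraction with the largest e_t' (lam2 = 0.6 in
    the embodiment) forms the visual important block matrices Y^R and Y^D."""
    # mean absolute saliency difference per selected block
    e = np.abs(s_ref[:, idx_t1] - s_dist[:, idx_t1]).mean(axis=0)
    t2 = int(lam2 * len(idx_t1))
    keep = idx_t1[np.argsort(e)[::-1][:t2]]   # biggest differences first
    return x_ref[:, keep], x_dist[:, keep]    # Y^R, Y^D, each 192 x t2
```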

[0055] ⑥ centralizing the Y^R through subtracting the mean value of all elements in each column vector from the value of each element in the same column vector of the Y^R; and denoting the obtained centralized matrix as Y, wherein the Y has a dimensionality of 192×t2; and

[0056] processing the Y obtained after centralizing the Y^R with dimensionality reduction and whitening through a conventional principal component analysis (PCA); and denoting the obtained matrix after the dimensionality reduction and the whitening as Y^w, Y^w = W × Y; wherein: the Y^w has a dimensionality of M×t2; the W represents a whitening matrix and has a dimensionality of M×192, 1 < M << 192; the symbol "<<" is a much-less-than symbol; and it is embodied that the PCA is realized through processing the covariance matrix of the sample data with eigenvalue decomposition, namely the Y^w in the step ⑥ is obtained through following steps of:

[0057] ⑥_1, representing the covariance matrix of the Y by C,

C = (1/t2) × (Y × Y^T),

wherein: the C has a dimensionality of 192×192, and the Y^T is the transposition of the Y;

[0058] ⑥_2, processing the C with the eigenvalue decomposition, and obtaining all eigenvalues and corresponding eigenvectors, wherein the eigenvectors have a dimensionality of 192×1;

[0059] ⑥_3, choosing the M largest eigenvalues and the corresponding M eigenvectors, so as to realize the dimensionality reduction of the Y, wherein: it is embodied that M = 8 herein, namely merely the first eight principal components are chosen for training, and thus the dimensionality is decreased from 192 to M = 8;

[0060] ⑥_4, according to the chosen M largest eigenvalues and the corresponding M eigenvectors, calculating the whitening matrix W, W = Ψ^(−1/2) × E^T, wherein: the Ψ has a dimensionality of M×M, Ψ = diag(ψ1, …, ψM), Ψ^(−1/2) = diag(1/√ψ1, …, 1/√ψM); the E has a dimensionality of 192×M, E = [e1, …, eM]; the diag( ) is a main-diagonal matrix representation; the ψ1, …, ψM correspondingly represent the 1st to the M-th chosen largest eigenvalue; and the e1, …, eM correspondingly represent the 1st to the M-th chosen eigenvector; and

[0061] ⑥_5, according to the W, processing the Y with the whitening, and obtaining the Y^w after the dimensionality reduction and the whitening, Y^w = W × Y;
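The steps ⑥_1 to ⑥_5 can be sketched with NumPy's symmetric eigendecomposition. Note the per-column centering, which follows the patent's description of the centralization (a per-row centering would be the more common PCA convention):

```python
import numpy as np

def pca_whiten(y_r, m=8):
    """Steps 6_1-6_5: center each column of Y^R, form C = (1/t2) * Y @ Y.T,
    keep the m largest eigenvalues/eigenvectors, and whiten with
    W = Psi^(-1/2) @ E.T, giving Y^w = W @ Y of size m x t2 (m = 8 in the
    embodiment)."""
    y = y_r - y_r.mean(axis=0, keepdims=True)   # centralize each column
    t2 = y.shape[1]
    c = (y @ y.T) / t2                          # covariance matrix C
    vals, vecs = np.linalg.eigh(c)              # eigenvalues in ascending order
    vals, vecs = vals[::-1][:m], vecs[:, ::-1][:, :m]   # m largest
    w = np.diag(1.0 / np.sqrt(vals)) @ vecs.T   # W = Psi^(-1/2) @ E.T
    return w @ y, w                             # Y^w and the whitening matrix
```

After whitening, the covariance of Y^w is (numerically) the M×M identity, which is exactly what the Ψ^(−1/2) scaling achieves.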

[0062] ⑦ online training the Y^w through an orthogonal locality preserving projection (OLPP) algorithm, and obtaining a characteristic basis matrix of the Y^w, denoted as D, wherein the D has a dimensionality of M×192;

[0063] ⑧ according to the Y^R and the D, calculating a manifold eigenvector of each reference visual important image block; denoting the manifold eigenvector of the t″-th reference visual important image block as u_t″, u_t″ = D × y_t″^R, wherein: the u_t″ has a dimensionality of M×1, and the y_t″^R is the t″-th column vector in the Y^R; according to the Y^D and the D, calculating a manifold eigenvector of each distorted visual important image block; and denoting the manifold eigenvector of the t″-th distorted visual important image block as v_t″, v_t″ = D × y_t″^D, wherein: the v_t″ has a dimensionality of M×1, and the y_t″^D is the t″-th column vector in the Y^D; and

[0064] ⑨ according to the manifold eigenvectors of all the reference visual important image blocks and the manifold eigenvectors of all the distorted visual important image blocks, calculating an objective quality assessment value of the I^D, denoted as Score,

Score = (1/(t2 × M)) × Σ_{t″=1}^{t2} Σ_{m=1}^{M} (2 × u_t″(m) × v_t″(m) + c) / ((u_t″(m))² + (v_t″(m))² + c),

wherein: 1 ≤ m ≤ M; the u_t″(m) represents the value of the m-th element in the u_t″; the v_t″(m) represents the value of the m-th element in the v_t″; and the c is a small constant for guaranteeing result stability, it being embodied that c = 0.04 herein.
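Given a characteristic basis matrix D (the OLPP training of step ⑦ is not reproduced here), steps ⑧ and ⑨ reduce to two matrix products and a pooled similarity. A sketch with an illustrative function name:

```python
import numpy as np

def quality_score(d, y_r, y_d, c=0.04):
    """Steps 8-9: project each visual important block through the
    characteristic basis matrix D (u = D @ y^R, v = D @ y^D), then pool
    Score = (1/(t2*M)) * sum_{t'',m} (2*u*v + c) / (u^2 + v^2 + c)
    over all t2 blocks and M manifold features (c = 0.04 in the embodiment)."""
    u = d @ y_r   # M x t2 manifold eigenvectors of the reference blocks
    v = d @ y_d   # M x t2 manifold eigenvectors of the distorted blocks
    return float(np.mean((2 * u * v + c) / (u ** 2 + v ** 2 + c)))
```

Since u² + v² ≥ 2uv for real numbers, each pooled term is at most 1, so Score equals 1 only for an undistorted image and decreases with distortion.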

[0065] In order to further illustrate the effectiveness and feasibility of the method provided by the present invention, the method is tested.

[0066] According to the preferred embodiment of the present invention, three open authoritative image databases are chosen for the test, namely the LIVE image database, the CSIQ image database, and the TID2008 image database. Table 1 describes various indexes of the three image databases in detail, comprising the number of reference images, the number of distorted images, and the number of distortion types. All three databases provide a mean subjective assessment difference for each distorted image.

TABLE-US-00001
TABLE 1 various indexes of authoritative image databases

Image      Reference      Distorted      Distorted
database   image number   image number   type number
LIVE       29             779            5
CSIQ       30             866            6
TID2008    25             1700           17

[0067] Then, a correlation between the objective quality assessment value obtained by the method of the present invention and the mean subjective assessment difference of each distorted image is analyzed. Herein, three common objective parameters for assessing an image quality assessment method serve as assessment indexes. The three objective parameters are respectively a Pearson linear correlation coefficient (PLCC), which reflects a prediction accuracy; a Spearman rank order correlation coefficient (SROCC), which reflects a prediction monotonicity; and a root mean squared error (RMSE), which reflects a prediction consistency. A value range of the PLCC and the SROCC is [0, 1]. The nearer a value of the PLCC and the SROCC approximates to 1, the better an image quality objective assessment method is; otherwise, the image quality objective assessment method is worse. The smaller the RMSE, the higher the prediction accuracy and the better the performance of the image quality objective assessment method; otherwise, the prediction accuracy is lower and the performance is worse.
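The three assessment indexes above can be sketched as follows, assuming SciPy is available; the sample scores are hypothetical values for illustration only:

```python
import numpy as np
from scipy import stats

def assessment_indexes(objective, mos):
    """PLCC (prediction accuracy), SROCC (prediction monotonicity),
    and RMSE (prediction consistency) between objective scores and
    mean subjective assessment values."""
    objective = np.asarray(objective, dtype=float)
    mos = np.asarray(mos, dtype=float)
    plcc = stats.pearsonr(objective, mos)[0]
    srocc = stats.spearmanr(objective, mos)[0]
    rmse = float(np.sqrt(np.mean((objective - mos) ** 2)))
    return plcc, srocc, rmse

# Hypothetical objective scores and subjective values.
obj = [0.91, 0.85, 0.60, 0.42]
mos = [0.95, 0.80, 0.55, 0.40]
plcc, srocc, rmse = assessment_indexes(obj, mos)
```

Here the two score lists preserve the same rank order, so the SROCC is exactly 1 even though the PLCC and RMSE reflect the small pointwise deviations.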

[0068] For all distorted images in the above LIVE image database, CSIQ image database, and TID2008 image database, the objective quality assessment value of each distorted image is calculated in a same manner through the steps {circle around (1)}-{circle around (9)} of the method provided by the present invention. The obtained correlation between the objective quality assessment value and the mean subjective assessment difference of the distorted image is analyzed. Firstly, the objective quality assessment value is obtained; then, the objective quality assessment value is processed with five-parameter logistic function non-linear fitting; and finally, a performance index value between an objective assessment result and the mean subjective assessment difference is obtained.

In order to verify the effectiveness of the present invention, on the three image databases listed in Table 1, the method provided by the present invention and six conventional full-reference image quality objective assessment methods having a relatively advanced performance are comparatively analyzed. The PLCC, the SROCC, and the RMSE representing an assessment performance of the methods are listed in Table 2. In Table 2, the six comparison methods are respectively a classical peak signal-to-noise ratio (PSNR) method; an assessment method based on a structural similarity (SSIM) proposed by Z. Wang; a method based on a degradation model named Information Fidelity Criterion (IFC) proposed by N. Damera Venkata; a method based on a visual information fidelity (VIF) proposed by H. R. Sheikh; a method based on a wavelet visual signal-to-noise ratio (VSNR) proposed by D. M. Chandler; and an image quality assessment method based on a sparse representation, denoted as Sparse Representation-based Quality (SPARQ), proposed by T. Guha.
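The five-parameter logistic fitting mentioned above is commonly taken as the VQEG-style mapping Q(x) = β.sub.1(1/2 − 1/(1+exp(β.sub.2(x−β.sub.3)))) + β.sub.4x + β.sub.5; the present text does not spell out the form, so the function below is an assumption. A sketch using SciPy, fit here to synthetic noiseless data generated from the model itself:

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic5(x, b1, b2, b3, b4, b5):
    """Assumed five-parameter logistic that maps raw objective
    scores onto the subjective scale before computing PLCC/RMSE."""
    return b1 * (0.5 - 1.0 / (1.0 + np.exp(b2 * (x - b3)))) + b4 * x + b5

# Synthetic data: scores generated by the model with known parameters.
x = np.linspace(0.0, 1.0, 50)
y = logistic5(x, 1.0, 8.0, 0.5, 0.2, 0.1)

# Fit, starting from the true parameters as the initial guess.
params, _ = curve_fit(logistic5, x, y, p0=[1.0, 8.0, 0.5, 0.2, 0.1], maxfev=10000)
fitted = logistic5(x, *params)
```

The fitted curve is then compared against the mean subjective assessment differences to obtain the PLCC and RMSE indexes.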
According to the data listed in Table 2, on the LIVE image database, the method provided by the present invention has the second best performance after the VIF method; on the CSIQ image database and the TID2008 image database, the method provided by the present invention has the best performance. Thus, for all three of the above-described image databases, the objective quality assessment value of the distorted image obtained by the method of the present invention has a good correlation with the mean subjective assessment difference. Moreover, the values of the PLCC and the SROCC on the LIVE image database and the CSIQ image database are all above 0.94; the values of the PLCC and the SROCC on the TID2008 image database, which has more complex distortion types, also reach 0.82; and, after weighted averaging, compared with all six conventional methods, the performance of the method provided by the present invention shows different degrees of improvement. Thus, the objective assessment result of the method provided by the present invention is relatively consistent with the subjective perception of the human eyes and has a stable assessment effect, which fully illustrates the effectiveness of the method provided by the present invention.

TABLE-US-00002
TABLE 2 performance comparison between the method provided by the present invention and the conventional image quality objective assessment methods

Image                                                                      Method of
database         PSNR      SSIM      IFC       VIF       VSNR      SPARQ   present invention
LIVE     SROCC   0.8756    0.9479    0.9259    0.9636    0.9274    0.9310  0.9523
         PLCC    0.8723    0.9449    0.9268    0.9604    0.9231    0.9280  0.9506
         RMSE   13.3600    8.9455   10.2641    7.6137   10.5060   10.1850  8.4433
CSIQ     SROCC   0.8057    0.8756    0.7671    0.9195    0.8106    0.9460  0.9465
         PLCC    0.8000    0.8613    0.8384    0.9277    0.8002    0.9390  0.9433
         RMSE    0.1575    0.1344    0.1431    0.0980    0.1575    0.0900  0.0871
TID2008  SROCC   0.5531    0.7749    0.5675    0.7491    0.7046    0.7920  0.8356
         PLCC    0.5734    0.7732    0.7340    0.8084    0.6820    0.8200  0.8228
         RMSE    1.0994    0.8511    0.9113    0.7899    0.9815    0.7680  0.5975
Average  SROCC   0.6936    0.8413    0.7026    0.8432    0.7839    0.8642  0.9115
         PLCC    0.7017    0.8360    0.8059    0.8747    0.7687    0.8760  0.9056
         RMSE    4.8723    3.3103    3.7728    2.8339    3.8817    3.6810  3.0426

[0069] FIG. 2a shows a scatter plot, together with the fitted curve, of the method provided by the present invention on the LIVE image database. FIG. 2b shows the corresponding scatter plot and fitted curve on the CSIQ image database. FIG. 2c shows the corresponding scatter plot and fitted curve on the TID2008 image database. From FIG. 2a, FIG. 2b, and FIG. 2c, it can be clearly seen that the scattered points are uniformly distributed near the fitted curve and show a good monotonicity and consistency.

[0070] One skilled in the art will understand that the embodiment of the present invention as shown in the drawings and described above is exemplary only and not intended to be limiting.

[0071] It will thus be seen that the objects of the present invention have been fully and effectively accomplished. Its embodiments have been shown and described for the purposes of illustrating the functional and structural principles of the present invention and are subject to change without departure from such principles. Therefore, this invention includes all modifications encompassed within the spirit and scope of the following claims.