Image characteristic estimation method and device

10115208 · 2018-10-30


Abstract

An image characteristic estimation method and device are presented. The method includes extracting at least two eigenvalues of input image data, and executing the following operations for each extracted eigenvalue, until execution for the extracted eigenvalues is completed: selecting an eigenvalue, and performing at least two matrix transformations on the eigenvalue using a pre-obtained matrix parameter in order to obtain a first matrix vector corresponding to the eigenvalue; when a first matrix vector corresponding to each extracted eigenvalue is obtained, obtaining second matrix vectors with respect to the at least two extracted eigenvalues using a convolutional network calculation method according to the obtained first matrix vector corresponding to each eigenvalue; and obtaining a status of an image characteristic in the image data by means of estimation according to the second matrix vectors. In this way, accuracy of estimation is effectively improved.

Claims

1. An image characteristic estimation method of a human body characteristic in the field of human body recognition, comprising: extracting at least two eigenvalues of input image data, the eigenvalue comprising at least a degree of matching between each characteristic and a corresponding template characteristic, a value of a probability that any two characteristics in the image data appear on a same position at the same time, and a value of a score of a change in a distance between two characteristics that have an association relationship; selecting an eigenvalue, and performing at least two matrix transformations on the eigenvalue using a pre-obtained matrix parameter in order to obtain a first matrix vector corresponding to the eigenvalue; obtaining second matrix vectors with respect to the at least two extracted eigenvalues using a convolutional network calculation method according to a first matrix vector corresponding to each eigenvalue when the first matrix vector corresponding to each extracted eigenvalue is obtained; and obtaining a status of an image characteristic in the image data by means of estimation according to the second matrix vectors for improved accuracy of estimation, wherein selecting the eigenvalue, obtaining the second matrix vectors, and obtaining the status of the image characteristic are executed for each extracted eigenvalue until execution for the extracted eigenvalues is completed.

2. The method of claim 1, wherein selecting the eigenvalue, and performing the at least two matrix transformations on the eigenvalue using the pre-obtained matrix parameter, to obtain the first matrix vector corresponding to the eigenvalue comprises: selecting an eigenvalue, and performing a first matrix transformation on the eigenvalue using the pre-obtained matrix parameter in order to obtain a first submatrix vector corresponding to the eigenvalue; and iteratively performing, from 2 to N, an N^th matrix transformation on the (N−1)^th submatrix vector using the pre-obtained matrix parameter in order to obtain the first matrix vector corresponding to the eigenvalue, N being a natural number.

3. The method of claim 2, wherein performing the first matrix transformation on the eigenvalue using the pre-obtained matrix parameter in order to obtain the first submatrix vector corresponding to the eigenvalue comprises obtaining the first submatrix vector corresponding to the eigenvalue by:
h^(1,i) = a(i^T * W^(1,i) + b^(1,i)), h^(1,i) representing the first submatrix vector corresponding to the i^th extracted eigenvalue, a being an activation function, i^T being a transposed matrix of the i^th eigenvalue, W^(1,i) being the first matrix with respect to the i^th eigenvalue in the matrix parameter, and b^(1,i) being the first offset with respect to the i^th eigenvalue.

4. The method of claim 3, wherein performing the second matrix transformation on the first submatrix vector using the pre-obtained matrix parameter in order to obtain the second submatrix vector corresponding to the eigenvalue comprises obtaining the second submatrix vector corresponding to the eigenvalue by:
h^(2,i) = a((h^(1,i))^T * W^(2,i) + (b^(2,i))^T), h^(2,i) representing the second submatrix vector corresponding to the i^th extracted eigenvalue, a being an activation function, (h^(1,i))^T being a transposed matrix of the first submatrix vector of the i^th eigenvalue, W^(2,i) being the second matrix with respect to the i^th eigenvalue in the matrix parameter, and (b^(2,i))^T being the second offset with respect to the i^th eigenvalue.

5. The method of claim 1, wherein obtaining the second matrix vectors with respect to the at least two extracted eigenvalues using the convolutional network calculation method according to the obtained first matrix vector corresponding to each eigenvalue comprises obtaining, using the convolutional network calculation method, the second matrix vectors with respect to the at least two extracted eigenvalues by:
h^(n+1) = a((h^n)^T * W^(n+1) + b^(n+1)), h^(n+1) being the second matrix vectors that are with respect to the at least two extracted eigenvalues and obtained using the convolutional network calculation method, a being an activation function, h^n = [h^(n,1), h^(n,2), …, h^(n,i), …, h^(n,n)]^T, h^(n,n) being a first matrix vector that is of the n^th eigenvalue and on which an n^th matrix transformation is performed, W^(n+1) being the (n+1)^th matrix in the matrix parameter, and b^(n+1) being the (n+1)^th offset.

6. The method of claim 1, wherein the status of the image characteristic in the image data comprises position information of the image characteristic in the image data, and obtaining the status of the image characteristic in the image data by means of estimation according to the second matrix vectors comprising obtaining the status of the image characteristic in the image data by means of estimation based on:
ỹ^pst = (h̃^n)^T * W^pst + b^pst, ỹ^pst being the status, obtained by means of estimation, of the image characteristic in the image data, W^pst being a theoretical matrix parameter, b^pst being a theoretical offset, h̃^n being obtained according to h^n, and h^n = [h^(n,1), h^(n,2), …, h^(n,i), …, h^(n,n)]^T.

7. The method of claim 1, further comprising determining reliability of the status obtained by means of estimation according to the second matrix vectors.

8. The method of claim 7, wherein determining reliability of the status obtained by means of estimation according to the second matrix vectors comprises determining the reliability of the status obtained by means of estimation by:
ỹ^cls = σ((h^n)^T * W^cls + b^cls), ỹ^cls being the determined reliability of the status obtained by means of estimation, σ being a sigmoid function, σ(x) = (1 + exp(−x))^(−1), W^cls being a theoretical matrix parameter, and b^cls being a theoretical offset.

9. An image characteristic estimation device for estimating a human body characteristic in the field of human body recognition, comprising: a memory storing executable instructions; and a processor coupled to the memory and configured to: extract at least two eigenvalues of input image data, the eigenvalue comprising at least a degree of matching between each characteristic and a corresponding template characteristic, a value of a probability that any two characteristics in the image data appear on a same position at the same time, and a value of a score of a change in a distance between two characteristics that have an association relationship; and execute the following operations for each eigenvalue extracted, until execution for the extracted eigenvalues is completed: selecting an eigenvalue, and performing at least two matrix transformations on the eigenvalue using a pre-obtained matrix parameter in order to obtain a first matrix vector corresponding to the eigenvalue; obtain second matrix vectors with respect to the at least two extracted eigenvalues using a convolutional network calculation method according to the obtained first matrix vector corresponding to each eigenvalue when the first matrix vector corresponding to each extracted eigenvalue has been obtained by means of calculation; and obtain a status of an image characteristic in the image data by means of estimation according to the second matrix vectors obtained by means of calculation for improved accuracy of estimation.

10. The device of claim 9, wherein the processor is further configured to: select an eigenvalue, and perform a first matrix transformation on the eigenvalue using the pre-obtained matrix parameter in order to obtain a first submatrix vector corresponding to the eigenvalue; perform a second matrix transformation on the first submatrix vector using the pre-obtained matrix parameter in order to obtain a second submatrix vector corresponding to the eigenvalue; and perform, by analogy, an N^th matrix transformation on the (N−1)^th submatrix vector using the pre-obtained matrix parameter in order to obtain the first matrix vector corresponding to the eigenvalue, N being a natural number.

11. The device of claim 10, wherein the processor is configured to obtain the first submatrix vector corresponding to the eigenvalue by:
h^(1,i) = a(i^T * W^(1,i) + b^(1,i)), h^(1,i) representing the first submatrix vector corresponding to the i^th extracted eigenvalue, a being an activation function, i^T being a transposed matrix of the i^th eigenvalue, W^(1,i) being the first matrix with respect to the i^th eigenvalue in the matrix parameter, and b^(1,i) being the first offset with respect to the i^th eigenvalue.

12. The device of claim 11, wherein the processor is further configured to obtain the second submatrix vector corresponding to the eigenvalue by:
h^(2,i) = a((h^(1,i))^T * W^(2,i) + (b^(2,i))^T), h^(2,i) representing the second submatrix vector corresponding to the i^th extracted eigenvalue, a being an activation function, (h^(1,i))^T being a transposed matrix of the first submatrix vector of the i^th eigenvalue, W^(2,i) being the second matrix with respect to the i^th eigenvalue in the matrix parameter, and (b^(2,i))^T being the second offset with respect to the i^th eigenvalue.

13. The device of claim 9, wherein the processor is further configured to obtain, using the convolutional network calculation method, the second matrix vectors with respect to the at least two extracted eigenvalues by:
h^(n+1) = a((h^n)^T * W^(n+1) + b^(n+1)), h^(n+1) being the second matrix vectors that are with respect to the at least two extracted eigenvalues and obtained using the convolutional network calculation method, a being an activation function, h^n = [h^(n,1), h^(n,2), …, h^(n,i), …, h^(n,n)]^T, h^(n,n) being a first matrix vector that is of the n^th eigenvalue and on which an n^th matrix transformation is performed, W^(n+1) being the (n+1)^th matrix in the matrix parameter, and b^(n+1) being the (n+1)^th offset.

14. The device of claim 9, wherein the status of the image characteristic in the image data comprises position information of the image characteristic in the image data, and the processor being further configured to obtain the status of the image characteristic in the image data by means of estimation by:
ỹ^pst = (h̃^n)^T * W^pst + b^pst, ỹ^pst being the status, obtained by means of estimation, of the image characteristic in the image data, W^pst being a theoretical matrix parameter, b^pst being a theoretical offset, h̃^n being obtained according to h^n, and h^n = [h^(n,1), h^(n,2), …, h^(n,i), …, h^(n,n)]^T.

15. The device of claim 9, wherein the processor is further configured to determine reliability of the status obtained by means of estimation according to the second matrix vectors obtained by means of calculation.

16. The device of claim 15, wherein the processor is further configured to determine the reliability of the status obtained by means of estimation by:
ỹ^cls = σ((h^n)^T * W^cls + b^cls), ỹ^cls being the determined reliability of the status obtained by means of estimation, σ being a sigmoid function, σ(x) = (1 + exp(−x))^(−1), W^cls being a theoretical matrix parameter, and b^cls being a theoretical offset.

Description

BRIEF DESCRIPTION OF DRAWINGS

(1) To describe the technical solutions in the embodiments of the present disclosure more clearly, the following briefly describes the accompanying drawings required for describing the embodiments. The accompanying drawings in the following description show merely some embodiments of the present disclosure, and persons of ordinary skill in the art may still derive other drawings from these accompanying drawings without creative efforts.

(2) FIG. 1 is a schematic flowchart of an image characteristic estimation method according to the present disclosure;

(3) FIG. 2 is a schematic structural diagram of an image characteristic estimation device according to the present disclosure; and

(4) FIG. 3 is a schematic structural diagram of an image characteristic estimation device according to the present disclosure.

DESCRIPTION OF EMBODIMENTS

(5) To achieve the objectives of the present disclosure, embodiments of the present disclosure provide an image characteristic estimation method and device. At least two eigenvalues of input image data are extracted, where the eigenvalue includes at least a degree of matching between each characteristic and a corresponding template characteristic, a value of a probability that any two characteristics in the image data appear on a same position at the same time, and a value of a score of a change in a distance between two characteristics that have an association relationship; the following operations are executed for each extracted eigenvalue, until execution for the extracted eigenvalues is completed: selecting an eigenvalue, and performing at least two matrix transformations on the eigenvalue using a pre-obtained matrix parameter, to obtain a first matrix vector corresponding to the eigenvalue; when a first matrix vector corresponding to each extracted eigenvalue is obtained, second matrix vectors with respect to the at least two extracted eigenvalues are obtained using a convolutional network calculation method according to the obtained first matrix vector corresponding to each eigenvalue; and a status of an image characteristic in the image data is obtained by means of estimation according to the second matrix vectors. In this way, for multiple different eigenvalues obtained by means of extraction, multiple matrix transformations are performed for each eigenvalue, and a combination vector is obtained in a manner of convolutional network calculation on a matrix vector obtained after transformations of each eigenvalue; finally, estimation is performed on the image characteristic in the image data in a fully-connected belief network calculation manner, which effectively improves accuracy of estimation.

(6) The following further describes the present disclosure in detail with reference to the accompanying drawings. The described embodiments are merely some but not all of the embodiments of the present disclosure. All other embodiments obtained by persons of ordinary skill in the art based on the embodiments of the present disclosure without creative efforts shall fall within the protection scope of the present disclosure.

(7) As shown in FIG. 1, FIG. 1 is a schematic flowchart of an image characteristic estimation method according to the present disclosure. The method may be described as follows.

(8) Step 101: Extract at least two eigenvalues of input image data.

(9) The eigenvalue includes at least a degree of matching between each characteristic and a corresponding template characteristic, a value of a probability that any two characteristics in the image data appear on a same position at the same time, and a value of a score of a change in a distance between two characteristics that have an association relationship.

(10) In step 101, the input image data is received, where the image data may be picture data, or may be video data, or may be a combination of picture data and video data, which is not limited herein.

(11) The received image data needs to be processed in a manner of image data characteristic detection, so that processed image data is concentrated in a relatively small range. An image characteristic in the image data is concentrated in the relatively small range, which lays a foundation for subsequent steps.

(12) The at least two eigenvalues of the received image data are extracted, where the eigenvalue obtained by means of extraction includes at least the degree of matching between each characteristic and the corresponding template characteristic, the value of the probability that any two characteristics in the image data appear on the same position at the same time, and the value of the score of the change in the distance between two characteristics that have an association relationship.

(13) The degree of matching between each characteristic and the corresponding template characteristic refers to a degree of matching between a characteristic and the template characteristic corresponding to that characteristic, and can be obtained in the following manner:
S = S_a(I, t_p, z_p) = (w_p^(t_p))^T * f(I, z_p),
where S refers to the degree of matching between each characteristic and the corresponding template characteristic, and S(I, t, z) is the overall matching degree function:

(14) S(I, t, z) = S_c(t) + Σ_{p,q} S_d(t, z, p, q) + Σ_p S_a(I, t_p, z_p),
where I is the received image data, t_p is an appearance mixture type of the p^th characteristic, z_p is a position of the p^th characteristic, w_p^(t_p) is a theoretical parameter, and f(I, z_p) is a function for calculating an eigenvalue, in the image data I, whose position meets z_p, where p and q index the characteristics.

(15) For example, it is assumed that the received image data is image data that includes a human body characteristic, and the characteristic is a human eye; the received human eye characteristic is extracted. In this case, matching is performed between an image characteristic of a position of an eye in the image data and a template characteristic of an eye in an eye characteristic library, to determine an eigenvalue of the eye characteristic in the received image data.
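The appearance term S_a above is simply an inner product between a learned template filter and the feature vector extracted at the candidate position z_p. A minimal sketch (the 4-dimensional template and feature values below are hypothetical, not values from the disclosure):

```python
import numpy as np

def part_matching_score(w_template, feature_vec):
    """Appearance term S_a = (w_p^(t_p))^T * f(I, z_p): inner product
    between a learned template filter and the feature vector extracted
    at candidate position z_p."""
    return float(np.dot(w_template, feature_vec))

# Hypothetical 4-dimensional template filter and extracted feature.
w = np.array([0.5, -0.25, 1.0, 0.0])
f = np.array([2.0, 4.0, 1.0, 3.0])
score = part_matching_score(w, f)
print(score)  # 0.5*2 - 0.25*4 + 1.0*1 + 0 = 1.0
```

A larger score means the image patch at z_p looks more like the template for that characteristic (e.g., the eye template in the example above).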

(16) The value of the probability that any two characteristics in the image data appear on the same position at the same time refers to a probability, obtained by means of calculation using a pre-trained formula for the possibility that different characteristics appear at the same time, that any two characteristics appear on a same position simultaneously.

(17) The formula for calculating the possibility that different characteristics trained in advance appear at the same time is as follows:
S_c(t) = Σ_p b_p^(t_p) + Σ_{p,q} b_{p,q}^(t_p,t_q),
where S_c(t) represents the function for calculating the probability that different characteristics appear at the same time, b_p^(t_p) represents a score of the p^th characteristic appearing in the t_p^th appearance mixture type, Σ_p b_p^(t_p) represents a sum of scores of multiple characteristics appearing in their appearance mixture types at the same time, b_{p,q}^(t_p,t_q) represents a score of the p^th characteristic and the q^th characteristic appearing at the same time, and Σ_{p,q} b_{p,q}^(t_p,t_q) represents a sum of scores of multiple characteristic pairs appearing at the same time.

(18) For example, it is assumed that the received image data is image data that includes a human body characteristic, and the characteristic is a human eye and a human eyebrow; the received human eye characteristic and human eyebrow characteristic are extracted, and a value of a probability that the human eye characteristic and the human eyebrow characteristic appear on a same position is calculated.
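The co-occurrence term S_c(t) is a plain sum of per-characteristic scores and pairwise scores. A minimal sketch, with the scores b_p^(t_p) and b_{p,q}^(t_p,t_q) supplied directly as hypothetical numbers rather than looked up from a trained table:

```python
def cooccurrence_score(unary_scores, pairwise_scores):
    """Co-occurrence term S_c(t): sum of per-part scores b_p^(t_p)
    plus sum of pairwise scores b_{p,q}^(t_p,t_q) for parts that
    appear together."""
    return float(sum(unary_scores) + sum(pairwise_scores))

# Hypothetical scores: two parts (eye, eyebrow) and one pair score.
s = cooccurrence_score([0.8, 0.6], [0.3])
print(s)  # ≈ 1.7
```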

(19) The value of the score of the change in the distance between two characteristics that have an association relationship scores how far that distance has changed. For further estimation of the image characteristic in the image data, because the change in the distance between two associated characteristics should fall within a proper range, whether the distance between two associated characteristics changes, and how it changes, needs to be extracted.

(20) A manner of calculating the value of the score of the change in the distance between two characteristics that have an association relationship includes but is not limited to the following manner:
d = S_d(z_p, z_q) = (w_{p,q}^(t_p,t_q))^T * d(z_p, z_q),
where w_{p,q}^(t_p,t_q) is a matrix parameter, d(z_p, z_q) = [dx, dy, dx^2, dy^2]^T, dx = x_p − x_q represents a difference, in the x direction, between the p^th characteristic and the q^th characteristic, and dy = y_p − y_q represents a difference, in the y direction, between the p^th characteristic and the q^th characteristic.

(21) For example, it is assumed that the received image data includes a human body characteristic, and the characteristics are a human eye and a human eyebrow; the received human eye characteristic and human eyebrow characteristic are extracted, and a value of a score of a change in a distance between the human eye characteristic and the human eyebrow characteristic is calculated. A change in the expression of the current person may be determined according to the value of the score. For example, a relatively small distance score indicates a slight change of expression, whereas a relatively large distance score indicates a large change of expression, which may indicate happiness, sadness, or the like.
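The deformation term can be sketched directly from the formula: build the displacement vector d(z_p, z_q) = [dx, dy, dx^2, dy^2]^T and take its inner product with the learned weights. The weight values and part positions below are hypothetical:

```python
import numpy as np

def deformation_score(w_pq, z_p, z_q):
    """Deformation term S_d = (w_{p,q}^(t_p,t_q))^T * d(z_p, z_q),
    where d(z_p, z_q) = [dx, dy, dx^2, dy^2]^T and dx = x_p - x_q,
    dy = y_p - y_q (displacement between two associated parts)."""
    dx = z_p[0] - z_q[0]
    dy = z_p[1] - z_q[1]
    d = np.array([dx, dy, dx * dx, dy * dy])
    return float(np.dot(w_pq, d))

# Hypothetical spring-like weights and part positions (eye vs. eyebrow).
w = np.array([0.1, 0.2, -0.05, -0.05])
sd = deformation_score(w, (10.0, 8.0), (12.0, 5.0))
print(sd)  # ≈ -0.25
```

The negative quadratic weights make the score drop as the two parts drift apart, which matches the idea that an associated pair should stay within a proper distance range.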

(22) Step 102: Execute the following operations for each extracted eigenvalue, until execution for the extracted eigenvalues is completed: selecting an eigenvalue, and performing at least two matrix transformations on the eigenvalue using a pre-obtained matrix parameter, to obtain a first matrix vector corresponding to the eigenvalue.

(23) In step 102, when multiple eigenvalues are obtained in step 101, each obtained eigenvalue is processed according to a set calculation sequence.

(24) The selecting an eigenvalue, and performing at least two matrix transformations on the eigenvalue using a pre-obtained matrix parameter, to obtain a first matrix vector corresponding to the eigenvalue includes selecting an eigenvalue, and performing a first matrix transformation on the eigenvalue using the pre-obtained matrix parameter, to obtain the first submatrix vector corresponding to the eigenvalue; performing a second matrix transformation on the first submatrix vector using the pre-obtained matrix parameter, to obtain the second submatrix vector corresponding to the eigenvalue; and, by analogy, performing an N^th matrix transformation on the (N−1)^th submatrix vector using the pre-obtained matrix parameter, to obtain the first matrix vector corresponding to the eigenvalue, where N is a natural number.

(25) The performing a first matrix transformation on the eigenvalue using the pre-obtained matrix parameter, to obtain the first submatrix vector corresponding to the eigenvalue includes obtaining the first submatrix vector corresponding to the eigenvalue in the following manner:
h^(1,i) = a(i^T * W^(1,i) + b^(1,i)),
where h^(1,i) represents the first submatrix vector corresponding to the i^th extracted eigenvalue, a is an activation function, i^T is a transposed matrix of the i^th eigenvalue, W^(1,i) is the first matrix with respect to the i^th eigenvalue in the matrix parameter, and b^(1,i) is the first offset with respect to the i^th eigenvalue.

(26) The performing a second matrix transformation on the first submatrix vector using the pre-obtained matrix parameter, to obtain the second submatrix vector corresponding to the eigenvalue includes obtaining the second submatrix vector corresponding to the eigenvalue in the following manner:
h^(2,i) = a((h^(1,i))^T * W^(2,i) + (b^(2,i))^T),
where h^(2,i) represents the second submatrix vector corresponding to the i^th extracted eigenvalue, a is an activation function, (h^(1,i))^T is a transposed matrix of the first submatrix vector of the i^th eigenvalue, W^(2,i) is the second matrix with respect to the i^th eigenvalue in the matrix parameter, and (b^(2,i))^T is the second offset with respect to the i^th eigenvalue.

(27) The performing an N^th matrix transformation on the (N−1)^th submatrix vector using the pre-obtained matrix parameter, to obtain the first matrix vector corresponding to the eigenvalue includes obtaining the first matrix vector corresponding to the eigenvalue in the following manner:
h^(n,i) = a((h^(n−1,i))^T * W^(n,i) + b^(n,i)),
where h^(n,i) represents the first matrix vector corresponding to the i^th extracted eigenvalue, a is an activation function, (h^(n−1,i))^T is a transposed matrix of the (N−1)^th submatrix vector of the i^th eigenvalue, W^(n,i) is the N^th matrix with respect to the i^th eigenvalue in the matrix parameter, and b^(n,i) is the N^th offset with respect to the i^th eigenvalue.

(28) It is assumed that there are three extracted eigenvalues, which are respectively an eigenvalue 1 (denoted by s), an eigenvalue 2 (denoted by d), and an eigenvalue 3 (denoted by t). After two matrix transformations, a first matrix vector of the eigenvalue 1, a first matrix vector of the eigenvalue 2, and a first matrix vector of the eigenvalue 3 are respectively obtained.

(29) For the eigenvalue 1, after a first matrix transformation, h^(1,1) = a(s^T * W^(1,1) + b^(1,1)) is obtained; after a second matrix transformation, h^(2,1) = a((h^(1,1))^T * W^(2,1) + (b^(2,1))^T) is obtained, that is, the first matrix vector of the eigenvalue 1 is obtained.

(30) For the eigenvalue 2, after a first matrix transformation, h^(1,2) = a(d^T * W^(1,2) + b^(1,2)) is obtained; after a second matrix transformation, h^(2,2) = a((h^(1,2))^T * W^(2,2) + (b^(2,2))^T) is obtained, that is, the first matrix vector of the eigenvalue 2 is obtained.

(31) For the eigenvalue 3, after a first matrix transformation, h^(1,3) = a(t^T * W^(1,3) + b^(1,3)) is obtained; after a second matrix transformation, h^(2,3) = a((h^(1,3))^T * W^(2,3) + (b^(2,3))^T) is obtained, that is, the first matrix vector of the eigenvalue 3 is obtained.
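The per-eigenvalue transformation chain above amounts to passing one eigenvalue (as a vector) through a small stack of affine transformations with an activation after each. A sketch with a sigmoid activation (as recommended later in the text) and hypothetical random weights standing in for the pre-obtained matrix parameter:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def transform_eigenvalue(v, weights, biases, act=sigmoid):
    """Apply the chain h^(1,i) = a(v^T W^(1,i) + b^(1,i)) and
    h^(n,i) = a((h^(n-1,i))^T W^(n,i) + b^(n,i)) for n = 2..N,
    returning the first matrix vector of one eigenvalue."""
    h = v
    for W, b in zip(weights, biases):
        h = act(h @ W + b)
    return h

rng = np.random.default_rng(0)
v = rng.standard_normal(4)          # one extracted eigenvalue, as a vector
Ws = [rng.standard_normal((4, 5)), rng.standard_normal((5, 3))]
bs = [rng.standard_normal(5), rng.standard_normal(3)]
h = transform_eigenvalue(v, Ws, bs)
print(h.shape)  # (3,)
```

Running this once per extracted eigenvalue (s, d, and t in the example) yields the three first matrix vectors h^(2,1), h^(2,2), h^(2,3) used in the next step.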

(32) Step 103: When a first matrix vector corresponding to each extracted eigenvalue is obtained, obtain second matrix vectors with respect to the at least two extracted eigenvalues using a convolutional network calculation method according to the obtained first matrix vector corresponding to each eigenvalue.

(33) In step 103, the obtaining second matrix vectors with respect to the at least two extracted eigenvalues using a convolutional network calculation method according to the obtained first matrix vector corresponding to each eigenvalue includes obtaining, using the convolutional network calculation method, the second matrix vectors with respect to the at least two extracted eigenvalues in the following manner:
h^(n+1) = a((h^n)^T * W^(n+1) + b^(n+1)), where

(34) h^(n+1) is the second matrix vectors that are with respect to the at least two extracted eigenvalues and obtained using the convolutional network calculation method, a is an activation function, h^n = [h^(n,1), h^(n,2), …, h^(n,i), …, h^(n,n)]^T, h^(n,n) is a first matrix vector that is of the n^th eigenvalue and on which an n^th matrix transformation is performed, W^(n+1) is the (n+1)^th matrix in the matrix parameter, and b^(n+1) is the (n+1)^th offset.

(35) It should be noted that a is an activation function; for example, when the activation threshold is 3.5, a([2,3,4]) = [0,0,1]. A sigmoid function is recommended for use.
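The thresholding behavior in the note can be sketched as follows (the function name is illustrative; in practice a smooth sigmoid is recommended, as the note says):

```python
import numpy as np

def threshold_activation(x, threshold=3.5):
    """Hard-threshold activation from the note: 1 where the input
    exceeds the threshold, 0 elsewhere."""
    return (np.asarray(x) > threshold).astype(int)

out = threshold_activation([2, 3, 4])
print(out.tolist())  # [0, 0, 1]
```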

(36) It is assumed that there are three extracted eigenvalues, which are respectively an eigenvalue 1 (denoted by s), an eigenvalue 2 (denoted by d), and an eigenvalue 3 (denoted by t). After two matrix transformations, a first matrix vector of the eigenvalue 1, a first matrix vector of the eigenvalue 2, and a first matrix vector of the eigenvalue 3 are respectively obtained. The second matrix vectors with respect to the at least two extracted eigenvalues are obtained using the convolutional network calculation method according to the obtained first matrix vector of the eigenvalue 1, the obtained first matrix vector of the eigenvalue 2, and the obtained first matrix vector of the eigenvalue 3:
h^3 = a((h^2)^T * W^3 + b^3),
where h^3 is the second matrix vectors that are with respect to the at least two extracted eigenvalues and obtained using the convolutional network calculation method, a is an activation function, h^2 = [h^(2,1), h^(2,2), h^(2,3)]^T, W^3 is the third matrix in the matrix parameter, and b^3 is the third offset.
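The combining step above can be sketched as stacking the per-eigenvalue vectors and applying one shared transformation. This assumes the stacking h^2 = [h^(2,1), h^(2,2), h^(2,3)]^T is plain concatenation; all weight values are hypothetical:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def combine_first_vectors(first_vectors, W, b, act=sigmoid):
    """h^(n+1) = a((h^n)^T W^(n+1) + b^(n+1)) with
    h^n the stacked per-eigenvalue first matrix vectors."""
    h_n = np.concatenate(first_vectors)
    return act(h_n @ W + b)

# Three hypothetical 3-dimensional first matrix vectors (for s, d, t).
h21, h22, h23 = np.full(3, 0.2), np.full(3, 0.5), np.full(3, 0.9)
rng = np.random.default_rng(1)
W3 = rng.standard_normal((9, 4))
b3 = rng.standard_normal(4)
h3 = combine_first_vectors([h21, h22, h23], W3, b3)
print(h3.shape)  # (4,)
```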

(37) Step 104: Obtain a status of an image characteristic in the image data by means of estimation according to the second matrix vectors.

(38) In step 104, after the second matrix vectors are obtained, the status of the image characteristic in the image data is obtained by means of estimation according to the second matrix vectors, where the status of the image characteristic in the image data includes position information of the image characteristic in the image data.

(39) The status of the image characteristic in the image data is obtained by means of estimation in the following manner:
ỹ^pst = (h̃^n)^T * W^pst + b^pst,
where ỹ^pst is the status, obtained by means of estimation, of the image characteristic in the image data, W^pst is a theoretical matrix parameter, b^pst is a theoretical offset, h̃^n is obtained according to h^n, and h^n = [h^(n,1), h^(n,2), …, h^(n,i), …, h^(n,n)]^T.
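The position estimate ỹ^pst is a plain linear readout (no activation). A minimal sketch with hypothetical values for h̃^n, W^pst, and b^pst:

```python
import numpy as np

def estimate_status(h_tilde, W_pst, b_pst):
    """Position estimate: y~^pst = (h~^n)^T W^pst + b^pst."""
    return h_tilde @ W_pst + b_pst

# Hypothetical combined vector and readout parameters.
h_tilde = np.array([0.2, 0.5, 0.9])
W_pst = np.array([[1.0, 0.0],
                  [0.0, 1.0],
                  [1.0, 1.0]])
b_pst = np.array([0.1, -0.1])
pos = estimate_status(h_tilde, W_pst, b_pst)
print(pos)  # ≈ [1.2, 1.3], e.g. an (x, y) position in the image
```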

(40) In another embodiment of the present disclosure, the method further includes determining reliability of the status obtained by means of estimation according to the second matrix vectors.

(41) The reliability of the status obtained by means of estimation is determined in the following manner:
ỹ^cls = σ((h^n)^T * W^cls + b^cls),
where ỹ^cls is the determined reliability of the status obtained by means of estimation, σ is a sigmoid function, σ(x) = (1 + exp(−x))^(−1), W^cls is a theoretical matrix parameter, and b^cls is a theoretical offset.

(42) When the reliability of the status obtained by means of estimation is calculated, the reliability obtained by means of calculation is compared with a set reliability threshold to further determine accuracy of an estimation result.
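The reliability readout squashes a linear score through the sigmoid, and the result can then be compared with a set threshold as described. A sketch with hypothetical parameters chosen so the linear score lands near zero:

```python
import numpy as np

def estimate_reliability(h_n, W_cls, b_cls):
    """Reliability: y~^cls = sigma((h^n)^T W^cls + b^cls),
    with sigma(x) = 1 / (1 + exp(-x))."""
    z = h_n @ W_cls + b_cls
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical combined vector and classifier parameters.
h_n = np.array([0.2, 0.5, 0.9])
W_cls = np.array([1.0, 1.0, 1.0])
b_cls = -1.6
r = estimate_reliability(h_n, W_cls, b_cls)
print(r)  # ≈ 0.5 (sigmoid of a near-zero score)
is_reliable = r >= 0.5  # compare against a set reliability threshold
```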

(43) In addition, after the second matrix vectors are obtained, the estimated status and the reliability of the estimated status may be obtained by means of calculation using a fully-connected belief network, which further improves accuracy of an estimation result.

(44) It should be noted that this embodiment of the present disclosure involves W-type matrices (denoted by W*) associated with the matrix parameter and b-type offsets (denoted by b*) associated with the offset parameter, which may be obtained in a training manner or set according to a practical need; this is not limited herein.

(45) A method for obtaining W*, b*, W.sup.cls, and b.sup.cls in the training manner may be described as follows.

(46) In the first step, positive sample image data and negative sample image data are input and are clustered into k groups using a k-means method, where k is a set integer.
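A minimal sketch of this first step (Lloyd's k-means on made-up sample features; a real implementation would cluster the extracted characteristics of the positive and negative sample images):

```python
import numpy as np

def kmeans(X, k, iters=20, seed=0):
    # Minimal Lloyd's algorithm: cluster the rows of X into k groups.
    rng = np.random.default_rng(seed)
    centers = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each sample to its nearest center
        labels = np.argmin(((X[:, None] - centers[None]) ** 2).sum(-1), axis=1)
        # Move each center to the mean of its assigned samples
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels, centers

rng = np.random.default_rng(3)
# Hypothetical sample features: two well-separated blobs of 20 samples each
X = np.vstack([rng.normal(0, 0.2, (20, 5)), rng.normal(3, 0.2, (20, 5))])
labels, centers = kmeans(X, k=2)
```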

(47) In the second step, values of W*, b*, W.sup.cls, and b.sup.cls are obtained using a Restricted Boltzmann Machine (RBM) training method in a well-known technology.

(48) In the third step, values of W*, b*, W.sup.cls, and b.sup.cls are obtained again by means of calculation using a target function and a backpropagation (BP) algorithm in a well-known technology.

(49) The target function may be:
J(·)=Σ.sub.n(J.sub.1(y.sub.n.sup.cls,{tilde over (y)}.sub.n.sup.cls)+y.sub.n.sup.clsJ.sub.2(y.sub.n.sup.pst,{tilde over (y)}.sub.n.sup.pst))+J.sub.3(w*,w.sup.cls),
where
J.sub.1(y.sub.n.sup.cls,{tilde over (y)}.sub.n.sup.cls)=−y.sub.n.sup.cls log({tilde over (y)}.sub.n.sup.cls)−(1−y.sub.n.sup.cls)log(1−{tilde over (y)}.sub.n.sup.cls); J.sub.2(y.sub.n.sup.pst,{tilde over (y)}.sub.n.sup.pst)=∥y.sub.n.sup.pst−{tilde over (y)}.sub.n.sup.pst∥.sup.2; and
J.sub.3(w*,w.sup.cls)=Σ.sub.i,j|w.sub.i,j*|+Σ.sub.i|w.sub.i.sup.cls|,
where n is an integer that indexes the training samples, and the sum is taken over the quantity of training samples.
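The three terms of the target function can be sketched as follows, assuming the cross-entropy term J.sub.1 carries a leading minus sign (so that J is minimized by gradient descent), J.sub.2 is a squared Euclidean distance, and J.sub.3 is an L1 penalty:

```python
import numpy as np

def J1(y_cls, y_cls_hat):
    # Cross-entropy between the set reliability and the calculated reliability
    return -(y_cls * np.log(y_cls_hat) + (1 - y_cls) * np.log(1 - y_cls_hat))

def J2(y_pst, y_pst_hat):
    # Squared Euclidean distance between set and calculated statuses
    return float(np.sum((y_pst - y_pst_hat) ** 2))

def J3(W_star, w_cls):
    # L1 regularization on the weight values
    return float(np.abs(W_star).sum() + np.abs(w_cls).sum())
```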

(50) y.sub.n.sup.cls is set estimated reliability of an image characteristic of the n.sup.th training sample, {tilde over (y)}.sub.n.sup.cls is estimated reliability, obtained by means of calculation, of the n.sup.th training sample, y.sub.n.sup.pst is a set estimated status of the image characteristic of the n.sup.th training sample, {tilde over (y)}.sub.n.sup.pst is an estimated status, obtained by means of calculation, of the n.sup.th training sample, w.sub.i,j* is a value of the i.sup.th row and the j.sup.th column in W* that is obtained by means of calculation in the second step, and w.sub.i.sup.cls is the i.sup.th value in W.sup.cls that is obtained by means of calculation in the second step.

(51) It should be noted that the third step is solved by means of gradient descent: the values of W*, b*, W.sup.cls, and b.sup.cls that are obtained in the second step are used as initial points for the gradient descent in the third step, and then new values of W*, b*, W.sup.cls, and b.sup.cls are obtained using the gradient descent method in the third step.
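A toy sketch of the gradient descent update used in the third step (the patent computes gradients by backpropagation; here a numerical gradient on a made-up two-parameter objective stands in, with the same update rule w ← w − lr·dJ/dw):

```python
import numpy as np

def gradient_descent(J, w0, lr=0.1, steps=200, eps=1e-6):
    # Generic gradient descent with central-difference numerical gradients
    w = w0.astype(float).copy()
    for _ in range(steps):
        g = np.zeros_like(w)
        for i in range(w.size):
            d = np.zeros_like(w)
            d.flat[i] = eps
            g.flat[i] = (J(w + d) - J(w - d)) / (2 * eps)
        w -= lr * g  # descend along the gradient
    return w

# Toy objective with known minimum at w = [1, -2]
J = lambda w: (w[0] - 1) ** 2 + (w[1] + 2) ** 2
w = gradient_descent(J, np.zeros(2))
```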

(52) According to the solution in Embodiment 1 of the present disclosure, at least two eigenvalues of input image data are extracted, where the eigenvalue includes at least a degree of matching between each characteristic and a corresponding template characteristic, a value of a probability that any two characteristics in the image data appear on a same position at the same time, and a value of a score of a change in a distance between two characteristics that have an association relationship; the following operations are executed for each extracted eigenvalue, until execution for the extracted eigenvalues is completed: selecting an eigenvalue, and performing at least two matrix transformations on the eigenvalue using a pre-obtained matrix parameter, to obtain a first matrix vector corresponding to the eigenvalue; when a first matrix vector corresponding to each extracted eigenvalue is obtained, second matrix vectors with respect to the at least two extracted eigenvalues are obtained using a convolutional network calculation method according to the obtained first matrix vector corresponding to each eigenvalue; and a status of an image characteristic in the image data is obtained by means of estimation according to the second matrix vectors. In this way, for multiple different eigenvalues obtained by means of extraction, multiple matrix transformations are performed for each eigenvalue, and a combination vector is obtained in a manner of convolutional network calculation on a matrix vector obtained after transformations of each eigenvalue; finally, estimation is performed on the image characteristic in the image data in a fully-connected belief network calculation manner, which effectively improves accuracy of estimation.

(53) The following describes the foregoing embodiment using a group of experimental data.

(54) It is assumed that there are three extracted eigenvalues of the input image data, which are respectively a degree s of matching between each characteristic and a corresponding template characteristic, a value t of a probability that any two characteristics in the image data appear on a same position at the same time, and a value d of a score of a change in a distance between two characteristics that have an association relationship.

(55) It is assumed that 26 characteristics are included in the experiment, and each characteristic corresponds to seven mixture types; then, corresponding s has 26*7=182 dimensions, corresponding t has 26*7=182 dimensions, and corresponding d has 26*6=156 dimensions.

(56) It should be noted that, because relative displacement is generated between a characteristic and three other characteristics, and each displacement is represented using two-dimensional data, the three displacements yield six-dimensional data. In this case, for the 26 characteristics, corresponding d has 26*6=156 dimensions.

(57) It is assumed that there are two types of s obtained in the experiment, where one type is a visual matching score (appearance score), and the other is a deformation matching score (deformation score).

(58) When s is a visual matching score with a value of 0.2, and the mixture type corresponding to characteristic 1 is 2, the obtained seven-dimensional vector of characteristic 1 is [0 0.2 0 0 0 0 0].

(59) When s is a deformation matching score with a value of 0.4, and the mixture type corresponding to characteristic 1 is 2, the obtained seven-dimensional vector of characteristic 1 is [0 0.4 0 0 0 0 0].

(60) In addition, if the mixture type corresponding to characteristic 1 is 2, the corresponding seven-dimensional vector of s (visual matching score) is [0 0.2 0 0 0 0 0], and the corresponding seven-dimensional vector of t is [0 1 0 0 0 0 0].

(61) In this way, data that is input into a computer for calculation includes (182+182+182+156)-dimensional data, that is, 702-dimensional data. After a first matrix transformation is performed, a first submatrix vector of s corresponds to 140-dimensional data, a first submatrix vector of d corresponds to 30-dimensional data, and a first submatrix vector of t corresponds to 30-dimensional data. After a second matrix transformation is performed, a second submatrix vector of s corresponds to 120-dimensional data, a second submatrix vector of d corresponds to 15-dimensional data, and a second submatrix vector of t corresponds to 15-dimensional data. Then, a second matrix vector corresponding to 100-dimensional data is obtained using a convolutional network calculation method.
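The dimension pipeline of this experiment can be sketched end to end (sigmoid activation assumed; all matrices are random placeholders, and the per-eigenvalue shapes follow the figures quoted above, with the two s types collapsed into one 182-dimensional input for brevity):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def transform(x, w, b):
    # One matrix transformation: h = a(x^T * W + b)
    return sigmoid(x @ w + b)

rng = np.random.default_rng(4)
# Per-eigenvalue dimensions: input -> first -> second submatrix vector
shapes = {"s": (182, 140, 120), "d": (156, 30, 15), "t": (182, 30, 15)}
parts = []
for name, (d0, d1, d2) in shapes.items():
    x = rng.standard_normal(d0)  # extracted eigenvalue (placeholder)
    h1 = transform(x, rng.standard_normal((d0, d1)), rng.standard_normal(d1))
    h2 = transform(h1, rng.standard_normal((d1, d2)), rng.standard_normal(d2))
    parts.append(h2)
h = np.concatenate(parts)        # 120 + 15 + 15 = 150 dimensions
W = rng.standard_normal((150, 100))
b = rng.standard_normal(100)
second = transform(h, W, b)      # second matrix vector, 100-dimensional
```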

(62) It may be learned that, as the quantity of matrix transformations increases, the data volume for calculation is reduced, which not only reduces the complexity of calculation, but also effectively improves the accuracy of calculation, thereby improving the accuracy of an estimation result.

(63) As shown in FIG. 2, FIG. 2 is a schematic structural diagram of an image characteristic estimation device according to an embodiment of the present disclosure. The device includes an extraction module 21, a first matrix vector calculation module 22, a second matrix vector calculation module 23, and an estimation module 24.

(64) The extraction module 21 is configured to extract at least two eigenvalues of input image data, where the eigenvalue includes at least a degree of matching between each characteristic and a corresponding template characteristic, a value of a probability that any two characteristics in the image data appear on a same position at the same time, and a value of a score of a change in a distance between two characteristics that have an association relationship.

(65) The first matrix vector calculation module 22 is configured to execute the following operations for each eigenvalue extracted by the extraction module 21, until execution for the extracted eigenvalues is completed: selecting an eigenvalue, and performing at least two matrix transformations on the eigenvalue using a pre-obtained matrix parameter, to obtain a first matrix vector corresponding to the eigenvalue.

(66) The second matrix vector calculation module 23 is configured to, when the first matrix vector that corresponds to each extracted eigenvalue and is obtained by means of calculation by the first matrix vector calculation module 22 is obtained, obtain second matrix vectors with respect to the at least two extracted eigenvalues using a convolutional network calculation method according to the obtained first matrix vector corresponding to each eigenvalue.

(67) The estimation module 24 is configured to obtain a status of an image characteristic in the image data by means of estimation according to the second matrix vectors obtained by means of calculation by the second matrix vector calculation module 23.

(68) The first matrix vector calculation module 22 is configured to select an eigenvalue, and perform a first matrix transformation on the eigenvalue using the pre-obtained matrix parameter, to obtain the first submatrix vector corresponding to the eigenvalue; perform a second matrix transformation on the first submatrix vector using the pre-obtained matrix parameter, to obtain the second submatrix vector corresponding to the eigenvalue; and by analogy, perform an N.sup.th matrix transformation on the (N−1).sup.th submatrix vector using the pre-obtained matrix parameter, to obtain the first matrix vector corresponding to the eigenvalue, where N is a natural number.

(69) The first matrix vector calculation module 22 is configured to obtain the first submatrix vector corresponding to the eigenvalue in the following manner:
h.sup.1,i=a(i.sup.T*W.sup.1,i+b.sup.1,i),
where h.sup.1,i represents the first submatrix vector corresponding to the i.sup.th extracted eigenvalue, a is an activation function, i.sup.T is a transposed matrix of the i.sup.th eigenvalue, W.sup.1,i is the first matrix with respect to the i.sup.th eigenvalue in the matrix parameter, and b.sup.1,i is the first offset with respect to the i.sup.th eigenvalue.

(70) The first matrix vector calculation module 22 is configured to obtain the second submatrix vector corresponding to the eigenvalue in the following manner:
h.sup.2,i=a((h.sup.1,i).sup.T*W.sup.2,i+(b.sup.2,i).sup.T),
where h.sup.2,i represents the second submatrix vector corresponding to the i.sup.th extracted eigenvalue, a is an activation function, (h.sup.1,i).sup.T is a transposed matrix of the first submatrix vector of the i.sup.th eigenvalue, W.sup.2,i is the second matrix with respect to the i.sup.th eigenvalue in the matrix parameter, and (b.sup.2,i).sup.T is the second offset with respect to the i.sup.th eigenvalue.

(71) The second matrix vector calculation module 23 is configured to obtain, using the convolutional network calculation method, the second matrix vectors with respect to the at least two extracted eigenvalues in the following manner:
h.sup.n+1=a((h.sup.n).sup.T*W.sup.n+1+b.sup.n+1),
where h.sup.n+1 is the second matrix vectors that are with respect to the at least two extracted eigenvalues and obtained using the convolutional network calculation method, a is an activation function, h.sup.n=[h.sup.n,1,h.sup.n,2, . . . ,h.sup.n,i, . . . ,h.sup.n,n].sup.T, h.sup.n,n is a first matrix vector that is of the n.sup.th eigenvalue and on which an n.sup.th matrix transformation is performed, W.sup.n+1 is the (n+1).sup.th matrix in the matrix parameter, and b.sup.n+1 is the (n+1).sup.th offset.

(72) In another embodiment of the present disclosure, the status of the image characteristic in the image data includes position information of the image characteristic in the image data; and the estimation module 24 is configured to obtain the status of the image characteristic in the image data by means of estimation in the following manner:
{tilde over (y)}.sup.pst=({tilde over (h)}.sup.n).sup.T*W.sup.pst+b.sup.pst,
where {tilde over (y)}.sup.pst is the status, obtained by means of estimation, of the image characteristic in the image data, W.sup.pst is a theoretical matrix parameter, b.sup.pst is a theoretical offset, {tilde over (h)}.sup.n is obtained according to h.sup.n, and h.sup.n=[h.sup.n,1,h.sup.n,2, . . . ,h.sup.n,i, . . . ,h.sup.n,n].sup.T.

(73) In another embodiment of the present disclosure, the device further includes a reliability calculation module 25, where the reliability calculation module 25 is configured to determine reliability of the status obtained by means of estimation according to the second matrix vectors obtained by means of calculation by the second matrix vector calculation module.

(74) The reliability calculation module is configured to determine the reliability of the status obtained by means of estimation in the following manner:
{tilde over (y)}.sup.cls=σ((h.sup.n).sup.T*W.sup.cls+b.sup.cls),
where {tilde over (y)}.sup.cls is the determined reliability of the status obtained by means of estimation, σ is a logistic function, σ(x)=(1+exp(−x)).sup.−1, W.sup.cls is a theoretical matrix parameter, and b.sup.cls is a theoretical offset.

(75) It should be noted that the device provided in this embodiment of the present disclosure may be implemented using hardware, or may be implemented in a software manner, which is not limited herein.

(76) As shown in FIG. 3, FIG. 3 is a schematic structural diagram of an image characteristic estimation device according to Embodiment 3 of the present disclosure. The device has a function of executing the foregoing embodiment. The device may use a structure of a general computer system, and the computer system may be a processor-based computer. An entity of the device includes at least one processor 31, a communications bus 32, a memory 33, and at least one communications interface 34.

(77) The processor 31 may be a general central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits that are configured to control program execution of the solutions of the present disclosure.

(78) The communications bus 32 may include a channel, over which information is transferred between the foregoing components. The communications interface 34 uses any apparatus of a transceiver type to communicate with another device or communications network, such as an Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).

(79) The computer system includes one or more memories 33, which may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, or a random access memory (RAM) or another type of dynamic storage device that can store information and instructions; and may also be an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other compact disc storage, optical disc storage (including a compact disc, a laser disc, an optical disc, a digital versatile disc, a Blu-ray disc, or the like), a disk storage medium or another disk storage device, or any other medium that can be used to carry or store expected program code that is in an instruction or data structure form and that can be accessed by a computer, which, however, is not limited herein. These memories connect to the processor using the bus.

(80) The memory 33 is configured to store application program code that executes the solutions of the present disclosure, and execution thereof is controlled by the processor 31. The processor 31 is configured to execute the application program code stored in the memory 33.

(81) In a possible implementation manner, when the foregoing application program code is executed by the processor 31, the processor 31 is configured to extract at least two eigenvalues of input image data, where the eigenvalue includes at least a degree of matching between each characteristic and a corresponding template characteristic, a value of a probability that any two characteristics in the image data appear on a same position at the same time, and a value of a score of a change in a distance between two characteristics that have an association relationship; execute the following operations for each extracted eigenvalue, until execution for the extracted eigenvalues is completed: select an eigenvalue, and perform at least two matrix transformations on the eigenvalue using a pre-obtained matrix parameter, to obtain a first matrix vector corresponding to the eigenvalue; when a first matrix vector corresponding to each extracted eigenvalue is obtained, obtain second matrix vectors with respect to the at least two extracted eigenvalues using a convolutional network calculation method according to the obtained first matrix vector corresponding to each eigenvalue; and obtain a status of an image characteristic in the image data by means of estimation according to the second matrix vectors.

(82) In another embodiment of the present disclosure, the processor 31 executes selecting an eigenvalue, and performing a first matrix transformation on the eigenvalue using the pre-obtained matrix parameter, to obtain the first submatrix vector corresponding to the eigenvalue; performing a second matrix transformation on the first submatrix vector using the pre-obtained matrix parameter, to obtain the second submatrix vector corresponding to the eigenvalue; and by analogy, performing an N.sup.th matrix transformation on the (N−1).sup.th submatrix vector using the pre-obtained matrix parameter, to obtain the first matrix vector corresponding to the eigenvalue, where N is a natural number.

(83) In another embodiment of the present disclosure, the processor 31 executes obtaining the first submatrix vector corresponding to the eigenvalue in the following manner:
h.sup.1,i=a(i.sup.T*W.sup.1,i+b.sup.1,i),
where h.sup.1,i represents the first submatrix vector corresponding to the i.sup.th extracted eigenvalue, a is an activation function, i.sup.T is a transposed matrix of the i.sup.th eigenvalue, W.sup.1,i is the first matrix with respect to the i.sup.th eigenvalue in the matrix parameter, and b.sup.1,i is the first offset with respect to the i.sup.th eigenvalue.

(84) In another embodiment of the present disclosure, the processor 31 executes obtaining the second submatrix vector corresponding to the eigenvalue in the following manner:
h.sup.2,i=a((h.sup.1,i).sup.T*W.sup.2,i+(b.sup.2,i).sup.T),
where h.sup.2,i represents the second submatrix vector corresponding to the i.sup.th extracted eigenvalue, a is an activation function, (h.sup.1,i).sup.T is a transposed matrix of the first submatrix vector of the i.sup.th eigenvalue, W.sup.2,i is the second matrix with respect to the i.sup.th eigenvalue in the matrix parameter, and (b.sup.2,i).sup.T is the second offset with respect to the i.sup.th eigenvalue.

(85) In another embodiment of the present disclosure, the processor 31 executes obtaining, using the convolutional network calculation method, the second matrix vectors with respect to the at least two extracted eigenvalues in the following manner:
h.sup.n+1=a((h.sup.n).sup.T*W.sup.n+1+b.sup.n+1),
where h.sup.n+1 is the second matrix vectors that are with respect to the at least two extracted eigenvalues and obtained using the convolutional network calculation method, a is an activation function, h.sup.n=[h.sup.n,1,h.sup.n,2, . . . ,h.sup.n,i, . . . ,h.sup.n,n].sup.T, h.sup.n,n is a first matrix vector that is of the n.sup.th eigenvalue and on which an n.sup.th matrix transformation is performed, W.sup.n+1 is the (n+1).sup.th matrix in the matrix parameter, and b.sup.n+1 is the (n+1).sup.th offset.

(86) In another embodiment of the present disclosure, the status of the image characteristic in the image data includes position information of the image characteristic in the image data; and the processor 31 executes obtaining the status of the image characteristic in the image data by means of estimation in the following manner:
{tilde over (y)}.sup.pst=({tilde over (h)}.sup.n).sup.T*W.sup.pst+b.sup.pst,
where {tilde over (y)}.sup.pst is the status, obtained by means of estimation, of the image characteristic in the image data, W.sup.pst is a theoretical matrix parameter, b.sup.pst is a theoretical offset, {tilde over (h)}.sup.n is obtained according to h.sup.n, and h.sup.n=[h.sup.n,1,h.sup.n,2, . . . ,h.sup.n,i, . . . ,h.sup.n,n].sup.T.

(87) In another embodiment of the present disclosure, the processor 31 is further configured to execute determining reliability of the status obtained by means of estimation according to the second matrix vectors.

(88) In another embodiment of the present disclosure, the processor 31 executes determining the reliability of the status obtained by means of estimation in the following manner:
{tilde over (y)}.sup.cls=σ((h.sup.n).sup.T*W.sup.cls+b.sup.cls),
where {tilde over (y)}.sup.cls is the determined reliability of the status obtained by means of estimation, σ is a logistic function, σ(x)=(1+exp(−x)).sup.−1, W.sup.cls is a theoretical matrix parameter, and b.sup.cls is a theoretical offset.

(89) In this embodiment, for processing of the estimation device and a method for interaction between the device and another network element when the application program code is executed by the processor, refer to the foregoing method embodiment. Details are not described herein.

(90) The device provided in this embodiment may resolve a problem existing in the prior art that estimation accuracy is low when estimation is performed on an image characteristic.

(91) Persons skilled in the art should understand that the embodiments of the present disclosure may be provided as a method, an apparatus (device), or a computer program product. Therefore, the present disclosure may use a form of hardware only embodiments, software only embodiments, or embodiments with a combination of software and hardware. Moreover, the present disclosure may use a form of a computer program product that is implemented on one or more computer-usable storage media (including but not limited to a disk memory, a CD-ROM, an optical memory, and the like) that include computer-usable program code.

(92) The present disclosure is described with reference to the flowcharts and/or block diagrams of the method, the apparatus (device), and the computer program product according to the embodiments of the present disclosure. It should be understood that computer program instructions may be used to implement each process and/or each block in the flowcharts and/or the block diagrams and a combination of a process and/or a block in the flowcharts and/or the block diagrams. These computer program instructions may be provided for a general-purpose computer, a dedicated computer, an embedded processor, or a processor of any other programmable data processing device to generate a machine, so that the instructions executed by a computer or a processor of any other programmable data processing device generate an apparatus for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.

(93) These computer program instructions may also be stored in a computer readable memory that can instruct the computer or any other programmable data processing device to work in a specific manner, so that the instructions stored in the computer readable memory generate an artifact that includes an instruction apparatus. The instruction apparatus implements a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.

(94) These computer program instructions may also be loaded onto a computer or another programmable data processing device, so that a series of operations and steps are performed on the computer or the another programmable device, thereby generating computer-implemented processing. Therefore, the instructions executed on the computer or the another programmable device provide steps for implementing a specific function in one or more processes in the flowcharts and/or in one or more blocks in the block diagrams.

(95) Although some embodiments of the present disclosure have been described, persons skilled in the art can make changes and modifications to these embodiments once they learn the basic inventive concept. Therefore, the following claims are intended to cover the described embodiments and all changes and modifications falling within the scope of the present disclosure.

(96) Obviously, persons skilled in the art can make various modifications and variations to the present disclosure without departing from the spirit and scope of the present disclosure. The present disclosure is intended to cover these modifications and variations provided that they fall within the scope of protection defined by the following claims and their equivalent technologies.