Dynamic facial expression recognition (FER) method based on Dempster-Shafer (DS) theory

11967180 · 2024-04-23

Abstract

A dynamic facial expression recognition (FER) method based on a Dempster-Shafer (DS) theory improves feature extraction from expression videos through multi-feature fusion, and deeply learns imbalanced dynamic expression features by using the DS theory, multi-branch convolution, and an attention mechanism. Compared with other methods, the dynamic FER method scientifically and effectively reduces the impact of sample imbalance on expression recognition, and fully utilizes spatio-temporal features to mine potential semantic information of the video expression for expression classification, thereby improving reliability and accuracy and meeting the demands of expression recognition.

Claims

1. A dynamic facial expression recognition (FER) method based on a Dempster-Shafer (DS) theory, comprising the following steps: a) preprocessing video data V in a dataset, extracting last N frames of the video data V to obtain consecutive video frames, and performing face detection, alignment, and clipping operations on the video frames to obtain a facial expression image P; b) constructing a Dempster-Shafer theory Expression Recognition (DSER) network model, wherein the DSER network model comprises a same-identity inter-frame sharing module M.sub.s, a space-domain attention module M.sub.att, a time-domain fully connected (FC) unit V.sub.FC, a time-domain multi-layer perceptron unit V.sub.MLP, a spatio-temporal feature fusion module M.sub.st, and a discriminator D.sub.ds guided by a DS theory; c) separately inputting the facial expression image P into the same-identity inter-frame sharing module M.sub.s and the space-domain attention module M.sub.att in the DSER network model, to obtain a same-identity inter-frame shared feature F.sub.s.sup.P and a space-domain attention feature F.sub.att.sup.P, and multiplying the same-identity inter-frame shared feature F.sub.s.sup.P by the space-domain attention feature F.sub.att.sup.P to obtain a space-domain feature F.sub.satt.sup.PS; d) sequentially inputting the facial expression image P into the time-domain FC unit V.sub.FC and the time-domain multi-layer perceptron unit V.sub.MLP in the DSER network model to obtain a time-domain vector V.sub.FCMLP.sup.PT; e) inputting the space-domain feature F.sub.satt.sup.PS and the time-domain vector V.sub.FCMLP.sup.PT into the spatio-temporal feature fusion module M.sub.st in the DSER network model to obtain a spatio-temporal feature F.sub.st.sup.P; f) inputting the spatio-temporal feature F.sub.st.sup.P into the discriminator D.sub.ds guided by the DS theory in the DSER network model, to obtain a classification result R, and completing the construction of the DSER network model; g) calculating a loss function l; h) iterating the DSER network model by using the loss function l and an Adam optimizer, to obtain a trained DSER network model; and i) processing to-be-detected video data by using the step a), to obtain a facial expression image, and inputting the facial expression image into the trained DSER network model to obtain the classification result R.

2. The dynamic FER method based on the DS theory according to claim 1, wherein in the step a), last 16 frames of the video data V are extracted based on a VideoCapture class in Python to obtain consecutive video frames, face detection is performed on the consecutive video frames by using a Deformable Parts Model (DPM) algorithm, a face image of each of the consecutive video frames is extracted to obtain a continuous 16-frame face image, and face alignment and clipping are performed on the continuous 16-frame face image by using a practical expression landmark detector (PELD) algorithm, to obtain an aligned continuous 16-frame facial expression image P .

3. The dynamic FER method based on the DS theory according to claim 1, wherein the step c) comprises the following steps: c-1) constituting the same-identity inter-frame sharing module M.sub.s by a first convolution module, a second convolution module, and a third convolution module sequentially, and constituting the space-domain attention module M.sub.att by a first FC module and a second FC module sequentially; c-2) constituting the first convolution module of the same-identity inter-frame sharing module M.sub.s by a convolutional layer with a 3*3 convolution kernel and a stride of 1, a batch normalization (BN) layer, and a rectified linear unit (ReLU) activation function layer sequentially, and inputting the facial expression image P into the first convolution module to obtain a feature F.sub.s1.sup.P; c-3) constituting the second convolution module of the same-identity inter-frame sharing module M.sub.s by a downsampling module and a residual module sequentially, wherein the downsampling module comprises a first branch and a second branch, the first branch sequentially comprises a first convolutional layer with a 3*3 convolution kernel and a stride of 2, a first BN layer, a first ReLU activation function layer, a second convolutional layer with a 3*3 convolution kernel and a stride of 1, a second BN layer, and a second ReLU activation function layer, the second branch sequentially comprises a third convolutional layer with a 1*1 convolution kernel and a stride of 2, a third BN layer, and a third ReLU activation function layer, and the residual module sequentially comprises a fourth convolutional layer with a 3*3 convolution kernel and a stride of 1, a fourth BN layer, a fourth ReLU activation function layer, a fifth convolutional layer with a 3*3 convolution kernel and a stride of 1, a fifth BN layer, and a fifth ReLU activation function layer; inputting the feature F.sub.s1.sup.P into the first branch of the downsampling module of the second convolution module to obtain a feature F.sub.sd2.sup.P1, and inputting the feature F.sub.s1.sup.P into the second branch of the downsampling module of the second convolution module to obtain a feature F.sub.sd2.sup.P2; adding up the feature F.sub.sd2.sup.P1 and the feature F.sub.sd2.sup.P2 to obtain a feature F.sub.sd2.sup.P; and inputting the feature F.sub.sd2.sup.P into the residual module of the second convolution module to obtain a feature F.sub.s2.sup.P; c-4) constituting the third convolution module of the same-identity inter-frame sharing module M.sub.s by a downsampling module and a residual module sequentially, wherein the downsampling module comprises a first branch and a second branch, the first branch sequentially comprises a first convolutional layer with a 3*3 convolution kernel and a stride of 2, a first BN layer, a first ReLU activation function layer, a second convolutional layer with a 3*3 convolution kernel and a stride of 1, a second BN layer, and a second ReLU activation function layer, the second branch sequentially comprises a third convolutional layer with a 1*1 convolution kernel and a stride of 2, a third BN layer, and a third ReLU activation function layer, and the residual module sequentially comprises a fourth convolutional layer with a 3*3 convolution kernel and a stride of 1, a fourth BN layer, a fourth ReLU activation function layer, a fifth convolutional layer with a 3*3 convolution kernel and a stride of 1, a fifth BN layer, and a fifth ReLU activation function layer; inputting the feature F.sub.s2.sup.P into the first
branch of the downsampling module of the third convolution module to obtain a feature F.sub.sd3.sup.P1, and inputting the feature F.sub.s1.sup.P into the second branch of the downsampling module of the third convolution module to obtain a feature F.sub.sd3.sup.P2; adding up the feature F.sub.sd3.sup.P1 and the feature F.sub.sd3.sup.P2 to obtain a feature F.sub.sd3.sup.P; and inputting the feature F.sub.sd3.sup.P into the residual module of the third convolution module to obtain a feature F.sub.s3.sup.P; c-5) calculating the same-identity inter-frame shared feature F.sub.s.sup.P according to a formula F.sub.s.sup.P=(1/16)Σ.sub.i=1.sup.16F.sub.s3.sup.Pi, wherein in the formula, F.sub.s3.sup.Pi represents an i.sup.th video frame vector in the feature F.sub.s3.sup.P; c-6) constituting the first FC module of the space-domain attention module M.sub.att by a BN layer, a flatten function, an FC layer, and a ReLU activation function layer sequentially, and inputting the facial expression image P into the first FC module to obtain a feature F.sub.att1.sup.P; c-7) constituting the second FC module of the space-domain attention module M.sub.att by an FC layer and a Sigmoid function layer sequentially, and inputting the feature F.sub.att1.sup.P into the second FC module to obtain the space-domain attention feature F.sub.att.sup.P; and c-8) multiplying the same-identity inter-frame shared feature F.sub.s.sup.P by the space-domain attention feature F.sub.att.sup.P to obtain the space-domain feature F.sub.satt.sup.PS.

4. The dynamic FER method based on the DS theory according to claim 1, wherein the step d) comprises the following steps: d-1) constituting the time-domain FC unit V.sub.FC by a patch partitioning module, a flatten function, an FC layer, and a ReLU activation function layer sequentially, inputting the facial expression image P into the patch partitioning module for patch partitioning to obtain a patch partitioning vector V.sub.patch.sup.P, inputting the patch partitioning vector V.sub.patch.sup.P into the flatten function to obtain a one-dimensional vector V.sub.patch1.sup.P, and sequentially inputting the one-dimensional vector V.sub.patch1.sup.P into the FC layer and the ReLU activation function layer to obtain a time-domain FC vector V.sub.FC.sup.P; and d-2) constituting the time-domain multi-layer perceptron unit V.sub.MLP by a BN layer, an FC layer, and a ReLU activation function layer, and inputting the time-domain FC vector V.sub.FC.sup.P into the time-domain multi-layer perceptron unit V.sub.MLP to obtain the time-domain vector V.sub.FCMLP.sup.PT.

5. The dynamic FER method based on the DS theory according to claim 1, wherein the step e) comprises the following step: e-1) inputting the space-domain feature F.sub.satt.sup.PS and the time-domain vector V.sub.FCMLP.sup.PT into the spatio-temporal feature fusion module M.sub.st in the DSER network model, and calculating the spatio-temporal feature F.sub.st.sup.P according to a formula F.sub.st.sup.P=F.sub.satt.sup.PS+λV.sub.FCMLP.sup.PT, wherein λ represents an adjustable hyper-parameter.

6. The dynamic FER method based on the DS theory according to claim 5, wherein λ=0.54.

7. The dynamic FER method based on the DS theory according to claim 1, wherein the step f) comprises the following steps: f-1) constituting, by a multi-branch convolution module, an uncertainty combination module, a multi-branch fusion module, and a determining module sequentially, the discriminator D.sub.ds guided by the DS theory; f-2) constituting the multi-branch convolution module by a first branch, a second branch, and a third branch sequentially, wherein the first branch, the second branch, and the third branch each sequentially comprise a first convolutional layer with a 3*3 convolution kernel and a stride of 1, a first BN layer, a first ReLU activation function layer, a second convolutional layer with a 3*3 convolution kernel and a stride of 2, a second BN layer, a second ReLU activation function layer, an average pooling layer, a flatten function layer, and a linear layer; and inputting the spatio-temporal feature F.sub.st.sup.P into the first branch, the second branch, and the third branch of the multi-branch convolution module to obtain a first branch vector V.sub.st1.sup.P, a second branch vector V.sub.st2.sup.P, and a third branch vector V.sub.st3.sup.P, respectively; f-3) inputting the first branch vector V.sub.st1.sup.P, the second branch vector V.sub.st2.sup.P, and the third branch vector V.sub.st3.sup.P into the uncertainty combination module; taking an exponent with e as a base for the first branch vector V.sub.st1.sup.P to obtain a first evidence vector e.sub.1=[e.sub.1.sup.1,e.sub.1.sup.2, . . . ,e.sub.1.sup.k, . . . ,e.sub.1.sup.K], wherein e.sub.1.sup.k represents a k.sup.th evidence value in the first branch vector, and k=[1, 2, . . . , K]; taking the exponent with e as the base for the second branch vector V.sub.st2.sup.P to obtain a second evidence vector e.sub.2=[e.sub.2.sup.1,e.sub.2.sup.2, . . . ,e.sub.2.sup.k, . . . ,e.sub.2.sup.K], wherein e.sub.2.sup.k represents a k.sup.th evidence value in the second branch vector; taking the exponent with e as the base for the third branch vector V.sub.st3.sup.P to obtain a third evidence vector e.sub.3=[e.sub.3.sup.1,e.sub.3.sup.2, . . . ,e.sub.3.sup.k, . . . ,e.sub.3.sup.K], wherein e.sub.3.sup.k represents a k.sup.th evidence value in the third branch vector, k=[1, 2, . . . , K], K represents a quantity of sample categories, K=7, and values of k one-to-one correspond to numbers in a label sequence [1: surprise, 2: fear, 3: disgust, 4: happiness, 5: sadness, 6: anger, 7: neutral]; calculating a k.sup.th Dirichlet parameter α.sub.1.sup.k of the first evidence vector e.sub.1 according to a formula α.sub.1.sup.k=e.sub.1.sup.k+1, calculating a k.sup.th Dirichlet parameter α.sub.2.sup.k of the second evidence vector e.sub.2 according to a formula α.sub.2.sup.k=e.sub.2.sup.k+1, and calculating a k.sup.th Dirichlet parameter α.sub.3.sup.k of the third evidence vector e.sub.3 according to a formula α.sub.3.sup.k=e.sub.3.sup.k+1; obtaining Dirichlet strength S.sub.1 of the first evidence vector e.sub.1 according to a formula S.sub.1=Σ.sub.k=1.sup.Kα.sub.1.sup.k, Dirichlet strength S.sub.2 of the second evidence vector e.sub.2 according to a formula S.sub.2=Σ.sub.k=1.sup.Kα.sub.2.sup.k, and Dirichlet strength S.sub.3 of the third evidence vector e.sub.3 according to a formula S.sub.3=Σ.sub.k=1.sup.Kα.sub.3.sup.k; obtaining first uncertainty u.sub.1 according to a formula u.sub.1=K/S.sub.1, second uncertainty u.sub.2 according to a formula u.sub.2=K/S.sub.2, and third uncertainty u.sub.3 according to a formula u.sub.3=K/S.sub.3; obtaining a first confidence coefficient b.sub.1 according to a formula b.sub.1=(α.sub.1.sup.k-1)/S.sub.1, a second confidence coefficient b.sub.2 according to a formula b.sub.2=(α.sub.2.sup.k-1)/S.sub.2, and a third confidence coefficient b.sub.3 according to a formula b.sub.3=(α.sub.3.sup.k-1)/S.sub.3; calculating a first conflict factor C.sub.12 according to a formula C.sub.12=b.sub.1b.sub.2 and a second conflict factor C.sub.23 according to a formula C.sub.23=b.sub.2b.sub.3; calculating a second prefix weight w.sub.2 according to a formula w.sub.2=u.sub.1u.sub.2/(1-C.sub.12), and a third prefix weight w.sub.3 according to a formula w.sub.3=u.sub.2u.sub.3/(1-C.sub.23), wherein a first prefix weight is w.sub.1=1; and multiplying the first branch vector V.sub.st1.sup.P by the first prefix weight w.sub.1 to obtain a first weight vector V.sub.1.sup.P, multiplying the second branch vector V.sub.st2.sup.P by the second prefix weight w.sub.2 to obtain a second weight vector V.sub.2.sup.P, and multiplying the third branch vector V.sub.st3.sup.P by the third prefix weight w.sub.3 to obtain a third weight vector V.sub.3.sup.P; f-4) inputting the first weight vector V.sub.1.sup.P, the second weight vector V.sub.2.sup.P, and the third weight vector V.sub.3.sup.P into the multi-branch fusion module, and calculating a fusion vector V.sub.fuse.sup.P according to a formula V.sub.fuse.sup.P=V.sub.1.sup.P+V.sub.2.sup.P+V.sub.3.sup.P; and f-5) constituting the determining module by a Softmax function and a max function, inputting the fusion vector V.sub.fuse.sup.P into the Softmax function for normalization, inputting a normalized fusion vector V.sub.fuse.sup.P into the max function to obtain a subscript E.sub.k of a maximum value, wherein k=[1, 2, . . . , K], and the values of k one-to-one correspond to the numbers in the label sequence [1: surprise, 2: fear, 3: disgust, 4: happiness, 5: sadness, 6: anger, 7: neutral], and comparing the subscript E.sub.k of the maximum value with the label sequence [1: surprise, 2: fear, 3: disgust, 4: happiness, 5: sadness, 6: anger, 7: neutral] to find a corresponding expression label as a determining result R.

8. The dynamic FER method based on the DS theory according to claim 5, wherein in the step g), the loss function l is calculated according to a formula l=μl.sub.KL(E.sub.k)+l.sub.BCE(V.sub.fuse.sup.P), wherein μ represents an adjustment factor, μ=0.04, l.sub.KL(E.sub.k) represents a calculation result of a KL loss of a subscript E.sub.k, and l.sub.BCE(V.sub.fuse.sup.P) represents a calculation result of a BCE loss of a fusion vector V.sub.fuse.sup.P.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a flowchart of a method according to the present disclosure; and

(2) FIG. 2 is a structural diagram of a discriminator guided by a DS theory according to the present disclosure.

(3) FIG. 3 shows a comparison of confusion matrices of the present disclosure and the Former-DFER.

(4) FIG. 4 is a flowchart of the present disclosure in an example scenario.

DETAILED DESCRIPTION OF THE EMBODIMENTS

(5) The present disclosure is further described with reference to FIG. 1 and FIG. 2.

(6) As shown in FIG. 1, a dynamic FER method based on a DS theory includes the following steps: a) Video data V in a dataset is preprocessed, last N frames of the video data V are extracted to obtain consecutive video frames, and face detection, alignment, and clipping operations are performed on the video frames to obtain facial expression image P. b) A DSER network model is constructed, where the DSER network model includes same-identity inter-frame sharing module M.sub.s, space-domain attention module M.sub.att, time-domain FC unit V.sub.FC, time-domain multi-layer perceptron unit V.sub.MLP, spatio-temporal feature fusion module M.sub.st, and discriminator D.sub.ds guided by a DS theory. c) The facial expression image P is separately input into the same-identity inter-frame sharing module M.sub.s and the space-domain attention module M.sub.att in the DSER network model, to obtain same-identity inter-frame shared feature F.sub.s.sup.P and space-domain attention feature F.sub.att.sup.P, and the same-identity inter-frame shared feature F.sub.s.sup.P is multiplied by the space-domain attention feature F.sub.att.sup.P to obtain space-domain feature F.sub.satt.sup.PS. d) The facial expression image P is sequentially input into the time-domain FC unit V.sub.FC and the time-domain multi-layer perceptron unit V.sub.MLP in the DSER network model to obtain time-domain vector V.sub.FCMLP.sup.PT. e) The space-domain feature F.sub.satt.sup.PS and the time-domain vector V.sub.FCMLP.sup.PT are input into the spatio-temporal feature fusion module M.sub.st in the DSER network model to obtain spatio-temporal feature F.sub.st.sup.P. f) The spatio-temporal feature F.sub.st.sup.P is input into the discriminator D.sub.ds guided by the DS theory in the DSER network model, to obtain classification result R, and the construction of the DSER network model is completed. g) Loss function l is calculated. h) The DSER network model is iterated by using the loss function l and an Adam optimizer, to obtain a trained DSER network model. i) To-be-detected video data is processed by using the step a), to obtain a facial expression image, and the facial expression image is input into the trained DSER network model to obtain the classification result R.
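For orientation, the data flow of steps c) through f) can be sketched as PyTorch-style pseudocode. The module internals are placeholders to be filled with the structures described in the embodiments below; the class name, argument names, and tensor shape are illustrative assumptions and not part of the disclosed method.

```python
# PyTorch-style sketch of the DSER forward pass in steps c) through f).
# Module internals are placeholders; only the data flow described in the text is shown.
import torch.nn as nn

class DSER(nn.Module):
    def __init__(self, sharing, attention, fc_unit, mlp_unit, discriminator, lam=0.54):
        super().__init__()
        self.sharing = sharing              # same-identity inter-frame sharing module M_s
        self.attention = attention          # space-domain attention module M_att
        self.fc_unit = fc_unit              # time-domain FC unit V_FC
        self.mlp_unit = mlp_unit            # time-domain multi-layer perceptron unit V_MLP
        self.discriminator = discriminator  # discriminator D_ds guided by the DS theory
        self.lam = lam                      # fusion hyper-parameter (0.54 in the embodiment)

    def forward(self, frames):              # frames: facial expression image P, e.g. (B, 16, C, H, W)
        f_s = self.sharing(frames)          # same-identity inter-frame shared feature F_s^P
        f_att = self.attention(frames)      # space-domain attention feature F_att^P
        f_satt = f_s * f_att                # space-domain feature F_satt^PS (element-wise product)
        v_fc = self.fc_unit(frames)         # time-domain FC vector V_FC^P
        v_t = self.mlp_unit(v_fc)           # time-domain vector V_FCMLP^PT
        f_st = f_satt + self.lam * v_t      # spatio-temporal feature F_st^P (fusion module M_st)
        return self.discriminator(f_st)     # classification result R
```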

(7) An inter-frame sharing network is used to extract a shared spatial feature from consecutive video frames, and a sharing module is added to compensate for insufficient feature learning of minority classes. In addition, to reduce the computational cost, a simple FC layer is used to capture long-term temporal information, and the core semantics of the temporal feature are gradually discovered by introducing a multi-layer perceptron. On this basis, a fusion module is used to fuse the spatial and temporal features into a spatio-temporal feature. Finally, evidence and uncertainty are calculated and combined based on the DS theory to ensure efficiency while maintaining good performance. The present disclosure improves feature extraction from expression videos through multi-feature fusion, and deeply learns imbalanced dynamic expression features by using the DS theory, multi-branch convolution, and an attention mechanism. Compared with other methods, the dynamic FER method scientifically and effectively reduces the impact of sample imbalance on expression recognition, and fully utilizes spatio-temporal features to mine potential semantic information of the video expression for expression classification, thereby improving reliability and accuracy and meeting the demands of expression recognition.

Embodiment 1

(8) In the step a), last 16 frames of the video data V are extracted based on a VideoCapture class in Python to obtain consecutive video frames, face detection is performed on the consecutive video frames by using a DPM algorithm, a face image of each of the consecutive video frames is extracted to obtain a continuous 16-frame face image, and face alignment and clipping are performed on the continuous 16-frame face image by using a PELD algorithm, to obtain an aligned continuous 16-frame facial expression image P .
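A minimal Python sketch of this preprocessing step is given below, using OpenCV's VideoCapture to grab the last 16 frames. The DPM detection and PELD alignment steps are represented by caller-supplied placeholder functions (detect_face_dpm, align_face_peld), and the 112x112 output size is an illustrative assumption.

```python
# Sketch of the preprocessing in step a): grab the last 16 frames with OpenCV's
# VideoCapture, then detect, align, and crop the face in each frame.
# detect_face_dpm and align_face_peld are caller-supplied placeholders for the
# DPM detection and PELD alignment steps, which are not implemented here.
import cv2

def extract_last_frames(video_path, n_frames=16):
    cap = cv2.VideoCapture(video_path)
    total = int(cap.get(cv2.CAP_PROP_FRAME_COUNT))
    cap.set(cv2.CAP_PROP_POS_FRAMES, max(total - n_frames, 0))
    frames = []
    while len(frames) < n_frames:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

def preprocess(video_path, detect_face_dpm, align_face_peld, size=(112, 112)):
    faces = []
    for frame in extract_last_frames(video_path):
        x1, y1, x2, y2 = detect_face_dpm(frame)          # DPM face detection (placeholder)
        face = align_face_peld(frame[y1:y2, x1:x2])      # PELD alignment (placeholder)
        faces.append(cv2.resize(face, size))             # aligned, clipped facial expression image P
    return faces
```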

Embodiment 2

(9) The step c) includes the following steps: c-1) The same-identity inter-frame sharing module M.sub.s is constituted by a first convolution module, a second convolution module, and a third convolution module sequentially, and the space-domain attention module M.sub.att is constituted by a first FC module and a second FC module sequentially. c-2) The first convolution module of the same-identity inter-frame sharing module M.sub.s is constituted by a convolutional layer with a 3*3 convolution kernel and a stride of 1, a BN layer, and a ReLU activation function layer sequentially, and the facial expression image P is input into the first convolution module to obtain feature F.sub.s1.sup.P. c-3) The second convolution module of the same-identity inter-frame sharing module M.sub.s is constituted by a downsampling module and a residual module sequentially. The downsampling module includes a first branch and a second branch. The first branch sequentially includes a first convolutional layer with a 3*3 convolution kernel and a stride of 2, a first BN layer, a first ReLU activation function layer, a second convolutional layer with a 3*3 convolution kernel and a stride of 1, a second BN layer, and a second ReLU activation function layer. The second branch sequentially includes a third convolutional layer with a 1*1 convolution kernel and a stride of 2, a third BN layer, and a third ReLU activation function layer. The residual module sequentially includes a fourth convolutional layer with a 3*3 convolution kernel and a stride of 1, a fourth BN layer, a fourth ReLU activation function layer, a fifth convolutional layer with a 3*3 convolution kernel and a stride of 1, a fifth BN layer, and a fifth ReLU activation function layer. The feature F.sub.s1.sup.P is input into the first branch of the downsampling module of the second convolution module to obtain feature F.sub.sd2.sup.P1, and the feature F.sub.s1.sup.P is input into the second branch of the downsampling module of the second convolution module to obtain feature F.sub.sd2.sup.P2. The feature F.sub.sd2.sup.P1 and the feature F.sub.sd2.sup.P2 are added up to obtain feature F.sub.sd2.sup.P. The feature F.sub.sd2.sup.P is input into the residual module of the second convolution module to obtain feature F.sub.s2.sup.P. c-4) The third convolution module of the same-identity inter-frame sharing module M.sub.s is constituted by a downsampling module and a residual module sequentially. The downsampling module includes a first branch and a second branch. The first branch sequentially includes a first convolutional layer with a 3*3 convolution kernel and a stride of 2, a first BN layer, a first ReLU activation function layer, a second convolutional layer with a 3*3 convolution kernel and a stride of 1, a second BN layer, and a second ReLU activation function layer. The second branch sequentially includes a third convolutional layer with a 1*1 convolution kernel and a stride of 2, a third BN layer, and a third ReLU activation function layer. The residual module sequentially includes a fourth convolutional layer with a 3*3 convolution kernel and a stride of 1, a fourth BN layer, a fourth ReLU activation function layer, a fifth convolutional layer with a 3*3 convolution kernel and a stride of 1, a fifth BN layer, and a fifth ReLU activation function layer.
The feature F.sub.s2.sup.P is input into the first branch of the downsampling module of the third convolution module to obtain feature F.sub.sd3.sup.P1, and the feature F.sub.s1.sup.P is input into the second branch of the downsampling module of the third convolution module to obtain feature F.sub.sd3.sup.P2. The feature F.sub.sd3.sup.P1 and the feature F.sub.sd3.sup.P2 are added up to obtain feature F.sub.sd3.sup.P. The feature F.sub.sd3.sup.P is input into the residual module of the third convolution module to obtain feature F.sub.s3.sup.P. c-5) The same-identity inter-frame shared feature F.sub.s.sup.P is calculated according to formula

(10) $F_s^P = \frac{1}{16}\sum_{i=1}^{16} F_{s3}^{P_i}$,
where in the formula, F.sub.s3.sup.Pi represents an i.sup.th video frame vector in the feature F.sub.s3.sup.P. c-6) The first FC module of the space-domain attention module M.sub.att is constituted by a BN layer, a flatten function, an FC layer, and a ReLU activation function layer sequentially, and the facial expression image P is input into the first FC module to obtain feature F.sub.att1.sup.P. c-7) The second FC module of the space-domain attention module M.sub.att is constituted by an FC layer and a Sigmoid function layer sequentially, and the feature F.sub.att1.sup.P is input into the second FC module to obtain the space-domain attention feature F.sub.att.sup.P. c-8) The same-identity inter-frame shared feature F.sub.s.sup.P is multiplied by the space-domain attention feature F.sub.att.sup.P to obtain the space-domain feature F.sub.satt.sup.PS.
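The following PyTorch sketch illustrates one downsampling-plus-residual block of the sharing module M.sub.s and the 16-frame averaging of step c-5). The channel counts are illustrative assumptions, and the residual module is written exactly as the two convolution blocks described above; no skip connection beyond what the text specifies is added.

```python
# PyTorch sketch of one downsampling + residual block from M_s (steps c-3)/c-4))
# and the 16-frame averaging of step c-5). Channel counts are illustrative only.
import torch.nn as nn

def conv_bn_relu(cin, cout, k, stride):
    return nn.Sequential(
        nn.Conv2d(cin, cout, k, stride, padding=k // 2),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
    )

class DownsampleResidual(nn.Module):
    def __init__(self, cin, cout):
        super().__init__()
        # first branch: 3x3 stride-2 conv block followed by a 3x3 stride-1 conv block
        self.branch1 = nn.Sequential(conv_bn_relu(cin, cout, 3, 2), conv_bn_relu(cout, cout, 3, 1))
        # second branch: 1x1 stride-2 conv block
        self.branch2 = conv_bn_relu(cin, cout, 1, 2)
        # residual module as described: two 3x3 stride-1 conv blocks (the text adds no explicit skip)
        self.residual = nn.Sequential(conv_bn_relu(cout, cout, 3, 1), conv_bn_relu(cout, cout, 3, 1))

    def forward(self, x):
        d = self.branch1(x) + self.branch2(x)   # F_sd^P1 + F_sd^P2 -> F_sd^P
        return self.residual(d)                 # F_s^P of this convolution module

def shared_feature(f_s3):
    # f_s3: per-frame features of shape (B, 16, C, H, W); average over the 16 frames (step c-5))
    return f_s3.mean(dim=1)                     # same-identity inter-frame shared feature F_s^P
```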

Embodiment 3

(11) The step d) includes the following steps: d-1) The time-domain FC unit V.sub.FC is constituted by a patch partitioning module, a flatten function, an FC layer, and a ReLU activation function layer sequentially, the facial expression image P is input into the patch partitioning module to be divided into two groups (there are 24 channels in each group) along a channel dimension, patch partitioning is performed to obtain patch partitioning vector V.sub.patch.sup.P, the patch partitioning vector V.sub.patch.sup.P is input into the flatten function to obtain one-dimensional vector V.sub.patch1.sup.P, and the one-dimensional vector V.sub.patch1.sup.P is sequentially input into the FC layer and the ReLU activation function layer to obtain time-domain FC vector V.sub.FC.sup.P. d-2) The time-domain multi-layer perceptron unit V.sub.MLP is constituted by a BN layer, an FC layer, and a ReLU activation function layer, and the time-domain FC vector V.sub.FC.sup.P is input into the time-domain multi-layer perceptron unit V.sub.MLP to obtain the time-domain vector V.sub.FCMLP.sup.PT.
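A compact sketch of the time-domain branch in steps d-1) and d-2) follows. The feature sizes are illustrative assumptions, and the patch partitioning itself is assumed to be done by the caller before these modules are applied.

```python
# Sketch of the time-domain branch (steps d-1) and d-2)): flatten the patch-partitioned
# input, apply FC + ReLU, then a BN/FC/ReLU perceptron. Feature sizes are illustrative.
import torch
import torch.nn as nn

class TimeDomainFC(nn.Module):
    def __init__(self, in_features, hidden):
        super().__init__()
        self.fc = nn.Linear(in_features, hidden)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, v_patch):                       # v_patch: patch partitioning vector V_patch^P
        v = torch.flatten(v_patch, start_dim=1)       # one-dimensional vector V_patch1^P per sample
        return self.relu(self.fc(v))                  # time-domain FC vector V_FC^P

class TimeDomainMLP(nn.Module):
    def __init__(self, hidden, out_features):
        super().__init__()
        self.bn = nn.BatchNorm1d(hidden)
        self.fc = nn.Linear(hidden, out_features)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, v_fc):
        return self.relu(self.fc(self.bn(v_fc)))      # time-domain vector V_FCMLP^PT
```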

Embodiment 4

(12) The step e) includes the following step: e-1) The space-domain feature F.sub.satt.sup.PS and the time-domain vector V.sub.FCMLP.sup.PT are input into the spatio-temporal feature fusion module M.sub.st in the DSER network model, and the spatio-temporal feature F.sub.st.sup.P is calculated according to formula F.sub.st.sup.P=F.sub.satt.sup.PS+λV.sub.FCMLP.sup.PT. In the formula, λ represents an adjustable hyper-parameter.

Embodiment 5

(13) λ=0.54.

Embodiment 6

(14) The step f) includes the following steps: f-1) As shown in FIG. 2, the discriminator D.sub.ds guided by the DS theory is constituted by a multi-branch convolution module, an uncertainty combination module, a multi-branch fusion module, and a determining module sequentially. f-2) The multi-branch convolution module is constituted by a first branch, a second branch, and a third branch sequentially. The first branch, the second branch, and the third branch each sequentially include a first convolutional layer with a 3*3 convolution kernel and a stride of 1, a first BN layer, a first ReLU activation function layer, a second convolutional layer with a 3*3 convolution kernel and a stride of 2, a second BN layer, a second ReLU activation function layer, an average pooling layer, a flatten function layer, and a linear layer. The spatio-temporal feature F.sub.st.sup.P is input into the first branch, the second branch, and the third branch of the multi-branch convolution module to obtain first branch vector V.sub.st1.sup.P, second branch vector V.sub.st2.sup.P, and third branch vector V.sub.st3.sup.P, respectively. f-3) The first branch vector V.sub.st1.sup.P, the second branch vector V.sub.st2.sup.P, and the third branch vector V.sub.st3.sup.P are input into the uncertainty combination module. An exponent with e as a base is taken for the first branch vector V.sub.st1.sup.P to obtain first evidence vector e.sub.1=[e.sub.1.sup.1,e.sub.1.sup.2, . . . ,e.sub.1.sup.k, . . . ,e.sub.1.sup.K], where e.sub.1.sup.k represents a k.sup.th evidence value in the first branch vector, and k={1, 2, . . . , K}. The exponent with e as the base is taken for the second branch vector V.sub.st2.sup.P to obtain second evidence vector e.sub.2=[e.sub.2.sup.1,e.sub.2.sup.2, . . . ,e.sub.2.sup.k, . . . ,e.sub.2.sup.K], where e.sub.2.sup.k represents a k.sup.th evidence value in the second branch vector. The exponent with e as the base is taken for the third branch vector V.sub.st3.sup.P to obtain third evidence vector e.sub.3=[e.sub.3.sup.1,e.sub.3.sup.2, . . . ,e.sub.3.sup.k, . . . ,e.sub.3.sup.K], where e.sub.3.sup.k represents a k.sup.th evidence value in the third branch vector, k={1, 2, . . . , K}, K represents a quantity of sample categories, K=7, and values of k one-to-one correspond to numbers in a label sequence [1: surprise, 2: fear, 3: disgust, 4: happiness, 5: sadness, 6: anger, 7: neutral], in other words, k=1 represents the surprise, k=2 represents the fear, k=3 represents the disgust, k=4 represents the happiness, k=5 represents the sadness, k=6 represents the anger, and k=7 represents the neutral. k.sup.th Dirichlet parameter α.sub.1.sup.k of the first evidence vector e.sub.1 is calculated according to formula α.sub.1.sup.k=e.sub.1.sup.k+1, k.sup.th Dirichlet parameter α.sub.2.sup.k of the second evidence vector e.sub.2 is calculated according to formula α.sub.2.sup.k=e.sub.2.sup.k+1, and k.sup.th Dirichlet parameter α.sub.3.sup.k of the third evidence vector e.sub.3 is calculated according to formula α.sub.3.sup.k=e.sub.3.sup.k+1. Dirichlet strength S.sub.1 of the first evidence vector e.sub.1 is obtained according to formula S.sub.1=Σ.sub.k=1.sup.Kα.sub.1.sup.k, Dirichlet strength S.sub.2 of the second evidence vector e.sub.2 is obtained according to formula S.sub.2=Σ.sub.k=1.sup.Kα.sub.2.sup.k, and Dirichlet strength S.sub.3 of the third evidence vector e.sub.3 is obtained according to formula S.sub.3=Σ.sub.k=1.sup.Kα.sub.3.sup.k. First uncertainty u.sub.1 is obtained according to formula

(15) $u_1 = \frac{K}{S_1}$,
second uncertainty u.sub.2 is obtained according to formula

(16) $u_2 = \frac{K}{S_2}$,
and third uncertainty u.sub.3 is obtained according to formula

(17) $u_3 = \frac{K}{S_3}$.
First confidence coefficient b.sub.1 is obtained according to formula

(18) $b_1 = \frac{\alpha_1^k - 1}{S_1}$,
second confidence coefficient b.sub.2 is obtained according to formula

(19) $b_2 = \frac{\alpha_2^k - 1}{S_2}$,
and third confidence coefficient b.sub.3 is obtained according to formula

(20) $b_3 = \frac{\alpha_3^k - 1}{S_3}$.
First conflict factor C.sub.12 is calculated according to formula C.sub.12=b.sub.1b.sub.2 and second conflict factor C.sub.23 is obtained according to formula C.sub.23=b.sub.2b.sub.3. Second prefix weight w.sub.2 is obtained according to formula

(21) $w_2 = \frac{u_1 u_2}{1 - C_{12}}$,
and third prefix weight w.sub.3 is obtained according to formula

(22) $w_3 = \frac{u_2 u_3}{1 - C_{23}}$,
where a first prefix weight is w.sub.1=1. The first branch vector V.sub.st1.sup.P is multiplied by the first prefix weight w.sub.1 to obtain first weight vector V.sub.1.sup.P, the second branch vector V.sub.st2.sup.P is multiplied by the second prefix weight w.sub.2 to obtain second weight vector V.sub.2.sup.P, and the third branch vector V.sub.st3.sup.P is multiplied by the third prefix weight w.sub.3 to obtain third weight vector V.sub.3.sup.P. f-4) The first weight vector V.sub.1.sup.P, the second weight vector V.sub.2.sup.P, and the third weight vector V.sub.3.sup.P are input into the multi-branch fusion module, and fusion vector V.sub.fuse.sup.P is calculated according to formula V.sub.fuse.sup.P=V.sub.1.sup.P+V.sub.2.sup.P+V.sub.3.sup.P. f-5) The determining module is constituted by a Softmax function and a max function. The fusion vector V.sub.fuse.sup.P is input into the Softmax function for normalization, and normalized fusion vector V.sub.fuse.sup.P is input into the max function to obtain subscript E.sub.k of a maximum value, where k={1, 2, . . . , K}, and the values of k one-to-one correspond to the numbers in the label sequence [1: surprise, 2: fear, 3: disgust, 4: happiness, 5: sadness, 6: anger, 7: neutral]. The subscript E.sub.k of the maximum value is compared with the label sequence [1: surprise, 2: fear, 3: disgust, 4: happiness, 5: sadness, 6: anger, 7: neutral] to find a corresponding expression label as determining result R .
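The uncertainty combination of step f-3) and the fusion and decision of steps f-4) and f-5) can be sketched as follows. The formulas are followed as reconstructed above; reading the conflict factor C.sub.12=b.sub.1b.sub.2 as the inner product of the two belief vectors is an assumption, since the text does not spell out the vector form, and the returned class index is 0-based while the label sequence above is 1-based.

```python
# Sketch of the uncertainty combination in step f-3) and the fusion/decision of
# steps f-4)/f-5), written from the formulas above. Treating the conflict factor
# C_12 = b_1 b_2 as the inner product of the two belief vectors is an assumption.
import torch

def ds_combine(v1, v2, v3, num_classes=7):
    # v1, v2, v3: branch vectors V_st1^P, V_st2^P, V_st3^P of shape (B, K)
    e1, e2, e3 = torch.exp(v1), torch.exp(v2), torch.exp(v3)            # evidence vectors e_1, e_2, e_3
    a1, a2, a3 = e1 + 1, e2 + 1, e3 + 1                                 # Dirichlet parameters alpha
    s1 = a1.sum(-1, keepdim=True)                                       # Dirichlet strength S_1
    s2 = a2.sum(-1, keepdim=True)                                       # Dirichlet strength S_2
    s3 = a3.sum(-1, keepdim=True)                                       # Dirichlet strength S_3
    u1, u2, u3 = num_classes / s1, num_classes / s2, num_classes / s3   # uncertainties u = K / S
    b1, b2, b3 = (a1 - 1) / s1, (a2 - 1) / s2, (a3 - 1) / s3            # confidence coefficients b
    c12 = (b1 * b2).sum(-1, keepdim=True)                               # conflict factor C_12
    c23 = (b2 * b3).sum(-1, keepdim=True)                               # conflict factor C_23
    w1 = 1.0                                                            # first prefix weight
    w2 = u1 * u2 / (1.0 - c12)                                          # second prefix weight
    w3 = u2 * u3 / (1.0 - c23)                                          # third prefix weight
    fused = w1 * v1 + w2 * v2 + w3 * v3                                 # fusion vector V_fuse^P
    return fused.softmax(dim=-1).argmax(dim=-1)                         # class index (0-based; labels above are 1-based)
```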

Embodiment 7

(23) In the step g), the loss function l is calculated according to formula l=μl.sub.KL(E.sub.k)+l.sub.BCE(V.sub.fuse.sup.P), where μ represents an adjustment factor, μ=0.04, l.sub.KL(E.sub.k) represents a calculation result of a KL loss of subscript E.sub.k, and l.sub.BCE(V.sub.fuse.sup.P) represents a calculation result of a BCE loss of fusion vector V.sub.fuse.sup.P.
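A sketch of this loss is given below. The exact form of the KL term over the subscript E.sub.k is not spelled out in the text, so it is passed in as a precomputed placeholder, and taking the BCE term against a one-hot encoding of the ground-truth label is an assumption.

```python
# Sketch of the loss in step g): l = mu * l_KL + l_BCE with mu = 0.04. The exact KL
# term is not spelled out in the text, so it is passed in precomputed; taking BCE
# against a one-hot encoding of the ground-truth label is an assumption.
import torch.nn.functional as F

def dser_loss(v_fuse, target, kl_term, mu=0.04, num_classes=7):
    # v_fuse: fusion vector V_fuse^P of shape (B, K); target: ground-truth class indices (B,)
    probs = v_fuse.softmax(dim=-1)
    one_hot = F.one_hot(target, num_classes).float()
    bce = F.binary_cross_entropy(probs, one_hot)     # l_BCE(V_fuse^P)
    return mu * kl_term + bce                        # l = mu * l_KL(E_k) + l_BCE(V_fuse^P)
```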

(24) Taking data in public dataset DFEW as an example, the following specifically describes implementations of the present disclosure.

(25) A face image and a facial expression label corresponding to the face image are obtained from the dataset DFEW, and a DSER network model is constructed. The DSER network model includes same-identity inter-frame sharing module M.sub.s , space-domain attention module M.sub.att, time-domain FC unit V.sub.FC, time-domain multi-layer perceptron unit V.sub.MLP, spatio-temporal feature fusion module M.sub.st, and discriminator D.sub.ds guided by a DS theory. Then, video data in the DFEW is preprocessed, the last N frames are extracted to obtain consecutive video frames, and face detection, alignment, and clipping operations are performed on the video frames to obtain facial expression image P .

(26) In the DSER network model, the facial expression image P is input into the same-identity inter-frame sharing module M.sub.s and the space-domain attention module M.sub.att in the DSER network model to obtain same-identity inter-frame shared feature F.sub.s.sup.P and space-domain attention feature F.sub.att.sup.P respectively, and the same-identity inter-frame shared feature F.sub.s.sup.P is multiplied by the space-domain attention feature F.sub.att.sup.P to obtain space-domain feature F.sub.satt.sup.PS. The facial expression image P is sequentially input into the time-domain FC unit V.sub.FC and the time-domain multi-layer perceptron unit V.sub.MLP in the DSER network model to obtain time-domain vector V.sub.FCMLP.sup.PT. The space-domain feature F.sub.satt.sup.PS and the time-domain vector V.sub.FCMLP.sup.PT are input into the spatio-temporal feature fusion module M.sub.st of the DSER network model to obtain spatio-temporal feature F.sub.st.sup.P. The spatio-temporal feature F.sub.st.sup.P is input into the discriminator D.sub.ds guided by the DS theory in the DSER network model, to obtain classification result R, and facial expression classification is performed.

(27) Effectiveness of the method in the present disclosure is proved by comparing the DSER network model with traditional neural network models (C3D, P3D, I3D-RGB, Resnet18+LSTM) and current mainstream neural network models (CAER-Net, FAN, Former-DFER) under unified experimental conditions. Comparison results are shown in Table 1. In Table 1, Params represents a parameter quantity, which is used to measure the size of the model; GFLOPs represents a quantity of floating-point operations, which is used to measure the computational cost of the model; ACC represents accuracy, which is used to measure prediction accuracy of the model; Precision represents precision, which is used to measure the ratio of correctly predicted positive samples to all samples that are predicted as positive; Recall represents the recall rate, which is used to measure the ratio of correctly predicted positive samples to all actual positive samples; F1-score represents the harmonic mean of the Precision and the Recall, which is used to measure the capability of the model for finding positive examples; weighted represents calculating the index of each label and taking the mean weighted by support (the quantity of true instances per label); and macro represents calculating the unweighted mean over categories, without considering label imbalance.
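For reference, the weighted and macro averaging described above correspond to the average argument of scikit-learn's metric functions; the label arrays in the snippet below are toy values for illustration only.

```python
# Illustration of the "weighted" vs. "macro" averaging used in Table 1, computed with
# scikit-learn. The label arrays are toy values for demonstration only.
from sklearn.metrics import precision_score, recall_score, f1_score

y_true = [1, 2, 2, 3, 4, 4, 4, 5, 6, 7]
y_pred = [1, 2, 3, 3, 4, 4, 5, 5, 6, 7]

for avg in ("weighted", "macro"):
    p = precision_score(y_true, y_pred, average=avg, zero_division=0)
    r = recall_score(y_true, y_pred, average=avg, zero_division=0)
    f = f1_score(y_true, y_pred, average=avg, zero_division=0)
    print(f"{avg}: precision={p:.3f} recall={r:.3f} f1={f:.3f}")
```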

(28) The DFEW is used for training, and evaluation is performed on the same dataset. Results are shown in Table 1. For the sake of fairness, unified experimental conditions are adopted for all models that need to be compared. The method in the present disclosure outperforms the latest mainstream models in terms of the four evaluation indicators. It is worth noting that under the evaluation indicators precision, recall, and F-1 score, the model in the present disclosure is less than 1% ahead of the most advanced existing models in the weighted state, but is about 3% ahead of them in the macro (unweighted mean) state. The indicators in the macro state are generally more affected by sample imbalance, yet the model in the present disclosure is least affected by the sample imbalance and has a greater lead than in the weighted state. This indicates that the DSER model proposed in the present disclosure can effectively alleviate the harm caused by the sample imbalance. In addition, the Params and the GFLOPs show that the model in the present disclosure achieves good prediction at a low cost, with 104.26 M fewer Params and 0.35 fewer GFLOPs than the Former-DFER, indicating that the model in the present disclosure is the most cost-effective.

(29) TABLE 1 Comparison results of the models

                                            precision        recall           F-1 score
Method          Params (M)  GFLOPs  ACC     weighted  macro  weighted  macro  weighted  macro
C3D             78.79       4.87    50.39   51.26     40.74  50.93     39.29  50.27     39.54
P3D             74.43       4.83    51.37   52.44     40.88  51.83     40.72  51.32     40.18
I3D-RGB         33.48       4.53    51.84   52.58     41.83  51.78     40.39  51.65     40.41
Resnet18+LSTM   31.53       4.55    50.67   51.45     40.23  50.75     39.90  50.54     39.48
CAER-Net        22.81       4.37    50.58   50.02     40.96  50.32     39.74  50.29     39.82
FAN             34.18       4.58    56.48   58.22     47.23  56.48     47.95  56.36     47.21
Former-DFER     146.78      5.13    61.43   62.76     49.94  62.09     51.51  60.76     50.46
Ours            42.52       4.78    62.23   63.31     51.79  62.34     54.42  61.35     52.12

(30) In order to thoroughly evaluate the method proposed in the present disclosure and compare it with existing advanced methods, the ACC of each category in a dataset AFEW is analyzed and visualized as a confusion matrix, as shown in FIG. 3. The goal is to achieve maximum performance with a minimum computational cost; through experiments, it is determined that using three classifier branches is optimal, balancing efficiency and performance. The model in the present disclosure performs comparably to the Former-DFER in most categories (happiness, sadness, neutral, and anger), and outperforms the Former-DFER in the minority categories (surprise, disgust, and fear). For example, the model in the present disclosure significantly improves the recall rate for the category disgust, reaching 12.16%, almost six times that of the Former-DFER. Compared with the most advanced methods available, this improvement underscores the effectiveness of the model in the present disclosure in learning and identifying expressions of minority classes.
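A minimal example of computing such a per-category (row-normalized) confusion matrix with scikit-learn is shown below; the prediction arrays are toy values, and the 1-7 labels follow the expression indices used above.

```python
# Minimal example of the per-category confusion matrix referred to in FIG. 3, built with
# scikit-learn; rows are normalized so the diagonal gives per-class recall. Toy data only.
from sklearn.metrics import confusion_matrix

labels = [1, 2, 3, 4, 5, 6, 7]   # surprise, fear, disgust, happiness, sadness, anger, neutral
y_true = [4, 4, 5, 7, 1, 3, 6, 2, 5, 7]
y_pred = [4, 5, 5, 7, 1, 5, 6, 1, 5, 7]
cm = confusion_matrix(y_true, y_pred, labels=labels, normalize="true")
print(cm.round(2))
```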

(31) Example scenario: Supermarket customer satisfaction survey, as shown in FIG. 4.

(32) Face collection: A device with a high-definition camera is installed at a plurality of key locations in a supermarket, such as an entrance, a checkout area, and a main shelf area. The camera can capture and record a facial expression of a customer.

(33) Data preprocessing: Video data captured by the camera needs to be preprocessed. Dlib or another face detection tool is used to extract a facial image of the customer from a video frame and adjust the facial image to an appropriate resolution for algorithm processing.
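A small illustrative example of this Dlib-based extraction step is given below; the 112x112 target resolution is an assumed value, not one specified in the present disclosure.

```python
# Illustrative Dlib-based face extraction and resizing for the preprocessing step above;
# the 112x112 target resolution is an assumed value, not one specified in the disclosure.
import cv2
import dlib

detector = dlib.get_frontal_face_detector()

def extract_faces(frame, size=(112, 112)):
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    faces = []
    for rect in detector(gray, 1):                        # upsample once to catch smaller faces
        x1, y1 = max(rect.left(), 0), max(rect.top(), 0)
        crop = frame[y1:rect.bottom(), x1:rect.right()]
        if crop.size:
            faces.append(cv2.resize(crop, size))
    return faces
```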

(34) Privacy protection: To ensure privacy of the customer, all captured facial data should be anonymized, for example, by blurring or removing a specific personal feature.

(35) Real-time expression recognition: Preprocessed facial data is input into an expression recognition algorithm provided in the present disclosure. The expression recognition algorithm can recognize seven basic emotional states in real time: happiness, sadness, anger, fear, surprise, disgust, and neutrality.

(36) Data analysis: At the end of each day, collected expression data is integrated and analyzed. For example, the emotion distribution of customers in the checkout area within a specific time period can be evaluated, or customer responses in a shelf area after a new product is launched can be observed.

(37) Result application:

(38) Product layout optimization: If lots of confused or dissatisfied expressions are captured in a specific shelf area, a supermarket manager may consider rearranging a product or providing a clearer identifier.

(39) Service improvement: If customers in the checkout area generally show dissatisfaction or anxiety, the manager can take a measure to improve checkout efficiency or increase checkout staff during peak hours.

(40) Marketing strategy adjustment: An emotional response of the customer to a specific promotion activity is analyzed, so as to adjust or optimize a promotion strategy.

(41) Feedback loop: Based on the collected data and an analysis result, the supermarket can make regular strategy adjustment. In addition, the expression recognition algorithm can be regularly updated and optimized to capture and interpret an emotion of the customer more accurately.

(42) The practical application scenario of the expression recognition can help the supermarket better understand a real feeling and need of the customer, thereby providing better shopping experience.

(43) Finally, it should be noted that the above descriptions are only preferred embodiments of the present disclosure, and are not intended to limit the present disclosure. Although the present disclosure has been described in detail with reference to the foregoing embodiments, those skilled in the art may still modify the technical solutions described in the foregoing embodiments, or equivalently substitute some technical features thereof. Any modification, equivalent substitution, improvement, etc. within the spirit and principles of the present disclosure shall fall within the scope of protection of the present disclosure.