DIRECTION OF ARRIVAL ESTIMATION METHOD AND DEVICE BASED ON STEERING VECTOR MATRIX RECONSTRUCTION
20250317221 · 2025-10-09
Assignee
Inventors
- Qiang Li (Shenzhen, CN)
- Zhenhui WANG (Shenzhen, CN)
- Lei Huang (Shenzhen, CN)
- Xinzhu CHEN (Shenzhen, CN)
- Yuhang Xiao (Shenzhen, CN)
- Weize Sun (Shenzhen, CN)
- Peichang Zhang (Shenzhen, CN)
- Xiaopeng LI (Shenzhen, CN)
CPC classification
H04B17/252
ELECTRICITY
G06F17/16
PHYSICS
International classification
G06F17/11
PHYSICS
Abstract
A DoA estimation method and device based on steering vector matrix reconstruction, related to the field of array signal processing. The method includes: obtaining an array sampling covariance matrix according to an array received signal; setting a target variable, and limiting a feasible domain of the target variable by using two operators to determine a first constraint condition; characterizing an estimation error based on the target variable and the array sampling covariance matrix, and using the characterized estimation error as a second constraint condition; establishing an initial optimization model according to a preset norm based on partial sum of singular values and the constraint conditions of the target variable; determining a multivariable optimization model according to the initial optimization model; solving the multivariable optimization model to obtain an optimal result; and analyzing the optimal result to obtain a DoA of the target incident signal.
Claims
1. A direction of arrival estimation method based on steering vector matrix reconstruction, wherein the method comprises: obtaining an array sampling covariance matrix according to an array received signal; wherein the array received signal is obtained by receiving a target incident signal using an array antenna; setting a target variable, and limiting a feasible domain of the target variable by using a Hankel matrix transformation operator and a column extraction operator, and determining a first constraint condition of the target variable; the Hankel matrix transformation operator is configured to transform a vector into a Hankel matrix; the column extraction operator is configured to extract columns of a matrix; characterizing an estimation error of an array covariance matrix based on the target variable and the array sampling covariance matrix, and using the characterized estimation error as a second constraint condition of the target variable; the array covariance matrix is composed of the array sampling covariance matrix and the estimation error of the array covariance matrix; establishing an initial optimization model according to a preset norm based on partial sum of singular values and constraint conditions of the target variable; the constraint conditions comprise the first constraint condition and the second constraint condition; determining a multivariable optimization model according to the initial optimization model; solving the multivariable optimization model based on an alternating direction method of multipliers to obtain an optimal result; the optimal result comprises an optimal target variable; analyzing the optimal result to obtain a direction of arrival of the target incident signal.
2. The direction of arrival estimation method based on steering vector matrix reconstruction according to claim 1, wherein the characterizing an estimation error of an array covariance matrix based on the target variable and the array sampling covariance matrix, and using the characterized estimation error as a second constraint condition of the target variable, comprises: characterizing the array covariance matrix according to the target variable; characterizing the estimation error of the array covariance matrix according to the characterized array covariance matrix and the array sampling covariance matrix; calculating norms of the estimation error to obtain the second constraint condition of the target variable.
3. The direction of arrival estimation method based on steering vector matrix reconstruction according to claim 2, wherein the establishing an initial optimization model according to a preset norm based on partial sum of singular values and constraint conditions of the target variable, comprises: replacing the first constraint condition by the preset norm based on partial sum of singular values to obtain a first objective function; determining an expression of the initial optimization model according to the second constraint condition and the first objective function.
4. The direction of arrival estimation method based on steering vector matrix reconstruction according to claim 3, wherein the expression of the initial optimization model is: min_{U,σ} Σ_{i=1}^{L} ‖ℋ[𝒢_i[U]]‖_p, s.t. ‖R̂−UU^H−σI‖_F ≤ ε, σ ≥ 0; where ℋ represents the Hankel matrix transformation operator;
𝒢_i represents the column extraction operator; p represents the order of the norms; s.t. represents the constraint condition; (·)^H represents a conjugate transpose; R̂ represents the array sampling covariance matrix; I represents an identity matrix; σ represents the noise power; ‖·‖_F represents the Frobenius norm; and ε represents a hyperparameter of the optimization model.
5. The direction of arrival estimation method based on steering vector matrix reconstruction according to claim 1, wherein the determining a multivariable optimization model according to the initial optimization model, comprises: defining a plurality of optimization variables, and characterizing the initial optimization model by using each of the optimization variables to obtain the multivariable optimization model.
6. The direction of arrival estimation method based on steering vector matrix reconstruction according to claim 5, wherein an expression of the multivariable optimization model is: min_{U,V,Z,σ,{B_i}} Σ_{i=1}^{L} ‖B_i‖_p, s.t. ‖Z−R̂‖_F ≤ ε, Z = UV^H+σI, U = V, B_i = ℋ[𝒢_i[U]], i = 1, . . . , L, σ ≥ 0; where ℋ represents the Hankel matrix transformation operator;
𝒢_i represents the column extraction operator; p represents the order of the norms; s.t. represents the constraint condition; (·)^H represents a conjugate transpose; R̂ represents the array sampling covariance matrix; I represents an identity matrix; ‖·‖_F represents the Frobenius norm; ε represents a hyperparameter of the optimization model; and Z, V, and B_i are optimization variables.
7. The direction of arrival estimation method based on steering vector matrix reconstruction according to claim 5, wherein the solving the multivariable optimization model based on an alternating direction method of multipliers to obtain an optimal result, comprises: initializing each of the optimization variables and the target variable; constructing, based on the multivariable optimization model, an augmented Lagrangian function of the multivariable optimization model; performing, based on the alternating direction method of multipliers, iterative updating on each of the optimization variables and the target variable to determine a saddle point of the augmented Lagrangian function; the saddle point is the optimal result of the multivariable optimization model.
8. The direction of arrival estimation method based on steering vector matrix reconstruction according to claim 7, wherein an expression of the augmented Lagrangian function is: L(U,V,Z,σ,𝔅,C,M,𝔇) = Σ_{i=1}^{L} ‖B_i‖_p + ⟨C, Z−UV^H−σI⟩ + ⟨M, U−V⟩ + Σ_{i=1}^{L} ⟨D_i, B_i−ℋ[𝒢_i[U]]⟩ + (ρ/2)(‖Z−UV^H−σI‖_F² + ‖U−V‖_F² + Σ_{i=1}^{L} ‖B_i−ℋ[𝒢_i[U]]‖_F²); where L represents a Lagrangian function; C, M, and D_i are Lagrangian multipliers; ⟨X,Y⟩ = tr(X^H Y), tr(·) represents a trace of a matrix; and ρ is a penalty parameter.
9. The direction of arrival estimation method based on steering vector matrix reconstruction according to claim 7, wherein the analyzing the optimal result to obtain a direction of arrival of the target incident signal comprises: obtaining an optimal first target variable from the optimal result, and determining the direction of arrival of the target incident signal according to the optimal first target variable.
10. A computer device, comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor executes the computer program to implement steps of the direction of arrival estimation method based on steering vector matrix reconstruction as described in claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0037] In order to more clearly explain the technical schemes in the embodiments of the present disclosure, the accompanying drawings needed in the embodiments are briefly introduced below. Obviously, the drawings in the following description show only some embodiments of the present disclosure, and those skilled in the art can obtain other drawings from these drawings without creative labor.
DETAILED DESCRIPTION OF EMBODIMENTS
[0044] In order to make the purposes, technical schemes, and effects of the present disclosure more clear and definite, the present disclosure is further described in detail below with reference to the embodiments and the accompanying drawings. It should be understood that the embodiments described herein are only some of the embodiments, not all of them. Based on these embodiments, all other embodiments obtained by those skilled in the art without creative labor still fall within the scope of protection of the present disclosure.
[0045] The purpose of the present disclosure is to provide a direction of arrival estimation method and device based on steering vector matrix reconstruction to perform DoA estimation of incident signals based on array antennas when the angular separation of the incident signals is small, thereby improving the accuracy of the DoA estimation.
[0046] In order to make the above-mentioned objects, features and advantages of the present disclosure more obvious and easier to understand, the present disclosure is further described in detail below with reference to the accompanying drawings and specific embodiments.
Embodiment 1
[0047] As shown in the accompanying drawing, the method includes the following steps. [0048] S1, obtaining an array sampling covariance matrix according to an array received signal; the array received signal is obtained by receiving a target incident signal using an array antenna. [0049] S2, setting a target variable, and limiting a feasible domain of the target variable by using a Hankel matrix transformation operator and a column extraction operator to determine a first constraint condition of the target variable. [0050] S3, characterizing an estimation error of an array covariance matrix based on the target variable and the array sampling covariance matrix, and using the characterized estimation error as a second constraint condition of the target variable. [0051] S4, establishing an initial optimization model according to a preset norm based on partial sum of singular values and the constraint conditions of the target variable; in addition, σ ≥ 0, where σ represents a noise power, and σ ≥ 0 is used as the third constraint condition. [0052] S5, determining a multivariable optimization model according to the initial optimization model, which specifically includes: defining a plurality of optimization variables, and characterizing the initial optimization model by using each of the optimization variables to obtain the multivariable optimization model. [0053] S6, solving the multivariable optimization model based on an alternating direction method of multipliers to obtain an optimal result; the optimal result includes an optimal target variable. The step S6 includes: S61, initializing each of the optimization variables and the target variable; S62, constructing, based on the multivariable optimization model, an augmented Lagrangian function of the multivariable optimization model; S63, performing, based on the alternating direction method of multipliers, iterative updating on each of the optimization variables and the target variable to determine a saddle point of the augmented Lagrangian function; the saddle point is the optimal result of the multivariable optimization model. [0054] S7, analyzing the optimal result to obtain a direction of arrival of the target incident signal. The step S7 includes: obtaining an optimal first target variable from the optimal result, and determining the direction of arrival of the target incident signal according to the optimal first target variable.
[0055] Furthermore, the step S1 includes:
[0056] Expressing the array sampling covariance matrix as:
R̂ = (1/K) Σ_{k=1}^{K} x(t_k)x^H(t_k)
where K represents a quantity of samples in a time domain, also called the number of snapshots.
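As a sketch of this estimate, assuming the K snapshots are stacked as the columns of a matrix X (array size and snapshot count below are illustrative, not from the patent):

```python
import numpy as np

def sample_covariance(X):
    """Sample covariance R_hat = (1/K) * sum_k x(t_k) x(t_k)^H.

    X : (M, K) complex array, one snapshot per column.
    """
    M, K = X.shape
    return (X @ X.conj().T) / K

# Illustrative use: 8 sensors, 500 snapshots of unit-power complex noise.
rng = np.random.default_rng(0)
X = (rng.standard_normal((8, 500)) + 1j * rng.standard_normal((8, 500))) / np.sqrt(2)
R_hat = sample_covariance(X)
print(R_hat.shape)  # (8, 8)
```

The result is Hermitian by construction, as a covariance estimate must be.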
[0057] Further, consider L (L<M) narrowband uncorrelated far-field signals with incident angles θ_1, θ_2, . . . , θ_L, which are received by a uniform linear array composed of M antenna elements with adjacent spacing d. The array received signal x(t) can be modeled as:
x(t) = As(t) + n(t)
where s(t)=[s_1(t), s_2(t), . . . , s_L(t)]^T is the L×1 dimensional vector of the spatial signal, t is the sampling time, (·)^T represents the transpose, n(t) represents the M×1 dimensional noise vector, and A is the M×L dimensional array steering vector matrix, of the form:
A = [a(θ_1), a(θ_2), . . . , a(θ_L)]
[0058] where a(θ_l) is the steering vector with an angle of θ_l, expressed as:
a(θ_l) = [1, e^{−j2πd sin θ_l/λ}, . . . , e^{−j2π(M−1)d sin θ_l/λ}]^T
where λ represents a wavelength of the incident signal, e is the base of the natural logarithm, j is the imaginary unit, and d represents the adjacent spacing of the antenna array elements.
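The signal model above can be simulated in a few lines; a minimal sketch assuming the e^{−j2πd sin θ/λ} phase convention and half-wavelength spacing (both assumptions, since the patent's formula images are not reproduced here):

```python
import numpy as np

def steering_vector(theta, M, d_over_lambda=0.5):
    """ULA steering vector a(theta) = [1, e^{-j2*pi*d*sin(theta)/lambda}, ...]^T."""
    m = np.arange(M)
    return np.exp(-2j * np.pi * d_over_lambda * m * np.sin(theta))

def received_snapshots(thetas, M, K, snr_db=0.0, rng=None):
    """x(t) = A s(t) + n(t) for L uncorrelated narrowband far-field sources."""
    rng = rng or np.random.default_rng(0)
    A = np.stack([steering_vector(t, M) for t in thetas], axis=1)  # M x L
    S = (rng.standard_normal((len(thetas), K))
         + 1j * rng.standard_normal((len(thetas), K))) / np.sqrt(2)
    N = (rng.standard_normal((M, K)) + 1j * rng.standard_normal((M, K))) / np.sqrt(2)
    sigma = 10 ** (-snr_db / 20)  # noise amplitude for the requested SNR
    return A @ S + sigma * N, A

X, A = received_snapshots(np.deg2rad([-5.0, 5.0]), M=8, K=500)
print(A.shape, X.shape)  # (8, 2) (8, 500)
```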
[0059] The signal covariance matrix is an L×L dimensional diagonal matrix, expressed as:
R_S = E{s(t)s^H(t)} = diag(p_1, p_2, . . . , p_L)
where E{·} represents the mathematical expectation, and (·)^H represents the conjugate transpose.
[0060] Considering an ideal case, the array covariance matrix R can be decomposed into a noise-free array covariance matrix R.sub.0 and a diagonal noise covariance matrix R.sub.N.
[0061] Furthermore, assuming that the sensor noise is zero-mean Gaussian noise that is uncorrelated in space and time, the M×M dimensional noise covariance matrix R_N is defined as:
R_N = σI
where σ represents the noise power, and I is the M×M dimensional identity matrix. Further, the array covariance matrix R can be expressed as:
R = R_0 + σI = AR_SA^H + σI
[0062] where R_0 is the noise-free array covariance matrix, which is of the form:
R_0 = AR_SA^H = (AE)(AE)^H = UU^H (8)
where p_l represents the power of the l-th signal, and (·)* represents a conjugate operation. Obviously, R_S is a real diagonal matrix, which can be decomposed into two mutually conjugate-transposed diagonal matrices, namely EE^H in Formula (8), so that U = AE. The l-th column of U is equivalent to the corresponding steering vector multiplied by a constant. In the present disclosure, the matrix decomposition method corresponding to Formula (8) is called the Vandermonde decomposition of a Toeplitz matrix, where U has a Vandermonde structure.
[0063] The present disclosure needs to find a matrix U that satisfies both the Vandermonde structure and the formula, but it is difficult to directly impose the Vandermonde-structure constraint on U, because the set of Vandermonde matrices is non-convex and highly nonlinear, which makes the constructed optimization problem difficult to solve. The Vandermonde structure can be realized by forcing each column of the matrix to be an exponential sequence. If an exponential sequence is transformed into a Hankel matrix, the rank of the Hankel matrix is 1. Since U has a Vandermonde structure, each column of U is transformed into a Hankel matrix of rank 1. Therefore, the target variable is set through step S2, and the target variable is constrained.
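The rank-1 property invoked above is easy to check numerically: the Hankel matrix built from a sampled complex exponential has numerical rank 1. A small sketch with illustrative sizes:

```python
import numpy as np

def hankel_op(x, m):
    """Map an n-vector to the (n-m+1) x m Hankel matrix H[x]_(i,j) = x_{i+j-1}."""
    n = len(x)
    return np.array([[x[i + j] for j in range(m)] for i in range(n - m + 1)])

n, m = 8, 4
x = np.exp(-1j * 2 * np.pi * 0.12 * np.arange(n))  # sampled complex exponential
H = hankel_op(x, m)
print(np.linalg.matrix_rank(H))  # 1
```

Each column of H is the previous column times the same complex ratio, hence rank 1.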
[0064] Furthermore, the step S2 includes:
[0065] The Hankel matrix transformation operator ℋ maps an n×1 dimensional vector x to an (n−m+1)×m dimensional Hankel matrix ℋ[x]; that is, ℋ[x]_(i,j) = x_(i+j−1) means the (i+j−1)-th element of x is equal to the element in the i-th row and j-th column of ℋ[x]. The adjoint operator ℋ* of ℋ maps an (n−m+1)×m dimensional matrix X to an n×1 dimensional vector ℋ*[X]; that is, [ℋ*X]_k = Σ_{i+j−1=k} X_(i,j) means the k-th element of ℋ*X is equal to the sum of the elements of X whose row and column subscripts sum to k+1. The column extraction operator 𝒢_i extracts the i-th column of a matrix; that is, 𝒢_i[X] = X_(:,i). The adjoint operator 𝒢_i* of 𝒢_i maps an n×1 dimensional vector x to an n×m dimensional matrix whose i-th column is x and whose remaining columns are zero, where i represents an index, and i ∈ {1, . . . , m} represents that i takes the values 1, . . . , m.
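These four operators and the defining adjoint identity ⟨ℋ[x], X⟩ = ⟨x, ℋ*[X]⟩ can be sketched as follows (0-based column indexing is a convenience of the sketch; the patent indexes from 1):

```python
import numpy as np

def H_op(x, m):
    """Hankel operator: n-vector -> (n-m+1) x m matrix, H[x]_(i,j) = x_{i+j-1}."""
    n = len(x)
    return np.array([[x[i + j] for j in range(m)] for i in range(n - m + 1)])

def H_adj(X):
    """Adjoint H*: sum the entries of X along each anti-diagonal (i+j const)."""
    r, m = X.shape
    out = np.zeros(r + m - 1, dtype=X.dtype)
    for i in range(r):
        for j in range(m):
            out[i + j] += X[i, j]
    return out

def G_op(X, i):
    """Column extraction: the i-th column of X."""
    return X[:, i]

def G_adj(x, i, m):
    """Adjoint G_i*: place x in column i of an n x m zero matrix."""
    out = np.zeros((len(x), m), dtype=x.dtype)
    out[:, i] = x
    return out

# Check the adjoint identity <H[x], X> = <x, H*[X]>, with <A,B> = tr(A^H B).
rng = np.random.default_rng(1)
x = rng.standard_normal(8) + 1j * rng.standard_normal(8)
X = rng.standard_normal((5, 4)) + 1j * rng.standard_normal((5, 4))
lhs = np.vdot(H_op(x, 4), X)   # vdot conjugates its first argument
rhs = np.vdot(x, H_adj(X))
print(np.isclose(lhs, rhs))  # True
```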
[0066] Ideally, the array covariance matrix R can be decomposed into the noise-free array covariance matrix R_0 and the diagonal noise covariance matrix σI, where σ ≥ 0 represents the noise power. Taking advantage of the fact that R_0 has a rank of L and is positive semi-definite, UU^H is configured to replace R_0, that is, R_0 = UU^H. At the same time, U has a Vandermonde structure, so the operators ℋ and 𝒢_i are used to impose constraints on U, that is, rank(ℋ[𝒢_i[U]]) = 1, representing the first constraint condition, where rank(·) represents the rank of a matrix. The problem constructed at this time is to find a U and σ under certain constraints, which is of the form:
Find U, σ
s.t. R = UU^H + σI, rank(ℋ[𝒢_i[U]]) = 1, i = 1, . . . , L, σ ≥ 0
where U ∈ ℂ^{M×L}, ℂ represents the set of complex matrices, U is an M×L dimensional complex matrix, Find represents finding the variables that satisfy the constraints, and s.t. is configured to limit the feasible domain of the optimization variables.
[0067] In practical applications, only the array sampling covariance matrix under finite snapshots can be obtained. Due to the existence of errors, it is obvious that R̂ ≠ UU^H + σI. In other words, the estimation error of the array covariance matrix can be used to constrain the target variable.
[0068] Furthermore, the step S3 includes:
[0069] Expressing the characterized array covariance matrix R as:
R = UU^H + σI
[0070] The estimation error E of the array covariance matrix is expressed as:
E = R̂ − UU^H − σI
where E includes the signal cross-correlation term and the noise cross-correlation term, and is not equal to 0. Further, the norm of the estimation error is calculated, and the equality constraint R̂ = UU^H + σI is changed to the inequality constraint ‖R̂ − UU^H − σI‖_F ≤ ε; that is, the second constraint condition of the target variable is
‖R̂ − UU^H − σI‖_F ≤ ε
where ‖·‖_F represents the Frobenius norm, and ε is a hyperparameter of the optimization model whose preset size depends on the quantity of snapshots and the SNR.
[0071] Furthermore, the step S4 includes:
[0072] Because the signal covariance matrix is not a diagonal matrix in the case of finite snapshots, the target variable U does not strictly have a Vandermonde structure. The data in the first column of U contains not only the components of the first signal, but also smaller components of the other signals. After being transformed into a Hankel matrix and subjected to singular value decomposition, it yields one large singular value and other smaller singular values. The sizes of the singular values depend on the signal powers and cross-correlation terms contained in the signal sampling covariance matrix R_S.
[0073] Therefore, the preset norm based on the partial sum of singular values (PSSV) is:
‖Y‖_{p=N} = Σ_{i=N+1}^{min(m,n)} σ_i(Y)
where Y ∈ ℂ^{m×n}, Σ is a sum symbol, and σ_i represents the i-th singular value of Y in descending order. Making the series in the PSSV-based norm be p=1, that is ‖Y‖_{p=1}, the equality constraint rank(ℋ[𝒢_i[U]]) = 1 is replaced with Σ_{i=1}^{L} ‖ℋ[𝒢_i[U]]‖_{p=1}, which is taken as the objective function of the optimization problem (i.e., the first objective function), allowing the rank of ℋ[𝒢_i[U]] to be non-1 and the singular values other than the largest to be non-0. Therefore, the expression of the optimization model of the problem is:
min_{U,σ} Σ_{i=1}^{L} ‖ℋ[𝒢_i[U]]‖_{p=1}, s.t. ‖R̂ − UU^H − σI‖_F ≤ ε, σ ≥ 0 (13)
where min represents optimizing a variable to minimize an objective function; U represents a first target variable; σ represents a second target variable; i represents an index; L represents a quantity of the incident signals; ℋ represents the Hankel matrix transformation operator;
𝒢_i represents the column extraction operator; p represents the order of the norms; s.t. represents the constraint condition; (·)^H represents a conjugate transpose; R̂ represents the array sampling covariance matrix; I represents an identity matrix;
‖·‖_F represents a Frobenius norm; ε represents a hyperparameter of the optimization model.
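The PSSV-based norm with p=1 sums every singular value except the largest, so it vanishes exactly on rank-1 matrices. A minimal sketch of the objective term for one column (sizes illustrative):

```python
import numpy as np

def pssv_norm(Y, p=1):
    """Partial sum of singular values: sum of all but the p largest."""
    s = np.linalg.svd(Y, compute_uv=False)  # returned in descending order
    return s[p:].sum()

def hankel_op(x, m):
    """(n-m+1) x m Hankel matrix of the vector x."""
    n = len(x)
    return np.array([[x[i + j] for j in range(m)] for i in range(n - m + 1)])

# A pure exponential column gives a rank-1 Hankel matrix, so the PSSV
# objective is ~0; perturbing the column makes it strictly positive.
x = np.exp(-1j * 2 * np.pi * 0.1 * np.arange(8))
print(pssv_norm(hankel_op(x, 4)) < 1e-8)       # True
print(pssv_norm(hankel_op(x + 0.1, 4)) > 1e-3)  # True
```

Minimizing this quantity therefore pushes each ℋ[𝒢_i[U]] toward rank 1 without the non-convex rank constraint.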
[0074] Furthermore, the step S5 includes:
[0075] Rewriting, in order to solve the optimization problem conveniently, Formula (13) as Formula (14), the expression of the multivariable optimization model is obtained as follows:
min_{U,V,Z,σ,{B_i}} Σ_{i=1}^{L} ‖B_i‖_{p=1}, s.t. ‖Z − R̂‖_F ≤ ε, Z = UV^H + σI, U = V, B_i = ℋ[𝒢_i[U]], i = 1, . . . , L, σ ≥ 0 (14)
where min represents optimizing a variable to minimize an objective function; U represents a first target variable; σ_i represents the singular values; σ represents a second target variable; i represents an index; L represents a quantity of the incident signals; ℋ represents the Hankel matrix transformation operator;
𝒢_i represents the column extraction operator; p represents the order of the norms; s.t. represents the constraint condition; (·)^H represents a conjugate transpose; R̂ represents the array sampling covariance matrix; I represents an identity matrix;
‖·‖_F represents a Frobenius norm; ε represents a hyperparameter of the optimization model; Z, V, and B_i are optimization variables.
[0076] Furthermore, the step S6 includes:
[0077] Solving the above multivariable optimization problem by the alternating direction method of multipliers (ADMM). Letting 𝔅 = (B_1, . . . , B_L),
𝔇 = (D_1, . . . , D_L), the expression of the augmented Lagrangian function obtained according to the multivariable optimization model is:
L(U,V,Z,σ,𝔅,C,M,𝔇) = Σ_{i=1}^{L} ‖B_i‖_{p=1} + ⟨C, Z − UV^H − σI⟩ + ⟨M, U − V⟩ + Σ_{i=1}^{L} ⟨D_i, B_i − ℋ[𝒢_i[U]]⟩ + (ρ/2)(‖Z − UV^H − σI‖_F² + ‖U − V‖_F² + Σ_{i=1}^{L} ‖B_i − ℋ[𝒢_i[U]]‖_F²) (15)
where L represents the Lagrangian function; C, M, and D_i are Lagrangian multipliers;
⟨X,Y⟩ = tr(X^H Y), tr(·) represents the trace of a matrix; ρ is the penalty parameter, that is, the hyperparameter of ADMM.
[0078] Furthermore, it is necessary to approximately solve problem (14) by finding a saddle point of the augmented Lagrangian function, and the saddle point is the optimal result of the multivariable optimization model. Therefore, the augmented Lagrangian function is minimized in turn through the following ADMM iteration scheme to obtain the saddle point of the augmented Lagrangian function.
[0079] Before the iterative update begins, the optimization variables need to be initialized. Perform singular value decomposition on R̂, R̂ = OSP^H, where O is a left singular vector matrix, P is a right singular vector matrix, S is a diagonal matrix, and the diagonal elements of S are singular values. The initial value of the variable V is set according to this decomposition.
[0080] Furthermore, the performing iterative updating on each of the optimization variables and the target variable to determine a saddle point of the augmented Lagrangian function, specifically includes the following steps:
(1) Updating the First Target Variable U
[0081] In order to update U, it is necessary to solve subproblem (16), which can be expressed as:
[0082] Formula (24) is a least squares problem. Defining the conjugate matrix U* of U and setting the partial derivative of the objective function with respect to U* to 0, one obtains:
[0083] According to the definitions of the operators ℋ and ℋ*, one obtains:
ℋ*[ℋ[x]] = w ∘ x
where w is a vector whose k-th element is the number of elements on the k-th anti-diagonal of the Hankel matrix ℋ[x], and ∘ represents the Hadamard product. Combining the definitions of 𝒢_i and 𝒢_i*,
Σ_{i=1}^{L} 𝒢_i*[ℋ*[ℋ[𝒢_i[U]]]] = T ∘ U
is obtained, where T is a matrix each column of which is w. Simply put, the formula is equivalent to multiplying each row of U by a constant. Therefore, Formula (25) can be expressed row by row, where T_i is a diagonal matrix whose main-diagonal elements are T_(i,:), and Y is the right side of Formula (25). Therefore, the closed-form analytical solution for each row of U is obtained as Formula (28).
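The identity ℋ*[ℋ[x]] = w ∘ x used in this update can be verified directly; here w is computed from the anti-diagonal counts of an (n−m+1)×m Hankel matrix (sizes illustrative):

```python
import numpy as np

def H_op(x, m):
    """(n-m+1) x m Hankel matrix H[x]_(i,j) = x_{i+j-1}."""
    n = len(x)
    return np.array([[x[i + j] for j in range(m)] for i in range(n - m + 1)])

def H_adj(X):
    """Adjoint H*: sum the entries of X along each anti-diagonal."""
    r, m = X.shape
    out = np.zeros(r + m - 1, dtype=X.dtype)
    for i in range(r):
        for j in range(m):
            out[i + j] += X[i, j]
    return out

n, m = 8, 4
x = np.arange(1, n + 1, dtype=float)
# w_k = number of entries on the k-th anti-diagonal of the (n-m+1) x m Hankel matrix
w = np.array([min(k + 1, m, n - m + 1, n - k) for k in range(n)], dtype=float)
print(np.allclose(H_adj(H_op(x, m)), w * x))  # True
```

Because every column of T equals w, the composed operator acts on U as a per-row scaling, which is what makes the U-subproblem separable into rows.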
(2) Updating Optimization Variable V
[0084] Subproblem (17) can be expressed as:
[0085] Similar to updating U, the closed-form analytical solution of V is obtained as Formula (30).
(3) Updating Optimization Variable Z
[0086] In order to update Z, problem (18) needs to be solved, which can be expressed as:
where N = U^(k+1)(V^k)^H + σ^k I + (1/ρ^k)C^k.
[0087] According to the derivation, the solution of (31) is given by Formula (32), where max(·) represents taking the maximum value.
(4) Updating the Second Target Variable σ:
[0088] To update σ, the following constrained least squares problem needs to be solved:
[0089] Obviously, the solution of (33) is given by Formula (34),
where Q = Z^(k+1) − U^(k+1)(V^(k+1))^H − (1/ρ^k)C^k, and the i-th element on the main diagonal of the matrix Q is denoted {Q}_(i,i).
(5) Updating Optimization Variable B_i
[0090] In order to update B.sub.i, the following problem needs to be solved:
[0091] Defining the partial singular value thresholding (PSVT) operator:
P_{N,τ}[Y] = U_1D_1V_1^H + U_2𝒟_τ[D_2]V_2^H
where Y ∈ ℂ^{m×n}, l = min(m,n);
𝒟_τ[x] = sign(x)·max(|x|−τ, 0) is a soft-thresholding operator, where sign(x) is a sign function; Y can be decomposed into Y = U_1D_1V_1^H + U_2D_2V_2^H
by singular value decomposition; U_1 and V_1 are the singular vectors corresponding to the N largest singular values; U_2 and V_2 are the singular vectors corresponding to the l−N smallest singular values; D_1 = diag(σ_1, . . . ,σ_N, 0, . . . ,0), D_2 = diag(0, . . . ,0, σ_{N+1}, . . . ,σ_l), where diag(·) indicates that a diagonal matrix is constructed using the elements in the brackets. The partial singular value thresholding operator provides a globally optimal solution to problem (35), so the optimal solution to problem (35) can be expressed as Formula (36).
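A sketch of PSVT under the standard definition (keep the N largest singular values, soft-threshold the rest); the specific matrix and threshold below are illustrative:

```python
import numpy as np

def soft_threshold(x, tau):
    """D_tau[x] = sign(x) * max(|x| - tau, 0), applied elementwise."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

def psvt(Y, N, tau):
    """Partial singular value thresholding: the N largest singular values
    pass through unchanged; the remaining ones are soft-thresholded."""
    U, s, Vh = np.linalg.svd(Y, full_matrices=False)  # s in descending order
    s_new = s.copy()
    s_new[N:] = soft_threshold(s[N:], tau)
    return U @ np.diag(s_new) @ Vh

rng = np.random.default_rng(2)
Y = rng.standard_normal((5, 4))
Z = psvt(Y, N=1, tau=10.0)  # a large tau removes all but the top singular value
print(np.linalg.matrix_rank(Z, tol=1e-8))
```

With N=1 and a threshold exceeding the trailing singular values, the output is the best rank-1 fit, which is exactly how the B_i update enforces the per-column Hankel rank-1 structure.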
[0092] In the method of the present disclosure, the maximum number of iterations is set to 400. The iterative process can accelerate convergence by adjusting the hyperparameter ρ. In the last 100 iterations, the strategy of ρ^(k+1) = γρ^k is adopted to increase ρ, while the initial value of ρ is set to ρ^0 = 10^4, and γ is taken as 1.05. The iterative optimization framework of the steps of performing iterative updating on each of the optimization variables and the target variable to determine a saddle point of the augmented Lagrangian function is as follows:
(1) Initializing Variables
[0093] U, V, Z, D_i, B_i, i ∈ {1, . . . ,L}, C, M, σ, ρ^0 = 10^4, γ = 1.05, k = 0. [0094] While k < 400, executing: [0095] Updating U according to formula (28). [0096] Updating V according to formula (30). [0097] Updating Z according to formula (32). [0098] Updating σ according to formula (34). [0099] For i = 1, . . . , L, updating B_i according to formula (36). [0100] For i = 1, . . . , L, updating D_i according to formula (21). [0101] Updating C and M according to formula (22) and formula (23). [0102] If k > 300, then updating ρ by ρ^(k+1) = γρ^k. [0103] k = k+1. [0104] Outputting variables: U, V, and σ, where U and σ are the defined target variables.
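The control flow of this scheme (400 iterations, penalty growth only after iteration 300) can be sketched as a loop skeleton; the per-variable closed-form updates referenced by formulas (21)–(36) are represented by placeholder callables, not reproduced here:

```python
def run_admm_schedule(update_steps, rho0=1e4, gamma=1.05, max_iter=400):
    """Skeleton of the iteration schedule: run every update each iteration,
    and multiply the penalty rho by gamma during the final iterations."""
    rho = rho0
    for k in range(max_iter):
        for step in update_steps:  # stands in for the U, V, Z, sigma, B_i,
            step(rho)              # D_i, C, M updates, executed in turn
        if k > 300:                # last ~100 iterations: grow the penalty
            rho *= gamma
    return rho

calls = []
rho_final = run_admm_schedule([lambda rho: calls.append(rho)])
print(len(calls))  # 400
```

This only demonstrates the scheduling; in the actual method each placeholder is the corresponding closed-form minimizer of the augmented Lagrangian.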
[0105] Furthermore, the step S7 includes:
[0106] When the maximum number of iterations is reached, the first target variable U is obtained and output. Each column of U mainly contains the components of a single frequency signal together with smaller components of the other signals, the latter being regarded as noise. Taking the first signal as an example, the first column of U is expressed as:
u_1 = α[1+ε_1, e^{−j2πd sin θ_1/λ}+ε_2, . . . , e^{−j2π(M−1)d sin θ_1/λ}+ε_M]^T
where ε_1, ε_2, . . . , ε_M are the noise terms, and α represents a constant. Since the amplitude of the noise terms is very small, the second element in the first column is divided by the first element, which is u_1(2)/u_1(1) and can be approximated as e^{−j2πd sin θ_1/λ}.
[0107] The above method can be used to obtain an estimation result of the first signal. Similarly, by dividing the (m+1)-th element of u by the m-th element, where 1 ≤ m ≤ M−1, M−1 estimation results θ̂_1, . . . , θ̂_{M−1} are obtained, and an average of the M−1 estimation results is taken to obtain the final result, so the final DoA estimation value θ̂ is:
θ̂ = (1/(M−1)) Σ_{m=1}^{M−1} arcsin(−λ·∠(u(m+1)/u(m))/(2πd))
where ∠(·) represents calculating the phase of a complex number.
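The phase-ratio readout described above can be sketched as follows, assuming the e^{−j2πd sin θ/λ} steering convention and half-wavelength spacing (both assumptions of the sketch), and assuming the per-ratio phases do not wrap:

```python
import numpy as np

def doa_from_column(u, d_over_lambda=0.5):
    """Estimate theta from one column of U by averaging the M-1 angle
    estimates obtained from successive element ratios u(m+1)/u(m)."""
    ratios = u[1:] / u[:-1]  # each ~ exp(-j*2*pi*d*sin(theta)/lambda)
    thetas = np.arcsin(-np.angle(ratios) / (2 * np.pi * d_over_lambda))
    return thetas.mean()

# Noise-free column scaled by an arbitrary constant, true angle 5 degrees.
M, theta = 8, np.deg2rad(5.0)
u = 2.3 * np.exp(-2j * np.pi * 0.5 * np.arange(M) * np.sin(theta))
print(np.rad2deg(doa_from_column(u)))  # ~5.0
```

The constant α cancels in every ratio, which is why only the column's phase progression matters.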
Beneficial Effects of the Present Disclosure
[0108] At present, the angular resolution of the gridless DoA estimation method is still not high under low SNR conditions. In actual work, targets with small angular separation are inevitable. Existing methods usually cannot successfully distinguish similar targets and have poor estimation accuracy. The present disclosure is based on an array antenna, and aims to improve the ability to distinguish targets with small angular separation and improve estimation accuracy.
Embodiment 2
[0109] The method proposed in the present disclosure is simulated and tested below. An 8-element uniform linear array is used, and a spacing between adjacent antenna elements is half the wavelength of the incident signal. First, it is necessary to normalize {circumflex over (R)}, and the normalization method is to divide {circumflex over (R)} by the maximum amplitude of all elements, which is used as the input variable of the optimization model. At the same time, the method requires prior knowledge of the number of signal sources to determine the dimension size of the variable matrix U.
[0110] In order to evaluate the angle estimation accuracy of the algorithm, the root mean square error (RMSE) is defined as:
RMSE = sqrt((1/(NL)) Σ_{n=1}^{N} Σ_{l=1}^{L} (θ̂_{n,l} − θ_l)²)
wherein N represents the number of Monte Carlo experiments, L represents the quantity of incident signals, θ̂_{n,l} represents the l-th estimated angle of the n-th experiment, and θ_l represents the true angle of the incident signal.
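This RMSE definition can be sketched directly (the toy estimates below are illustrative, not simulation results from the patent):

```python
import numpy as np

def rmse(est, true):
    """RMSE over N Monte Carlo runs and L sources.

    est : (N, L) array of estimated angles, true : (L,) true angles.
    """
    N, L = est.shape
    return np.sqrt(((est - true) ** 2).sum() / (N * L))

true = np.array([-5.0, 5.0])
est = true + np.array([[0.1, -0.1], [-0.1, 0.1]])  # two toy runs, 0.1-deg errors
print(rmse(est, true))  # 0.1
```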
[0111] The angles of the incident signals are set to −5° and 5°, the SNR is 0 dB, the quantity of time domain samples (snapshots) is set to 50, 100, 200, 500, and 1000, respectively, and 2000 Monte Carlo experiments are performed for each setting. The resulting RMSE curves are shown in the accompanying drawings.
[0112] The angles of the incident signals are set to −5° and 5°, the quantity of time domain samples is 500, the SNR is set to −10 dB, −5 dB, 0 dB, 5 dB, and 10 dB, respectively, and 2000 Monte Carlo experiments are performed for each setting. The resulting RMSE curves are shown in the accompanying drawings.
[0113] The resolution performance of the technical scheme of the present disclosure is tested and evaluated below, and the technical scheme of the present disclosure is compared with the MUSIC algorithm, the ESPRIT algorithm, L1-SVD, SPA, and CMRA. The angle of the first incident signal is set to 0°, the angle of the second incident signal is set to 0°+Δθ, the parameter is set to 0.01, the quantity of time domain samples is 500, and the SNR is 0 dB. In the simulation experiment, the value of Δθ is varied, and 500 Monte Carlo experiments are performed.
[0114] In order to more intuitively evaluate the resolution performance of the algorithm, the criterion for successful resolution is defined as: |θ̂_l − θ_l| < Δθ/2 for l = 1, 2,
where Δθ represents the interval between the two real angles. The resolution success rate is obtained by statistical analysis of the 500 Monte Carlo experiments, and the results are shown in the accompanying drawings.
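The success criterion can be sketched as a small predicate; the half-separation form below is the assumed reading of the garbled criterion in the source text, and the sample angles are illustrative:

```python
import numpy as np

def resolved(est, true):
    """Success if each estimated angle lies within half the true angular
    separation of its corresponding true angle (assumed criterion form)."""
    est, true = np.asarray(est), np.asarray(true)
    delta = abs(true[1] - true[0])  # interval between the two real angles
    return bool(np.all(np.abs(est - true) < delta / 2))

print(resolved([0.3, 2.1], [0.0, 2.0]))  # True: errors 0.3 and 0.1 < 1.0
print(resolved([1.0, 2.0], [0.0, 2.0]))  # False: first error equals delta/2
```

The success rate is then the fraction of Monte Carlo runs for which this predicate holds.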
[0115] In addition, the present disclosure can also be applied to the field of incident angle estimation of phased array radars.
Embodiment 3
[0116] A computer device includes a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor executes the computer program to implement the steps of the direction of arrival estimation method based on steering vector matrix reconstruction in embodiment 1.
Embodiment 4
[0117] A computer apparatus, which may be a database, may have an internal structure as shown in the accompanying drawing.
[0118] Those skilled in the art can understand that all or part of the processes in the above-mentioned embodiments can be completed by instructing the relevant hardware through a computer program, and the computer program can be stored in a non-volatile computer-readable storage medium. When the computer program is executed, it can include the processes of the embodiments of the above-mentioned methods. Any reference to the memory, database or other medium used in the embodiments provided by the present disclosure can include at least one of non-volatile and volatile memory. Non-volatile memory can include read-only memory (ROM), magnetic tape, floppy disk, flash memory, optical memory, high-density embedded non-volatile memory, resistive random access memory (ReRAM), magnetoresistive random access memory (MRAM), ferroelectric random access memory (FRAM), phase change memory (PCM), graphene memory, etc. Volatile memory can include random access memory (RAM) or external cache memory, etc. As an illustration and not limitation, RAM can be in various forms, such as static random access memory (SRAM) or dynamic random access memory (DRAM). The database involved in each embodiment provided by the present disclosure may include at least one of a relational database and a non-relational database. Non-relational databases may include distributed databases based on blockchains, etc., but are not limited to this. The processor involved in each embodiment provided by the present disclosure may be a general-purpose processor, a central processing unit, a graphics processor, a digital signal processor, a programmable logic apparatus, a data processing logic apparatus based on quantum computing, etc., but are not limited to this.
[0119] The technical features of the above embodiments may be arbitrarily combined. To make the description concise, not all possible combinations of the technical features in the above embodiments are described. However, as long as there is no contradiction in the combination of these technical features, they should be considered to be within the scope of this specification.
[0120] The description uses specific embodiments to illustrate the principles and implementation methods of the present disclosure. The above embodiments are only used to help understand the method and core ideas of the present disclosure. At the same time, for those skilled in the art, according to the ideas of the present disclosure, there will be changes in the specific implementation methods and application scope. In summary, the content of the specification should not be understood as limiting the present disclosure.