LINEAR MACHINE LEARNING METHOD BASED ON DNA HYBRIDIZATION REACTION TECHNOLOGY

20250029016 · 2025-01-23


    Abstract

    A linear machine learning method based on DNA hybridization reaction technology includes a machine learning training part, an algorithm part, and a testing part. The method is able to learn linear functions. Unlike silicon circuits, the learning algorithm is implemented through the synchronization of DNA hybridization reactions; the computation mode of the method is therefore a parallel computing model, and the weights are obtained through training without the involvement of electronic computers. Through the method, it is possible to learn multivariable linear functions without any limitation on the number of input terms. Because DNA concentrations are non-negative, the method uses a dual-rail (dual-track) model to process negative data.

    Claims

    1. A linear machine learning method based on deoxyribonucleic acid (DNA) hybridization reaction technology, comprising: a training part, an algorithm part and a testing part; wherein,

    (1) reaction expressions of the training part are as follows:

$$
\begin{aligned}
&X_1^+ + W_1^+ \xrightarrow{k_1} I^+ + X_1^+ + W_1^+ && (1a)\\
&X_1^- + W_1^- \xrightarrow{k_1} I^+ + X_1^- + W_1^- && (1b)\\
&X_1^+ + W_1^- \xrightarrow{k_1} I^- + X_1^+ + W_1^- && (1c)\\
&X_1^- + W_1^+ \xrightarrow{k_1} I^- + X_1^- + W_1^+ && (1d)\\
&X_2^+ + W_2^+ \xrightarrow{k_1} I^+ + X_2^+ + W_2^+ && (2a)\\
&X_2^- + W_2^- \xrightarrow{k_1} I^+ + X_2^- + W_2^- && (2b)\\
&X_2^+ + W_2^- \xrightarrow{k_1} I^- + X_2^+ + W_2^- && (2c)\\
&X_2^- + W_2^+ \xrightarrow{k_1} I^- + X_2^- + W_2^+ && (2d)\\
&\qquad\vdots\\
&X_N^+ + W_N^+ \xrightarrow{k_1} I^+ + X_N^+ + W_N^+ && (3a)\\
&X_N^- + W_N^- \xrightarrow{k_1} I^+ + X_N^- + W_N^- && (3b)\\
&X_N^+ + W_N^- \xrightarrow{k_1} I^- + X_N^+ + W_N^- && (3c)\\
&X_N^- + W_N^+ \xrightarrow{k_1} I^- + X_N^- + W_N^+ && (3d)\\
&Y^+ + H^+ \xrightarrow{k_1} I^- + H^+ && (4a)\\
&Y^- + H^- \xrightarrow{k_1} I^+ + H^- && (4b)\\
&I^+ + C^+ \xrightarrow{k_1} I^+ + C^+ + Y^+ && (4c)\\
&I^- + C^- \xrightarrow{k_1} I^- + C^- + Y^- && (4d)\\
&X_1^+ + X_1^- \xrightarrow{k_2} \varnothing && (5a)\\
&W_1^+ + W_1^- \xrightarrow{k_2} \varnothing && (5b)\\
&X_2^+ + X_2^- \xrightarrow{k_2} \varnothing && (5c)\\
&W_2^+ + W_2^- \xrightarrow{k_2} \varnothing && (5d)\\
&\qquad\vdots\\
&X_i^+ + X_i^- \xrightarrow{k_2} \varnothing && (6a)\\
&W_i^+ + W_i^- \xrightarrow{k_2} \varnothing && (6b)\\
&Y^+ + Y^- \xrightarrow{k_2} \varnothing && (6c)\\
&C^+ + C^- \xrightarrow{k_2} \varnothing && (6d)\\
&H^+ + H^- \xrightarrow{k_2} \varnothing && (6e)\\
&I^+ + I^- \xrightarrow{k_2} \varnothing && (6f)
\end{aligned}
$$

    where the reaction expressions (1a) to (3d) belong to a first catalytic reaction module; the reaction expressions (4a) and (4b) belong to a second catalytic reaction module; the reaction expressions (4c) and (4d) belong to the first catalytic reaction module; the reaction expressions (5a) to (6f) belong to an annihilation reaction module; $k_1$ and $k_2$ represent different reaction rates; $\varnothing$ represents a waste; $X_i(t)=[X_i^+]_t-[X_i^-]_t$, $i=1,2,\ldots,N$, $N$ represents the number of input items, $[\ast]_t$ represents a concentration of a substance $\ast$ at a time $t$, and $X_i$ represents an $i$-th input item; $X_i^+$ and $X_i^-$ respectively represent a positive part and a negative part of $X_i$; $W_i(t)=[W_i^+]_t-[W_i^-]_t$, $W_i$ represents an $i$-th weight corresponding to the $i$-th input item, and $W_i^+$ and $W_i^-$ respectively represent a positive part and a negative part of $W_i$; $I^+$ and $I^-$ respectively represent a positive part and a negative part of a substance $I$; $H^+$ and $H^-$ respectively represent a positive part and a negative part of a substance $H$; $C^+$ and $C^-$ respectively represent a positive part and a negative part of a substance $C$; supposing $[H^+]_t=[H^-]_t=1$ nanomole per liter (nM), differential equations of the concentrations of $I^+$ and $I^-$ varying over time are as follows:

$$
\begin{cases}
\dfrac{d[I^+]_t}{dt}=k_1\big([W_1^+]_t[X_1^+]_t+[W_2^+]_t[X_2^+]_t+\cdots+[W_N^+]_t[X_N^+]_t+[W_1^-]_t[X_1^-]_t+[W_2^-]_t[X_2^-]_t+\cdots+[W_N^-]_t[X_N^-]_t+[Y^-]_t[H^-]_t\big)\\[1ex]
\dfrac{d[I^-]_t}{dt}=k_1\big([W_1^+]_t[X_1^-]_t+[W_2^+]_t[X_2^-]_t+\cdots+[W_N^+]_t[X_N^-]_t+[W_1^-]_t[X_1^+]_t+[W_2^-]_t[X_2^+]_t+\cdots+[W_N^-]_t[X_N^+]_t+[Y^+]_t[H^+]_t\big)
\end{cases}
\tag{7}
$$

    the differential equations (7) are simplified as follows:

$$
\begin{cases}
\dfrac{d[I^+]_t}{dt}=k_1\Big(\displaystyle\sum_{i=1}^{N}\big([W_i^+]_t[X_i^+]_t+[W_i^-]_t[X_i^-]_t\big)+[Y^-]_t[H^-]_t\Big)\\[2ex]
\dfrac{d[I^-]_t}{dt}=k_1\Big(\displaystyle\sum_{i=1}^{N}\big([W_i^+]_t[X_i^-]_t+[W_i^-]_t[X_i^+]_t\big)+[Y^+]_t[H^+]_t\Big)
\end{cases}
$$

    and

$$
\frac{dI(t)}{dt}=\frac{d[I^+]_t}{dt}-\frac{d[I^-]_t}{dt}=k_1\Big[\sum_{i=1}^{N}\big([W_i^+]_t[X_i^+]_t+[W_i^-]_t[X_i^-]_t-[W_i^+]_t[X_i^-]_t-[W_i^-]_t[X_i^+]_t\big)+[Y^-]_t[H^-]_t-[Y^+]_t[H^+]_t\Big];
$$

    supposing $[H^+]_t=[H^-]_t=1$ nM, then

$$
\frac{dI(t)}{dt}=k_1\Big(\sum_{i=1}^{N}W_i(t)X_i(t)-Y(t)\Big);
$$

    when a DNA hybridization reaction network reaches dynamic equilibrium, i.e., $dI(t)/dt=0$, then:

$$
Y=\sum_{i=1}^{N}W_iX_i
$$

    where $Y(t)=[Y^+]_t-[Y^-]_t$, and $Y$ represents an output value of a system;

    (2) reaction expressions of the algorithm part are as follows:

$$
\begin{aligned}
&D^+ + X_1^+ \xrightarrow{k_3} D^+ + X_1^+ + W_1^+ && (8a)\\
&D^- + X_1^- \xrightarrow{k_3} D^- + X_1^- + W_1^+ && (8b)\\
&Y^+ + X_1^- \xrightarrow{k_3} Y^+ + X_1^- + W_1^+ && (8c)\\
&Y^- + X_1^+ \xrightarrow{k_3} Y^- + X_1^+ + W_1^+ && (8d)\\
&D^+ + X_1^- \xrightarrow{k_3} D^+ + X_1^- + W_1^- && (8e)\\
&D^- + X_1^+ \xrightarrow{k_3} D^- + X_1^+ + W_1^- && (8f)\\
&Y^+ + X_1^+ \xrightarrow{k_3} Y^+ + X_1^+ + W_1^- && (8g)\\
&Y^- + X_1^- \xrightarrow{k_3} Y^- + X_1^- + W_1^- && (8h)\\
&D^+ + X_2^+ \xrightarrow{k_3} D^+ + X_2^+ + W_2^+ && (9a)\\
&D^- + X_2^- \xrightarrow{k_3} D^- + X_2^- + W_2^+ && (9b)\\
&Y^+ + X_2^- \xrightarrow{k_3} Y^+ + X_2^- + W_2^+ && (9c)\\
&Y^- + X_2^+ \xrightarrow{k_3} Y^- + X_2^+ + W_2^+ && (9d)\\
&D^+ + X_2^- \xrightarrow{k_3} D^+ + X_2^- + W_2^- && (9e)\\
&D^- + X_2^+ \xrightarrow{k_3} D^- + X_2^+ + W_2^- && (9f)\\
&Y^+ + X_2^+ \xrightarrow{k_3} Y^+ + X_2^+ + W_2^- && (9g)\\
&Y^- + X_2^- \xrightarrow{k_3} Y^- + X_2^- + W_2^- && (9h)\\
&\qquad\vdots\\
&D^+ + X_N^+ \xrightarrow{k_3} D^+ + X_N^+ + W_N^+ && (10a)\\
&D^- + X_N^- \xrightarrow{k_3} D^- + X_N^- + W_N^+ && (10b)\\
&Y^+ + X_N^- \xrightarrow{k_3} Y^+ + X_N^- + W_N^+ && (10c)\\
&Y^- + X_N^+ \xrightarrow{k_3} Y^- + X_N^+ + W_N^+ && (10d)\\
&D^+ + X_N^- \xrightarrow{k_3} D^+ + X_N^- + W_N^- && (10e)\\
&D^- + X_N^+ \xrightarrow{k_3} D^- + X_N^+ + W_N^- && (10f)\\
&Y^+ + X_N^+ \xrightarrow{k_3} Y^+ + X_N^+ + W_N^- && (10g)\\
&Y^- + X_N^- \xrightarrow{k_3} Y^- + X_N^- + W_N^- && (10h)
\end{aligned}
$$

    where the reaction expressions (8) to (10) belong to the first catalytic reaction module; $D^+$ and $D^-$ respectively represent a positive part and a negative part of $D$; and $k_3$ represents a reaction rate which is different from $k_1$ and $k_2$; differential equations of the concentrations of $W_i^+$ and $W_i^-$ varying over time are as follows:

$$
\begin{cases}
\dfrac{d[W_1^+]_t}{dt}=k_3\big([D^+]_t[X_1^+]_t+[D^-]_t[X_1^-]_t+[Y^+]_t[X_1^-]_t+[Y^-]_t[X_1^+]_t\big)\\[1ex]
\dfrac{d[W_1^-]_t}{dt}=k_3\big([D^+]_t[X_1^-]_t+[D^-]_t[X_1^+]_t+[Y^+]_t[X_1^+]_t+[Y^-]_t[X_1^-]_t\big)
\end{cases}
\tag{11}
$$

$$
\begin{cases}
\dfrac{d[W_2^+]_t}{dt}=k_3\big([D^+]_t[X_2^+]_t+[D^-]_t[X_2^-]_t+[Y^+]_t[X_2^-]_t+[Y^-]_t[X_2^+]_t\big)\\[1ex]
\dfrac{d[W_2^-]_t}{dt}=k_3\big([D^+]_t[X_2^-]_t+[D^-]_t[X_2^+]_t+[Y^+]_t[X_2^+]_t+[Y^-]_t[X_2^-]_t\big)
\end{cases}
\tag{12}
$$

$$
\begin{cases}
\dfrac{d[W_N^+]_t}{dt}=k_3\big([D^+]_t[X_N^+]_t+[D^-]_t[X_N^-]_t+[Y^+]_t[X_N^-]_t+[Y^-]_t[X_N^+]_t\big)\\[1ex]
\dfrac{d[W_N^-]_t}{dt}=k_3\big([D^+]_t[X_N^-]_t+[D^-]_t[X_N^+]_t+[Y^+]_t[X_N^+]_t+[Y^-]_t[X_N^-]_t\big)
\end{cases}
\tag{13}
$$

    wherein $D(t)=[D^+]_t-[D^-]_t$, and $D$ represents an expectation value; based on the differential equations (11) to (13), expressions are obtained as follows:

$$
\begin{aligned}
\frac{dW_1(t)}{dt}&=\frac{d[W_1^+]_t}{dt}-\frac{d[W_1^-]_t}{dt}\\
&=k_3\big([D^+]_t[X_1^+]_t+[D^-]_t[X_1^-]_t-[D^+]_t[X_1^-]_t-[D^-]_t[X_1^+]_t\big)+k_3\big([Y^+]_t[X_1^-]_t+[Y^-]_t[X_1^+]_t-[Y^+]_t[X_1^+]_t-[Y^-]_t[X_1^-]_t\big)\\
&=k_3\big[([D^+]_t-[D^-]_t)-([Y^+]_t-[Y^-]_t)\big]\big([X_1^+]_t-[X_1^-]_t\big)\\
&=k_3\big([D]_t-[Y]_t\big)[X_1]_t
\end{aligned}
$$

$$
\begin{aligned}
\frac{dW_2(t)}{dt}&=\frac{d[W_2^+]_t}{dt}-\frac{d[W_2^-]_t}{dt}\\
&=k_3\big([D^+]_t[X_2^+]_t+[D^-]_t[X_2^-]_t-[D^+]_t[X_2^-]_t-[D^-]_t[X_2^+]_t\big)+k_3\big([Y^+]_t[X_2^-]_t+[Y^-]_t[X_2^+]_t-[Y^+]_t[X_2^+]_t-[Y^-]_t[X_2^-]_t\big)\\
&=k_3\big[([D^+]_t-[D^-]_t)-([Y^+]_t-[Y^-]_t)\big]\big([X_2^+]_t-[X_2^-]_t\big)\\
&=k_3\big([D]_t-[Y]_t\big)[X_2]_t
\end{aligned}
$$

$$\vdots$$

$$
\begin{aligned}
\frac{dW_N(t)}{dt}&=\frac{d[W_N^+]_t}{dt}-\frac{d[W_N^-]_t}{dt}\\
&=k_3\big([D^+]_t[X_N^+]_t+[D^-]_t[X_N^-]_t-[D^+]_t[X_N^-]_t-[D^-]_t[X_N^+]_t\big)+k_3\big([Y^+]_t[X_N^-]_t+[Y^-]_t[X_N^+]_t-[Y^+]_t[X_N^+]_t-[Y^-]_t[X_N^-]_t\big)\\
&=k_3\big[([D^+]_t-[D^-]_t)-([Y^+]_t-[Y^-]_t)\big]\big([X_N^+]_t-[X_N^-]_t\big)\\
&=k_3\big([D]_t-[Y]_t\big)[X_N]_t;
\end{aligned}
$$

    (3) reaction expressions of the testing part are as follows:

$$
\begin{aligned}
&X_1^+ + \hat{W}_1^+ \xrightarrow{k_1} \hat{I}^+ + X_1^+ + \hat{W}_1^+ && (14a)\\
&X_1^- + \hat{W}_1^- \xrightarrow{k_1} \hat{I}^+ + X_1^- + \hat{W}_1^- && (14b)\\
&X_1^+ + \hat{W}_1^- \xrightarrow{k_1} \hat{I}^- + X_1^+ + \hat{W}_1^- && (14c)\\
&X_1^- + \hat{W}_1^+ \xrightarrow{k_1} \hat{I}^- + X_1^- + \hat{W}_1^+ && (14d)\\
&X_2^+ + \hat{W}_2^+ \xrightarrow{k_1} \hat{I}^+ + X_2^+ + \hat{W}_2^+ && (15a)\\
&X_2^- + \hat{W}_2^- \xrightarrow{k_1} \hat{I}^+ + X_2^- + \hat{W}_2^- && (15b)\\
&X_2^+ + \hat{W}_2^- \xrightarrow{k_1} \hat{I}^- + X_2^+ + \hat{W}_2^- && (15c)\\
&X_2^- + \hat{W}_2^+ \xrightarrow{k_1} \hat{I}^- + X_2^- + \hat{W}_2^+ && (15d)\\
&\qquad\vdots\\
&X_N^+ + \hat{W}_N^+ \xrightarrow{k_1} \hat{I}^+ + X_N^+ + \hat{W}_N^+ && (16a)\\
&X_N^- + \hat{W}_N^- \xrightarrow{k_1} \hat{I}^+ + X_N^- + \hat{W}_N^- && (16b)\\
&X_N^+ + \hat{W}_N^- \xrightarrow{k_1} \hat{I}^- + X_N^+ + \hat{W}_N^- && (16c)\\
&X_N^- + \hat{W}_N^+ \xrightarrow{k_1} \hat{I}^- + X_N^- + \hat{W}_N^+ && (16d)\\
&\hat{Y}^+ + \hat{H}^+ \xrightarrow{k_1} \hat{I}^- + \hat{H}^+ && (17a)\\
&\hat{Y}^- + \hat{H}^- \xrightarrow{k_1} \hat{I}^+ + \hat{H}^- && (17b)\\
&\hat{I}^+ + \hat{C}^+ \xrightarrow{k_1} \hat{I}^+ + \hat{C}^+ + \hat{Y}^+ && (17c)\\
&\hat{I}^- + \hat{C}^- \xrightarrow{k_1} \hat{I}^- + \hat{C}^- + \hat{Y}^- && (17d)\\
&X_i^+ + X_i^- \xrightarrow{k_2} \varnothing && (18a)\\
&\hat{W}_i^+ + \hat{W}_i^- \xrightarrow{k_2} \varnothing && (18b)\\
&\hat{Y}^+ + \hat{Y}^- \xrightarrow{k_2} \varnothing && (18c)\\
&\hat{C}^+ + \hat{C}^- \xrightarrow{k_2} \varnothing && (18d)\\
&\hat{H}^+ + \hat{H}^- \xrightarrow{k_2} \varnothing && (18e)\\
&\hat{I}^+ + \hat{I}^- \xrightarrow{k_2} \varnothing && (18f)
\end{aligned}
$$

    where the reaction expressions (14a) to (16d) belong to the first catalytic reaction module; the reaction expressions (17a) and (17b) belong to the second catalytic reaction module; the reaction expressions (17c) and (17d) belong to the first catalytic reaction module; and the reaction expressions (18a) to (18f) belong to the annihilation reaction module; $\hat{W}_i$ represents a weight obtained after a plurality of rounds of learning; $\hat{W}_i^+$ and $\hat{W}_i^-$ respectively represent a positive part and a negative part of $\hat{W}_i$; $\hat{Y}$ represents an output of the testing part; $\hat{Y}^+$ and $\hat{Y}^-$ respectively represent a positive part and a negative part of $\hat{Y}$; $\hat{I}$, $\hat{C}$ and $\hat{H}$ represent substances involved in a chemical reaction in the testing part; $\hat{I}^+$ and $\hat{I}^-$ respectively represent a positive part and a negative part of $\hat{I}$; $\hat{C}^+$ and $\hat{C}^-$ respectively represent a positive part and a negative part of $\hat{C}$; and $\hat{H}^+$ and $\hat{H}^-$ respectively represent a positive part and a negative part of $\hat{H}$;

    the linear machine learning method comprises following steps S1-S3:

    S1, normalizing training data; wherein training of the machine learning comprises a plurality of rounds of training; each of the plurality of rounds of training comprises K sets of training data, each set of training data comprises N data, and the machine learning has N inputs, i=1, 2, 3, . . . , N; shuffling the K sets of training data to obtain another round of training data, and normalizing the another round of training data as follows:

$$
\begin{cases}
\hat{x}_i(k,l)=\tilde{x}_i(k,l)/\Delta\\
\hat{d}_l(k)=\tilde{d}_l(k)/\Delta
\end{cases}
\tag{19}
$$

    where:

$$
\begin{cases}
\Delta_i=\max(\tilde{x}_i)-\min(\tilde{x}_i)\\
\Delta=\alpha\max([\Delta_1,\Delta_2,\ldots,\Delta_N])\\
\tilde{d}_l(k)=\sum_{i=1}^{N}w_ix_i(l,k);
\end{cases}
$$

    $x_i(k,l)$ represents an $i$-th data of a $k$-th set of an $l$-th round of training, $k=1,2,\ldots,K$, $l=1,2,\ldots,L$; $\alpha$ represents a positive adjustment parameter; $\tilde{x}_i=[x_i(1),x_i(2),\ldots,x_i(K)]$; and $\hat{x}_i(k,l)$ and $\hat{d}_l(k)$ meet $\hat{x}_i(k,l)=[X_i^+]_0-[X_i^-]_0$ and $\hat{d}_l(k)=[D^+]_0-[D^-]_0$;

    S2, assessing the training of the machine learning; defining a relative error $e_l(k)$ in the $l$-th round of training as follows:

$$
e_l(k)=\frac{\tilde{y}_l(k)-\hat{d}_l(k)}{\hat{d}_l(k)}
\tag{20}
$$

    where:

$$
\tilde{y}_l(k)=\sum_{i=1}^{N}\hat{W}_i\hat{x}_i(k);
$$

    assessing a result of the round of training and defining an average relative error as follows:

$$
\bar{e}_l=\frac{1}{K}\sum_{k=1}^{K}\big|e_l(k)\big|
\tag{21}
$$

    performing the plurality of rounds of training, and stopping training after the average relative error reaches a target value;

    S3, assessing and testing of the machine learning; the testing of the machine learning comprises a plurality of rounds of testing; one of the plurality of rounds of testing comprises P sets of test data; shuffling the P sets of test data to obtain another round of test data, and normalizing the another round of test data as follows:

$$
\begin{cases}
\hat{x}_i(p,g)=\tilde{x}_i(p,g)/\Delta\\
\hat{d}_g(p)=\tilde{d}_g(p)/\Delta
\end{cases}
$$

    $\tilde{x}_i(p,g)$ represents an $i$-th data of a $p$-th set of a $g$-th round of testing, $p=1,2,\ldots,P$, $g=1,2,\ldots,G$; $\Delta_i=\max(\tilde{x}_i)-\min(\tilde{x}_i)$, $\Delta=\max([\Delta_1,\Delta_2,\ldots,\Delta_N])$, and $\tilde{d}_g(p)=\sum_{i=1}^{N}w_ix_i(g,p)$; in the $g$-th round of testing, defining a relative error $e_g(p)$ as follows:

$$
e_g(p)=\frac{\tilde{y}_g(p)-\hat{d}_g(p)}{\hat{d}_g(p)},
$$

    where:

$$
\tilde{y}_g(p)=\sum_{i=1}^{N}\hat{W}_i\hat{x}_i(p);
$$

    assessing a result of the round of testing, and defining an average relative error in the testing as follows:

$$
\bar{e}_g=\frac{1}{P}\sum_{p=1}^{P}\big|e_g(p)\big|.
$$

    2. The linear machine learning method as claimed in claim 1, wherein a reaction expression of the first catalytic reaction module is:

$$
X_i^+ + W_i^+ \xrightarrow{k_i} I^+ + X_i^+ + W_i^+,
$$

    which is obtained by the following DNA strand displacement reactions:

$$
\begin{cases}
W_i^+ + Ap_i^+ \underset{q_m}{\overset{q_i}{\rightleftharpoons}} Ad_i^+ + Aq_i^+\\
Ad_i^+ + X_i^+ \xrightarrow{q_m} Ac_i^+ + \text{waste}\\
Ac_i^+ + Ab_i^+ \xrightarrow{q_m} I^+ + X_i^+ + W_i^+ + \text{waste}
\end{cases}
\tag{22}
$$

    wherein $I^+$ is catalyzed; $Ap_i^+$, $Aq_i^+$ and $Ab_i^+$ are auxiliary DNA strands and initial concentrations of the auxiliary DNA strands are $C_m$, meeting $C_m \gg [X_i^+]_0, [W_i^+]_0, [I^+]_0$; reaction rates $q_i$ and $k_i$ meet $q_i \ll q_m$ and $k_i=q_i$; $q_m$ represents a maximum reaction rate; and $Ad_i^+$ and $Ac_i^+$ represent generated DNA strands in the first catalytic reaction module;

    a reaction expression of the second catalytic reaction module is:

$$
Y^+ + H^+ \xrightarrow{k_i} I^- + H^+,
$$

    which is obtained by the following DNA strand displacement reactions:

$$
\begin{cases}
Y^+ + Am^+ \underset{q_m}{\overset{q_i}{\rightleftharpoons}} An^+ + Ae^+\\
Ae^+ + H^+ \xrightarrow{q_m} As^+ + \text{waste}\\
As^+ + Ah^+ \xrightarrow{q_m} H^+ + I^- + \text{waste}
\end{cases}
\tag{23}
$$

    wherein $I^-$ is catalyzed; $Am^+$, $An^+$, $Ah^+$ and $H^+$ are auxiliary DNA strands, and initial concentrations of the auxiliary DNA strands $Am^+$, $An^+$ and $Ah^+$ are respectively set as $[Am^+]_0=[An^+]_0=[Ah^+]_0=C_m$, meeting $C_m \gg [Y^+]_0, [I^-]_0$; an initial concentration of $H^+$ is 1 nM; the reaction rate $q_i$ meets $q_i \ll q_m$ and $k_i=q_i$; and $Ae^+$ and $As^+$ represent generated DNA strands in the second catalytic reaction module;

    a reaction expression of the annihilation reaction module is:

$$
W_i^+ + W_i^- \xrightarrow{k_i} \varnothing,
$$

    which is obtained by the following DNA strand displacement reactions:

$$
\begin{cases}
W_i^+ + Wa_i^+ \underset{q_m}{\overset{q_i}{\rightleftharpoons}} Wb_i^+ + Wt_i^+\\
Wt_i^+ + W_i^- \xrightarrow{q_m} \varnothing
\end{cases}
\tag{24}
$$

    wherein $W_i^+$ and $W_i^-$ are annihilated; $Wa_i^+$ and $Wb_i^+$ are auxiliary DNA strands and initial concentrations of the auxiliary DNA strands are $C_m$, meeting $C_m \gg [W_i^+]_0, [W_i^-]_0$; and the reaction rate $q_i$ meets $q_i \ll q_m$ and $k_i=q_i$.

    3. The linear machine learning method as claimed in claim 1, wherein the first catalytic reaction module, the second catalytic reaction module and the annihilation reaction module in the training part, the algorithm part and the testing part belong to the same type of reaction module but are not the same reaction module.

    Description

    BRIEF DESCRIPTION OF DRAWINGS

    [0023] FIG. 1 illustrates a flowchart of the present disclosure.

    [0024] FIG. 2 illustrates a schematic diagram of the main DNA strand displacement reaction of a first submodule of a catalytic reaction module 1 of the present disclosure.

    [0025] FIG. 3 illustrates a schematic diagram of the main DNA strand displacement reaction of a second submodule of the catalytic reaction module 1 of the present disclosure.

    [0026] FIG. 4 illustrates a schematic diagram of the main DNA strand displacement reaction of a third submodule of the catalytic reaction module 1 of the present disclosure.

    [0027] FIG. 5 illustrates a schematic diagram of the main DNA strand displacement reaction of a fourth submodule of the catalytic reaction module 1 of the present disclosure.

    [0028] FIG. 6 illustrates a schematic diagram of the main DNA strand displacement reaction of a catalytic reaction module 2 of the present disclosure.

    [0029] FIG. 7 illustrates a schematic diagram of the main DNA strand displacement reaction of an annihilation reaction module of the present disclosure.

    [0030] FIG. 8 illustrates a circuit diagram of a parallel circuit of the present disclosure.

    [0031] FIG. 9 illustrates a schematic diagram of a weight update trajectory of the present disclosure.

    [0032] FIG. 10 illustrates a schematic diagram of the evolution of the average relative error with the number of training iterations of the present disclosure.

    [0033] FIG. 11 illustrates a schematic diagram of the variation of the number of training iterations with the number of training rounds of the present disclosure.

    [0034] FIG. 12 illustrates a schematic diagram of the evolution of the relative error with test data of the present disclosure.

    DETAILED DESCRIPTION OF EMBODIMENTS

    [0035] In order to clarify the purpose, technical solutions, and advantages of the embodiments of the present disclosure, a clear and complete description of the technical solutions in the embodiments of the present disclosure will be provided below. Apparently, the described embodiments are a part of the embodiments of the present disclosure, not all embodiments. The following is a detailed description of the present disclosure in conjunction with the accompanying drawings. However, it should be understood that the provision of the drawings is only for a better understanding of the present disclosure, and they should not be understood as limitations to the present disclosure.

    [0036] Specific steps are as follows.

    1. Design of Linear Machine Learning Based on Idealized Reaction.

    [0037] (1) The training part of machine learning:

    [00012]

$$
\begin{aligned}
&X_1^+ + W_1^+ \xrightarrow{k_1} I^+ + X_1^+ + W_1^+ && (1a)\\
&X_1^- + W_1^- \xrightarrow{k_1} I^+ + X_1^- + W_1^- && (1b)\\
&X_1^+ + W_1^- \xrightarrow{k_1} I^- + X_1^+ + W_1^- && (1c)\\
&X_1^- + W_1^+ \xrightarrow{k_1} I^- + X_1^- + W_1^+ && (1d)\\
&X_2^+ + W_2^+ \xrightarrow{k_1} I^+ + X_2^+ + W_2^+ && (2a)\\
&X_2^- + W_2^- \xrightarrow{k_1} I^+ + X_2^- + W_2^- && (2b)\\
&X_2^+ + W_2^- \xrightarrow{k_1} I^- + X_2^+ + W_2^- && (2c)\\
&X_2^- + W_2^+ \xrightarrow{k_1} I^- + X_2^- + W_2^+ && (2d)\\
&\qquad\vdots\\
&X_N^+ + W_N^+ \xrightarrow{k_1} I^+ + X_N^+ + W_N^+ && (3a)\\
&X_N^- + W_N^- \xrightarrow{k_1} I^+ + X_N^- + W_N^- && (3b)\\
&X_N^+ + W_N^- \xrightarrow{k_1} I^- + X_N^+ + W_N^- && (3c)\\
&X_N^- + W_N^+ \xrightarrow{k_1} I^- + X_N^- + W_N^+ && (3d)\\
&Y^+ + H^+ \xrightarrow{k_1} I^- + H^+ && (4a)\\
&Y^- + H^- \xrightarrow{k_1} I^+ + H^- && (4b)\\
&I^+ + C^+ \xrightarrow{k_1} I^+ + C^+ + Y^+ && (4c)\\
&I^- + C^- \xrightarrow{k_1} I^- + C^- + Y^- && (4d)\\
&X_1^+ + X_1^- \xrightarrow{k_2} \varnothing && (5a)\\
&W_1^+ + W_1^- \xrightarrow{k_2} \varnothing && (5b)\\
&X_2^+ + X_2^- \xrightarrow{k_2} \varnothing && (5c)\\
&W_2^+ + W_2^- \xrightarrow{k_2} \varnothing && (5d)\\
&\qquad\vdots\\
&X_i^+ + X_i^- \xrightarrow{k_2} \varnothing && (6a)\\
&W_i^+ + W_i^- \xrightarrow{k_2} \varnothing && (6b)\\
&Y^+ + Y^- \xrightarrow{k_2} \varnothing && (6c)\\
&C^+ + C^- \xrightarrow{k_2} \varnothing && (6d)\\
&H^+ + H^- \xrightarrow{k_2} \varnothing && (6e)\\
&I^+ + I^- \xrightarrow{k_2} \varnothing && (6f)
\end{aligned}
$$

    [0038] Where $X_i(t)=[X_i^+]_t-[X_i^-]_t$, $i=1,2,\ldots,N$; $N$ represents the number of input items; $[\ast]_t$ represents the concentration of a substance $\ast$ at a time $t$; and $X_i$ represents the $i$-th input item, that is, $X_1, X_2, \ldots, X_N$ respectively represent the first input item, the second input item, . . . , the $N$-th input item of the machine learning (i.e., learning machine). $W_i(t)=[W_i^+]_t-[W_i^-]_t$; $W_i$ represents the $i$-th weight, that is, $W_1, W_2, \ldots, W_N$ respectively represent the weight corresponding to the first input item, the weight corresponding to the second input item, . . . , the weight corresponding to the $N$-th input item. $X_i^+$ and $X_i^-$ respectively represent the positive part and the negative part of $X_i$; specifically, $X_1^+, X_2^+, \ldots, X_N^+$ respectively represent the positive parts of $X_1, X_2, \ldots, X_N$, and $X_1^-, X_2^-, \ldots, X_N^-$ respectively represent the negative parts of $X_1, X_2, \ldots, X_N$. Apparently, $[X_i^+]_t$ and $[X_i^-]_t$ respectively represent the concentrations of the positive part and the negative part of $X_i$ at the time $t$. $W_i^+$ and $W_i^-$ respectively represent the positive part and the negative part of $W_i$; specifically, $W_1^+, W_2^+, \ldots, W_N^+$ respectively represent the positive parts of $W_1, W_2, \ldots, W_N$, and $W_1^-, W_2^-, \ldots, W_N^-$ respectively represent the negative parts of $W_1, W_2, \ldots, W_N$. Apparently, $[W_i^+]_t$ and $[W_i^-]_t$ respectively represent the concentrations of the positive part and the negative part of $W_i$ at the time $t$. $Y^+$ and $Y^-$ respectively represent the positive part and the negative part of $Y$; $H^+$ and $H^-$ respectively represent the positive part and the negative part of $H$; $C^+$ and $C^-$ respectively represent the positive part and the negative part of $C$; and $I^+$ and $I^-$ respectively represent the positive part and the negative part of $I$. $k_1$ and $k_2$ represent different reaction rates. $\varnothing$ represents a waste, which is meaningless to the machine learning. Supposing $[H^+]_t=[H^-]_t=1$ nM, differential equations of the concentrations of the substances $I^+$ and $I^-$ varying over time are as follows:

    [00013]

$$
\begin{cases}
\dfrac{d[I^+]_t}{dt}=k_1\big([W_1^+]_t[X_1^+]_t+[W_2^+]_t[X_2^+]_t+\cdots+[W_N^+]_t[X_N^+]_t+[W_1^-]_t[X_1^-]_t+[W_2^-]_t[X_2^-]_t+\cdots+[W_N^-]_t[X_N^-]_t+[Y^-]_t[H^-]_t\big)\\[1ex]
\dfrac{d[I^-]_t}{dt}=k_1\big([W_1^+]_t[X_1^-]_t+[W_2^+]_t[X_2^-]_t+\cdots+[W_N^+]_t[X_N^-]_t+[W_1^-]_t[X_1^+]_t+[W_2^-]_t[X_2^+]_t+\cdots+[W_N^-]_t[X_N^+]_t+[Y^+]_t[H^+]_t\big)
\end{cases}
\tag{7}
$$

    [0039] Where $[Y^+]_t$ and $[Y^-]_t$ respectively represent the concentrations of the positive part and the negative part of $Y$ at the time $t$; $[I^+]_t$ and $[I^-]_t$ respectively represent the concentrations of the positive part and the negative part of $I$ at the time $t$; and $[H^+]_t$ and $[H^-]_t$ respectively represent the concentrations of the positive part and the negative part of $H$ at the time $t$.

    [0040] The differential equations (7) are simplified as follows:

    [00014]

$$
\begin{cases}
\dfrac{d[I^+]_t}{dt}=k_1\Big(\displaystyle\sum_{i=1}^{N}\big([W_i^+]_t[X_i^+]_t+[W_i^-]_t[X_i^-]_t\big)+[Y^-]_t[H^-]_t\Big)\\[2ex]
\dfrac{d[I^-]_t}{dt}=k_1\Big(\displaystyle\sum_{i=1}^{N}\big([W_i^+]_t[X_i^-]_t+[W_i^-]_t[X_i^+]_t\big)+[Y^+]_t[H^+]_t\Big)
\end{cases}
$$

    and

$$
\frac{dI(t)}{dt}=\frac{d[I^+]_t}{dt}-\frac{d[I^-]_t}{dt}=k_1\Big(\sum_{i=1}^{N}\big([W_i^+]_t[X_i^+]_t+[W_i^-]_t[X_i^-]_t-[W_i^+]_t[X_i^-]_t-[W_i^-]_t[X_i^+]_t\big)+[Y^-]_t[H^-]_t-[Y^+]_t[H^+]_t\Big)
$$

    [0041] Supposing $[H^+]_t=[H^-]_t=1$ nM, it follows that

    [00015]

$$
\frac{dI(t)}{dt}=k_1\Big(\sum_{i=1}^{N}W_i(t)X_i(t)-Y(t)\Big).
$$

    When the DNA hybridization reaction network reaches dynamic equilibrium, i.e., $dI(t)/dt=0$, it can be concluded that:

    [00016]

$$
Y(t)=\sum_{i=1}^{N}W_i(t)X_i(t).
$$

    [0042] Apparently, $Y(t)=[Y^+]_t-[Y^-]_t$, representing the output value of the system.
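    The four-reaction pattern (1a) to (1d) is the dual-rail device that lets non-negative concentrations carry signed values: rail pairs of matching sign feed $I^+$, while mixed-sign pairs feed $I^-$. The short Python sketch below (illustrative only; `dual_rail` and `net_I_production` are hypothetical helper names, not part of the disclosure) checks that the net rail difference reproduces the signed sum $k_1\sum_i W_iX_i$:

```python
def dual_rail(v):
    """Encode a signed value v as non-negative rail concentrations (v+, v-)."""
    return (max(v, 0.0), max(-v, 0.0))

def net_I_production(W, X, k1=1.0):
    """Net production rate d[I+]/dt - d[I-]/dt contributed by reactions
    (1a)-(1d) applied to every input/weight pair, as in equation (7)."""
    total = 0.0
    for w, x in zip(W, X):
        wp, wm = dual_rail(w)
        xp, xm = dual_rail(x)
        d_I_plus = k1 * (wp * xp + wm * xm)    # (1a), (1b): matching signs feed I+
        d_I_minus = k1 * (wp * xm + wm * xp)   # (1c), (1d): mixed signs feed I-
        total += d_I_plus - d_I_minus
    return total

# the rail difference reproduces the signed weighted sum k1 * sum(W_i X_i)
print(net_I_production([2.0, -1.5], [0.5, 3.0]))  # 2*0.5 + (-1.5)*3 = -3.5
```

    The annihilation reactions (5a) to (6f) only remove equal amounts from both rails, so they never change this difference; they merely keep one rail near zero.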

    [0043] Reaction expressions of the algorithm part are as follows:

    [00017]

$$
\begin{aligned}
&D^+ + X_1^+ \xrightarrow{k_3} D^+ + X_1^+ + W_1^+ && (8a)\\
&D^- + X_1^- \xrightarrow{k_3} D^- + X_1^- + W_1^+ && (8b)\\
&Y^+ + X_1^- \xrightarrow{k_3} Y^+ + X_1^- + W_1^+ && (8c)\\
&Y^- + X_1^+ \xrightarrow{k_3} Y^- + X_1^+ + W_1^+ && (8d)\\
&D^+ + X_1^- \xrightarrow{k_3} D^+ + X_1^- + W_1^- && (8e)\\
&D^- + X_1^+ \xrightarrow{k_3} D^- + X_1^+ + W_1^- && (8f)\\
&Y^+ + X_1^+ \xrightarrow{k_3} Y^+ + X_1^+ + W_1^- && (8g)\\
&Y^- + X_1^- \xrightarrow{k_3} Y^- + X_1^- + W_1^- && (8h)\\
&D^+ + X_2^+ \xrightarrow{k_3} D^+ + X_2^+ + W_2^+ && (9a)\\
&D^- + X_2^- \xrightarrow{k_3} D^- + X_2^- + W_2^+ && (9b)\\
&Y^+ + X_2^- \xrightarrow{k_3} Y^+ + X_2^- + W_2^+ && (9c)\\
&Y^- + X_2^+ \xrightarrow{k_3} Y^- + X_2^+ + W_2^+ && (9d)\\
&D^+ + X_2^- \xrightarrow{k_3} D^+ + X_2^- + W_2^- && (9e)\\
&D^- + X_2^+ \xrightarrow{k_3} D^- + X_2^+ + W_2^- && (9f)\\
&Y^+ + X_2^+ \xrightarrow{k_3} Y^+ + X_2^+ + W_2^- && (9g)\\
&Y^- + X_2^- \xrightarrow{k_3} Y^- + X_2^- + W_2^- && (9h)\\
&\qquad\vdots\\
&D^+ + X_N^+ \xrightarrow{k_3} D^+ + X_N^+ + W_N^+ && (10a)\\
&D^- + X_N^- \xrightarrow{k_3} D^- + X_N^- + W_N^+ && (10b)\\
&Y^+ + X_N^- \xrightarrow{k_3} Y^+ + X_N^- + W_N^+ && (10c)\\
&Y^- + X_N^+ \xrightarrow{k_3} Y^- + X_N^+ + W_N^+ && (10d)\\
&D^+ + X_N^- \xrightarrow{k_3} D^+ + X_N^- + W_N^- && (10e)\\
&D^- + X_N^+ \xrightarrow{k_3} D^- + X_N^+ + W_N^- && (10f)\\
&Y^+ + X_N^+ \xrightarrow{k_3} Y^+ + X_N^+ + W_N^- && (10g)\\
&Y^- + X_N^- \xrightarrow{k_3} Y^- + X_N^- + W_N^- && (10h)
\end{aligned}
$$

    [0044] Where $D^+$ and $D^-$ respectively represent the positive part and the negative part of $D$; and $k_3$ represents a reaction rate which is different from $k_1$ and $k_2$.

    [0045] Differential equations of concentrations varying over time of substances W.sub.i.sup.+ and W.sub.i.sup. are:

    [00018]

$$
\begin{cases}
\dfrac{d[W_1^+]_t}{dt}=k_3\big([D^+]_t[X_1^+]_t+[D^-]_t[X_1^-]_t+[Y^+]_t[X_1^-]_t+[Y^-]_t[X_1^+]_t\big)\\[1ex]
\dfrac{d[W_1^-]_t}{dt}=k_3\big([D^+]_t[X_1^-]_t+[D^-]_t[X_1^+]_t+[Y^+]_t[X_1^+]_t+[Y^-]_t[X_1^-]_t\big)
\end{cases}
\tag{11}
$$

$$
\begin{cases}
\dfrac{d[W_2^+]_t}{dt}=k_3\big([D^+]_t[X_2^+]_t+[D^-]_t[X_2^-]_t+[Y^+]_t[X_2^-]_t+[Y^-]_t[X_2^+]_t\big)\\[1ex]
\dfrac{d[W_2^-]_t}{dt}=k_3\big([D^+]_t[X_2^-]_t+[D^-]_t[X_2^+]_t+[Y^+]_t[X_2^+]_t+[Y^-]_t[X_2^-]_t\big)
\end{cases}
\tag{12}
$$

$$\vdots$$

$$
\begin{cases}
\dfrac{d[W_N^+]_t}{dt}=k_3\big([D^+]_t[X_N^+]_t+[D^-]_t[X_N^-]_t+[Y^+]_t[X_N^-]_t+[Y^-]_t[X_N^+]_t\big)\\[1ex]
\dfrac{d[W_N^-]_t}{dt}=k_3\big([D^+]_t[X_N^-]_t+[D^-]_t[X_N^+]_t+[Y^+]_t[X_N^+]_t+[Y^-]_t[X_N^-]_t\big)
\end{cases}
\tag{13}
$$

    [0046] Where $D(t)=[D^+]_t-[D^-]_t$ represents an expectation value; and $[D^+]_t$ and $[D^-]_t$ respectively represent the concentrations of the positive part and the negative part of $D$ at the time $t$.

    [0047] From the differential equations (11) to (13), the following expressions are obtained:

    [00019]

$$
\begin{aligned}
\frac{dW_1(t)}{dt}&=\frac{d[W_1^+]_t}{dt}-\frac{d[W_1^-]_t}{dt}\\
&=k_3\big([D^+]_t[X_1^+]_t+[D^-]_t[X_1^-]_t-[D^+]_t[X_1^-]_t-[D^-]_t[X_1^+]_t\big)+k_3\big([Y^+]_t[X_1^-]_t+[Y^-]_t[X_1^+]_t-[Y^+]_t[X_1^+]_t-[Y^-]_t[X_1^-]_t\big)\\
&=k_3\big[([D^+]_t-[D^-]_t)-([Y^+]_t-[Y^-]_t)\big]\big([X_1^+]_t-[X_1^-]_t\big)\\
&=k_3\big([D]_t-[Y]_t\big)[X_1]_t
\end{aligned}
$$

$$
\begin{aligned}
\frac{dW_2(t)}{dt}&=\frac{d[W_2^+]_t}{dt}-\frac{d[W_2^-]_t}{dt}\\
&=k_3\big([D^+]_t[X_2^+]_t+[D^-]_t[X_2^-]_t-[D^+]_t[X_2^-]_t-[D^-]_t[X_2^+]_t\big)+k_3\big([Y^+]_t[X_2^-]_t+[Y^-]_t[X_2^+]_t-[Y^+]_t[X_2^+]_t-[Y^-]_t[X_2^-]_t\big)\\
&=k_3\big[([D^+]_t-[D^-]_t)-([Y^+]_t-[Y^-]_t)\big]\big([X_2^+]_t-[X_2^-]_t\big)\\
&=k_3\big([D]_t-[Y]_t\big)[X_2]_t
\end{aligned}
$$

$$\vdots$$

$$
\begin{aligned}
\frac{dW_N(t)}{dt}&=\frac{d[W_N^+]_t}{dt}-\frac{d[W_N^-]_t}{dt}\\
&=k_3\big([D^+]_t[X_N^+]_t+[D^-]_t[X_N^-]_t-[D^+]_t[X_N^-]_t-[D^-]_t[X_N^+]_t\big)+k_3\big([Y^+]_t[X_N^-]_t+[Y^-]_t[X_N^+]_t-[Y^+]_t[X_N^+]_t-[Y^-]_t[X_N^-]_t\big)\\
&=k_3\big[([D^+]_t-[D^-]_t)-([Y^+]_t-[Y^-]_t)\big]\big([X_N^+]_t-[X_N^-]_t\big)\\
&=k_3\big([D]_t-[Y]_t\big)[X_N]_t.
\end{aligned}
$$
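    The result $dW_i(t)/dt = k_3([D]_t-[Y]_t)[X_i]_t$ is a continuous-time form of the delta (LMS) rule familiar from adaptive filtering. A minimal Euler discretization is sketched below; the learning rate `eta` (standing in for $k_3$ times the per-sample exposure time), the target weights and the iteration count are illustrative assumptions, not values from the disclosure:

```python
import numpy as np

rng = np.random.default_rng(0)
w_true = np.array([2.0, -1.0, 0.5])   # weights the network should learn
w = np.zeros(3)                        # dual-rail difference W_i starts at 0
eta = 0.1                              # stands in for k3 * exposure time per sample

for _ in range(3000):
    x = rng.uniform(-1.0, 1.0, size=3)   # one training input vector X
    d = w_true @ x                        # expectation value D
    y = w @ x                             # training-part output Y at equilibrium
    w += eta * (d - y) * x                # Euler step of dW_i/dt = k3 (D - Y) X_i

print(np.round(w, 3))   # converges to w_true = [2, -1, 0.5]
```

    Because the update is proportional to $(D-Y)$, the weight dynamics drive the output error toward zero, which is exactly the behavior the weight-update trajectory of FIG. 9 depicts.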

    [0048] Reaction expressions of the testing part of machine learning are as follows:

    [00020]

$$
\begin{aligned}
&X_1^+ + \hat{W}_1^+ \xrightarrow{k_1} \hat{I}^+ + X_1^+ + \hat{W}_1^+ && (14a)\\
&X_1^- + \hat{W}_1^- \xrightarrow{k_1} \hat{I}^+ + X_1^- + \hat{W}_1^- && (14b)\\
&X_1^+ + \hat{W}_1^- \xrightarrow{k_1} \hat{I}^- + X_1^+ + \hat{W}_1^- && (14c)\\
&X_1^- + \hat{W}_1^+ \xrightarrow{k_1} \hat{I}^- + X_1^- + \hat{W}_1^+ && (14d)\\
&X_2^+ + \hat{W}_2^+ \xrightarrow{k_1} \hat{I}^+ + X_2^+ + \hat{W}_2^+ && (15a)\\
&X_2^- + \hat{W}_2^- \xrightarrow{k_1} \hat{I}^+ + X_2^- + \hat{W}_2^- && (15b)\\
&X_2^+ + \hat{W}_2^- \xrightarrow{k_1} \hat{I}^- + X_2^+ + \hat{W}_2^- && (15c)\\
&X_2^- + \hat{W}_2^+ \xrightarrow{k_1} \hat{I}^- + X_2^- + \hat{W}_2^+ && (15d)\\
&\qquad\vdots\\
&X_N^+ + \hat{W}_N^+ \xrightarrow{k_1} \hat{I}^+ + X_N^+ + \hat{W}_N^+ && (16a)\\
&X_N^- + \hat{W}_N^- \xrightarrow{k_1} \hat{I}^+ + X_N^- + \hat{W}_N^- && (16b)\\
&X_N^+ + \hat{W}_N^- \xrightarrow{k_1} \hat{I}^- + X_N^+ + \hat{W}_N^- && (16c)\\
&X_N^- + \hat{W}_N^+ \xrightarrow{k_1} \hat{I}^- + X_N^- + \hat{W}_N^+ && (16d)\\
&\hat{Y}^+ + \hat{H}^+ \xrightarrow{k_1} \hat{I}^- + \hat{H}^+ && (17a)\\
&\hat{Y}^- + \hat{H}^- \xrightarrow{k_1} \hat{I}^+ + \hat{H}^- && (17b)\\
&\hat{I}^+ + \hat{C}^+ \xrightarrow{k_1} \hat{I}^+ + \hat{C}^+ + \hat{Y}^+ && (17c)\\
&\hat{I}^- + \hat{C}^- \xrightarrow{k_1} \hat{I}^- + \hat{C}^- + \hat{Y}^- && (17d)\\
&X_i^+ + X_i^- \xrightarrow{k_2} \varnothing && (18a)\\
&\hat{W}_i^+ + \hat{W}_i^- \xrightarrow{k_2} \varnothing && (18b)\\
&\hat{Y}^+ + \hat{Y}^- \xrightarrow{k_2} \varnothing && (18c)\\
&\hat{C}^+ + \hat{C}^- \xrightarrow{k_2} \varnothing && (18d)\\
&\hat{H}^+ + \hat{H}^- \xrightarrow{k_2} \varnothing && (18e)\\
&\hat{I}^+ + \hat{I}^- \xrightarrow{k_2} \varnothing && (18f)
\end{aligned}
$$

    [0049] Where $\hat{W}_i(t)=[\hat{W}_i^+]_t-[\hat{W}_i^-]_t$; $\hat{W}_i$ represents the weight obtained after the rounds of training, that is, $\hat{W}_1, \hat{W}_2, \ldots, \hat{W}_N$ represent the weights obtained after the rounds of learning, which are also the weights used in the testing part. $\hat{W}_i^+$ and $\hat{W}_i^-$ respectively represent the positive part and the negative part of $\hat{W}_i$. $\hat{Y}$ represents the output of the testing part; $\hat{Y}^+$ and $\hat{Y}^-$ respectively represent the positive part and the negative part of $\hat{Y}$; $\hat{I}$, $\hat{C}$ and $\hat{H}$ represent substances involved in the chemical reaction in the testing part; $\hat{I}^+$ and $\hat{I}^-$ respectively represent the positive part and the negative part of $\hat{I}$; $\hat{C}^+$ and $\hat{C}^-$ respectively represent the positive part and the negative part of $\hat{C}$; and $\hat{H}^+$ and $\hat{H}^-$ respectively represent the positive part and the negative part of $\hat{H}$.

    [0050] Concluded from the reaction equations (14a) to (18f), differential equations of the concentrations of the substances $\hat{I}^+$ and $\hat{I}^-$ varying over time are:

    [00021]

$$
\begin{cases}
\dfrac{d[\hat{I}^+]_t}{dt}=k_1\Big(\displaystyle\sum_{i=1}^{N}\big([\hat{W}_i^+]_t[X_i^+]_t+[\hat{W}_i^-]_t[X_i^-]_t\big)+[\hat{Y}^-]_t[\hat{H}^-]_t\Big)\\[2ex]
\dfrac{d[\hat{I}^-]_t}{dt}=k_1\Big(\displaystyle\sum_{i=1}^{N}\big([\hat{W}_i^+]_t[X_i^-]_t+[\hat{W}_i^-]_t[X_i^+]_t\big)+[\hat{Y}^+]_t[\hat{H}^+]_t\Big)
\end{cases}
$$

    [0051] What can be concluded from the above differential equations is:

    [00022]

$$
\frac{d\hat{I}(t)}{dt}=\frac{d[\hat{I}^+]_t}{dt}-\frac{d[\hat{I}^-]_t}{dt}=k_1\Big(\sum_{i=1}^{N}\big([\hat{W}_i^+]_t[X_i^+]_t+[\hat{W}_i^-]_t[X_i^-]_t-[\hat{W}_i^+]_t[X_i^-]_t-[\hat{W}_i^-]_t[X_i^+]_t\big)-\big([\hat{Y}^+]_t[\hat{H}^+]_t-[\hat{Y}^-]_t[\hat{H}^-]_t\big)\Big)
$$

    [0052] When $[\hat{H}^+]_t=[\hat{H}^-]_t=1$ nM, what can be obtained is:

    [00023]

$$
\frac{d\hat{I}(t)}{dt}=k_1\Big(\sum_{i=1}^{N}\hat{W}_iX_i-\hat{Y}\Big).
$$

    [0053] When $\hat{I}^+$ and $\hat{I}^-$ reach dynamic equilibrium,

    [00024]

$$
\frac{d\hat{I}(t)}{dt}=0,
$$

    then:

    [00025]

$$
\hat{Y}=\sum_{i=1}^{N}\hat{W}_iX_i.
$$

    [0054] Where $\hat{Y}(t)=[\hat{Y}^+]_t-[\hat{Y}^-]_t$, and $\hat{Y}$ represents the output of the machine learning.
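    The overall procedure of steps S1 to S3 of claim 1 (normalize a shuffled round of data, train round by round until the average relative error (21) reaches a target, then test with the learned weights) can be emulated numerically by replacing the chemistry with its derived ODE limits. All numeric values in the sketch below (`eta`, `target`, `min_den`, data sizes, the true weights) are illustrative assumptions, not values from the disclosure; `min_den` simply excludes near-zero targets from the relative errors:

```python
import numpy as np

rng = np.random.default_rng(1)
N, K, P = 3, 20, 50                    # inputs, training sets per round, test sets
w_ideal = np.array([1.2, -0.7, 0.4])   # underlying linear function to learn
min_den = 0.05                         # ignore near-zero targets in relative errors

def normalize(X, d):
    # S1/S3: divide by the largest per-input range so encoded values stay small
    delta = max(X.max(axis=0) - X.min(axis=0))
    return X / delta, d / delta

W = np.zeros(N)
eta, target = 0.2, 1e-3
for _ in range(200):                   # plurality of rounds of training
    X = rng.uniform(-1, 1, (K, N))     # a reshuffled round of K sets of N data
    d = X @ w_ideal                    # expectation values D
    Xn, dn = normalize(X, d)
    errs = []
    for x, dk in zip(Xn, dn):
        y = W @ x                      # training-part equilibrium output Y
        W += eta * (dk - y) * x        # algorithm part: dW_i/dt = k3 (D - Y) X_i
        if abs(dk) > min_den:
            errs.append(abs((y - dk) / dk))   # relative error e_l(k), eq. (20)
    if np.mean(errs) < target:         # stop once average error (21) hits target
        break

# S3: test the learned weights on fresh, normalized data
Xt = rng.uniform(-1, 1, (P, N))
Xtn, dtn = normalize(Xt, Xt @ w_ideal)
mask = np.abs(dtn) > min_den
rel = np.abs((Xtn[mask] @ W - dtn[mask]) / dtn[mask])
print(f"mean test relative error: {rel.mean():.2e}")
```

    The early-stopping loop mirrors FIG. 10 and FIG. 11 (average relative error falling with training iterations over successive rounds), and the final test pass mirrors FIG. 12.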

    [0055] It should be noted that the catalytic reaction module 1, the catalytic reaction module 2, and the annihilation reaction module in the training part, the algorithm part, and the testing part belong to the same type of reaction module, but do not belong to the same reaction module. For example, from the differential equations of the concentrations of $\hat{I}^+$ and $\hat{I}^-$ varying over time, it can be seen that the symbol $\hat{W}$ carries a subscript $i$. When $i$ changes, the DNA molecules change accordingly, which means that each reaction module provided in the present disclosure does not refer to any particular reaction, but describes one type of reaction.

    [0056] The catalytic reaction module 1 (catalytic reaction module 2) in the training and testing parts belongs to the same type of catalytic reaction module 1 (catalytic reaction module 2). The identification symbols of the catalytic reaction module 1 (catalytic reaction module 2) in the training and testing parts are different, indicating that different DNA molecules are used: the identification symbols in the testing part carry pointed caps (i.e., {circumflex over ( )}), while those in the training part do not.

    2. Implementing Linear Machine Learning by Using DNA Molecular Circuits.

    [0057] Because the reactants and products in the idealized reactions are abstract substances rather than specific biochemical substances, while DNA hybridization reactions can realize any type of idealized reaction, DNA hybridization reactions are utilized to implement the linear learning machine.

    [0058] The reaction equations (1) to (18) can be classified into several types of reactions: as shown in FIG. 2, the reaction equations (1a) to (3d) and (13a) to (15d) belong to the submodule 1 of the catalytic reaction module 1. As illustrated in FIG. 3, the reaction equations (4c), (4d), (17c), and (17d) belong to a first type of catalytic reaction, which can be achieved by the submodule 2 of the catalytic reaction module 1. As illustrated in FIG. 4, the reaction equations (8c), (8d), (8g), (8h), (9c), (9d), (9g), (9h), (10c), (10d), (10g), and (10h) belong to the first type of catalytic reaction, which can be achieved by the submodule 3 of the catalytic reaction module 1. As illustrated in FIG. 5, the reaction equations (8a), (8b), (8e), (8f), (9a), (9b), (9e), (9f), (10a), (10b), (10e), and (10f) belong to the first type of catalytic reaction, which can be achieved by the submodule 4 of the catalytic reaction module 1. As illustrated in FIG. 6, the reaction equations (4a), (4b), (17a) and (17b) belong to a second type of catalytic reaction. The reaction equations (5a) to (6f) and (18a) to (18f) belong to the annihilation reaction type and can be achieved through the annihilation reaction module. Due to the homogeneity and cascading nature of these reaction modules, they can be cascaded into DNA molecular circuits to achieve the machine learning system. The three types of DNA reaction modules are described as follows:

    (1) The Catalytic Reaction Module 1

    [0059] The idealized reaction equation for the catalytic reaction module 1 is

    [00026]
\[
X_i^+ + W_i^+ \xrightarrow{k_i} I^+ + X_i^+ + W_i^+,
\]

    which is obtained by the following DNA strand displacement reactions:

    [00027]
\[
\begin{cases}
W_i^+ + Ap_i^+ \underset{q_m}{\overset{q_i}{\rightleftharpoons}} Ad_i^+ + Aq_i^+\\
Ad_i^+ + X_i^+ \xrightarrow{q_m} Ac_i^+ + \text{waste}\\
Ac_i^+ + Ab_i^+ \xrightarrow{q_m} I^+ + X_i^+ + W_i^+ + \text{waste}
\end{cases}
\tag{19}
\]

    [0060] Where I^+ is catalyzed, X_i^+ represents an input signal DNA molecule, and W_i^+ represents a weight reporting strand. Ap_i^+, Aq_i^+ and Ab_i^+ are auxiliary DNA strands, and the initial concentrations of the auxiliary DNA strands are C_m, meeting C_m≫[X_i^+]_0, [W_i^+]_0, [I^+]_0. The reaction rates q_i and k_i meet q_i≪q_m and k_i=q_i, where q_m represents a maximum reaction rate. The DNA implementation of the submodule 1 of the catalytic reaction module 1 is illustrated in FIG. 2. As illustrated in FIGS. 3, 4 and 5, the DNA implementation process of the submodules 2-4 of the catalytic reaction module 1 is similar to that of the submodule 1.
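    A crude mass-action integration of cascade (19) can check its bookkeeping. The Euler sketch below uses q_i, q_m and C_m from Table 1, while the 3 nM signal concentrations are assumptions; it verifies the stoichiometric invariants implied by (19), namely that [W_i^+]+[Ad_i^+]+[Ac_i^+], [X_i^+]+[Ac_i^+] and [I^+]+[Ab_i^+] stay constant, and that I^+ accumulates while W_i^+ and X_i^+ act catalytically.

```python
# Euler integration of the strand displacement cascade (19):
#   W + Ap <=> Ad + Aq   (forward q_i, reverse q_m)
#   Ad + X  -> Ac + waste            (q_m)
#   Ac + Ab -> I + X + W + waste     (q_m)
# q_i, q_m, C_m follow Table 1; the 3 nM signal concentrations are assumptions.
q_i, q_m, C_m = 0.01, 1.0, 2000.0
W, Ap, Ad, Aq, X, Ac, Ab, I = 3.0, C_m, 0.0, 0.0, 3.0, 0.0, C_m, 0.0
dt, steps = 1e-5, 200_000  # 2 s of simulated time
for _ in range(steps):
    r1f = q_i * W * Ap
    r1b = q_m * Ad * Aq
    r2 = q_m * Ad * X
    r3 = q_m * Ac * Ab
    W += (-r1f + r1b + r3) * dt
    Ap += (-r1f + r1b) * dt
    Ad += (r1f - r1b - r2) * dt
    Aq += (r1f - r1b) * dt
    X += (-r2 + r3) * dt
    Ac += (r2 - r3) * dt
    Ab += -r3 * dt
    I += r3 * dt
print(I, W + Ad + Ac, X + Ac, I + Ab)  # I grows; the three sums are conserved
```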

    (2) The Catalytic Reaction Module 2

    [0061] The idealized reaction equation of the catalytic reaction module 2 is:

    [00028]
\[
Y^+ + H^+ \xrightarrow{k_i} I^- + H^+,
\]

    which is obtained by the following DNA strand displacement reactions:

    [00029]
\[
\begin{cases}
Y^+ + Am^+ \underset{q_m}{\overset{q_i}{\rightleftharpoons}} An^+ + Ae^+\\
Ae^+ + H^+ \xrightarrow{q_m} As^+ + \text{waste}\\
As^+ + Ah^+ \xrightarrow{q_m} H^+ + I^- + \text{waste}
\end{cases}
\tag{20}
\]

    [0062] Where I^− is catalyzed; Am^+, An^+, Ah^+ and H^+ are auxiliary DNA strands, and the initial concentrations of the auxiliary DNA strands Am^+, An^+ and Ah^+ are respectively set as [Am^+]_0=[An^+]_0=[Ah^+]_0=C_m, meeting C_m≫[Y^+]_0, [I^−]_0. The initial concentration of H^+ is 1 nM. The reaction rate q_i meets q_i≪q_m, and k_i=q_i. The DNA implementation of the catalytic reaction module 2 is illustrated in FIG. 6.

    (3) The Annihilation Reaction Module

    [0063] The idealized reaction equation of the annihilation reaction module is:

    [00030]
\[
W_i^+ + W_i^- \xrightarrow{k_i} \varnothing,
\]

    which is obtained by the following DNA strand displacement reactions:

    [00031]
\[
\begin{cases}
W_i^+ + Wa_i^+ \underset{q_m}{\overset{q_i}{\rightleftharpoons}} Wb_i^+ + Wt_i^+\\
Wt_i^+ + W_i^- \xrightarrow{q_m} \varnothing
\end{cases}
\tag{21}
\]

    [0064] Where W_i^+ and W_i^− are annihilated; Wa_i^+ and Wb_i^+ are auxiliary DNA strands, and the initial concentrations of the auxiliary DNA strands are C_m, meeting C_m≫[W_i^+]_0, [W_i^−]_0. The reaction rate q_i meets q_i≪q_m, and k_i=q_i. The DNA implementation of the annihilation reaction module is illustrated in FIG. 7.
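    In the dual-rail (dual track) representation, the annihilation module removes the overlap of the two rails without changing the encoded signed value. A minimal sketch (the rate and the initial rail concentrations are illustrative assumptions):

```python
# Ideal annihilation kinetics: d[W+]/dt = d[W-]/dt = -k2 [W+][W-].
# The difference [W+] - [W-] is invariant; the smaller rail decays toward 0.
k2, dt, steps = 0.5, 1e-3, 200_000  # 200 s of simulated time
Wp, Wm = 2.5, 4.0                   # encodes the signed value W = -1.5
W0 = Wp - Wm
for _ in range(steps):
    r = k2 * Wp * Wm
    Wp -= r * dt
    Wm -= r * dt
print(Wp - Wm, min(Wp, Wm))  # difference preserved, smaller rail ~ 0
```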

    3. Training of Linear Machine Learning.

    [0065] This section describes the application of the machine learning system. The molecular learning machine has the ability, obtained through data training, to predict the relationship between the total current and the voltages of a parallel circuit. The third part is therefore the training of the machine learning system; to verify the learning ability of the molecular learning machine, the molecular learning machine must also be tested.

    [0066] As illustrated in FIG. 8, the resistance value of the sliding rheostat is adjusted to measure the voltages U_1 and U_2 and the total current I. The two voltage values and the one total current value are used as a set of training data; by adjusting the sliding rheostat again, another set of training data can be obtained. The obtained training data are inputted into the DNA molecular machine learning system. Through the processing of the DNA molecular learning machine algorithm, the weights are updated and the relative error between the output of the DNA molecular learning machine and the expected value is calculated. When the relative error reaches or falls below the set value of 0.2, the training goal is achieved and the training is stopped. The weights obtained through training are the values w_1, w_2, . . . , w_N of the linear function I(U_1, U_2, . . . , U_N)=w_1U_1+w_2U_2+ . . . +w_NU_N. Once the values of w_1, w_2, . . . , w_N are obtained, the functional relationship can be fitted using the voltage and current values. These parameter values correspond to the reciprocals of the fixed resistances (the relationship between the branch voltages and the total current in a parallel circuit is

    [00032]
\[
I=\frac{U_1}{R_1}+\frac{U_2}{R_2}+\cdots+\frac{U_N}{R_N}\big).
\]

    Therefore, the DNA molecular learning machine can predict the individual resistance values R_1, R_2, . . . , R_N.
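    As a conventional cross-check of the target weights (performed on an electronic computer, and not part of the DNA implementation), the raw training data listed in paragraph [0075] can be fitted by ordinary least squares; the resistances then follow as R_i = 1/w_i:

```python
# Raw training data from paragraph [0075]: 20 sets,
# U1 = 1.2, 1.4, ..., 5.0 V; U2 = 1.3, 1.5, ..., 5.1 V; I = 0.37, 0.43, ..., 1.51 A.
U1 = [1.2 + 0.2 * k for k in range(20)]
U2 = [1.3 + 0.2 * k for k in range(20)]
I = [0.37 + 0.06 * k for k in range(20)]

# Ordinary least squares via the 2x2 normal equations:
# minimize sum_k (w1*U1[k] + w2*U2[k] - I[k])^2.
a11 = sum(u * u for u in U1)
a12 = sum(u * v for u, v in zip(U1, U2))
a22 = sum(v * v for v in U2)
b1 = sum(u * i for u, i in zip(U1, I))
b2 = sum(v * i for v, i in zip(U2, I))
det = a11 * a22 - a12 * a12
w1 = (a22 * b1 - a12 * b2) / det
w2 = (a11 * b2 - a12 * b1) / det
R1, R2 = 1.0 / w1, 1.0 / w2
print(w1, w2, R1, R2)
```

    For this data set the fit is exact, giving w1 = 0.2 and w2 = 0.1, i.e. R1 = 5 Ω and R2 = 10 Ω, which is the target the DNA training is expected to approach.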

    [0067] The present disclosure utilizes the DNA linear learning machine to learn the relationship between the total current and the voltages of a parallel circuit, I(U_1, U_2, . . . , U_N)=w_1U_1+w_2U_2+ . . . +w_NU_N. The weights w_i and the inputs U_i (i=1, 2, . . . , N) are both real numbers. As the weights and inputs are represented by the concentrations of DNA strands, w_i, U_i, I≥0.

    (1) Training of Linear Machine Learning.

    [0068] The training of the machine learning includes multiple rounds of training. One round of training includes M groups of training data, each group of which is composed of N data; that is, the machine learning has N inputs and i=1, 2, . . . , N. The training data set is shuffled to obtain another round of training data. In the first round of training, x̃_i=[x̃_i(1,1), x̃_i(2,1), . . . , x̃_i(M,1)] and d̃={d̃_1(1), d̃_1(2), . . . , d̃_1(M)} respectively represent the partial voltages and the total current. The data can be normalized as follows:

    [00033]
\[
\begin{cases}
x_i(k,l)=\tilde{x}_i(k,l)/\alpha\\
d_l(k)=\tilde{d}_l(k)/\beta
\end{cases}
\tag{22}
\]
    where
\[
\begin{cases}
\tilde{A}_i=\max(\tilde{x}_i)-\min(\tilde{x}_i)\\
\tilde{A}=\max([\tilde{A}_1,\tilde{A}_2,\ldots,\tilde{A}_N])\\
\tilde{d}_l(k)=\sum_{i=1}^{N}w_ix_i(l,k)
\end{cases}
\]

    [0069] x̃_i(k,l) represents the i-th data of the k-th group of data in the l-th round of training, k=1, 2, . . . , M, l=1, 2, . . . , and α and β are positive adjustment parameters. x̃_i=[x̃_i(1), x̃_i(2), . . . , x̃_i(K)]. x_i(k,l) and d_l(k) are the initial concentration settings of the input signal DNA strands for the machine learning. The first round of training data meets x_i(k,1)=[X_ik^+]_0−[X_ik^−]_0 and d_1(k)=[D_k^+]_0−[D_k^−]_0; shuffling the order can obtain the training data for the other rounds.
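    Because DNA concentrations are non-negative, each signed normalized value must be split over two rails before it can serve as an initial concentration. The helper below is a minimal sketch; the splitting rule and the bias value are assumptions (the patent specifies the resulting concentrations [X_ik^+]_0 and [X_ik^−]_0 rather than a splitting formula), and any split with x=[X^+]_0−[X^−]_0 and both rails positive is admissible.

```python
def dual_rail(x, bias=1.0):
    """Split a signed value x into non-negative rail concentrations
    (x_plus, x_minus) with x == x_plus - x_minus.
    'bias' is a hypothetical shared offset keeping both rails positive."""
    if x >= 0:
        return x + bias, bias
    return bias, bias - x

# Example: normalized inputs of either sign map to valid concentration pairs.
pairs = [dual_rail(v) for v in (0.6316, -0.25)]
print(pairs)
```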

    (2) Assessment of Training of Linear Machine Learning.

    [0070] In the l-th round of training, a relative error e_l(k) is defined as follows:

    [00034]
\[
e_l(k)=\frac{D_l(k)-d_l(k)}{d_l(k)}
\tag{23}
\]
    where
\[
\begin{cases}
D_l(k)=f(Y_1(k,l))\\
Y_1(k,l)=\sum_{n=1}^{L}V_{n1}(l)\,y_n(k,l)\\
y_n(k,l)=f(S_n(k,l))\\
S_n(k,l)=\sum_{i=1}^{N}W_{in}(l)\,x_i(k,l)
\end{cases}
\]

    [0071] W_in(l) and V_n1(l) represent the weights of the input layer and the hidden layer obtained after the l-th round of training.

    [0072] To assess the training results, the average relative error is defined as follows:

    [00035]
\[
\bar{e}_l=\frac{1}{K}\sum_{k=1}^{K}\lvert e_l(k)\rvert
\tag{24}
\]

    [0073] After multiple trainings, when the average relative error reaches the target value, the training will be stopped.
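    Equations (23) and (24) amount to a mean absolute relative error over the K groups of a round. A direct transcription (the helper names and the sample values are hypothetical):

```python
def relative_error(D, d):
    # e_l(k) = (D_l(k) - d_l(k)) / d_l(k), per equation (23)
    return (D - d) / d

def average_relative_error(outputs, targets):
    # e_l = (1/K) * sum_k |e_l(k)|, per equation (24)
    K = len(targets)
    return sum(abs(relative_error(D, d)) for D, d in zip(outputs, targets)) / K

# Example with assumed outputs and expected values: training stops once this
# falls to the set value (0.2 in paragraph [0066]).
err = average_relative_error([0.22, 0.18, 0.30], [0.20, 0.20, 0.25])
print(err)
```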

    [0074] As illustrated in FIG. 8, a neural network with two input nodes is taken as an example to illustrate the training and assessment of the relationship between the voltages and the total current in a parallel circuit containing two fixed resistors, based on the DNA strand displacement reaction neural network.

    [0075] The raw training data are U_1∈{1.2 V, 1.4 V, 1.6 V, . . . , 5.0 V}, U_2∈{1.3 V, 1.5 V, 1.7 V, . . . , 5.1 V} and I∈{0.37 A, 0.43 A, 0.49 A, . . . , 1.51 A}, and there are 20 sets of data in total. The adjustment parameter is set to 3. The initial concentrations of the DNA strands X_1^+, X_1^−, X_2^+ and X_2^− are set as [X_1^+]_0∈{2.2632 nM, 2.4211 nM, 2.5789 nM, . . . , 5.2632 nM}, [X_1^−]_0∈{1.6316 nM, 1.6842 nM, 1.7368 nM, . . . , 2.6316 nM}, [X_2^+]_0∈{2.3158 nM, 2.4737 nM, 2.6316 nM, . . . , 5.3158 nM}, and [X_2^−]_0∈{1.6316 nM, 1.6842 nM, 1.7368 nM, . . . , 2.6316 nM}. The initial concentrations of the auxiliary DNA molecules and the reaction rates are set in Table 1.

    [0076] FIG. 9 illustrates the update trajectories of the weights during 20 rounds of training. It is evident that after training, the endpoint values of the weights are very close to the target values, indicating that the linear learning machine has good learning ability.

    [0077] As illustrated in FIG. 10, over the 20 rounds of training, the average relative errors are all around 0.2, and the training goal is basically achieved.

    [0078] FIG. 11 illustrates the total number of training iterations required to achieve the training objective in the 20 rounds of training, which is clearly 4.

    TABLE-US-00001 TABLE 1. Concentrations of DNA strands and reaction rate settings

    Reaction rate   Value         Concentration   Value
    q_1             0.01/nM/s     [I^+]_0         2 nM
    q_2             0.001/nM/s    [I^−]_0         2 nM
    q_3             0.001/nM/s    [Y^+]_0         20 nM
    q_m             1.0/nM/s      [Y^−]_0         20 nM
                                  [C^+]_0         20 nM
                                  [C^−]_0         20 nM
                                  [H^+]_0         1 nM
                                  [H^−]_0         1 nM
                                  [Î^+]_0         2 nM
                                  [Î^−]_0         2 nM
                                  [Ŷ^+]_0         100 nM
                                  [Ŷ^−]_0         100 nM
                                  [Ĉ^+]_0         200 nM
                                  [Ĉ^−]_0         200 nM
                                  C_m             2000 nM

    (3) Testing and Assessment of Linear Machine Learning.

    [0079] The testing of the machine learning includes multiple rounds of testing. One round of testing includes P sets of test data. The test data sets are shuffled to obtain another round of test data. In the first round of testing, the partial voltages and the total current are represented by B̃_i=[B̃_i(1,1), B̃_i(2,1), . . . , B̃_i(P,1)] and d̃={d̃_1(1), d̃_1(2), . . . , d̃_1(P)} respectively. The data can be normalized as follows:

    [00036]
\[
\begin{cases}
x_i(p,g)=\tilde{B}_i(p,g)/\alpha\\
d_g(p)=\tilde{d}_g(p)/\beta
\end{cases}
\tag{25}
\]

    [0080] Where B̃_i(p,g) represents the i-th data of the p-th set of data in the g-th round of testing, p=1, 2, . . . , P, g=1, 2, . . . , G, Ã_i=max(B̃_i)−min(B̃_i), Ã=max([Ã_1, Ã_2, . . . , Ã_N]) and d̃_1(p)=Σ_{i=1}^N w_i x_i(g,p).

    [0081] In the g-th round of testing, the relative error e_g(p) is defined as follows:

    [00037]
\[
e_g(p)=\frac{y_g(p)-d_g(p)}{d_g(p)}
\tag{26}
\]
    where
\[
y_g(p)=\sum_{i=1}^{N}\tilde{w}_i\,x_i(p,g).
\]

    [0082] w̃_i represents the weight obtained after this round of training.

    [0083] To assess the results of this round of training, the average relative error during the testing phase is defined as follows:

    [00038]
\[
\bar{e}_g=\frac{1}{P}\sum_{p=1}^{P}\lvert e_g(p)\rvert
\tag{27}
\]

    [0084] Still taking the neural network with two input nodes as an example, the test results of the neural network based on the DNA strand displacement reaction are illustrated. The raw data for the test are as follows.

    [0085] U_1∈{1.27 V, 1.31 V, 1.34 V, . . . , 1.90 V}, U_2∈{1.24 V, 1.28 V, 1.31 V, . . . , 1.86 V} and I∈{0.77 A, 0.80 A, 0.83 A, . . . , 1.64 A}, and there are 30 sets of data in total. As illustrated in FIG. 12, the average relative errors of the 30 sets of data in each of the 20 rounds of testing are all about 0.1, indicating that the neural network based on the DNA strand displacement reaction basically meets the testing requirements.

    [0086] The above embodiments are only used to illustrate the technical solutions of the present disclosure, and not to limit them. Although the present disclosure has been described in detail with reference to the aforementioned embodiments, those skilled in the art should understand that they can still modify the technical solutions recorded in the aforementioned embodiments, or equivalently replace some or all of the technical features, and these modifications or replacements do not make the essence of the corresponding technical solutions deviate from the scope of the technical solutions of the various embodiments of the present disclosure.