Method for constructing episodic memory model based on rat brain visual pathway and entorhinal-hippocampal cognitive mechanism
20240160221 · 2024-05-16
Inventors
- Naigong Yu (Beijing, CN)
- Yishen Liao (Beijing, CN)
- Zongxia Wang (Beijing, CN)
- Hejie Yu (Beijing, CN)
- Jianjun Yu (Beijing, CN)
- Xudong Liu (Beijing, CN)
- Ruihua Wang (Beijing, CN)
CPC Classification
International Classification
Abstract
A method for constructing an episodic memory model based on the rat brain visual pathway and the entorhinal-hippocampal structure is provided, mainly applied to environment cognition and navigation of an intelligent mobile robot to complete the tasks of environment cognitive map construction and target-oriented navigation. Image information of the environment and the head-direction angle and speed of the robot are collected. The head-direction angle and speed are input into the entorhinal-hippocampal CA3 neural computational model to obtain the robot's precise position, and the visual information is input into the computational model of the visual pathway to obtain the scene information within the robot's current field of view. The two kinds of information are fused and stored in cognitive nodes with topological relationships. Scenario information is used to correct path-integration errors during the robot's exploration, thereby constructing an episodic cognitive map representing the environment.
Claims
1. A method for constructing an episodic memory model based on the rat brain visual pathway and entorhinal-hippocampal cognitive mechanism, comprising the following steps: step 1. a robot explores the environment, collects RGB image information of the environment through a camera, and collects head-direction angle and speed information of the robot through a gyroscope and an encoder; step 2. input the head-direction angle and speed information into an entorhinal-hippocampal CA3 neural computing model to obtain the robot's position information in the environment; step 3. input the RGB image information into a visual pathway computing model to obtain environmental features within the robot's field of view, including the number of objects in the environment, attribute information of the objects, angles of the objects relative to the robot, and distances between the objects and the robot; step 4. construct cognitive nodes: the robot constructs a new cognitive node every time it moves, and continuously constructs cognitive nodes in the process of exploring the environment; there are topological connections between adjacent cognitive nodes; among them, the i-th cognitive node is represented by e.sup.i, which is used to store the current scenario information, position, and head-direction angle; a mathematical expression of e.sup.i is as follows:
e.sup.i={θ.sub.0.sup.i, (X.sub.env.sup.i, Y.sub.env.sup.i), (n.sub.i.sup.object, {λ.sub.ij}, {θ.sub.ij}, {d.sub.ij})} (1) wherein, θ.sub.0.sup.i represents the robot's head-direction angle at the i-th cognitive node, (X.sub.env.sup.i, Y.sub.env.sup.i) represents the robot's position in the environment at the i-th cognitive node, (n.sub.i.sup.object, {λ.sub.ij}, {θ.sub.ij}, {d.sub.ij}) represents the environmental features within the robot's field of view at the i-th cognitive node, n.sub.i.sup.object represents the number of objects at the i-th cognitive node, λ.sub.ij represents the attribute of the j-th object at the i-th cognitive node, θ.sub.ij represents the orientation angle of the j-th object at the i-th cognitive node relative to the robot, and d.sub.ij represents the distance between the j-th object at the i-th cognitive node and the robot; step 5. construct an episodic cognitive map expressing the environment; step 2 further comprises the following steps: s1.1 input the head-direction angle and speed information of the robot into a firing model of stripe cells to obtain the firing rate of the stripe cells; s1.2 input the firing rate of the stripe cells into a firing model of grid cells to obtain the firing rate of the grid cells; s1.3 input the firing rate of the grid cells into a firing model of dentate gyrus neurons to obtain the firing rate of the dentate gyrus neurons, then input the firing rate of the grid cells and the firing rate of the dentate gyrus neurons into a hippocampal CA3 place cell firing model to obtain the firing rate of the hippocampal CA3 place cells; s1.4 calculate the position of the robot in the environment based on the firing rate of the hippocampal CA3 place cells; the mathematical expression of the firing rate of the stripe cells is given as:
V.sub.stripe(t)=cos(2πf·∫v.sub.HDdt)+cos(2πf.sub.d·∫v.sub.HDdt) (2) in formula (2), t represents the time at the current moment, f represents an oscillation frequency of the neuron cell body, f.sub.d represents an oscillation frequency of the neuron dendrites; ∫v.sub.HDdt represents a path integral along a preferred direction angle θ.sub.HD of the stripe cells, where v.sub.HD represents a component velocity of the rat at the preferred direction angle θ.sub.HD, and its mathematical expression is as follows:
v.sub.HD=v cos(θ−θ.sub.HD) (3) in formula (3), v represents a current moving speed of the robot, and θ represents a current head-direction angle of the robot; a mathematical expression of the neuron dendritic oscillation frequency f.sub.d can be obtained as:
f.sub.d=f+B.sub.1v cos(θ−θ.sub.HD) (4) where B.sub.1 is a reciprocal of a wavelength of a stripe wave, and the grid cell firing model is obtained by superimposing the firing rates of three stripe cells whose preferred directions differ by 120°; the specific mathematical expression is:
g(t)=Σ.sub.θHD(cos(2πf·∫v.sub.HDdt)+cos(2π(f+Bv cos(θ−θ.sub.HD))·∫v.sub.HDdt)) (5) in formula (5), values of the three stripe cell preferred direction angles θ.sub.HD are θ.sub.g+0°, θ.sub.g+120°, θ.sub.g+240° respectively, where θ.sub.g represents a deviation angle of the stripe cells, and its value is randomly selected within 0°-360°; θ.sub.g also represents an orientation angle of a grid field; after the firing rate of the grid cells is obtained, it is used as a forward input signal of the dentate gyrus neurons, with I.sub.i.sup.MEC(t) denoting the excitatory input received by the i-th dentate gyrus neuron; the firing rate of the dentate gyrus neurons is:
F.sub.i.sup.dentate(t)=I.sub.i.sup.MEC(t)·H(I.sub.i.sup.MEC(t)−(1−k.sub.1)·I.sub.max.sup.MEC) (9) in formula (9), F.sub.i.sup.dentate represents the firing rate of the dentate gyrus neurons, k.sub.1 is 0.1, I.sub.max.sup.MEC represents a maximum value of the grid cell forward input received by the dentate gyrus neurons; H(x) is a rectification function: when x>0, H(x)=1; when x≤0, H(x)=0; and the excitatory input signal from the dentate gyrus neurons to the hippocampal CA3 place cells is as follows:
I.sub.i.sup.CA3(t)=I.sub.i.sup.MEC(t)+I.sub.av.sup.MEC(t)I.sub.i.sup.dentate(t) (12) in formula (12), I.sub.i.sup.MEC(t) and I.sub.i.sup.dentate(t) are respectively forward input signals of grid cells and dentate gyrus neurons, and I.sub.av.sup.MEC(t) represents an average strength of grid cell forward input signals, and its mathematical expression is:
F.sub.i.sup.CA3(t)=I.sub.i.sup.CA3(t)·H(I.sub.i.sup.CA3(t)−(1−k.sub.2)·I.sub.max.sup.CA3) (14) in formula (14), I.sub.max.sup.CA3 represents a maximum value of the total excitatory input signal received by the hippocampal CA3 place cells, and a value of k.sub.2 is 0.1; s1.4 further includes the following steps: construct a place cell plate model capable of encoding a given spatial region, the cell plate being a square with side length N.sub.x, and obtain the position coordinates of the robot in the given spatial region; wherein the position of the current robot in the coding space region of the current place cell plate is calculated by formula (15):
e.sub.object_middle=p.sub.graph_middle−p.sub.object_middle (17) wherein p.sub.graph_middle represents the pixel value at the center of the field of view, p.sub.object_middle represents the average position of the left and right boundaries of the object to be detected in the image, and the given value of the current rotation speed ω is obtained by the PID algorithm; then calculate the change amounts Δx.sub.ik, Δy.sub.ik and Δθ.sub.0.sup.ik of the cognitive nodes, as shown in formula (20); in formula (20), X.sub.env.sup.i, Y.sub.env.sup.i and X.sub.env.sup.k, Y.sub.env.sup.k represent the horizontal and vertical coordinates of the place field centers corresponding to the cognitive nodes e.sup.i and e.sup.k respectively, d.sub.ik represents the distance between the place field centers corresponding to the cognitive nodes e.sup.i and e.sup.k, and θ.sub.0.sup.i and θ.sub.0.sup.k respectively represent the head-direction angles at cognitive nodes e.sup.i and e.sup.k; after the change amounts are obtained, the corrected node parameters can be iteratively calculated step by step according to the change amounts, the relevant mathematical expressions being shown in formulas (21) and (22); in formulas (21) and (22), t and t+1 represent the time before and after each iterative operation, respectively, and η represents the correction rate of the cumulative error.
Description
DESCRIPTION OF DRAWINGS
PREFERRED EMBODIMENT
[0028] The present invention will be described in detail below in conjunction with the accompanying drawings and examples.
[0030] Specific steps are as follows:
1. Construction of Entorhinal-Hippocampal CA3 Neural Computing Model
[0031] Physiological studies have shown that speed and head-direction angle information are input to the hippocampal CA3 structure through the entorhinal-hippocampal information transmission pathway in rat brain, and form a representation of its own pose.
[0032] Based on this, the present invention proposes a method for constructing an entorhinal-hippocampal CA3 neural computing model, which obtains robot position information in a bionic manner.
Firstly, the mathematical expression of the firing rate of stripe cells in two-dimensional space is given as:
V.sub.stripe(t)=cos(2πf·∫v.sub.HDdt)+cos(2πf.sub.d·∫v.sub.HDdt) (2)
[0033] In formula (2), t represents the time at the current moment, f represents the oscillation frequency of the neuron cell body, and its value is randomly selected within the range of 0-256 Hz, f.sub.d represents the oscillation frequency of the neuron dendrites. ∫v.sub.HDdt represents the path integral along the preferred direction angle θ.sub.HD of the stripe cells, where v.sub.HD represents the component velocity of the rat at the preferred direction angle θ.sub.HD, and its mathematical expression is as follows:
v.sub.HD=v cos(θ−θ.sub.HD) (3)
[0034] In formula (3), v represents the current moving speed of the robot, and θ represents the current head-direction angle of the robot. Formula (2) expresses the interference of the waveforms corresponding to the two frequencies, which produces a new waveform in one-dimensional space, called a stripe wave. The envelope of this waveform beats at a relatively slow frequency, which is the oscillation frequency of the stripe wave. Let this frequency be f.sub.b; its mathematical expression is:
f.sub.b=f.sub.d−f (4)
[0035] The stripe wave oscillation frequency f.sub.b can also be expressed in terms of the component velocity, as shown in formula (5):
f.sub.b=v.sub.HD/λ.sub.b=B.sub.1v cos(θ−θ.sub.HD) (5)
[0036] In formula (5), λ.sub.b represents the wavelength of the stripe wave, and its value is randomly selected within the range of 0.05 m-100 m, and B.sub.1 represents the reciprocal of the stripe wave wavelength. Combining formula (3) and formula (5), the mathematical expression of the neuron dendritic oscillation frequency f.sub.d can be obtained as:
f.sub.d=f+B.sub.1v cos(θ−θ.sub.HD) (6)
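As an illustration, the oscillatory-interference relations of formulas (2), (3) and (6) can be sketched in a few lines of Python; the parameter values (f, B.sub.1, the time step and the constant-motion profile) are illustrative choices, not values prescribed by the invention:

```python
import numpy as np

def stripe_cell_rate(v, theta, theta_hd, f=8.0, B1=2.0, dt=0.01, T=2.0):
    """Sketch of formulas (2)-(6): a stripe cell as interference between a
    soma oscillation (frequency f) and a velocity-modulated dendrite
    oscillation (frequency f_d). B1 is the reciprocal of the stripe
    wavelength; v and theta may be scalars (constant motion) or arrays."""
    t = np.arange(0.0, T, dt)
    v = np.broadcast_to(v, t.shape).astype(float)
    theta = np.broadcast_to(theta, t.shape).astype(float)
    v_hd = v * np.cos(theta - theta_hd)           # formula (3): component velocity
    path = np.cumsum(v_hd) * dt                   # path integral along theta_HD
    f_d = f + B1 * v * np.cos(theta - theta_hd)   # formula (6): dendrite frequency
    # formula (2): the two oscillations interfere over the path integral
    rate = np.cos(2 * np.pi * f * path) + np.cos(2 * np.pi * f_d * path)
    return t, rate
```

With zero speed the path integral vanishes and both cosines stay at their peak, so the rate is constant at 2; with motion, the beat between the two oscillations produces the stripe wave described above.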
[0037] Physiological studies have shown that when the preferred direction angles of three stripe cells differ by 120°, the stripe waves they generate can form a regular hexagonal grid field throughout the entire two-dimensional plane via the oscillatory interference mechanism. The grid cell firing rate is therefore obtained by superimposing the three stripe cell firing rates:
g(t)=Σ.sub.θHD(cos(2πf·∫v.sub.HDdt)+cos(2π(f+Bv cos(θ−θ.sub.HD))·∫v.sub.HDdt)) (7)
[0038] In formula (7), the values of the three stripe cell preferred direction angles θ.sub.HD are θ.sub.g+0°, θ.sub.g+120°, θ.sub.g+240° respectively, where θ.sub.g represents the deviation angle of the stripe cells, and its value is randomly selected within 0°-360°. θ.sub.g also represents the orientation angle of the grid field. After the grid cell firing rate is obtained, it is used as the forward input signal of the dentate gyrus neurons, and the mathematical expression of the excitatory input signal transmitted by the grid cell group to the dentate gyrus neurons is:
[0039] In formula (8), i and j represent the numbers of dentate gyrus neurons and grid cells respectively, g.sub.j(t) represents the firing rate of the j-th grid cell, and n.sub.grid represents the number of grid cells.
W represents the excitatory input connection weight matrix, where W.sub.ij represents the connection weight from the j-th grid cell to the i-th dentate gyrus neuron, and the calculation formula of each connection weight is as follows:
[0040] In formula (9), s represents the synapse size, and its value is randomly selected in the range of (0-0.2) μm.sup.2. The proportion P(s) of synapses of each size s roughly obeys the following mathematical expression:
[0041] In formula (10), A=100.7, B=0.02, σ.sub.1=0.022, σ.sub.2=0.018, σ.sub.3=0.15. The excitatory input connection weight matrix W can be assigned by formula (9) and formula (10), so as to realize the excitatory transmission from grid cells to dentate gyrus neurons. The firing activity of dentate gyrus neurons within a given spatial region is subject to a winner-take-all (WTA) learning rule describing the competition that arises from gamma-frequency feedback inhibition. The mathematical expression of the firing rate of dentate gyrus neurons is:
F.sub.i.sup.dentate(t)=I.sub.i.sup.MEC(t)·H(I.sub.i.sup.MEC(t)−(1−k.sub.1)·I.sub.max.sup.MEC) (11)
[0042] In formula (11), k.sub.1 is 0.1, and its value determines which dentate gyrus neurons will be activated according to the WTA learning rule. I.sub.max.sup.MEC represents the maximum value of the grid cell forward input received by the dentate gyrus neurons. H(x) is a rectification function: when x>0, H(x)=1; when x≤0, H(x)=0. After obtaining the firing rate expression of the dentate gyrus neurons, the excitatory input signal I.sub.i.sup.dentate(t) from the dentate gyrus neurons to the hippocampal CA3 place cells can be calculated, as shown in formula (12); its calculation method is similar to formula (8).
[0043] In formula (12), i and j represent the serial numbers of hippocampal CA3 place cells and dentate gyrus neurons respectively, and n.sub.dentate represents the number of dentate gyrus neurons, which is set to 1000. F.sub.max.sup.dentate represents the maximum firing rate of the dentate gyrus neurons. Since F.sub.i.sup.dentate(t) is always greater than zero, dividing it by the maximum firing rate is similar to normalization. ω represents the excitatory input connection weight matrix, where ω.sub.ij represents the connection weight from the j-th dentate gyrus neuron to the i-th hippocampal CA3 place cell, with values in the range 0-1. The distribution function of the connection weight values is defined as a non-negative Gaussian distribution, and the mathematical expression is as follows:
[0044] In formula (13), A.sub.2=1.033, μ=24, σ=13. The excitatory input connection weight matrix ω can be assigned by formula (13), so as to realize the excitatory transmission from the dentate gyrus neurons to the hippocampal CA3 place cells. The hippocampal CA3 place cells receive forward input from the entorhinal cortex neurons and the dentate gyrus neurons at the same time, so the mathematical expression of the total excitatory input signal received by the hippocampal CA3 place cells is:
I.sub.i.sup.CA3(t)=I.sub.i.sup.MEC(t)+I.sub.av.sup.MEC(t)I.sub.i.sup.dentate(t) (14)
[0045] In formula (14), I.sub.i.sup.MEC(t) and I.sub.i.sup.dentate(t) are respectively the forward input signals of grid cells and dentate gyrus neurons mentioned above, and I.sub.av.sup.MEC(t) represents the average strength of grid cell forward input signals, and its mathematical expression is:
[0046] In formula (15), n.sub.CA3 represents the number of hippocampal CA3 place cells, which is set to 1600. Then the expression of the firing rate of the hippocampal CA3 place cells can be obtained; the mathematical expression is as follows, and the calculation method is similar to formula (11).
F.sub.i.sup.CA3(t)=I.sub.i.sup.CA3(t)·H(I.sub.i.sup.CA3(t)−(1−k.sub.2)·I.sub.max.sup.CA3) (16)
[0047] In formula (16), I.sub.max.sup.CA3 represents the maximum value of the total excitatory input signal received by the hippocampal CA3 place cells, and the value of k.sub.2 is 0.1. The information transfer mapping model from the entorhinal cortex to the CA3 region of the hippocampus can be established through formulas (2) to (16).
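The chain from grid cells through the dentate gyrus WTA stage to the CA3 place cells (formulas (8), (11), (12), (14) and (16)) can be sketched as follows; the population sizes, weight distributions and the direct entorhinal input here are illustrative placeholders rather than the parameterization of formulas (9), (10) and (13):

```python
import numpy as np

rng = np.random.default_rng(0)

def wta_rates(inputs, k):
    """Formulas (11)/(16): winner-take-all via feedback inhibition.
    Only neurons whose input exceeds (1-k) times the maximum stay active."""
    i_max = inputs.max()
    return inputs * (inputs > (1.0 - k) * i_max)

# toy forward pass: 50 grid cells -> 100 dentate neurons -> 80 CA3 cells
g = rng.random(50)                        # grid cell firing rates g_j(t)
W = rng.random((100, 50)) * 0.2           # grid->dentate weights (placeholder)
I_mec_dentate = W @ g                     # formula-(8)-style excitatory input
F_dentate = wta_rates(I_mec_dentate, k=0.1)           # formula (11)

omega = np.clip(rng.normal(0.5, 0.2, (80, 100)), 0, 1)  # dentate->CA3, non-negative
I_dentate = omega @ (F_dentate / F_dentate.max())       # normalized, formula-(12)-style
I_mec_ca3 = rng.random(80)                # direct entorhinal input (placeholder)
I_ca3 = I_mec_ca3 + I_mec_ca3.mean() * I_dentate        # formula (14)
F_ca3 = wta_rates(I_ca3, k=0.1)                         # formula (16)
```

The same rectification helper serves both WTA stages, mirroring the remark that formula (16) is computed like formula (11).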
2. Construction of Position Recognition Model
[0048] In order to give the model the ability of position cognition and to quantify the place cell firing rate in the actual physical space, a spatial position recognition model composed of hippocampal CA3 place cells is established. Firstly, all hippocampal CA3 place cells are arranged in sequence into a cell plate model capable of representing position, the shape of the cell plate being square. Since the number of hippocampal CA3 place cells is n.sub.CA3, the side length of the cell plate is N.sub.x={square root over (n.sub.CA3)}=40. The corresponding coding area is set as a square area with side length L, whose value is preferably in the range of 5 m-20 m. Therefore, the mathematical expression of the place field center coordinates of each place cell is as follows:
[0049] In formula (17), i and j respectively represent the column and row of the current place cell on the cell plate, and r.sub.ij represents the coordinates of the center of the place field of the place cell. Modeling the hippocampal CA3 place cells as a square cell plate enables forward inputs generated by the entorhinal cortex to be represented on the plate as packets of excitatory activity. There is also interaction among hippocampal CA3 place cells: in local connections, they excite and inhibit surrounding cells through synaptic branches, and eventually the neurons with the strongest excitation win the competition, forming a single-peak excitatory activity packet.
[0050] A two-dimensional Gaussian distribution is used to create the excitatory connection weight matrix ε.sub.m,n of hippocampal CA3 place cells, where the subscripts m and n represent the distances between the units' horizontal and vertical coordinates in the X and Y directions respectively, and both values are set to 15. The mathematical expression of the weight distribution of the excitatory connection weight matrix is:
[0051] In formula (18), k.sub.p represents the position distribution width constant, and its value is 7. The amount of change in hippocampal CA3 place cell activity at time t due to local excitatory connections is:
[0052] In formula (19), p.sub.i,j.sup.t represents the firing rate of the place cell in row i, column j of the cell plate at time t after interaction, and its initial value is the firing rate F.sub.i.sup.CA3(t) of the hippocampal CA3 place cells; the inhibitory output of hippocampal CA3 place cells takes effect only after the excitatory connections have acted, not simultaneously. The symmetry of the excitatory and inhibitory connection matrices guarantees proper neural network dynamics, ensuring that attractors in space are not excited indefinitely. The activity change of hippocampal CA3 place cells caused by the inhibitory connection weight at time t is:
[0053] In formula (20), ψ.sub.m,n is the inhibitory connection weight, which controls the global inhibition level, and its value is 0.00002. To ensure that the firing rate of every place cell is non-negative at all times and that the activities remain normalized, the firing rate of each place cell is compared with 0 and the results are normalized; the mathematical expression is as follows:
t and t+1 in formulas (21) and (22) represent the current moment and the next moment respectively. Through the modeling method of formula (16) to formula (22), the forward input from the entorhinal cortex can be represented on the cell plate in the form of excitatory activity packets. Then, by obtaining the position of the excitatory activity packet on the cell plate, the position of the current robot in the space area encoded by the cell plate can be obtained; the mathematical expression is as follows:
[0054] In formula (23), P.sub.x.sup.t and P.sub.y.sup.t represent the abscissa and ordinate of the excitatory activity packet on the place cell plate at time t, respectively. In order that the model is not limited to spatial cognition within the encoding area, border cells with a specific firing response at the area boundary are introduced. Border cell firing triggers a reset of stripe cell firing activity when a boundary of the encoded region is reached, enabling the rat to recognize its position within spatial regions of arbitrary size.
The specific implementation method is as follows: at the initial moment, the rat is located at the center of the square area encoded by the place cell plate; when the rat reaches any boundary of the given encoded area, the path integral ∫v.sub.HDdt of every stripe cell along its preferred direction angle θ.sub.HD is set to zero, so that after the reset the rat is again at the center of the area encoded by the place cell plate. In this way, every time the stripe cell firing reset is completed, the place cell plate can immediately generate a code for a new spatial region, thereby completing the robot's position cognition for a space of any size.
[0055] The initial position of the robot's movement is at the center of the square area encoded by the place cell plate. The physical coordinate system is defined with the initial movement position as the origin and the horizontal direction of the place cell plate as the positive direction of the X-axis; the physical coordinate systems mentioned below all refer to this coordinate system. Then the mathematical expression of the position coordinates (X.sub.env.sup.t, Y.sub.env.sup.t) of the robot in a space area of any size is as follows:
[0056] In formula (24), κ is the proportional coefficient for transforming the coordinates on the place cell plate into real position coordinates; its value is the ratio of the side length L of the square coding area to the side length N.sub.x of the place cell plate.
[0057] Q.sub.X and Q.sub.Y respectively represent the horizontal and vertical coordinates of the rat in the space area of any size at the time of the last place cell plate reset. The position of the rat in a space area of any size can be obtained through the above calculation, which provides accurate position information for the construction of the subsequent cognitive nodes.
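A minimal sketch of the position read-out around formulas (23) and (24): locate the excitatory activity packet on the plate, then scale by the ratio of coding-area side length to plate side length and offset by the last reset position Q. The arg-max packet localization and the exact offset convention are assumptions of this sketch:

```python
import numpy as np

def packet_center(F_plate):
    """Locate the excitatory activity packet as the arg-max of the
    place-cell plate firing-rate matrix (rows = Y, columns = X)."""
    j, i = np.unravel_index(np.argmax(F_plate), F_plate.shape)
    return (i, j)  # (column, row) = (P_x, P_y)

def decode_position(P, N_x=40, L=10.0, Q=(0.0, 0.0)):
    """Sketch of formulas (23)-(24): map plate coordinates P=(P_x, P_y)
    to physical coordinates. kappa = L / N_x; the plate center is taken
    to coincide with the last reset position Q (assumed convention)."""
    kappa = L / N_x
    Px, Py = P
    return (kappa * (Px - N_x / 2) + Q[0], kappa * (Py - N_x / 2) + Q[1])
```

With N.sub.x=40 and L=10 m, κ=0.25 m per plate cell, and a packet at the plate center decodes to the last reset position itself.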
3. Construction of Visual Pathway Calculation Model
[0058] The purpose of constructing the visual pathway calculation model is scenario cognition; that is, when the robot explores the environment, it first accurately identifies the attributes of all objects in the current field of view, simulating the function of the "what" pathway; then, for each identified object individually, it calculates the orientation angle and distance information relative to the current robot, simulating the function of the "where" pathway. The object detection algorithm in the present invention adopts the DPM algorithm, which has strong robustness. However, at this stage, most object position recognition algorithms estimate position directly by combining the depth map with the position of the recognized object in the RGB map, and this type of method has a large calculation error. To solve this problem, the present invention proposes an object position recognition algorithm: by rotating the robot, the object to be detected is placed at the center of the field of view, and the distance between the object and the robot is then obtained using the depth camera.
[0059] In the actual physical experiment, the RGB image resolution collected by the robot is set to 1920*1080, and the pixel value at the center of the field of view is p.sub.graph_middle=1920/2. The rotation of the robot is realized through the differential speed of the left and right wheels; that is, when the left and right wheels move in opposite directions at the same speed, the robot rotates in place, and the rotation speed is set to ω.
[0060] When the robot explores the environment, it faces a new scene every time it moves; define i as the scene number. Firstly, the number of objects n.sub.i.sup.object in the i-th scene is identified by the DPM algorithm, and the current head-direction angle is θ.sub.0.sup.i. The index of the currently detected object in the i-th scene is j, and the attribute of the j-th object to be detected is defined as λ.sub.ij. Then calculate the orientation angle information of each object in turn: compute the average of the left and right boundaries of the j-th object obtained by the DPM algorithm in the image to obtain the pixel position of the object's center in the horizontal direction, denoted p.sub.object_middle. In order to place the object to be detected at the center of the field of view, the rotation speed of the robot is regulated by the PID algorithm in closed-loop control.
[0061] The mathematical expression of the current pixel deviation e.sub.object_middle is:
e.sub.object_middle=p.sub.graph_middle−p.sub.object_middle (25)
Then the mathematical expression of the given value of the current rotation speed ω obtained by the PID algorithm is:
[0062] In formula (26), k.sub.P, k.sub.I, k.sub.D respectively represent the proportional, integral, and differential coefficients of the PID controller; the selection of their values is related to the actual physical environment and the hardware structure and configuration of the robot. When the object to be detected is placed at the center of the field of view, record the head-direction angle θ of the robot at this time; then the orientation angle of the j-th object in the i-th scene relative to the robot before rotation is θ.sub.ij=θ−θ.sub.0.sup.i. At the same time, the depth camera is used to obtain the distance d.sub.ij between the robot and the object. Through the above operations, the orientation angle and distance information of the j-th object relative to the robot at the current moment can be obtained. After the information of all objects in the current scene is obtained, the robot's head-direction angle is rotated back to θ.sub.0.sup.i, and exploration and recognition in the environment continue. The acquisition of scenario information lays the foundation for the construction of subsequent cognitive maps.
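The closed-loop centering of formulas (25) and (26) can be sketched as a PID step plus the bearing computation θ.sub.ij=θ−θ.sub.0.sup.i; the gains and image width below are illustrative, since the patent notes that their values depend on the physical environment and robot hardware:

```python
class PidCenteringController:
    """Sketch of formulas (25)-(26): rotate the robot until the detected
    object's horizontal pixel center aligns with the image center.
    Gains kp/ki/kd and the 1920-pixel image width are illustrative."""
    def __init__(self, kp=0.002, ki=0.0, kd=0.0005, center_px=1920 / 2):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.center_px = center_px
        self.integral = 0.0
        self.prev_err = None

    def step(self, object_middle_px, dt=0.05):
        err = self.center_px - object_middle_px          # formula (25)
        self.integral += err * dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / dt
        self.prev_err = err
        # formula (26): rotation speed command from P, I and D terms
        return self.kp * err + self.ki * self.integral + self.kd * deriv

def object_bearing(theta_now, theta_0):
    """Orientation of the centered object relative to the pre-rotation
    heading: theta_ij = theta - theta_0^i."""
    return theta_now - theta_0
```

An object already at the image center yields zero rotation command; an object right of center yields a command of the opposite sign, turning the robot toward it.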
4. Construction of Cognitive Nodes for Episodic Memory
[0063] Place cells in the hippocampal CA1 area are neurons driven by angle, speed and visual information, and are the basic units for constructing environmental cognitive maps. Therefore, a single place cell in hippocampal CA1 can be regarded as a cognitive node. A cognitive map consists of several cognitive nodes with topological relationships. Cognitive nodes correspond to scenario information, and a new cognitive node is established every time the robot moves.
[0064] The i-th cognitive node can be expressed by e.sup.i, which stores the current scene information and pose information; its mathematical expression is shown in formula (1), wherein θ.sub.0.sup.i, (X.sub.env.sup.i, Y.sub.env.sup.i) and (n.sub.i.sup.object, {λ.sub.ij}, {θ.sub.ij}, {d.sub.ij}) represent the head-direction angle, position and scene information at the cognitive node, respectively. The head-direction angle and position are obtained from the entorhinal-hippocampal CA3 neural computing model; the scene information is obtained from the visual pathway computing model, and the position coordinates also represent the central coordinates of the firing field of the hippocampal CA1 place cells. Each cognitive node e.sup.i also has a topological connection with its preceding and succeeding cognitive nodes (that is, there is a topological connection relationship between adjacent cognitive nodes). When the current scenario information output by the visual pathway matches the scenario information stored in an already generated cognitive node, a connection between the current cognitive node and the matching cognitive node is established.
[0065] The steps for judging whether the scenario information of two cognitive nodes matches are as follows: given two cognitive nodes e.sup.a and e.sup.b, first judge whether the number of objects in the two scenarios is the same and whether the attributes of the corresponding objects are consistent; if either condition is not satisfied, the two scenarios are judged not to match; otherwise, whether the orientation angle and distance information of each object in the scenario is consistent is measured, and the mathematical expression of the measurement function S(e.sup.a, e.sup.b) is:
[0066] In formula (27), μ.sub.θ and μ.sub.d represent the weights of the direction information and the distance information respectively, μ.sub.θ+μ.sub.d=1, and their values should be selected in combination with the actual physical scene and the units of angle and distance. Generally, when the angle is in radians and the distance is in meters, the value of μ.sub.θ is between 0.1-0.3, and the value of μ.sub.d is between 0.7-0.9. Set the matching threshold as S.sub.th, selecting an appropriate value according to the actual situation. When the value of the metric function is less than the matching threshold, the two scenes are judged to match, and the topological relationship between cognitive nodes e.sup.a and e.sup.b is established; and vice versa.
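The matching test around formula (27) can be sketched as below. The exact form of S(e.sup.a, e.sup.b) is not reproduced here; a weighted mean of per-object angle and distance differences is assumed, keeping only the stated constraints (the attribute pre-check, weights summing to 1, and the threshold S.sub.th):

```python
def scenes_match(node_a, node_b, mu_theta=0.2, mu_d=0.8, s_th=0.5):
    """Sketch of the scenario-matching test around formula (27).
    Each node is a tuple (attrs, angles, dists) for its objects.
    The metric form (weighted mean absolute difference) is an assumed
    stand-in for the patent's measurement function S."""
    attrs_a, ang_a, d_a = node_a
    attrs_b, ang_b, d_b = node_b
    # pre-check: same object count and consistent attributes
    if len(attrs_a) != len(attrs_b) or attrs_a != attrs_b:
        return False
    n = len(attrs_a)
    s = sum(mu_theta * abs(ta - tb) + mu_d * abs(da - db)
            for ta, tb, da, db in zip(ang_a, ang_b, d_a, d_b)) / n
    return s < s_th  # match when the metric falls below the threshold
```

Identical scenes give S=0 and match; a large distance discrepancy on any object pushes S over the threshold.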
[0067] In the process of continuous accumulation of cognitive nodes, their relative errors also accumulate, resulting in a mismatch between the robot's estimated position and its actual position. Therefore, it is necessary to use the topology to adjust the positions of the cognitive nodes. Suppose the current cognitive node is e.sup.i and the cognitive node associated with it is e.sup.k, meaning there is a topological relationship between nodes e.sup.i and e.sup.k. Then the pose correction of cognitive nodes e.sup.i and e.sup.k is performed as follows.
[0068] Firstly, calculate the change amounts Δx.sub.ik, Δy.sub.ik and Δθ.sub.0.sup.ik of the cognitive nodes, as shown in formula (28).
[0069] In formula (28), X.sub.env.sup.i, Y.sub.env.sup.i and X.sub.env.sup.k, Y.sub.env.sup.k represent the horizontal and vertical coordinates of the place field centers corresponding to the cognitive nodes e.sup.i and e.sup.k respectively, d.sub.ik represents the distance between the place field centers corresponding to cognitive nodes e.sup.i and e.sup.k, and θ.sub.0.sup.i and θ.sub.0.sup.k respectively represent the head-direction angles at cognitive nodes e.sup.i and e.sup.k. After the change amounts are obtained, the corrected node parameters can be iteratively calculated step by step according to the change amounts; the relevant mathematical expressions are shown in formulas (29) and (30).
[0070] In formulas (29) and (30), t and t+1 represent the time before and after each iterative operation, respectively, and η represents the correction rate of the cumulative error, which is 0.5. In the actual cognitive map construction process, as the number of iterations increases, the map update amount gradually decreases; at that point, iteratively updating the map has little effect while still consuming processor time, which degrades the real-time performance of the algorithm. Based on this, the invention proposes a method for judging the convergence of cognitive maps; the specific steps are as follows. First, define the map convergence at time t as Δd(t); its mathematical expression is shown in formula (31).
[0071] In formula (31), n.sub.sum represents the total number of current cognitive nodes, and n.sub.i represents the number of nodes associated with cognitive node i. Set the scale factor of the convergence criterion as ξ, with its value selected according to the actual situation, usually within the range of 0.0001-0.005. When Δd(t)−Δd(t+1)<ξ·Δd(t+1), it is judged that there is no need to continue the map update iteration; otherwise, the update iteration of cognitive map construction continues. After obtaining the topological cognitive map of the environment and the scenario information, they can be fused to obtain the episodic cognitive map of the environment. The specific method is: according to the robot's position in the physical coordinate system obtained above and the orientation angle and distance of each object relative to the robot, the positions of all objects in the physical coordinate system can be calculated; each object is then inserted, according to its attribute and position information, into the physical coordinate system containing the topological map, thereby obtaining the episodic cognitive map expressing the environment.
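A hedged sketch of the correction-and-convergence loop around formulas (28)-(31): linked nodes are nudged toward their stored inter-node distances at rate η, and iteration stops when the update amount barely changes. The distance-only update is a simplification; formulas (29) and (30) also correct head-direction angles:

```python
import math

def relax_positions(nodes, edges, eta=0.5, xi=0.001, max_iter=100):
    """Hedged sketch of the map-correction loop (formulas (28)-(31)).
    nodes: {i: (x, y)}; edges: {(i, k): d_ik} with d_ik the stored
    inter-node distance. Each pass nudges both endpoints of every edge
    so their current separation approaches the stored distance."""
    prev_delta = None
    for _ in range(max_iter):
        delta = 0.0  # total update amount this pass, in the spirit of formula (31)
        for (i, k), d_ik in edges.items():
            ax, ay = nodes[i]
            bx, by = nodes[k]
            cur = math.hypot(bx - ax, by - ay) or 1e-12
            err = (cur - d_ik) / cur
            dx, dy = (bx - ax) * err, (by - ay) * err
            # move both endpoints toward consistency at rate eta/2
            # (simplified stand-in for the updates of formulas (29)-(30))
            nodes[i] = (ax + eta / 2 * dx, ay + eta / 2 * dy)
            nodes[k] = (bx - eta / 2 * dx, by - eta / 2 * dy)
            delta += abs(err)
        # convergence test in the form of formula (31)
        if prev_delta is not None and prev_delta - delta < xi * max(delta, 1e-12):
            break
        prev_delta = delta
    return nodes
```

For two nodes stored 1 m apart but placed 2 m apart, the loop pulls them symmetrically together until their separation matches the stored distance.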