Information processing device, information processing method, and storage medium
11686743 · 2023-06-27
Assignee
Inventors
CPC classification
A61B5/1107 (HUMAN NECESSITIES)
A61B5/1121 (HUMAN NECESSITIES)
A61B2562/0219 (HUMAN NECESSITIES)
A61B2562/04 (HUMAN NECESSITIES)
International classification
Abstract
Provided is an information processing device including: a storage device having a program stored therein; and a hardware processor, wherein the hardware processor executes the program stored in the storage device to: acquire first data for a dimension of position relating to an object represented in a generalized coordinate system; acquire at least second data for a dimension of acceleration from a plurality of inertial sensors attached to the object; and convert the second data into third data for a dimension of acceleration represented in the generalized coordinate system on the basis of the first data.
Claims
1. An information processing device comprising: a storage device having a program stored therein; and a hardware processor, wherein the hardware processor executes the program stored in the storage device to: acquire first data having a dimension of position of an object represented in a generalized coordinate system, wherein the object is a human body and defined as including a plurality of segments and a joint that links two or more of the segments, the plurality of segments include a left foot and a right foot in the human body, and the joint is a knee in the human body, and the generalized coordinate system includes at least a rotation angle around one or more axes for each joint as a variable; acquire at least second data having a dimension of acceleration from a plurality of inertial sensors attached to the plurality of segments and the joint of the object; derive a transformation rule on the basis of the first data; convert the second data into third data having a dimension of acceleration represented in the generalized coordinate system by applying the derived transformation rule to the second data; and estimate a torque which is generated in the joint, a first ground reaction force which is applied to the left foot, and a second ground reaction force which is applied to the right foot on the basis of the first data, fourth data having a dimension of speed relating to the object obtained by differentiating the first data, and the third data.
2. The information processing device according to claim 1, wherein the hardware processor further executes the program to estimate the torque which is generated in the joint, the first ground reaction force, and the second ground reaction force by performing one or both of a forward dynamics calculation and an inverse dynamics calculation.
3. An information processing method comprising causing a computer to: acquire first data having a dimension of position of an object represented in a generalized coordinate system, wherein the object is a human body and defined as including a plurality of segments and a joint that links two or more of the segments, the plurality of segments include a left foot and a right foot in the human body, and the joint is a knee in the human body, and the generalized coordinate system includes at least a rotation angle around one or more axes for each joint as a variable; acquire at least second data having a dimension of acceleration from a plurality of inertial sensors attached to the plurality of segments and the joint of the object; derive a transformation rule on the basis of the first data; convert the second data into third data having a dimension of acceleration represented in the generalized coordinate system by applying the derived transformation rule to the second data; and estimate a torque which is generated in the joint, a first ground reaction force which is applied to the left foot, and a second ground reaction force which is applied to the right foot on the basis of the first data, fourth data having a dimension of speed relating to the object obtained by differentiating the first data, and the third data.
4. A computer readable non-transitory storage medium having a program stored therein, the program causing a computer to: acquire first data having a dimension of position of an object represented in a generalized coordinate system, wherein the object is a human body and defined as including a plurality of segments and a joint that links two or more of the segments, the plurality of segments include a left foot and a right foot in the human body, and the joint is a knee in the human body, and the generalized coordinate system includes at least a rotation angle around one or more axes for each joint as a variable; acquire at least second data having a dimension of acceleration from a plurality of inertial sensors attached to the plurality of segments and the joint of the object; derive a transformation rule on the basis of the first data; convert the second data into third data having a dimension of acceleration represented in the generalized coordinate system by applying the derived transformation rule to the second data; and estimate a torque which is generated in the joint, a first ground reaction force which is applied to the left foot, and a second ground reaction force which is applied to the right foot on the basis of the first data, fourth data having a dimension of speed relating to the object obtained by differentiating the first data, and the third data.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DESCRIPTION OF EMBODIMENTS
(13) Hereinafter, an embodiment of an information processing device, an information processing method, and a storage medium of the present invention will be described with reference to the accompanying drawings.
(14) In the present specification, a vector q is written in square brackets as [q], and a matrix T is written in angle brackets as <T>. The first-order time derivative of the vector q is written as [q]′, and the second-order time derivative as [q]″ (except in Expression (2)).
(15) An information processing device is a device that acquires at least data for a dimension of acceleration from a plurality of inertial sensors (IMU sensors) attached to an object such as a human body, and acquires data for a dimension of position relating to the object represented in a generalized coordinate system. On the basis of the position data, the device converts the acceleration data into data for a dimension of acceleration represented in the generalized coordinate system, or estimates at least one of an external force acting on the object and a torque generated in a joint of the object on the basis of the conversion result. The data for a dimension of position includes both translational displacement and a rotation angle.
(16) An object is not limited to a human body insofar as it includes segments (things that may be considered to be rigid bodies such as an arm, a hand, a leg, a foot, and the like in analytical mechanics, in other words, links) and a joint that links two or more segments. In the following description, a human body that is an object is referred to as a “target,” and a plurality of inertial sensors are assumed to be attached to parts on a human body.
First Embodiment
(19) The posture estimation filter processing unit 10 generates data for a dimension of position relating to the target TGT (first data) represented in the generalized coordinate system on the basis of outputs of the IMU sensors JS-k. The posture estimation filter processing unit 10 is an example of a configuration for acquiring the first data. The information processing device 1 is another example of a configuration for acquiring the first data, and may include a unit that acquires data for a dimension of position generated by optical motion capture. The posture estimation filter processing unit 10 generates data for a dimension of position relating to the target TGT represented in the generalized coordinate system using, for example, a means referred to as an articulated human filter (AHF).
(20) Here, the generalized coordinate system will be described. The generalized coordinate system is a coordinate system in which, when the target TGT is modeled, the posture of the target TGT can be represented by variables corresponding to the degrees of freedom of the model.
(21) The posture of the target TGT represented in the generalized coordinate system (hereinafter referred to as a "generalized position") [q.sub.m] is a collection of variables having a dimension of position, and is represented by, for example, Expression (1). In the expression, [q.sub.0] is a vector having a total of seven elements: the X coordinate, Y coordinate, and Z coordinate of the base segment SG(B), and the four elements of a quaternion indicating its posture. In addition, [q.sub.i] is a vector of a rotation angle having the number of dimensions according to the degree of freedom of each joint JT(i) (i=1 to N). For example, a joint equivalent to the knee is defined as having two degrees of freedom (the bending direction of the knee and the torsion direction of the shin), and is thus represented by a two-dimensional vector. A joint equivalent to the hip joint is defined as having three degrees of freedom, and is thus represented by a three-dimensional vector.
[q.sub.m]={[q.sub.0].sup.T,[q.sub.1].sup.T, . . . [q.sub.i].sup.T, . . . [q.sub.N].sup.T} (1)
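As an illustrative sketch (not part of the claimed device), the composition of Expression (1) can be reproduced in a few lines of NumPy; the function name and the segment layout below are hypothetical.

```python
import numpy as np

def generalized_position(base_xyz, base_quat, joint_angles):
    """Stack the base pose and per-joint rotation angles into one
    generalized-position vector, as in Expression (1).

    base_xyz     : (3,) X, Y, Z of the base segment SG(B)
    base_quat    : (4,) unit quaternion giving the base posture
    joint_angles : list of per-joint angle vectors, whose lengths
                   match each joint's degrees of freedom
    """
    parts = [np.asarray(base_xyz, float), np.asarray(base_quat, float)]
    parts += [np.atleast_1d(np.asarray(a, float)) for a in joint_angles]
    return np.concatenate(parts)

# Knee: 2 DoF (bend + shin torsion); hip: 3 DoF.
q_m = generalized_position([0.0, 0.0, 0.9],
                           [1.0, 0.0, 0.0, 0.0],
                           [[0.3, 0.05], [0.1, -0.2, 0.0]])
print(q_m.shape)  # 7 base elements + 2 + 3 joint angles -> (12,)
```

The dimensionality of [q.sub.m] thus grows with the number of joints and their degrees of freedom.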
(22) Processing performed by the posture estimation filter processing unit 10 will be described in more detail. The processing performed by the posture estimation filter processing unit 10 is not specifically restricted, and any of the methods exemplified below may be adopted. As a simple example, the posture estimation filter processing unit 10 may calculate the posture of the segment SG (a yaw angle, a roll angle, and a pitch angle) by integrating the angular velocity measured by an IMU sensor JS-i, and may correct the measurement error of the angular velocity using the gravitational acceleration.
(23) The posture estimation filter processing unit 10 may perform posture estimation using a Madgwick filter. A Madgwick filter represents a posture as a quaternion and performs posture estimation quickly and accurately by correcting the angular velocity using a gradient descent method. In the Madgwick filter, the posture estimation is performed by solving an optimization problem represented by Expression (2).
(24) min f((.sup.S.sub.Eq hat),(.sup.Ed hat),(.sup.Ss hat)), where f((.sup.S.sub.Eq hat),(.sup.Ed hat),(.sup.Ss hat))=(.sup.S.sub.Eq hat)*⊗(.sup.Ed hat)⊗(.sup.S.sub.Eq hat)−(.sup.Ss hat) (2)
(25) The argument (.sup.S.sub.Eq hat) of the function f is an estimated posture of the IMU sensor JS-i in a sensor coordinate system, the argument (.sup.Ed hat) is a reference direction of gravity, geomagnetism, or the like in a world coordinate system, and the argument (.sup.Ss hat) is a measurement value of gravity, geomagnetism, or the like in the sensor coordinate system. The processing performed by the posture estimation filter processing unit 10 in a case where the Madgwick filter is used is represented in the drawings.
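A minimal sketch of the objective of Expression (2), assuming the standard Madgwick formulation in which f is the difference between the reference direction rotated into the sensor frame and its measured value; all names below are illustrative.

```python
import numpy as np

def quat_mul(p, q):
    """Hamilton product of two quaternions [w, x, y, z]."""
    w1, x1, y1, z1 = p
    w2, x2, y2, z2 = q
    return np.array([w1*w2 - x1*x2 - y1*y2 - z1*z2,
                     w1*x2 + x1*w2 + y1*z2 - z1*y2,
                     w1*y2 - x1*z2 + y1*w2 + z1*x2,
                     w1*z2 + x1*y2 - y1*x2 + z1*w2])

def objective(q, d, s):
    """f = q* (x) d (x) q - s: rotate the world-frame reference
    direction d into the sensor frame and compare with measurement s
    (the standard Madgwick objective, assumed here)."""
    q_conj = q * np.array([1.0, -1.0, -1.0, -1.0])
    d4 = np.concatenate(([0.0], d))          # pure quaternion for d
    rotated = quat_mul(quat_mul(q_conj, d4), q)
    return rotated[1:] - s

# Identity posture with gravity measured straight along the reference:
# the objective vanishes, which is the minimum the filter descends to.
f = objective(np.array([1.0, 0.0, 0.0, 0.0]),
              np.array([0.0, 0.0, 1.0]),
              np.array([0.0, 0.0, 1.0]))
print(np.allclose(f, 0.0))  # True
```

A full filter would take a gradient-descent step on this objective at each sample and fuse the result with the integrated gyroscope output.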
(26) The posture estimation filter processing unit 10 may set a reference plane in the case of an object of which the rotation angle of the joint JT is restricted as in a human body, and correct an angular velocity on the basis of an angle between the normal line of the reference plane and the direction of each segment SG.
(27) A generalized position [q.sub.m] which is obtained by the various methods described above is output to the speed and acceleration calculation unit 20, the acceleration conversion unit 30, and the external force and joint torque estimation unit 50.
(28) The speed and acceleration calculation unit 20 calculates, for example, a vector obtained by first-order differentiating the generalized position [q.sub.m] (an example of fourth data; hereinafter referred to as a "generalized speed") [q.sub.m]′ and a vector obtained by second-order differentiating the generalized position [q.sub.m] (hereinafter referred to as a "generalized acceleration") [q.sub.m]″, using a difference method (dividing the difference between two values at different times by the elapsed time) with respect to the generalized position [q.sub.m] which is input in a time-series manner.
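The difference method described above can be sketched as follows; central differences are used here as one possible variant, and the function name is hypothetical.

```python
import numpy as np

def central_diff(q_series, dt):
    """Central differences over a time series of generalized positions
    (rows are time steps), giving generalized speed and acceleration."""
    q = np.asarray(q_series, float)
    v = (q[2:] - q[:-2]) / (2.0 * dt)             # first-order difference
    a = (q[2:] - 2.0 * q[1:-1] + q[:-2]) / dt**2  # second-order difference
    return v, a

# Toy trajectory: constant acceleration in column 0, constant speed in column 1.
t = np.arange(0.0, 1.0, 0.01)
q = np.stack([0.5 * 9.8 * t**2, t], axis=1)
v, a = central_diff(q, 0.01)
print(np.allclose(a[:, 0], 9.8), np.allclose(v[:, 1], 1.0))  # True True
```

On real, noisy position estimates such differencing amplifies high-frequency noise, which is precisely why the embodiment replaces the differenced acceleration with one converted from the IMU measurements.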
(29) Data of acceleration (an example of second data) is input to the acceleration conversion unit 30 from each of the IMU sensors JS-k. Since one IMU sensor JS-k detects acceleration on three axes, the data of acceleration is a three-dimensional vector [α.sub.k]. The generalized position [q.sub.m], the generalized speed [q.sub.m]′, and the generalized acceleration [q.sub.m]″ are also input to the acceleration conversion unit 30.
(31) The transformation rule derivation unit 32 derives a transformation matrix <T> and a transformation vector [b] from the result of processing performed by the posture estimation filter processing unit 10. The transformation matrix <T> and the transformation vector [b] are an example of a “transformation rule” which is applied to an acceleration vector.
(32) The coordinate conversion unit 34 applies the transformation matrix <T> and the transformation vector [b] to an acceleration vector [α], to thereby derive data for a dimension of acceleration (an example of third data; hereinafter referred to as a "modified generalized acceleration") [q.sub.mod]″ represented in the generalized coordinate system. Expression (3) is an example of the definitional equation of the acceleration vector [α]; herein, [α.sub.k] is a vector having the accelerations in three directions detected by the IMU sensor JS-k as elements. Expression (4) represents a method of deriving the modified generalized acceleration [q.sub.mod]″. The modified generalized acceleration [q.sub.mod]″ is information in which an acceleration is described for each element which is a translational position, and an angular acceleration is described for each element which is a rotation angle.
[α]={[α.sub.1].sup.T,[α.sub.2].sup.T, . . . ,[α.sub.M].sup.T} (3)
[q.sub.mod]″=<T>·[α]+[b] (4)
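A minimal sketch of Expression (4): the per-sensor accelerations are stacked into [α] as in Expression (3) and mapped by <T> and [b]. The matrices below are arbitrary toy values, not values the device would actually derive.

```python
import numpy as np

def to_generalized_acceleration(T, b, sensor_accels):
    """Apply the transformation rule of Expression (4): stack each
    sensor's 3-axis acceleration into [alpha] (Expression (3)) and map
    it into the generalized coordinate system."""
    alpha = np.concatenate([np.asarray(a, float) for a in sensor_accels])
    return T @ alpha + b

# Toy two-sensor example (K = 6 acceleration components, L = 4 outputs).
T = np.zeros((4, 6))
T[0, 0] = 1.0   # first output reads sensor 1's x acceleration
T[3, 5] = 0.5   # last output scales sensor 2's z acceleration
b = np.array([0.0, 1.0, 0.0, 0.0])

q_mod_dd = to_generalized_acceleration(T, b, [[2.0, 0.0, 0.0],
                                              [0.0, 0.0, 4.0]])
print(q_mod_dd)  # [2. 1. 0. 2.]
```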
First Example
(33) Hereinafter, a method of deriving the transformation matrix <T> and the transformation vector [b] will be described. In a case where the number of dimensions of the modified generalized acceleration [q.sub.mod]″ is set to L, and the number of dimensions of the acceleration vector [α] is set to K, the transformation rule derivation unit 32 derives an L×K transformation matrix <T> and an L-dimensional transformation vector [b] as shown in Expression (5). In the expression, <T.sub.0> is a transformation matrix corresponding to rotation and translational acceleration of the base segment, and is a 6×K matrix. A matrix <T.sub.i> (i=1 to N) corresponds to the angular acceleration of a joint JT-i. In a case where the degree of freedom of the joint JT-i (the number of dimensions of the corresponding vector) is set to P.sub.i, <T.sub.i> is a P.sub.i×K matrix. A vector [b.sub.0] is a six-dimensional vector, and a vector [b.sub.i] (i=1 to N) is a P.sub.i-dimensional vector. [q.sub.0] consists of seven elements whereas <T.sub.0> is a 6×K matrix and [b.sub.0] is a six-dimensional vector because a posture (rotation) is represented by a four-dimensional quaternion in the dimension of position, but three-dimensionally in the dimensions of speed and acceleration.
(34) <T>={<T.sub.0>.sup.T,<T.sub.1>.sup.T, . . . ,<T.sub.N>.sup.T}.sup.T,[b]={[b.sub.0].sup.T,[b.sub.1].sup.T, . . . ,[b.sub.N].sup.T}.sup.T (5)
(35) In a case where the generalized position [q.sub.m] is given, the sensor coordinate system speed [v] of the IMU sensor JS-k can be represented by Expression (6) using a Jacobian matrix <J.sub.i>. In this expression, the generalized speed is mapped to the speed of the IMU sensor JS-k. Here, [q.sub.i] is a vector (a scalar in the case of one degree of freedom) indicating an element for each joint included in the generalized position [q.sub.m].
[v]=<J.sub.i>·[q.sub.i]′ (6)
(36) Differentiating Expression (6) with respect to time and rearranging for [q.sub.i]″ gives Expressions (7) and (8). Here, the matrix <J.sub.i#> is a pseudo inverse matrix of the matrix <J.sub.i>.
[α]=<J.sub.i>′·[q.sub.i]′+<J.sub.i>·[q.sub.i]″ (7)
[q.sub.i]″=<J.sub.i#>·([α]−<J.sub.i>′·[q.sub.i]′) (8)
(37) From the above relation, the transformation rule derivation unit 32 derives <T.sub.i> and [b.sub.i] on the basis of Expressions (9) and (10).
<T.sub.i>=<J.sub.i#> (9)
[b.sub.i]=−<J.sub.i#>·<J.sub.i>′·[q.sub.i]′ (10)
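The first example can be sketched with NumPy's pseudo-inverse. Below, Expression (7) is used to synthesize a noiseless [α], and the rule of Expressions (9) and (10) (with [b.sub.i] taken as −<J.sub.i#>·<J.sub.i>′·[q.sub.i]′, consistent with Expression (8)) recovers the joint acceleration exactly. The Jacobian values are hypothetical.

```python
import numpy as np

def transformation_rule(J, J_dot, q_dot):
    """Expressions (9) and (10): T_i = pinv(J_i),
    b_i = -pinv(J_i) @ J_i' @ q_i'."""
    J_pinv = np.linalg.pinv(J)
    return J_pinv, -J_pinv @ J_dot @ q_dot

# Hypothetical 3x2 Jacobian for a 2-DoF joint seen by one 3-axis sensor.
J = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
J_dot = 0.1 * np.ones((3, 2))
q_dot = np.array([0.2, 0.1])
T_i, b_i = transformation_rule(J, J_dot, q_dot)

# Consistency check: synthesize alpha with Expression (7), then apply
# the rule; the true joint acceleration is recovered (Expression (8)).
q_dd_true = np.array([0.5, -0.3])
alpha = J_dot @ q_dot + J @ q_dd_true   # Expression (7)
q_dd = T_i @ alpha + b_i                # Expressions (8)-(10)
print(np.allclose(q_dd, q_dd_true))  # True
```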
Second Example
(38) The transformation rule derivation unit 32 may derive the transformation matrix <T> and the transformation vector [b] using the simpler calculation described below. In this method, <T.sub.0> and [b.sub.0] are represented by Expressions (11) and (12). In the expressions, <R.sub.0> is a 3×3 rotation matrix that converts the acceleration measured by the IMU sensor in the sensor coordinate system into the absolute coordinate system. In addition, <0> is a matrix in which all elements are zero. In addition, ω.sub.x′, ω.sub.y′, and ω.sub.z′ are angular accelerations around the X-axis, Y-axis, and Z-axis of the base segment, respectively, calculated using a difference method.
(39)
(40) In this case, the transformation rule derivation unit 32 may derive <T.sub.i> and [b.sub.i] on the basis of Expressions (13) and (14). As a result, for i=1 to N, the i-th element of the modified generalized acceleration [q.sub.mod]″ is taken directly from the i-th element of the generalized acceleration [q.sub.m]″.
<T.sub.i>=<0> (13)
[b.sub.i]=i-th element of [q.sub.m]″ (14)
(41) Using the above-described method, the acceleration conversion unit 30 generates data for a dimension of acceleration in the generalized coordinate system on the basis of outputs of the IMU sensors JS-k that measure an acceleration in the sensor coordinate system. Therefore, it is possible to obtain higher-accuracy data for a dimension of acceleration than in a case where the data for a dimension of acceleration is generated, for example, by second-order differentiating the generalized position [q.sub.m] using a difference method.
(42) The external force and joint torque estimation unit 50 estimates an external force and a joint torque on the basis of the generalized position [q.sub.m], the generalized speed [q.sub.m]′, and the modified generalized acceleration [q.sub.mod]″ which are generated as described above.
(43) The external force and joint torque estimation unit 50 estimates an external force and a joint torque by solving, for example, an objective function represented by Expression (15) and an optimization problem caused by a restriction represented by Expression (16).
(44)
(45) In Expression (15), [q]″ is, for example, a calculated acceleration which is obtained in forward dynamics calculation shown in Expression (17). For the purpose of comparison with the calculated acceleration [q]″, the modified generalized acceleration [q.sub.mod]″ which is output by the acceleration conversion unit 30 is referred to as a measured acceleration [q.sub.m]″. Here, μ is a weight of a regularization term, [τ] is a joint torque, and [f.sub.c] is an external force. The joint torque [τ] is a vector in which a torque generated for each joint JT is used as an element.
[q]″=FD([q.sub.m],[q.sub.m]′,[τ],[f.sub.c]) (17)
(48) In the restrictions, {[n.sub.Z].sup.T·[f.sub.c,j]>0} means that the vertical component (Z direction component) of the external force [f.sub.c,j] is positive. Since an upward reaction force is defined as positive, this restriction means that the feet are not pulled downward by the floor or the ground.
(49) In the restrictions, {μ·[n.sub.Z].sup.T·[f.sub.c,j]−∥[f.sub.c,j]−([n.sub.Z].sup.T·[f.sub.c,j])·[n.sub.Z]∥>0} means that the component of the external force [f.sub.c,j] tangential to the floor is smaller than μ times its vertical component (a friction cone condition), and will be described with reference to the drawings.
(50) A portion of the restrictions may be omitted. For example, in a case where the feet are assumed to be able to slip on the floor or the ground, the latter restriction may be omitted.
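The two restrictions can be checked numerically as sketched below; here μ plays the role of a friction coefficient, and its value 0.6 is a hypothetical choice for illustration.

```python
import numpy as np

def satisfies_restrictions(f_c, n_z=np.array([0.0, 0.0, 1.0]), mu=0.6):
    """Check the two restrictions on one contact force f_c,j:
    (a) positive vertical component: the foot pushes, never pulls; and
    (b) the friction-cone condition
        mu * (n_z . f) - || f - (n_z . f) * n_z || > 0 (no slipping)."""
    fz = n_z @ f_c
    tangential = f_c - fz * n_z
    return bool(fz > 0 and mu * fz - np.linalg.norm(tangential) > 0)

print(satisfies_restrictions(np.array([10.0, 0.0, 100.0])))  # True: inside the cone
print(satisfies_restrictions(np.array([80.0, 0.0, 100.0])))  # False: would slip
print(satisfies_restrictions(np.array([0.0, 0.0, -5.0])))    # False: pulling down
```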
(51) Hereinafter, functions which are executed by the external force and joint torque estimation unit 50 under the definitions stated above will be described. The external force and joint torque estimation unit 50 may directly perform the forward dynamics calculation shown in Expression (17), but can process more rapidly by applying inverse dynamics calculation to some portions, as described below. This is because, once the external force [f.sub.c] is determined, the joint torque [τ] can be calculated by inverse dynamics using the measured joint acceleration.
(53) The QP formalization unit 52 converts the forward dynamics calculation shown in Expression (17) into an objective function of a quadratic programming (QP) form shown in Expression (18). In the expression, the matrix <Q> is a square matrix having the same degree as the external force [f.sub.c], and [c] is a vector having the same degree as the external force [f.sub.c].
(54) f([f.sub.c])=(1/2)·[f.sub.c].sup.T·<Q>·[f.sub.c]+[c].sup.T·[f.sub.c] (18)
(55) The QP solver unit 54 calculates the external force [f.sub.c] by solving the objective function converted into the QP form by the QP formalization unit 52, using a sequential quadratic programming method or an interior point method.
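For positive-definite <Q>, and with the restrictions of Expression (16) omitted for brevity, the QP of Expression (18) has a closed-form minimizer obtained by setting its gradient to zero; a full solver would additionally enforce the restrictions, for example with SQP or an interior point method. The values of <Q> and [c] below are arbitrary toy data.

```python
import numpy as np

def solve_qp_unconstrained(Q, c):
    """Minimize (1/2) f^T Q f + c^T f for positive-definite Q.
    The gradient Q f + c vanishes at the minimizer, so solve Q f = -c.
    (The cone restrictions of Expression (16) are ignored in this sketch.)"""
    return np.linalg.solve(Q, -c)

Q = np.array([[4.0, 1.0],
              [1.0, 3.0]])
c = np.array([-1.0, -2.0])
f_c = solve_qp_unconstrained(Q, c)
print(np.allclose(Q @ f_c + c, 0.0))  # gradient is zero at the optimum
```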
(56) The inverse dynamics calculation unit 56 performs the inverse dynamics calculation shown in Expression (19) using the external force [f.sub.c] calculated by the QP solver unit 54, and calculates the joint torque [τ]. Here, [q] is a calculated position, and [q]′ is a calculated speed.
[τ]=ID([q],[q]′,[q.sub.m]″,[f.sub.c]) (19)
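As a toy stand-in for ID( ) in Expression (19), inverse dynamics for a single-joint pendulum link is sketched below; the mass, length, gravity, and damping parameters are hypothetical, and the external-force term is omitted.

```python
import numpy as np

def inverse_dynamics_1dof(q, q_dot, q_dd, m=1.0, l=0.5, g=9.8, d=0.05):
    """Toy single-joint version of Expression (19): given a measured
    position, speed, and acceleration, return the joint torque that
    explains the motion (point mass m at distance l from the joint)."""
    I = m * l**2                                  # moment of inertia
    return I * q_dd + d * q_dot + m * g * l * np.sin(q)

# Pure angular acceleration at the hanging rest position:
tau = inverse_dynamics_1dof(q=0.0, q_dot=0.0, q_dd=2.0)
print(tau)  # 0.5
```

For a full body model, the same idea runs link by link (e.g. recursive Newton-Euler), with the estimated ground reaction forces entering as external forces at the feet.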
(57) According to the first embodiment described above, there are provided the acquisition unit (10) that acquires data for a dimension of position ([q.sub.m]) relating to an object represented in the generalized coordinate system, and the conversion unit (30) that acquires at least data for a dimension of acceleration ([α.sub.k]) from a plurality of inertial sensors (JS-k) attached to the object and converts the acquired acceleration data into data for a dimension of acceleration represented in the generalized coordinate system on the basis of the position data. It is thereby possible to obtain data for a dimension of acceleration that follows a high-frequency operation better than data obtained by second-order differentiating, using a difference method, the data for a dimension of position ([q.sub.m]) represented in the generalized coordinate system.
(58) Hereinafter, the results of an experiment performed by the inventor of the present application will be described.
(59) According to the first embodiment, the estimation unit (50) is further included, which estimates at least one of an external force acting on the object and a torque generated in a joint on the basis of the data for a dimension of position ([q.sub.m]) represented in the generalized coordinate system, the data for a dimension of speed ([q.sub.m]′) relating to the object obtained by differentiating the position data, and the data for a dimension of acceleration converted by the conversion unit (30). It is thereby possible to obtain estimation results that follow a high-frequency operation better.
Second Embodiment
(60) Hereinafter, a second embodiment will be described. The objective function in the first embodiment is represented by Expression (15), whereas the objective function in the second embodiment is represented by Expression (21). In the expression, [q.sub.mbase]″ is a measured acceleration of the base segment SG(B) (another example of data of acceleration represented in the generalized coordinate system), and [q.sub.base]″ is a calculated acceleration of the base segment SG(B). Both correspond to the element [q.sub.0] described in the first embodiment; note that, as with [b.sub.0], they are six-dimensional vectors, because a posture is represented three-dimensionally in the dimension of acceleration. The restrictions are the same as those in the first embodiment.
(61)
(62) The acceleration conversion unit 30 of the second embodiment calculates a measured acceleration [q.sub.m]″ by applying the transformation matrix <T> obtained in the same manner as in the first embodiment to the input vector [α], and outputs the calculated measured acceleration to the external force and joint torque estimation unit 50. The external force and joint torque estimation unit 50 of the second embodiment performs its calculation by extracting [q.sub.mbase]″ from the measured acceleration [q.sub.m]″.
(63) The external force and joint torque estimation unit 50 of the second embodiment performs the calculation shown in Expression (23), and calculates the calculated acceleration [q.sub.base]″ and the joint torque [τ]. Here, [q.sub.m#]″ is a vector in which [q.sub.mbase]″ is excluded from [q.sub.m]″. HD( ) is a calculation in which forward dynamics calculation and inverse dynamics calculation are mixed, switching between the two for each segment.
[τ],[q.sub.base]″=HD([q.sub.m],[q.sub.m]′,[q.sub.m#]″,[f.sub.c]) (23)
(64) By performing the above processing, it is possible to obtain the same results with a smaller amount of calculation than in the first embodiment. Hereinafter, the results of an experiment performed by the inventor of the present application will be described.
(66) As can be understood from the experimental results, the second embodiment provides estimation results equivalent to those of the first embodiment.
(67) According to the second embodiment described above, it is possible to exhibit the same effect with a smaller number of calculations than in the first embodiment.
(68) In each of the embodiments described above, generating data for a dimension of position relating to the target TGT represented in the generalized coordinate system on the basis of the outputs of the IMU sensors, and acquiring data for a dimension of position generated by optical motion capture, have each been exemplified; data may also be acquired from both. In this case, for example, a transformation matrix may be calculated on the basis of the data for a dimension of position generated by the optical motion capture and applied to the data for a dimension of acceleration included in the outputs of the IMU sensors.
(69) While preferred embodiments of the invention have been described and illustrated above, it should be understood that these are exemplary of the invention and are not to be considered as limiting. Additions, omissions, substitutions, and other modifications can be made without departing from the spirit or scope of the present invention. Accordingly, the invention is not to be considered as being limited by the foregoing description, and is only limited by the scope of the appended claims.