A HUMAN INTENTION DETECTION SYSTEM FOR MOTION ASSISTANCE
20200170547 · 2020-06-04
Inventors
CPC classification
B25J9/1694
PERFORMING OPERATIONS; TRANSPORTING
A61B5/1107
HUMAN NECESSITIES
B25J9/1615
PERFORMING OPERATIONS; TRANSPORTING
G05B2219/40413
PHYSICS
International classification
A61B5/11
HUMAN NECESSITIES
A61B5/00
HUMAN NECESSITIES
Abstract
A device and method for a human intention detection (HID) sensor band. In preferred embodiments, it makes use of an array of force sensing resistors (FSRs) embedded inside a flexible band, which is capable of reading the muscle activity for different motion types and muscle force in a human user. In one implementation of the invention, two such bands are attached to the forearm and the upper arm. From the readings of the sensors, the patterns for motion type and muscle force are then distinguished autonomously by machine learning, e.g. a Support Vector Machine (SVM) algorithm or a neural network. The method is advantageous for e.g. the detection of dexterous motion of the arms, upon which an assistive exoskeleton can be controlled for motion assistance. The invention is also applicable to hand gesture recognition and bilateral rehabilitation; besides this, the invention can be used to control a lower body exoskeleton as well.
Claims
1. A human intention detection device configured to detect an intended motion of a human user, and to generate an output, the device comprising: a first force sensing device configured to mount around an upper part of a limb part of the human user so as to allow sensing of muscle contraction, wherein the first force sensing device comprises a plurality of force sensors spatially distributed to allow detection of different muscle parts of the human user's upper limb part and to generate outputs accordingly, and a processor device configured to receive said outputs from the force sensors of the first force sensing device, wherein the processor device comprises a processor configured to execute a detection algorithm in response to said outputs from the force sensors of the first force sensing device, and wherein the processor device is configured to output in real-time an intended motion and an intended force according to an output from the detection algorithm.
2-39. (canceled)
40. The human intention detection device according to claim 1, comprising a second force sensing device configured to mount around a lower part of the same limb as said first force sensing device, wherein the second force sensing device comprises a plurality of force sensors spatially distributed to allow detection of different muscle parts of the human user's lower part of the limb, and to generate outputs accordingly and wherein the processor device is configured to receive outputs from the force sensors of the second force sensing device and to output in real-time the intended motion and the intended force in response to outputs from the force sensors of the first and second force sensing devices.
41. The human intention detection device according to claim 1, wherein the first force sensing device is configured to mount on the upper arm of the human user.
42. The human intention detection device according to claim 40, wherein the first force sensing device is configured to mount on the upper arm of the human user and the second force sensing device is configured to mount on the forearm of the human user.
43. The human intention detection device according to claim 1, configured to discriminate between a plurality of levels of said muscle contraction activity.
44. The human intention detection device according to claim 1, wherein the force sensing device comprises a strap with the plurality of force sensors arranged on a line, on the side of the strap, which is configured to face the human user's upper limb part.
45. The human intention detection device according to claim 1, wherein the plurality of force sensors are Force Sensitive Resistor type sensors.
46. The human intention detection device according to claim 1, wherein the detection algorithm implements a support vector machine (SVM) or a neural network (NN) for classifying the intended motion and the intended force.
47. The human intention detection device according to claim 46, wherein the intended motion and the intended force are classified by computing required features through data fusion and raw sensor values.
48. The human intention detection device according to claim 1, wherein the detection algorithm comprises a training session, wherein the human user performs a plurality of intended motions for generating test data, wherein the detection algorithm further comprises computing accuracy in response to the test data, and wherein the detection algorithm also comprises selecting features, which can be obtained from both data fusion and original signals from FSRs for use in outputting said real-time intended motion and intended force based on the training sessions results.
49. The human intention detection device according to claim 1, wherein information obtained from force reading and data fusion, is used to detect the intended motion and the intended force.
50. The human intention detection device according to claim 1, wherein the first device comprises a strap or a sleeve configured to mount around an upper arm of the human user, wherein a plurality of force sensing devices are spatially distributed on the strap or sleeve so as to allow detection of contraction at a plurality of positions along biceps brachii and/or brachialis when the strap or sleeve is mounted on the upper arm of the human user.
51. The human intention detection device according to claim 1, comprising 5-8 FSRs spatially distributed to cover at least three different positions along a length of the biceps brachii, as well as, at least two different positions perpendicular to the length of the biceps brachii.
52. The human intention detection device according to claim 51, wherein the detection algorithm is configured to output one or more intended motions selected from: flexion, extension, pronation or supination, in response to outputs from the plurality of force sensing devices.
53. The human intention detection device according to claim 1, wherein the first force sensing device comprises a strap or sleeve on which the plurality of force sensors are mounted at different positions.
54. A method for detecting an intended human motion, the method comprising: sensing a muscle contraction activity with a plurality of force sensors arranged on an upper part of a limb, wherein said sensors are spatially distributed to allow detection of different muscle parts of the human user's upper part of said limb, executing a detection algorithm on a processor in response to outputs from the force sensors, and outputting in real-time an intended motion and an intended force according to an output from the detection algorithm.
55. A computer executable program code arranged to perform the method according to claim 54, when executed on a processor.
56. A method of using the device of claim 1 for controlling a robotic arm comprising an actuator, which is configured to be worn by a human user comprising providing the human intention detection device of claim 1 to a human user, wherein the human intention detection device is integrated to control said robotic arm comprising an actuator, which is configured to be worn by the human user.
57. The method according to claim 56, wherein said robotic arm comprising an actuator, which is configured to be worn by the human user is further configured for controlling an actuator in a virtual reality and/or gaming setup.
58. A system comprising a human intention detection device according to claim 1, and an actuator device configured to control said actuator device in response to the output from the human intention detection device.
Description
BRIEF DESCRIPTION OF THE FIGURES
[0044] The invention will now be described in more detail with regard to the accompanying Figures.
[0057] The Figures illustrate specific ways of implementing the present invention and are not to be construed as being limiting to other possible embodiments falling within the scope of the attached claim set.
DETAILED DESCRIPTION OF EMBODIMENTS
[0058] The proposed HID system to detect the human intention is comprised of the following modules:
[0059] 1) Sensor bands
[0060] 2) Electronics
[0061] 3) Machine learning algorithm
[0062] 1) Sensor bands are used to read the muscle activity during different movements and the muscle force or effort. Each is comprised of a flexible strap with an array of N (5 ≤ N ≤ 10) force sensing resistors (FSRs, e.g. Interlink 402), as illustrated in the Figures.
[0063] The sensor bands can be mounted on the arm in different ways: [0064] a) One on the upper arm and one on the forearm.
[0066] The size of the sensor band is adjustable for different users. Moreover, sensor bands can also be placed either on the skin or on clothes.
[0067] 2) The electronics mainly comprise a non-inverting amplifier (eq. 1) and a low pass filter.
V_in (input) and R_ref together set the sensing range of the FSR. There is a tradeoff between the two values: if V_in is set to a high value then R_ref is set low, and vice versa, in order to make use of the maximum range of the FSR. V_in is set to a value of 1.2 V for each amplifier and R_ref is set to a different value for each sensor. This gives a unique advantage of detecting the muscle force with ease. The sensors with a high R_ref provide a clear distinction between low and medium levels of muscle force, while sensors with a low value of R_ref are able to distinguish between medium and high muscle force. The low pass filter is designed with a cut-off frequency of 150 Hz in order to eliminate high frequency noise.
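As a sketch of the filtering stage, the 150 Hz cut-off described above can be approximated digitally by a first-order low-pass filter. This is an illustration only: the patent describes an analog filter, and the sampling rate and signal values below are hypothetical.

```python
import math

def low_pass_filter(samples, fs, fc=150.0):
    """First-order IIR low-pass: y[n] = y[n-1] + alpha*(x[n] - y[n-1]).

    samples: raw sensor values, fs: sampling rate (Hz), fc: cut-off (Hz).
    """
    rc = 1.0 / (2.0 * math.pi * fc)   # equivalent analog RC time constant
    dt = 1.0 / fs
    alpha = dt / (rc + dt)            # smoothing factor in (0, 1)
    y = samples[0]
    out = []
    for x in samples:
        y = y + alpha * (x - y)
        out.append(y)
    return out

# Hypothetical FSR voltage trace: slow 2 Hz muscle signal plus 400 Hz noise
fs = 1000.0
t = [i / fs for i in range(1000)]
noisy = [0.6 * math.sin(2 * math.pi * 2.0 * ti)
         + 0.2 * math.sin(2 * math.pi * 400.0 * ti)
         for ti in t]
smooth = low_pass_filter(noisy, fs)
```

With the cut-off at 150 Hz, the 400 Hz noise component is attenuated while the slow muscle-contraction signal passes through largely unchanged.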
[0068] 3) A machine learning algorithm is used to intelligently distinguish the patterns registered by the sensor bands for motion type and muscle force.
[0069] The flow chart of the algorithm is shown in the Figures.
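A minimal sketch of the classification stage is given below. The feature set (per-channel means plus their sum, combining raw FSR values with a simple data-fusion term, in the spirit of claim 47) and the training data are hypothetical, and a nearest-centroid classifier is used here as a compact stand-in for the SVM or NN specified in the text.

```python
def extract_features(fsr_window):
    """Feature vector for a window of multi-channel FSR readings:
    per-channel mean plus the overall sum (hypothetical feature set)."""
    n_ch = len(fsr_window[0])
    means = [sum(row[c] for row in fsr_window) / len(fsr_window)
             for c in range(n_ch)]
    return means + [sum(means)]

def fit_centroids(labeled_windows):
    """Training session: average feature vector per motion class."""
    sums, counts = {}, {}
    for label, window in labeled_windows:
        f = extract_features(window)
        if label not in sums:
            sums[label], counts[label] = [0.0] * len(f), 0
        sums[label] = [a + b for a, b in zip(sums[label], f)]
        counts[label] += 1
    return {lab: [v / counts[lab] for v in s] for lab, s in sums.items()}

def classify(centroids, window):
    """Real-time step: pick the class with the nearest centroid."""
    f = extract_features(window)
    def dist(c):
        return sum((a - b) ** 2 for a, b in zip(f, c))
    return min(centroids, key=lambda lab: dist(centroids[lab]))

# Hypothetical 6-channel FSR windows for two motion classes
flex = [("flexion", [[5, 1, 4, 1, 3, 1]] * 10) for _ in range(3)]
supi = [("supination", [[1, 4, 1, 5, 1, 4]] * 10) for _ in range(3)]
model = fit_centroids(flex + supi)
pred = classify(model, [[5, 1, 4, 1, 3, 1]] * 10)  # → "flexion"
```

An actual implementation would replace the centroid rule with the trained SVM or neural network and add the accuracy-driven feature selection described in claim 48.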
Control of Exoskeleton Through HID
[0070] The invention is developed for an upper body exoskeleton.
Possible Applications of HID Sensor
1. Bilateral Rehabilitation:
[0071] The idea is to control the motion of the impaired arm by wearing the sensors on the healthy arm. This approach can serve the following two purposes: [0072] The user uses it to accomplish daily routine tasks that need coordinated motion of both arms, e.g. lifting/pulling an object. [0073] The user uses it for therapy exercises, which aim to restore the movement of the impaired arm up to some level.
2. Virtual Reality:
[0074] HID can be used to control a device at a remote location or in a virtual environment. In this way, the user wears the HID sensors as the controller, with his/her motion reproduced on the remote or virtual device.
3. Control of Shoulder Joint:
[0075] The HID can directly detect and control the assistive motion at elbow joint. In certain scenarios the sensor information may be used to support the shoulder complex motion as well.
[0077] In the shown example,
[0080] The shown shoulder joint comprises a spherical joint mechanism comprising two revolute joints joined by a double parallelogram linkage, wherein the double parallelogram linkage comprises a first linkage part hingedly connected to a first revolute joint at a distal end of the first linkage part and a second linkage part hingedly connected to a second revolute joint at a distal end of the second linkage part, [0081] the first linkage part comprises a first link arm and a second link arm, which first and second link arms are arranged to move parallel to each other, [0082] the second linkage part comprises a third link arm and a fourth link arm, which third and fourth link arms are arranged to move parallel to each other, and [0083] a proximate end of the first linkage part and a proximate end of the second linkage part are mutually hingedly connected.
[0085] As seen, the preferred positions of the FSRs are at three different groups in a length direction of the biceps: [0086] two FSRs on an upper portion of the biceps brachii, one on each side, spaced apart by 2-10 cm, [0087] two FSRs on a lower portion of the biceps brachii, one on each side and spaced apart by 6-14 cm, and [0088] two FSRs in the middle of the biceps brachii, at different positions in the length direction between the two upper and the two lower FSRs.
[0089] It is to be understood that more than 6 FSRs can be used to cover yet more positions, if preferred. However, the 6 described positions have proven sufficient to provide reliable detection of all of flexion, extension, pronation and supination motions.
[0090] The muscles of the forearm perform many different types of motions, i.e. opening and closing of the fist, wrist flexion/extension and many others, besides pronation/supination. If only pronation/supination and the elbow's flexion/extension are of interest, then this can be achieved by the described upper arm sensor device alone. The muscles at the upper arm are mainly involved in elbow flexion/extension and forearm pronation/supination. They are not involved in motions performed at the wrist joint. Hence, there will be less disturbance on the upper arm muscles from wrist motions, and the results can be better if the focus is on the upper arm muscles only.
[0091] However, it is to be understood that this upper arm embodiment could also be used in combination with a forearm force sensing device with one or more FSRs, if additional motions should be detected. I.e. if detection of motions at wrist joint is of interest, it can be done by a forearm sensing device with one or more FSRs in combination with the above described upper arm sensing device.
[0092] The FSRs may specifically be such as the Interlink 402, however it is to be understood that other FSRs may be used as well. Further, other types of force sensors may be used than FSRs.
[0093] The strap or sleeve may be arranged to tighten around the upper arm to provide a proper fit, and/or it may be formed by an elastic material, e.g. an elastic garment, that will ensure a proper fit. Especially, the strap or sleeve may be made of an elastic material, e.g. an elastic garment, of a predetermined circumference and arranged for mounting by the human user pulling the strap or sleeve up to the forearm and turning it to provide proper positions of the FSRs.
[0094] The detection algorithm designed to be used with the above described upper arm sensor embodiment is similar to what has already been described.
[0095] In the following, preferred methods for control of shoulder and elbow assistance level for exoskeletons will be described. Effort level estimated for elbow joint can be used to estimate the assistance required at the shoulder joint.
[0096] The effort level for the elbow joint can be estimated by considering both the muscle contraction forces (MCF), measured by the FSRs embedded in an upper arm sensor device, and the measured elbow joint angle. It is known that muscle contraction and stiffness are directly proportional to the weight of an object in the hand. The algorithm utilizes the aforementioned information in the following way to compute the effort level. The first part is the training session, which comprises the following steps: [0097] 1) Two regression models (RM^5 and RM^0) are developed that relate the MCF to the elbow joint angle. [0098] 2) The first regression model, RM^5, maps the MCF to the complete range of motion of the elbow joint for a 5 kg load. [0099] 3) The second one, RM^0, relates the same parameters without any load.
[0100] In the testing part, the algorithm first estimates, for a given joint angle, what the MCFs (F^5, F^0) would be for the case of e.g. a 5 kg and a 0 kg load, using the regression models developed in the training session. In the next step, a distance relation is utilized to map the actual value of the MCF (F^a), measured at the current stage, in between F^5 and F^0 in order to estimate the effort level. The equation to compute the effort level is:
EL = (1 − (F^5(θ) − F^a)/(F^5(θ) − F^0(θ))) · E_range

Here F^5(θ) and F^0(θ) represent the forces F^5 and F^0 as a function of the elbow joint angle θ, related through the regression models RM^5 and RM^0 respectively. E_range represents the range of the effort level, which in the specific case is 5.
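The effort-level estimation can be sketched numerically as follows. The linear regression models standing in for RM^5 and RM^0 are hypothetical (the real models would be fitted from the training session data), but the EL equation itself follows the text: F^a = F^0(θ) gives EL = 0 and F^a = F^5(θ) gives EL = 5.

```python
E_RANGE = 5.0  # range of the effort level, as stated in the text

# Hypothetical linear regression models mapping elbow joint angle (rad)
# to muscle contraction force (arbitrary units); coefficients are invented.
def rm5(theta):
    """MCF predicted for a 5 kg load at elbow angle theta."""
    return 40.0 + 25.0 * theta

def rm0(theta):
    """MCF predicted for a 0 kg load at elbow angle theta."""
    return 10.0 + 5.0 * theta

def effort_level(f_actual, theta):
    """EL = (1 - (F5(theta) - Fa)/(F5(theta) - F0(theta))) * E_range."""
    f5, f0 = rm5(theta), rm0(theta)
    return (1.0 - (f5 - f_actual) / (f5 - f0)) * E_RANGE

theta = 1.0                                     # current elbow angle (rad)
el_no_load = effort_level(rm0(theta), theta)    # → 0.0
el_full_load = effort_level(rm5(theta), theta)  # → 5.0
```

An intermediate measured force maps linearly between the two extremes, so EL tracks the carried load in kilograms.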
[0101] Assistance at the shoulder joint may be computed by updating the gravity compensation torque model for the shoulder joint, which is given by:
τ = g(m_e, θ_e, m_s, θ_s)

Here m_e and θ_e represent the mass and joint angle of the elbow joint, and m_s and θ_s represent the mass and joint angle of the shoulder joint, respectively.
[0102] As the EL estimation model maps a 0 kg load to EL=0 and a 5 kg load to EL=5, EL basically represents the amount of load the human is carrying. Hence, the estimated EL is used to update the mass parameter, m_e, of the elbow joint of the exoskeleton, which ultimately updates the gravity torque for the shoulder joint for the given joint angles of both elbow and shoulder. This is how assistance may be provided to the shoulder joint.
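The gravity compensation update can be sketched as follows, assuming a simple planar two-link model for the torque function g; the link lengths, segment masses, and the one-to-one mapping from EL to added mass in kilograms are assumptions for illustration.

```python
import math

G = 9.81           # gravitational acceleration (m/s^2)
L_UPPER = 0.30     # hypothetical upper arm length (m)
L_FORE = 0.25      # hypothetical forearm length (m)

def shoulder_gravity_torque(m_e, theta_e, m_s, theta_s):
    """tau = g(m_e, theta_e, m_s, theta_s): gravity torque at the shoulder
    for a planar two-link arm with masses lumped at the link midpoints."""
    # contribution of the upper arm's own weight
    tau = m_s * G * (L_UPPER / 2.0) * math.cos(theta_s)
    # contribution of the forearm mass, acting through the full upper arm
    # plus half the forearm
    tau += m_e * G * (L_UPPER * math.cos(theta_s)
                      + (L_FORE / 2.0) * math.cos(theta_s + theta_e))
    return tau

def update_elbow_mass(m_e_nominal, effort_level):
    """Map the estimated EL back to carried load: EL=0 -> +0 kg, EL=5 -> +5 kg."""
    return m_e_nominal + effort_level

m_e0, m_s = 1.5, 2.0          # hypothetical segment masses (kg)
theta_e, theta_s = 0.5, 0.3   # current joint angles (rad)
tau_unloaded = shoulder_gravity_torque(update_elbow_mass(m_e0, 0.0),
                                       theta_e, m_s, theta_s)
tau_loaded = shoulder_gravity_torque(update_elbow_mass(m_e0, 5.0),
                                     theta_e, m_s, theta_s)
```

As the estimated EL grows, the updated elbow mass raises the gravity torque demanded at the shoulder, which is the quantity the exoskeleton compensates.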
[0103] To sum up: the invention provides a novel device and method for human intention detection (HID). In preferred embodiments, it makes use of an array of force sensing resistors (FSRs) embedded inside a flexible band, which is capable of reading the muscle activity for different motion types and muscle force in a human user. In one implementation of the invention, two such bands are attached to the forearm and the upper arm. From the readings of the sensors, the patterns for motion type and muscle force are then distinguished autonomously by machine learning, e.g. a Support Vector Machine (SVM) algorithm or neural networks (NN). The method is advantageous for e.g. the detection of dexterous motion of the arms, upon which an assistive exoskeleton can be controlled for motion assistance. The invention is also applicable to hand gesture recognition and bilateral rehabilitation; besides this, the invention can be used to control a lower body exoskeleton as well.
[0104] Although the present invention has been described in connection with the specified embodiments, it should not be construed as being in any way limited to the presented examples. The scope of the present invention is to be interpreted in the light of the accompanying claim set. In the context of the claims, the terms including or includes do not exclude other possible elements or steps. Also, the mentioning of references such as a or an etc. should not be construed as excluding a plurality. The use of reference signs in the claims with respect to elements indicated in the FIGS. shall also not be construed as limiting the scope of the invention. Furthermore, individual features mentioned in different claims may possibly be advantageously combined, and the mentioning of these features in different claims does not exclude that a combination of features is possible and advantageous.