SYSTEM AND METHOD FOR DETECTING HANDWRITING PROBLEMS

20230148911 · 2023-05-18

Assignee

Inventors

CPC classification

International classification

Abstract

A system for detecting handwriting problems may include a handwriting instrument including a body extending longitudinally between a first end and a second end, the first end having a writing tip which is able to write on a support, the handwriting instrument further including at least one motion sensor configured to acquire data on the handwriting of a user when the user is using the handwriting instrument, and one calculating unit communicating with the motion sensor and configured to analyze the data by an artificial intelligence trained to detect whether the user has handwriting problems.

Claims

1. A system for detecting handwriting problems comprising: a handwriting instrument including a body extending longitudinally between a first end and a second end, the first end having a writing tip which is able to write on a support, the system further including at least one motion sensor configured to acquire data on the handwriting of a user when the user is using the handwriting instrument, one calculating unit communicating with the motion sensor and configured to analyze the data by an artificial intelligence trained to detect whether the user has handwriting problems, wherein the system further comprises a detection device that is distinct from the handwriting instrument.

2. A system according to claim 1, wherein the detection device comprises two motion sensors and the calculating unit.

3. A system according to claim 2, wherein the detection device further includes a short-range radio communication interface configured to communicate raw data acquired by the motion sensors and a battery.

4. A system according to claim 2, wherein the two motion sensors are three-axis accelerometers.

5. A system according to claim 2, wherein the two motion sensors are one three-axis accelerometer and one three-axis gyroscope.

6. A system according to claim 5, wherein the three-axis gyroscope comprises a wake-up input suited for receiving a wake-up signal from the calculating unit when a movement is detected by the three-axis accelerometer, the three-axis gyroscope being configured for switching into an active state when the wake-up signal is received.

7. A system according to claim 1, further comprising a pressure sensor, wherein the calculating unit is configured to receive data acquired by the pressure sensor.

8. A system according to claim 1, further comprising a stroke sensor configured to acquire stroke data while the user is using the handwriting instrument, the artificial intelligence being further trained with the stroke data to determine handwriting problems.

9. A system according to claim 8, wherein the stroke sensor is the at least one motion sensor.

10. A system according to claim 1, wherein the detection device is mounted on the second end of the handwriting instrument.

11. A system according to claim 10, wherein the detection device comprises a body configured to be mounted on the second end of the handwriting instrument and a protuberant tip configured to be inserted in the body of the handwriting instrument.

12. A system according to claim 2, wherein one of the two motion sensors is provided on a protuberant tip of the detection device that is configured to be inserted in a body of the handwriting instrument and another of the two motion sensors is provided in the body of the detection device.

13. A system according to claim 2, wherein the two motion sensors are provided in a body of the detection device.

14. A system according to claim 1, wherein the artificial intelligence is further configured to determine when the user is actually using the handwriting instrument on the support and to differentiate data corresponding to an actual use of the handwriting instrument from data acquired while the handwriting instrument is merely being held in the air.

15. A system according to claim 1, wherein the handwriting instrument is a pen, a pencil, a brush or any other element allowing a user to write or draw with it on the support.

16. A system according to claim 1, wherein the artificial intelligence is further configured to transcribe raw data acquired by the motion sensor into handwriting characters depicted on a mobile device.

17. A system according to claim 1, wherein the support is a non-electronic surface.

18. A system according to claim 1, wherein the calculating unit comprises a volatile memory to store data acquired by the at least one motion sensor and a non-volatile memory to store a model enabling the detection of handwriting problems.

19. A system according to claim 1, wherein the artificial intelligence is a neural network configured to be trained by end-to-end classification or by segmentation and classification of strokes.

20. A system according to claim 19, wherein the neural network is further configured to acquire data during the use of the handwriting instrument and to determine if the user is forming letters and numbers correctly.

Description

BRIEF DESCRIPTION OF DRAWINGS

[0076] Other features, details and advantages will be shown in the following detailed description and on the figures, on which:

[0077] FIG. 1 shows an illustration of a system for detecting handwriting problems according to a first embodiment.

[0078] FIG. 2 shows a block schema of the system illustrated in FIG. 1.

[0079] FIG. 3 shows an illustration of a system for detecting handwriting problems according to a second embodiment.

[0080] FIG. 4 shows a block schema of the system illustrated in FIG. 3.

[0081] FIG. 5 shows an illustration of a system for detecting handwriting problems according to a third embodiment.

[0082] FIG. 6 shows an illustration of a system for detecting handwriting problems according to a fourth embodiment.

[0083] FIG. 7 shows an illustration of a system for detecting handwriting problems according to an alternative embodiment of FIGS. 1 and 2.

[0084] FIG. 8 shows a block diagram illustrating the training phase of the neural network in a method using the systems disclosed herein.

[0085] FIG. 9 shows a block diagram illustrating the training phase of the neural network in a method using the systems disclosed herein.

[0086] FIGS. 10A to 10C illustrate block diagrams of the collection phase, training phase and inference phase of the trained neural network.

DESCRIPTION OF EMBODIMENTS

[0087] The figures and the following detailed description contain, for the most part, elements that are certain. They may be used to enhance the understanding of the disclosure.

[0088] It is now referred to FIGS. 1 to 7 illustrating embodiments of a system 1 for detecting handwriting problems. The same reference numbers are used to describe identical elements of the system.

[0089] In an embodiment, a handwriting problem which can be detected according to the present disclosure can be dyslexia, dysgraphia or a difficulty to reproduce characters.

[0090] FIGS. 1 and 2 generally illustrate a system 1 according to a first embodiment. The system 1 comprises a handwriting instrument 2. The handwriting instrument 2 may be a pen, a pencil, a brush or any element allowing a user to write or draw with it on a support. The support may be paper, canvas, or any surface on which a user can write or draw. The support may also be a coloring book. The support may be a non-electronic surface.

[0091] The handwriting instrument 2 comprises a body 3 extending longitudinally between a first end 4 and a second end 5. The first end 4 comprises a writing tip 6 which is able to write on a support. The tip 6 may deliver ink or color.

[0092] The handwriting instrument 2 further includes at least one motion sensor 7. In one embodiment, the motion sensor 7 may be a three-axis accelerometer or a three-axis gyroscope.

[0093] In the illustrated embodiments on FIGS. 1 to 7, the handwriting instrument 2 includes two motion sensors 7. In embodiments, the handwriting instrument 2 comprises two three-axis accelerometers. In embodiments, the handwriting instrument 2 comprises one three-axis accelerometer and one three-axis gyroscope.

[0094] The at least one motion sensor 7 is able to acquire data on the handwriting of the user when the user is using the handwriting instrument 2. These data are communicated to a calculating unit 8 which is configured to analyze the data and detect a possible handwriting problem of the user. The calculating unit 8 may comprise a volatile memory to store the data acquired by the motion sensor 7 and a non-volatile memory to store a model enabling the detection of handwriting problems.

[0095] The handwriting instrument 2 may also comprise a short-range radio communication interface 9 allowing the communication of data between the motion sensor 7 and the calculating unit 8. In embodiments, the short-range radio communication interface uses a Wi-Fi, Bluetooth®, LORA®, SigFox® or NBIoT network. In embodiments, the short-range radio communication interface may also communicate using a 2G, 3G, 4G or 5G network.

[0096] The handwriting instrument 2 further includes a battery 10 providing power to at least the motion sensor 7 when the user is using the handwriting instrument. The battery 10 may also provide power to the calculating unit 8 when the calculating unit is included in the handwriting instrument 2.

[0097] More specifically, in the embodiment of FIGS. 3 and 4, the handwriting instrument 2 comprises the at least one motion sensor 7, the short-range radio communication interface 9 and the battery 10. The system 1 further comprises a mobile device 11, distinct from the handwriting instrument 2. The mobile device 11 may be an electronic tablet, a mobile phone or a computer. The mobile device 11 comprises the calculating unit 8. The mobile device 11 further comprises a short-range radio communication interface 12 enabling communication between the calculating unit 8 and the handwriting instrument 2.

[0098] In this embodiment, the calculating unit 8 of the mobile device receives raw data acquired by the motion sensor 7 and analyzes the raw data acquired by the motion sensor 7 to detect a possible handwriting problem.

[0099] In the embodiment illustrated in FIGS. 5 and 6, the motion sensors 7, the calculating unit 8, the short-range radio communication interface 9 and the battery 10 are not embedded in the handwriting instrument 2. In this embodiment, the electronics may be comprised in a detection device 13, distinct from the handwriting instrument 2. The detection device 13 can be mounted on the second end 5 of the handwriting instrument 2.

[0100] In this embodiment, the detection device 13 comprises a body 14 designed to be mounted on the second end 5 of the handwriting instrument 2 and a protuberant tip 15 able to be inserted in the body 3 of the handwriting instrument 2. In examples, one motion sensor 7 may be provided on the protuberant tip 15 and another motion sensor 7 may be provided in the body 14 of the detection device 13. By this means, the two motion sensors 7 are able to acquire different data during the handwriting of the user.

[0101] In embodiments, the motion sensors 7 are provided in the body 14 of the detection device 13. By this means, the detection device 13 can be mounted on any type of handwriting instrument 2, without requiring the body 3 of the handwriting instrument 2 to be hollow.

[0102] In the embodiment illustrated on FIG. 7, the at least one motion sensor 7, the calculating unit 8, the short-range radio communication interface 9 and the battery 10 are directly embedded in the handwriting instrument 2.

[0103] In embodiments, one motion sensor 7 may be provided close to the first end 4 of the handwriting instrument 2, while another motion sensor 7 may be provided on the second end 5 of the handwriting instrument 2.

[0104] In embodiments, the handwriting instrument 2 may also comprise a pressure sensor able to acquire data. These data can be transmitted to the calculating unit 8, which analyzes them together with the data acquired by the at least one motion sensor 7.

[0105] The pressure sensor may be embedded in the handwriting instrument 2 or in the detection device 13.

[0106] In all the embodiments described above, the calculating unit 8 receives the data acquired from the at least one motion sensor 7 and, if applicable, from the pressure sensor, analyzes them and detects a handwriting problem.

[0107] More specifically, the calculating unit 8 may store an artificial intelligence model able to analyze the data acquired by the motion sensor 7. The artificial intelligence may comprise a trained neural network.

[0108] In the embodiment illustrated on FIG. 8, the neural network is trained according to the method of using intermediate features extraction.

[0109] More particularly, at step S1, the motion sensor 7 acquires data during the use of the handwriting instrument 2.

[0110] At step S2, the neural network receives the raw signals of the data acquired at step S1. The neural network also receives the sample labels at step S3. These labels correspond to whether or not the signal corresponds to a stroke. More precisely, the neural network is able to determine if the signal corresponds to a stroke on a support. The neural network is then able to determine stroke timestamps.

[0111] More particularly, this means that the neural network is able to determine, for each stroke timestamp, if a stroke has actually been made on the support by the user during the use of the handwriting instrument 2.

[0112] At step S4, the calculating unit 8 performs a stroke features extraction to obtain intermediate features at step S5.

[0113] These intermediate features comprise, but are not limited to:

[0114] total stroke duration,

[0115] total in-air stroke duration,

[0116] mean stroke duration,

[0117] mean and peak stroke velocity,

[0118] number of pauses during use of the handwriting instrument,

[0119] ballistic index, which is an indicator of handwriting fluency measuring the smoothness of the movement, defined as the ratio between the number of zero crossings in the acceleration and the number of zero crossings in the velocity,

[0120] number of zero crossings in the acceleration during strokes,

[0121] number of zero crossings in the velocity during strokes.
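The ballistic index defined in paragraph [0119] can be sketched as follows. This is an illustrative Python sketch and not part of the disclosure; the function names and the sample signal are invented, and uniformly sampled per-stroke velocity and acceleration data are assumed.

```python
def zero_crossings(signal):
    """Count sign changes in a sampled signal, ignoring exact zeros."""
    signs = [1 if v > 0 else -1 for v in signal if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if a != b)

def ballistic_index(acceleration, velocity):
    """Ratio of acceleration zero crossings to velocity zero crossings,
    as defined in paragraph [0119]. Smoother movement generally yields
    fewer acceleration zero crossings per velocity zero crossing."""
    vz = zero_crossings(velocity)
    return zero_crossings(acceleration) / vz if vz else float("inf")

# Example: one smooth back-and-forth stroke (invented sample values).
velocity = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
acceleration = [velocity[i + 1] - velocity[i] for i in range(len(velocity) - 1)]
print(ballistic_index(acceleration, velocity))  # 2.0
```

The same zero-crossing helper also yields the features of paragraphs [0120] and [0121] directly.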

[0122] From these intermediate features, the neural network is able to derive indications about handwriting problems.

[0123] At step S6, an algorithm is able to derive indications about handwriting problems.

[0124] This algorithm can be a learned model such as a second neural network, or a handcrafted algorithm.

[0125] In the embodiment where a learned model such as a neural network is used, the model is trained on a supervised classification task, where the inputs are stroke features with labels, and the outputs are handwriting problems.

[0126] In the embodiment where a hand-crafted algorithm is used, the hand-crafted algorithm can compute statistics on the stroke features and compare them to thresholds found in the scientific literature, in order to detect handwriting problems.
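The hand-crafted variant of paragraph [0126] can be sketched as below. This is an illustrative Python sketch, not part of the disclosure; the threshold values and feature names are placeholders, not figures from the patent or from the literature.

```python
# Placeholder thresholds (illustrative only, not literature values).
THRESHOLDS = {
    "mean_stroke_duration_s": 0.8,   # upper bound on mean stroke duration
    "pause_count": 12,               # upper bound on pauses during writing
    "ballistic_index": 2.5,          # upper bound on the ballistic index
}

def detect_handwriting_problem(features):
    """Return the names of the features whose values exceed their thresholds."""
    return [name for name, limit in THRESHOLDS.items()
            if features.get(name, 0) > limit]

sample = {"mean_stroke_duration_s": 1.1, "pause_count": 5, "ballistic_index": 3.0}
print(detect_handwriting_problem(sample))  # ['mean_stroke_duration_s', 'ballistic_index']
```

A learned model, as in paragraph [0125], would replace this threshold comparison with a classifier trained on labeled stroke features.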

[0127] Finally, at step S7, the system is able to detect handwriting problems. These handwriting problems include, but are not limited to:

[0128] dyslexia,

[0129] dysgraphia,

[0130] wrong grip of the handwriting instrument,

[0131] bad character writing.

[0132] In the embodiment illustrated on FIG. 9, the neural network is trained according to the method of end-to-end classification.

[0133] According to this embodiment, at step S10, the data are acquired by the motion sensor 7.

[0134] The classification is made in step S11. To learn the classification task, the neural network receives the raw signal of the data acquired by the motion sensor 7 and global labels (step S12). The global labels correspond to the handwriting problems to be detected by the neural network, which can be, but are not limited to:

[0135] dyslexia,

[0136] dysgraphia,

[0137] wrong grip of the handwriting instrument,

[0138] bad character writing.

[0139] In step S13, the neural network delivers the result.

[0140] The trained neural network described in reference with FIGS. 8 and 9 is stored.

[0141] The neural network can be stored in the calculating unit 8.

[0142] FIGS. 10A to 10C illustrate more specifically the embodiment described with reference to FIG. 8.

[0143] In order to segment the strokes (step S2 of FIG. 8), the neural network may determine the timestamps of the strokes on the support.

[0144] This information can be detected by a stroke sensor 16. The stroke sensor 16 is advantageously embedded in the handwriting instrument or in the detection device 13 mounted on the handwriting instrument.

[0145] In embodiments, the stroke sensor 16 may be a pressure sensor, a contact sensor or a vibration sensor. Then, the neural network receives the data collected by the stroke sensor 16 at step S3.

[0146] In the embodiment illustrated in FIGS. 10A to 10C, the stroke sensor 16 is the motion sensor 7. More specifically, the motion sensor 7 is a three-axis accelerometer.

[0147] FIG. 10A illustrates the collection of data used during the training phase of the neural network, which is illustrated in FIG. 10B. Finally, FIG. 10C illustrates the inference of the neural network by a user of the handwriting instrument.

[0148] To use the motion sensor 7 as the stroke sensor 16, the accelerometer must first be configured so that its sample rate is at least twice the maximum frequency of the vibrations to be detected.
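The sampling constraint of paragraph [0148] is the Nyquist criterion, sketched below. This is illustrative only and not part of the disclosure; the vibration bandwidth value is an assumption for the example.

```python
def sample_rate_ok(sample_rate_hz, max_vibration_hz):
    """Check that the accelerometer sample rate is at least twice the
    highest vibration frequency to be detected (Nyquist criterion)."""
    return sample_rate_hz >= 2.0 * max_vibration_hz

max_vibration_hz = 400.0  # assumed pen-tip vibration bandwidth (illustrative)
print(sample_rate_ok(800.0, max_vibration_hz))  # True: 800 Hz >= 2 * 400 Hz
print(sample_rate_ok(500.0, max_vibration_hz))  # False: vibrations would alias
```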

[0149] In examples, the accelerometer is highly sensitive. To allow detection of the vibrations by the accelerometer, the accelerometer may be bound to the writing tip 6 of the handwriting instrument 2 by rigid contacts with little damping.

[0150] In embodiments, it is possible to enhance the precision of the vibration detection by using a support presenting a rough surface with known spatial frequency.

[0151] In FIG. 10A, representing the collection phase, the accelerometer is set with a sample rate F2. While the user is using the handwriting instrument 2, the accelerometer acquires data at step S20. These data can be sent by short-range radio to a recording device at step S21.

[0152] In embodiments, during the collection phase, if the handwriting instrument 2 also comprises a three-axis gyroscope as another motion sensor 7, the three-axis gyroscope can also acquire data that are sent to the recording device at step S21.

[0153] FIG. 10B illustrates the training phase of the neural network.

[0154] At step S22, the data sent to the recording device are provided. The data are analyzed at step S23A to determine the labels (step S23B). For example, the labels comprise the stroke timestamps, detected when vibration is present in the data, and the stroke velocity. The stroke velocity is advantageously determined using the acceleration data and the high frequencies contained in the vibration.
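The vibration-based labeling of paragraph [0154] can be sketched as follows. This is an illustrative Python sketch and not part of the disclosure: it assumes a precomputed high-frequency energy envelope of the acceleration signal, and the threshold and sample values are invented.

```python
def stroke_intervals(acc_energy, threshold=0.5):
    """Return (start, end) sample index pairs where the high-frequency
    energy of the acceleration stays above the threshold, i.e. where
    pen-tip vibration suggests contact with the support."""
    intervals, start = [], None
    for i, e in enumerate(acc_energy):
        if e > threshold and start is None:
            start = i                      # stroke begins
        elif e <= threshold and start is not None:
            intervals.append((start, i))   # stroke ends
            start = None
    if start is not None:                  # signal ends mid-stroke
        intervals.append((start, len(acc_energy)))
    return intervals

energy = [0.1, 0.8, 0.9, 0.2, 0.1, 0.7, 0.6, 0.1]  # invented envelope
print(stroke_intervals(energy))  # [(1, 3), (5, 7)]
```

The resulting intervals, converted to timestamps via the sample rate, would serve as the stroke labels of step S23B.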

[0155] Step S24 comprises the undersampling of the data. In particular, during the preceding steps, the frequency of the accelerometer was set higher than the one used for the inference phase. Moreover, the vibration analysis was made on the basis of both the three-axis accelerometer and the three-axis gyroscope. However, the constant use of the gyroscope leads to high energy consumption.

[0156] The undersampling step S24 comprises the degradation of these parameters. The frequency F2 of the accelerometer is reduced to a frequency F1, smaller than F2, and the training is performed using only the three-axis accelerometer data.
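The reduction from F2 to F1 in step S24 can be sketched as a simple decimation. This is illustrative only and not part of the disclosure; it assumes F1 divides F2 evenly, and a real pipeline would typically low-pass filter before decimating to avoid aliasing.

```python
def undersample(samples, f2, f1):
    """Reduce a signal sampled at f2 Hz to f1 Hz by keeping every
    (f2 // f1)-th sample (no anti-alias filtering in this sketch)."""
    if f2 % f1 != 0:
        raise ValueError("f2 must be an integer multiple of f1")
    step = f2 // f1
    return samples[::step]

signal_f2 = list(range(12))               # 12 samples captured at the high rate F2
print(undersample(signal_f2, 1200, 400))  # [0, 3, 6, 9]
```

Training on the degraded signal lets the stored model run at the lower inference rate F1 described in paragraph [0160].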

[0157] At step S25, the neural network is trained to be able to perform strokes segmentation, as described with reference to FIG. 8, step S2.

[0158] FIG. 10C illustrates the inference phase. In this phase, the trained neural network is used to detect handwriting problems by means of stroke segmentation.

[0159] At step S26, a user is using the handwriting instrument 2 with a view to detecting a possible handwriting problem.

[0160] The accelerometer in the handwriting instrument is set to the frequency F1 and, advantageously, the data are acquired along three axes.

[0161] At step S27, the trained neural network is fed with the acquired data. At step S28, the neural network is able to deliver the stroke timestamps and the velocity.

[0162] Finally, the neural network is able to perform the intermediate stroke feature extraction and the classification at step S29. Step S29 actually corresponds to steps S4 to S7, already described with reference to FIG. 8.

[0163] In embodiments, the neural network can be trained continuously with the data acquired by the user of the handwriting instrument 2 after the storage of the neural network.

[0164] In embodiments, the neural network can also be trained to detect a wrong ductus of the user. The ductus corresponds to the way letters and numbers are formed.

[0165] More specifically, the neural network is able to determine if a sequence of strokes corresponds to a letter or a number.

[0166] To this end, the neural network can also be fed with a large database of letters and numbers. Each letter and number can be associated with a sequence of strokes. The sequence of strokes may advantageously correspond to acceleration signals acquired by the accelerometer during the collection phase when forming the letters and numbers.

[0167] The labels to be determined by the neural network may be the direction and the order of the sequence of strokes for each letter and number.
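The ductus check of paragraphs [0164] to [0167] can be sketched as a comparison against reference stroke sequences. This is an illustrative Python sketch and not part of the disclosure; the reference entries and direction labels are invented.

```python
# Hypothetical reference ductus database: each character maps to its
# expected ordered sequence of stroke directions (invented entries).
DUCTUS = {
    "t": ["down", "right"],      # vertical bar first, then crossbar
    "7": ["right", "down-left"],
}

def ductus_correct(character, observed_strokes):
    """True when the observed stroke directions and order match the reference."""
    return DUCTUS.get(character) == observed_strokes

print(ductus_correct("t", ["down", "right"]))  # True
print(ductus_correct("t", ["right", "down"]))  # False: wrong stroke order
```

In the disclosed system the reference sequences would be acceleration signals rather than symbolic directions, but the matching principle is the same.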

[0168] In step S5 of FIG. 8, the intermediate features can then also comprise the temporal sequence of strokes and their direction.

[0169] In step S7, the neural network is able to determine if the user is forming letters and numbers correctly.