SYSTEM FOR ASSISTING IN THE SIMULATION OF THE SWALLOWING OF A PATIENT AND ASSOCIATED METHOD
20220273228 · 2022-09-01
Inventors
CPC classification
G16H20/70
PHYSICS
A61B7/008
HUMAN NECESSITIES
A61B5/7445
HUMAN NECESSITIES
A61B2562/0219
HUMAN NECESSITIES
International classification
A61B5/00
HUMAN NECESSITIES
G16H20/70
PHYSICS
Abstract
A system includes a device for detecting the swallowing of a patient, the device including at least one sensor for detecting swallowing configured to measure a swallowing signal, and a processor for processing the swallowing signal, connected to the device for detecting swallowing and configured to characterize the swallowing signal. The system further includes an augmented reality or virtual reality headset configured to display virtual content to the patient, and a virtual content processor connected to the processor for processing the swallowing signal and to the augmented reality or virtual reality headset, the virtual content processor being configured to deliver the virtual content to the augmented reality or virtual reality headset and to adapt the delivered virtual content according to the swallowing signal received from the processor for processing the swallowing signal.
Claims
1. A system comprising: a device for detecting a swallowing of a patient comprising at least one sensor for detecting swallowing configured to measure a swallowing signal, a processor for processing the swallowing signal connected to the device for detecting swallowing, configured to characterise the swallowing signal, a virtual reality or augmented reality headset, configured to display virtual content to the patient, a virtual content processor connected to the processor for processing the swallowing signal and to the virtual reality or augmented reality headset, said virtual content processor being configured to deliver the virtual content to the virtual reality or augmented reality headset and to adapt the delivered virtual content as a function of the swallowing signal received from the processor for processing the swallowing signal.
2. The system according to claim 1, wherein the sensor for detecting swallowing is a microphone for detecting a swallowing sound or an accelerometer for detecting a swallowing movement.
3. The system according to claim 1, wherein the device for detecting swallowing further comprises at least one sensor among heart rate, body temperature, sweating, breathing sound, respiratory rate, and muscular activity sensors.
4. The system according to claim 1, wherein the characterisation of the swallowing signal by the processor for processing the swallowing signal comprises a classification of the swallowing signal, wherein the processor for processing the swallowing signal is further configured to send to the virtual content processor the class in which the swallowing signal has been classified and wherein the adaptation of the virtual content by the virtual content processor is realised as a function of the class received.
5. The system according to claim 1, wherein the virtual content comprises a food component and wherein the swallowing signal is classified in a class corresponding to correct swallowing or in a class corresponding to incorrect swallowing and wherein: if the class received by the virtual content processor is a class corresponding to correct swallowing, the virtual content processor is configured to adapt the virtual content delivered to the augmented reality or virtual reality headset by increasing a size and/or a texture level of the food component comprised in the delivered virtual content and by sending the adapted virtual content to the virtual reality or augmented reality headset; if the class received by the virtual content processor is a class corresponding to incorrect swallowing, the virtual content processor is configured to adapt the virtual content delivered to the augmented reality or virtual reality headset by decreasing the size and/or the texture level of the food component comprised in the delivered virtual content and by sending the adapted virtual content to the virtual reality or augmented reality headset.
6. The system according to claim 1, wherein, if the virtual content processor is configured in a rehabilitation mode and if the class received by the virtual content processor is a class corresponding to incorrect swallowing, the virtual content processor is configured to not carry out the adaptation of the virtual content and to deliver the same virtual content to the virtual reality or augmented reality headset.
7. A method for assisting in the simulation of a swallowing of a patient, the method comprising: sending a virtual content by a virtual content processor to a virtual reality or augmented reality headset; displaying the virtual content by the virtual reality or augmented reality headset; measuring at least one swallowing signal by a device for detecting swallowing; sending the swallowing signal by the device for detecting swallowing to a processor for processing the swallowing signal; classifying the swallowing signal by the processor for processing the swallowing signal; sending, by the processor for processing the swallowing signal, to the virtual content processor, the class in which the swallowing signal has been classified; adapting, by the virtual content processor, the virtual content delivered to the augmented reality or virtual reality headset as a function of the class received.
8. The method for assisting in the simulation of the swallowing of a patient according to claim 7, wherein the virtual content comprises a food component, wherein, at the classification step, the swallowing signal is classified in a class corresponding to correct swallowing or in a class corresponding to incorrect swallowing and wherein the adaptation of the virtual content by the virtual content processor comprises the following sub-steps: if the class received by the virtual content processor is a class corresponding to correct swallowing: a sub-step of increasing, by the virtual content processor, the virtual content delivered to the augmented reality or virtual reality headset by increasing a size and/or a texture level of the food component comprised in the delivered virtual content and by sending the adapted virtual content to the virtual reality or augmented reality headset, if the class received by the virtual content processor is a class corresponding to incorrect swallowing: a sub-step of decreasing, by the virtual content processor, the virtual content delivered to the augmented reality or virtual reality headset by decreasing the size and/or the texture level of the food component comprised in the delivered virtual content and by sending the adapted virtual content to the virtual reality or augmented reality headset.
9. The method for assisting in the simulation of the swallowing of a patient according to claim 8, wherein, if the virtual content processor is configured in a rehabilitation mode and if the class received by the virtual content processor is a class corresponding to incorrect swallowing, the adaptation of the virtual content is not carried out and the same virtual content is delivered to the virtual reality or augmented reality headset.
Description
BRIEF DESCRIPTION OF THE FIGURES
[0054] The figures are presented for indicative purposes and in no way limit the invention.
DETAILED DESCRIPTION
[0059] The figures are presented for indicative purposes and in no way limit the invention.
[0060] Unless stated otherwise, a same element appearing in the different figures has a single reference.
[0062] As represented in
[0063] The system 1 for assisting in the simulation of swallowing of a patient according to the invention comprises the virtual reality or augmented reality headset 11, the device for detecting swallowing 12, the processor for processing the swallowing signal 13 and the virtual content processor 14.
[0064] The system 1 is “non-invasive” in that it makes it possible to carry out swallowing exercises without food intake, that is to say without having to swallow boluses.
[0065] Virtual reality consists in immersing a user of a virtual reality headset in a virtual environment. To do so, the virtual reality headset uses stereoscopy, creating a three-dimensional environment in which the user of the virtual reality headset can move about. A virtual reality headset displays virtual content in three dimensions, in a stereoscopic manner, for example by using two screens, one for each eye of the user, as implemented by the “Oculus Rift®” or the “HTC Vive®” virtual reality headsets, or for example on a screen divided into two parts, one part for each eye of the patient 10, as proposed by the “Samsung Gear VR®” virtual reality headset. Virtual reality headsets may be associated with virtual joysticks to enable the user to interact with the virtual environment created.
[0066] Augmented reality consists in superimposing virtual elements on the real environment of a user of an augmented reality headset. To do so, the augmented reality headset takes one or more images of the real environment of the user, for example using one or more cameras situated on the augmented reality headset, to digitally recreate the real environment of the user. Next, the augmented reality headset superimposes on the captured images a virtual content in two or three dimensions with which the user of the augmented reality headset can interact. Certain augmented reality headsets display on two screens, one for each eye, the images of the real environment taken by the cameras of the headset as well as the virtual content superimposed on the real environment. The most recent augmented reality headsets, such as the “Microsoft HoloLens®” or smart glasses type headsets, only display the virtual content to be superimposed, on “waveguide” type displays, thus displaying the virtual content on the real environment without retransmitting the real environment on a screen. Indeed, “waveguide” type displays are transparent, the user thus being able to see the real environment through them. In such augmented reality headsets, the cameras are still present to calculate the position of the virtual content to be superimposed relative to the real environment. Augmented reality headsets may be associated with joysticks for interacting with the virtual content, and/or may detect movements of the arms and hands of the user to allow more natural interaction with the virtual content.
[0067] The virtual reality or augmented reality headset 11 is a headset configured to display a virtual content to the patient 10. This virtual content may be superimposed on the real environment when the headset 11 used is an augmented reality headset, or it may be included in the virtual environment created when the headset 11 used is a virtual reality headset.
[0069] In
[0070] The virtual reality or augmented reality headset 11 comprises a display device 111, an audio content streaming device 112 and a processor 113.
[0071] The display device 111 enables the display of the virtual video content to the user of the headset 11 and may comprise two screens, one for each eye, to produce a stereoscopic display. These two screens may be liquid crystal display (LCD) screens, or “waveguide” type screens as described previously. The display device 111 may comprise only one screen divided into two parts, one for each eye.
[0072] The audio content streaming device 112 enables the streaming of audio content to the user, in relation with the virtual video content displayed to the user of the headset 11 by the display device 111. The audio content streaming device 112 may comprise one or more loudspeakers, one or more headphones, or any other type of device enabling audio streaming. Alternatively, the virtual reality or augmented reality headset 11 may comprise no audio content streaming device 112.
[0073] The processor 113 of the virtual reality or augmented reality headset is configured to produce an image displayable on the display device 111, to superimpose virtual content on a real or virtual environment and to receive a virtual content and/or a command comprising an indication of a virtual content to display from the virtual content processor 14. To do so, the processor 113 of the virtual reality or augmented reality headset 11 is connected to the virtual content processor 14; this connection may be wired or wireless.
[0074] The device for detecting swallowing 12 of the system 1 according to the invention comprises at least one sensor for detecting swallowing 121 configured to measure a swallowing signal of the patient 10. This sensor for detecting swallowing 121 may for example be an accelerometer situated at the level of the larynx. It can then measure a swallowing signal of the patient 10, for example a signal of laryngeal movement corresponding to swallowing or any other movement making it possible to characterise swallowing. The sensor for detecting swallowing 121 may for example be a microphone, the swallowing signal of the patient 10 measured then being a laryngeal sound, or any other sound making it possible to characterise swallowing. The sensor for detecting swallowing 121 may be any sensor capable of measuring a swallowing signal making it possible to characterise swallowing of the patient 10. Further, the device for detecting swallowing 12 may comprise a plurality of sensors for detecting swallowing 121, for example a combination of a microphone and an accelerometer in order to improve the precision and the reliability of swallowing detection.
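For purely illustrative purposes, the combination of an accelerometer and a microphone mentioned above may be sketched as follows. The energy feature, the thresholds and the function names are assumptions chosen for demonstration only and do not describe the actual detection algorithm of the device 12.

```python
# Illustrative sketch (assumption, not the patent's algorithm): a swallow is
# flagged only when both the laryngeal movement signal (accelerometer) and the
# laryngeal sound signal (microphone) show sufficient energy, which improves
# the reliability of detection compared to a single sensor.

def rms(samples):
    """Root-mean-square energy of a window of samples."""
    return (sum(s * s for s in samples) / len(samples)) ** 0.5

def swallow_detected(accel_window, mic_window,
                     accel_threshold=0.5, mic_threshold=0.3):
    """Flag a swallow only when both sensors agree."""
    return rms(accel_window) > accel_threshold and rms(mic_window) > mic_threshold

# Example: a burst of laryngeal movement together with a swallowing sound.
print(swallow_detected([0.9, -0.8, 1.1, -0.7], [0.5, -0.6, 0.4, -0.5]))  # True
```

In practice the thresholds would be calibrated per patient and per sensor placement; the two-sensor conjunction is one simple way to reduce false detections from speech or head movement.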
[0075] The device for detecting swallowing 12 may for example be a collar device for detecting swallowing, as represented in
[0076] Further, the device for detecting swallowing 12 may comprise at least one sensor among a heart rate sensor 122, a body temperature sensor 123, a sweating sensor 124, a breathing sound sensor 125, a respiratory rate sensor 126, a muscular activity sensor (not represented). The device for detecting swallowing 12 represented in
[0077] The device for detecting swallowing 12 further comprises a processor 127, configured to receive data coming from the sensors 121 to 126 and to transmit said data to the processor for processing the swallowing signal 13 with which it is interfaced.
[0078] The processor for processing the swallowing signal 13 is configured to process the swallowing signal, that is to say a data exchange A represented in
[0079] Once classified, the signal received from the device for detecting swallowing 12 by the processor for processing the swallowing signal 13 is sent, with its classification, by the processor for processing the swallowing signal 13 to the virtual content processor 14 in a data exchange B represented in
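For purely illustrative purposes, the classification described above may be sketched as follows. The features, the thresholds and the class labels are assumptions for demonstration; the patent does not specify a particular classifier, and a real implementation could use any trained model.

```python
# Illustrative sketch (assumption): classifying a measured swallowing signal
# into a "correct" or "incorrect" class from two toy features, before the
# class is sent on to the virtual content processor.

def extract_features(signal):
    """Toy features: peak amplitude and duration (number of samples)."""
    return {"peak": max(abs(s) for s in signal), "duration": len(signal)}

def classify_swallow(signal, peak_min=0.4, duration_max=40):
    """Return the class label sent to the virtual content processor."""
    f = extract_features(signal)
    if f["peak"] >= peak_min and f["duration"] <= duration_max:
        return "correct"
    return "incorrect"
```

Here a swallow is deemed “correct” when it is strong enough and not abnormally prolonged; any richer feature set or learned classifier could be substituted without changing the data exchanges A and B described above.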
[0080] The virtual content processor 14 connected to the processor for processing the swallowing signal 13 and to the virtual reality or augmented reality headset 11 is configured to deliver a virtual content to the virtual reality or augmented reality headset 11 and to adapt the delivered virtual content as a function of the swallowing signals received from the processor for processing the swallowing signal 13. Thus, in the data exchange B represented in
[0082] In
[0083] As represented in
[0084] The virtual content 20 is delivered to the virtual reality or augmented reality headset 11 by the virtual content processor 14 in a data exchange C represented in
[0085] On reception of the classification of the swallowing signal only, or of the classification and the swallowing signal, the virtual content processor 14, knowing the size and the texture of the virtual food component delivered previously, can then adapt the virtual content delivered to the virtual reality or augmented reality headset 11 on the basis of the classification of the swallowing signal received. For example, if the virtual content processor 14 receives a classification of the swallowing signal corresponding to “correct” swallowing, then the virtual content processor 14 adapts the virtual content delivered to the virtual reality or augmented reality headset 11, for example by increasing the size and/or the texture level of the food component 21. It then delivers a new virtual content comprising the adapted food component 21, so that the system 1 determines the swallowing response of the patient 10 to this new food component 21, which is more difficult to swallow. If the virtual content processor 14 receives a classification of the swallowing signal corresponding to “incorrect” swallowing, then the virtual content processor 14 adapts the virtual content delivered to the virtual reality or augmented reality headset 11, for example by decreasing the size and/or the texture level of the food component 21, or by re-proposing the same food component 21 to analyse whether the swallowing response to the preceding proposition was a one-off error. To determine whether it is necessary to decrease the size and/or the texture level or to re-propose the same food component 21, the virtual content processor 14 may comprise modes, for example an “examination” mode corresponding to the examination of swallowing, in which the size and/or the texture level are decreased, and a “rehabilitation” mode, in which the same virtual food component 21 is re-proposed to the patient 10 until said patient achieves correct swallowing of this virtual food component 21.
Thus, in the “examination” mode, it is possible to examine the dysphagia level of the patient 10 automatically, the “threshold” size and texture of the food component 21 beyond which the patient predominantly performs incorrect swallowing corresponding to a determined dysphagia level. In the “rehabilitation” mode, it is also possible to analyse the evolution and the progression of the patient 10 in his rehabilitation exercises. The mode of the virtual content processor 14 may be modified by the reception of a change of mode command, sent for example by the practitioner or the patient 10 himself, for example via a computer or any other electronic device connected to the virtual content processor 14.
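For purely illustrative purposes, the adaptation rule described above, including the two modes, may be sketched as follows. The unit step sizes and names are assumptions for demonstration.

```python
# Illustrative sketch of the adaptation rule: correct swallowing increases the
# difficulty of the virtual food component; incorrect swallowing decreases it
# in "examination" mode, or re-proposes the same component in "rehabilitation"
# mode. Step sizes of 1 are an assumption.

def adapt_content(size, texture_level, swallow_class, mode="examination"):
    """Return the (size, texture_level) of the next virtual food component."""
    if swallow_class == "correct":
        # Correct swallowing: propose a component that is harder to swallow.
        return size + 1, texture_level + 1
    if mode == "rehabilitation":
        # Rehabilitation: re-propose the same component until it is swallowed.
        return size, texture_level
    # Examination: step back down to bracket the patient's threshold.
    return max(size - 1, 0), max(texture_level - 1, 0)

print(adapt_content(3, 2, "correct"))                           # (4, 3)
print(adapt_content(3, 2, "incorrect", mode="rehabilitation"))  # (3, 2)
print(adapt_content(3, 2, "incorrect", mode="examination"))     # (2, 1)
```

In “examination” mode this up/down stepping converges around the threshold size and texture beyond which the patient predominantly fails, which corresponds to the dysphagia level mentioned above.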
[0086] Further, the virtual content processor 14 can adapt the virtual content 20 that it delivers to the virtual reality or augmented reality headset 11 on reception of a command to adapt the virtual content. This command may for example be received via a communication network to which the virtual content processor is connected. A practitioner or the patient 10 himself may be the originator of this command, for example by sending it from a computer or any other electronic device connected to the communication network or directly connected to the virtual content processor 14. This command may contain an indication of the size of the virtual food component 21 to deliver to the virtual reality or augmented reality headset 11, of its texture level, of a combination of the size and the texture level of the virtual food component 21, or of the type of virtual food component 21. This indication may be a precise value of the size or texture level of the virtual food component 21 to deliver, or an indication that the size or texture level is larger than, smaller than, or equal to that of the virtual food component 21 delivered previously.
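For purely illustrative purposes, the handling of such a command may be sketched as follows. The command format (a dictionary carrying either a precise value or a relative indication per attribute) is an assumption; the patent does not specify an encoding.

```python
# Illustrative sketch (assumed command format): a command may carry, for the
# "size" and/or "texture" of the virtual food component, either a precise
# value or a relative indication ("larger", "smaller", "equal").

def apply_command(current, command):
    """current and command are dicts with optional 'size'/'texture' entries."""
    updated = dict(current)
    for key in ("size", "texture"):
        if key not in command:
            continue
        value = command[key]
        if value == "larger":
            updated[key] = current[key] + 1
        elif value == "smaller":
            updated[key] = max(current[key] - 1, 0)
        elif value == "equal":
            updated[key] = current[key]
        else:  # a precise value
            updated[key] = value
    return updated

print(apply_command({"size": 3, "texture": 2}, {"size": "larger"}))  # {'size': 4, 'texture': 2}
print(apply_command({"size": 3, "texture": 2}, {"texture": 5}))      # {'size': 3, 'texture': 5}
```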
[0087] Since the food component 21 proposed to the patient 10 is virtual, the patient 10 is not physically tired by the swallowing examination and/or the rehabilitation exercises that he carries out, notably by exploiting the fact that certain swallowing phases are reflex phases, which the patient 10 cannot control, and which are triggered following the oral phase of swallowing, which is a voluntary phase. Thus, when the patient 10 brings to his mouth the virtual food component 21 of determined texture and size, which he can see, he carries out the oral phase voluntarily and the following swallowing phases in a reflex manner. This allows a better simulation of swallowing without the patient having to swallow multiple boluses of different sizes and textures, and makes it possible to lower the costs of such exercises. Further, this allows the patient 10 to reduce the impact of the stress linked to these exercises on their result, notably by putting him in favourable conditions thanks to a virtual environment and to the absence of real food.
[0088] The virtual content processor 14 can further deliver a virtual content comprising several food components 21 to the virtual reality or augmented reality headset 11, in order to leave the choice to the patient 10 of the food component(s) 21 that he wishes to swallow.
[0089] In another embodiment, the virtual content displayed to the patient may not comprise any food component 21 but may entice the patient to carry out manoeuvres or to adopt positions that facilitate swallowing. These manoeuvres or positions may for example be of “effortful swallow”, “chin tuck” or “supraglottic swallow” type known to those skilled in the art. These manoeuvres may be adapted as a function of the signals received, for example by modifying the technique to perform or by proposing another technique to perform if the preceding technique has indeed been carried out.
[0090] In an alternative embodiment, the virtual content displayed to the patient 10 by the virtual reality or augmented reality headset 11 is a video game. Thus, the system 1 according to the invention may use the swallowing signals, notably the reflex phases of swallowing, to adapt the content of the video game as a function of the measured swallowing signals. For example, when frequent swallowing is measured by the device for detecting swallowing 12, the processor for processing the swallowing signal 13 can classify these swallowing signals in a “stress” or “serene” class and transmit this classification as well as the swallowing signals to the virtual content processor 14, which then adapts the content of the video game to the state of the user of the virtual reality or augmented reality headset 11 and of the device for detecting swallowing 12. For example, on detection of a state of stress of the player, notably thanks to the swallowing signals, the virtual content processor 14 can adapt the game by proposing a more distressing or less distressing content as a function of the desired effect on the player. The virtual content of the video game delivered by the virtual content processor 14 may comprise a food component 21.
[0091] The system 1 according to the invention may also be used for diet-linked disorders. For example, the system 1 may display different types of food components 21 to the patient 10 and analyse their attractiveness by analysing the swallowing of the patient 10 on viewing these virtual food components 21, thanks to the device for detecting swallowing 12. When an attractiveness is detected for a certain type of food that the patient 10 no longer wishes to consume or must no longer consume, the virtual content processor 14 can adapt the virtual content delivered to the virtual reality or augmented reality headset 11 in order to propose a negative experience in relation with this food component 21 and thus decrease its attractiveness.
[0093] The method 40 for assisting in the simulation of the swallowing of a patient 10 according to the invention is implemented by the system 1 according to the invention and comprises a first step 41 of sending a virtual content 20 by the virtual content processor 14 to a virtual reality or augmented reality headset 11 in a data exchange C represented in
[0098] Further, the step 47 of adaptation of the virtual content of the method 40 may not be carried out if the virtual content processor 14 is configured in a “rehabilitation” mode and if the class received by the virtual content processor 14 is a class corresponding to incorrect swallowing, the same virtual content 21 then being delivered to the virtual reality or augmented reality headset 11.
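For purely illustrative purposes, the overall loop of method 40 (delivering content, measuring a swallow, classifying it, and adapting the content, with the “rehabilitation” behaviour described above) may be sketched as follows. The sensors, classifier and headset are stubbed out as function parameters; all names and step sizes are assumptions.

```python
# Illustrative end-to-end sketch of method 40: for each measured swallowing
# signal, the virtual content is delivered and displayed, the signal is
# classified, and the content is adapted as a function of the class received.

def run_session(signals, classify, display, size=1, texture=1,
                mode="examination"):
    """Loop over measured signals; return the final (size, texture) level."""
    for signal in signals:        # step of measuring a swallowing signal
        display(size, texture)    # steps of sending and displaying the content
        cls = classify(signal)    # steps of sending and classifying the signal
        # adaptation step, as a function of the class received
        if cls == "correct":
            size, texture = size + 1, texture + 1
        elif mode == "examination":
            size, texture = max(size - 1, 0), max(texture - 1, 0)
        # in "rehabilitation" mode, incorrect swallowing leaves the
        # content unchanged, so the same component is re-proposed
    return size, texture

# Example with a stub classifier that passes the label through unchanged.
final = run_session(["correct", "incorrect", "correct"],
                    classify=lambda s: s, display=lambda s, t: None)
print(final)  # (2, 2)
```

Under these assumptions, a session log of (size, texture) over time directly traces the patient's progression in “rehabilitation” mode, or brackets the dysphagia threshold in “examination” mode.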