DEVICE FOR MONITORING THE HEALTH OF A PERSON
20200155033 · 2020-05-21
Inventors
CPC classification
A61B5/091
HUMAN NECESSITIES
A63B2220/833
HUMAN NECESSITIES
A63B2225/50
HUMAN NECESSITIES
A61B5/0816
HUMAN NECESSITIES
A63B2225/20
HUMAN NECESSITIES
A61B2562/0219
HUMAN NECESSITIES
A61B5/0876
HUMAN NECESSITIES
A63B71/0622
HUMAN NECESSITIES
International classification
A61B5/08
HUMAN NECESSITIES
A61B5/091
HUMAN NECESSITIES
Abstract
A first aspect of the invention provides a device for monitoring the health of a person, comprising: a frame configured to be worn within the mouth of the person; a microphone, mounted to or within the frame, for measuring sound data; and an accelerometer, mounted to or within the frame, for measuring acceleration data; wherein the sound data is for determining breathing data relating to the person's breathing and the acceleration data is for determining impact data relating to an impact experienced by the person.
Claims
1. A device for monitoring the health of a person, comprising: a frame configured to be worn within the mouth of the person; a microphone, mounted to or within the frame, for measuring sound data; and an accelerometer, mounted to or within the frame, for measuring acceleration data, wherein the sound data is for determining breathing data relating to the person's breathing and the acceleration data is for determining impact data relating to an impact experienced by the person.
2. The device of claim 1, further comprising: a processor, mounted to or within the frame, for processing the sound data and the acceleration data.
3. The device of claim 2, wherein the processor is configured to determine the breathing data relating to the person's breathing based on the sound data.
4. The device of claim 3, wherein the breathing data comprises breathing rate of the person and/or breathing volume of the person.
5. The device of claim 3, wherein the breathing data is time series data.
6. The device of claim 2, wherein the processor is configured to determine the impact data relating to a physical impact experienced by the person based on the acceleration data.
7. The device of claim 6, wherein the impact data comprises the amplitude of acceleration of the person and/or the direction of acceleration of the person.
8. The device of claim 6, wherein the impact data is time series data.
9. The device of claim 6, wherein the processor is configured to compare the breathing data and the impact data.
10. The device of claim 2, wherein the processor is configured to determine the occurrence of abnormal breathing or the cessation of breathing caused by an impact to the person.
11. The device of claim 1, wherein the frame is configured to surround one or more of the person's teeth.
12. The device of claim 11, wherein the frame is configured to fit conformally with the teeth of the person.
13. The device of claim 1, wherein the frame is formed from an energy absorbing material.
14. The device of claim 13, wherein components mounted within the frame are embedded within the energy absorbing material.
15. The device of claim 14, wherein the frame comprises a communicating channel between the microphone and the environment surrounding the device.
16. The device of claim 1, wherein the device is a mouth guard.
17. The device of claim 1, further comprising: a communication device, mounted to or within the frame, for transmitting data.
18. The device of claim 17, wherein the communication device transmits data wirelessly.
19. The device of claim 18, wherein the communication device comprises at least one light source.
20. The device of claim 17, wherein one or more of the sound data, the acceleration data, the processed sound data and the processed acceleration data are transmitted.
21. A system for monitoring the health of a person comprising: a first device according to claim 1; and a second device, separate from the first device, configured to receive data from the communication device of the first device, the second device comprising a second processor for processing the received data.
22. The system of claim 21, wherein the second processor is configured to determine the breathing data relating to the person's breathing based on the received data.
23. The system of claim 22, wherein the breathing data comprises breathing rate of the person and breathing volume of the person.
24. The system of claim 22, wherein the breathing data is time series data.
25. The system of claim 21, wherein the second processor is configured to determine the impact data relating to a physical impact experienced by the person based on the received data.
26. The system of claim 25, wherein the impact data comprises the amplitude of acceleration of the person and/or the direction of acceleration of the person.
27. The system of claim 25, wherein the impact data is time series data.
28. The system of claim 25, wherein the second processor is configured to compare the breathing data and the impact data.
29. The system of claim 21, wherein the second processor is configured to determine the occurrence of abnormal breathing or the cessation of breathing caused by an impact to the person.
30. The system of claim 21, wherein the communication device of the first device comprises at least one light source, the second device comprises an imaging system for obtaining image data corresponding to an image of the light source, and the second processor is configured to process the image data to determine the data transmitted by the at least one light source.
31. A method of monitoring the health of a person comprising: having the person wear the device of claim 1; processing the sound data measured by the device to determine breathing data relating to the person's breathing; and processing the acceleration data measured by the device to determine impact data relating to a physical impact experienced by the person.
Description
[0022] Further features and advantages of the present invention are described below by way of non-limiting example with reference to the accompanying drawings.
[0031] A communication channel may be provided in the frame 2 between the microphone and the environment surrounding the device. For example, the channel may be a small hole formed, e.g. drilled, in the frame 2. This allows the microphone to more easily pick up sound from the mouth of the wearer.
[0032] The microphone 11 of the device 1 measures sound within the mouth of the wearer of the device 1. The sound within the mouth of the wearer includes sound caused by the wearer's breath during both inhalation and exhalation. However, other sounds may also be picked up by the microphone 11. The data collected by the microphone 11 may be the amplitude of the sound within the mouth of the wearer over a period of time. The microphone 11 may continuously monitor the sound within the mouth of the wearer, e.g. sample the sound at regular intervals at a suitable sampling rate. The sound data may be suitably filtered to extract a signal representative of the breathing of the wearer. In other words, filters may be used to remove noise, e.g. sound from the environment external to the wearer. The collected sound data may be stored in a memory, e.g. a memory of the processor 13.
[0033] Based on the sound data, the processor 13 may determine breathing data relating to the wearer's breathing. The breathing data may include the breathing rate of the wearer. For example, this can be determined based on a wavelet analysis and/or Fourier transform of the sound data, and/or another suitable method. Alternatively, or additionally, the breathing data may include the breathing volume (e.g. an estimate of tidal volume) of the wearer, i.e. the volume of air inhaled and/or exhaled by the wearer. This may be determined based on the amplitude of the sound data corresponding to each breath and/or the duration of each breath. The breathing data may be in the form of a time series, which allows changes, e.g. in the breathing rate and breathing volume of the wearer, to be monitored over a period of time. The breathing data may be stored in a memory of the processor 13, for example.
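The Fourier-transform approach mentioned above can be sketched as follows. This is an illustrative sketch only, not the patented implementation; the function name, the sampling rate, and the 0.1-1.0 Hz breathing band are assumptions.

```python
import numpy as np

def breathing_rate_bpm(envelope, fs):
    """Estimate breathing rate (breaths/min) as the dominant frequency
    of a sound-amplitude envelope within a plausible breathing band."""
    x = np.asarray(envelope, dtype=float)
    x = x - x.mean()                              # remove the DC offset
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    # restrict to an assumed breathing band: 0.1-1.0 Hz (6-60 breaths/min)
    band = (freqs >= 0.1) & (freqs <= 1.0)
    peak = freqs[band][np.argmax(spectrum[band])]
    return peak * 60.0

# synthetic envelope: 0.25 Hz breathing (15 breaths/min) sampled at 10 Hz
fs = 10.0
t = np.arange(0, 60, 1.0 / fs)
env = 1.0 + np.sin(2 * np.pi * 0.25 * t)
```

A wavelet analysis, also mentioned above, would instead localise the breathing frequency in time.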
[0034] The processor 13 may determine impact data relating to a physical impact experienced by the wearer based on the acceleration data collected by the accelerometer 12. The impact data determined by the processor 13 may include the amplitude of the acceleration and/or the direction of the acceleration. The acceleration data may be filtered to extract useful data. For example, accelerations below a minimum threshold force or in a particular direction may be filtered out, e.g. the low amplitude, regular accelerations caused by movements such as running may be filtered out to provide a useful signal. Therefore, the type of accelerations typically resulting from an impact can be detected more accurately. The impact data may be collected as time series data.
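The threshold-based filtering of the acceleration data described above might look like the following sketch. The 10 g threshold and the sample layout are illustrative assumptions, not values from the specification.

```python
import math

# Illustrative threshold: accelerations below this magnitude (in g) are
# treated as routine movement (e.g. running) and filtered out.
IMPACT_THRESHOLD_G = 10.0

def detect_impacts(samples, threshold_g=IMPACT_THRESHOLD_G):
    """Return (time, magnitude, direction) for each sample whose
    acceleration magnitude exceeds the impact threshold.

    `samples` is a time series of (t, ax, ay, az) tuples in g.
    """
    impacts = []
    for t, ax, ay, az in samples:
        mag = math.sqrt(ax * ax + ay * ay + az * az)
        if mag > threshold_g:
            # the unit vector gives the direction of the acceleration
            direction = (ax / mag, ay / mag, az / mag)
            impacts.append((t, mag, direction))
    return impacts

series = [
    (0.00, 0.1, 0.2, 1.0),    # normal movement
    (0.01, 0.3, 0.1, 1.1),
    (0.02, 40.0, 5.0, 12.0),  # sharp impact
    (0.03, 0.2, 0.0, 0.9),
]
```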
[0035] The processor 13 may determine when the wearer's breathing ceases or becomes abnormal. Abnormal breathing may include an abnormally high or low breathing rate, an abnormally high or low breathing volume, or an irregular breathing rate or breathing volume. The processor 13 may also determine when the wearer stops breathing, i.e. when the wearer's breathing rate is substantially zero.
[0036] The processor 13 may compare the breathing data and the impact data to determine whether abnormal breathing or cessation of breathing correlates to a detected impact experienced by the wearer. If the processor 13 determines that abnormal breathing or cessation of breathing may have been caused by an impact experienced by the wearer, one or more actions may be taken automatically. For example, a third party, e.g. a team doctor, may be notified of the determination. This may be by wireless communication from the communication device 14. Alternatively, the device 1 may include a light, e.g. an LED, which lights up in response to the processor 13 determining abnormal breathing or cessation of breathing caused by an impact to the wearer.
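One simple way to correlate the two data streams, as described above, is to check whether an abnormal-breathing event begins shortly after a detected impact. The 30-second window and the function name are hypothetical choices for illustration.

```python
def breathing_anomaly_after_impact(impact_times, abnormal_times, window_s=30.0):
    """Return True if any abnormal-breathing event starts within
    window_s seconds after any detected impact (all times in seconds).

    The 30 s window is an assumed value, not from the specification.
    """
    for t_impact in impact_times:
        for t_abnormal in abnormal_times:
            if 0.0 <= t_abnormal - t_impact <= window_s:
                return True
    return False
```

A positive result would then trigger the automatic actions described above, e.g. notifying a team doctor.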
[0038] The external device 20 may include a processor 21 and a receiver 22 for receiving the data transmitted by the device 1.
[0039] In an example, data may be transmitted from the device 1 using at least one light source as the communication device 14. The light source may be, for example, an LED or an array of LEDs. Different LEDs may transmit the different types of data discussed above. Different LEDs may have different colours. The light source may be infrared or near infrared so as to be invisible to the human eye.
[0040] The external device 20 may comprise an imaging system (e.g. as a receiver 22) for obtaining image data corresponding to an image of the light source. The processor 21 of the second device 20 may be configured to process the image data to determine the data transmitted by the at least one light source.
[0041] The light source may be switched on and off to transmit binary data. The data transmitted by the light source may be any kind of data (e.g. converted into a binary digital signal by the processor 13). Alternatively, the intensity of the light source may be modulated to provide analogue data, as opposed to binary data.
[0042] In an example, the light source can be switched on in synchronisation with the inhalation and exhalation of the wearer and switched off in the transition points between inhalations and exhalations. In this way the light source can transmit breathing rate data.
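Decoding breathing rate from the on/off pattern described above could be sketched as follows: each off-to-on transition marks the start of a half-breath (an inhalation or an exhalation), so two rising edges correspond to one full breath. The encoding details and names are illustrative assumptions.

```python
def rate_from_led_states(states, fps):
    """Estimate breathing rate (breaths/min) from per-frame LED states.

    The LED is assumed on during each inhalation and exhalation and off
    at the transitions, so each off-to-on (rising) edge starts a
    half-breath and two rising edges make one full breath.
    """
    rising = sum(1 for prev, cur in zip(states, states[1:]) if cur and not prev)
    duration_min = len(states) / fps / 60.0
    return (rising / 2.0) / duration_min

# synthetic 60 s recording at 10 frames/s: 2 s half-breaths, i.e.
# 15 breaths/min; each half-breath is 2 off-frames then 18 on-frames
led_states = ([False] * 2 + [True] * 18) * 30
```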
[0043] The transmitted data may be detected by an imaging system (e.g. a video or still-image camera) of the external device 20. The imaging system may obtain time series image data of the device 1 and the light source. The image data may then be processed by the processor 21 to determine the data transmitted from the device 1.
[0044] The image processing may comprise a step of detecting a mouth area of the wearer of the device. A facial feature recognition algorithm may be used to determine a region of the image data that corresponds to a mouth of the wearer. The algorithm may be based on a Viola-Jones object detection method. The algorithm may determine a mouth area in a next frame of the image data based on a mouth area determined in a previous frame to improve determination accuracy. The algorithm may determine the mouth area based on the relative location of a number of possible mouth areas. For example, the algorithm may detect the mouth and the eyes as possible mouth regions. Therefore, the correct mouth region can be determined based on which possible mouth region is lowest in the image frame or the distance between the possible mouth regions (the distance between the eyes is usually smaller than the distance from an eye to the mouth). The algorithm may search for the presence of a light source (e.g. based on the colour of the light source) to determine the mouth region.
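The lowest-region heuristic described above, for distinguishing the mouth from eyes detected as false positives, can be sketched as follows. The box format and function name are assumptions for illustration; in practice the candidates would come from a Viola-Jones style detector.

```python
def pick_mouth_region(candidates):
    """Choose the mouth among candidate (x, y, w, h) boxes.

    Image y-coordinates grow downward, so the candidate whose bottom
    edge (y + h) is largest is the lowest in the frame; eyes detected
    as false positives sit above the mouth.
    """
    return max(candidates, key=lambda box: box[1] + box[3])

# two eye false positives and the actual mouth region
eyes_and_mouth = [(30, 40, 20, 10), (70, 40, 20, 10), (50, 100, 30, 15)]
```

The inter-region distance check mentioned above (eye-to-eye spacing being smaller than eye-to-mouth spacing) could be layered on top as a consistency test.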
[0045] The image processing may comprise a step of filtering the image data (e.g. the determined mouth area) based on the colour of the light source. The filter may be implemented using a Kaiser Window and/or a Blackman Window, for example. The image processing may further comprise a step of thresholding the image data. For example, pixels below a predetermined threshold may be floored (e.g. assigned a value of zero) and pixels exceeding the threshold may be ceilinged.
[0046] The image processing may further comprise a step of determining whether the light source is on or off based on whether the number of non-zero pixels of the thresholded image data exceeds a predetermined threshold. If the number of non-zero pixels exceeds the threshold, then the light source may be determined to be on for that frame.
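The thresholding and on/off decision described in the two paragraphs above can be sketched as follows; the pixel and count thresholds are illustrative assumptions, and the colour filtering step is omitted for brevity.

```python
def led_state(frame, pixel_threshold, count_threshold):
    """Decide whether the light source is on in one grayscale frame.

    Pixels below pixel_threshold are floored to 0 and the remainder are
    ceilinged to 255; the LED is judged 'on' when the number of
    surviving (non-zero) pixels reaches count_threshold.
    """
    binary = [255 if p >= pixel_threshold else 0 for p in frame]
    lit = sum(1 for p in binary if p != 0)
    return lit >= count_threshold

bright_frame = [200] * 50 + [10] * 50   # LED visible in part of the frame
dark_frame = [10] * 100                 # LED off
```

Applying this per frame yields the boolean time series referred to in the next paragraph.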
[0047] Processing each frame as above can provide time-series data (e.g. breathing rate data). This time-series data can be processed further, as described above.
[0048] A light source (e.g. an LED of a different colour) may alternatively or additionally be used to transmit acceleration data obtained by the accelerometer. For example, the light source may be switched on (or off) when the wearer experiences an acceleration exceeding a particular threshold, e.g. indicating a forceful impact. The same image processing as described above may be used to determine time-series acceleration data. If different light sources are used for breathing rate data and acceleration data, the breathing rate data and the acceleration data can be compared.
[0049] Although the present invention has been described in terms of exemplary embodiments, it is not limited thereto. It should be appreciated that variations or modifications may be made to the embodiments described without departing from the scope of the present invention as defined by the claims.