CONTACTLESS BREATHING DETECTION METHOD AND SYSTEM THEREOF
20220031191 · 2022-02-03
Inventors
CPC classification
A61B5/0077
HUMAN NECESSITIES
A61B5/0816
HUMAN NECESSITIES
International classification
A61B5/08
HUMAN NECESSITIES
A61B5/00
HUMAN NECESSITIES
Abstract
A contactless breathing detection method is for detecting a breathing rate of a subject. The contactless breathing detection method includes a photographing step, a capturing step, a calculating step, and a converting step. The photographing step is performed to provide a camera to photograph the subject to generate a facial image. The capturing step is performed to provide a processor module to capture the facial image to generate a plurality of feature points. The calculating step is performed to drive the processor module to calculate the feature points according to an optical flow algorithm to generate a plurality of breathing signals. The converting step is performed to drive the processor module to convert the breathing signals to generate a plurality of power spectrums, respectively. The processor module generates an index value by calculating the power spectrums, and the breathing rate is extrapolated from the index value.
Claims
1. A contactless breathing detection method for detecting a breathing rate of a subject, the contactless breathing detection method comprising: performing a photographing step to provide a camera to photograph the subject to generate a facial image; performing a capturing step to provide a processor module to capture the facial image to generate a plurality of feature points; performing a calculating step to drive the processor module to calculate the feature points according to an optical flow algorithm to generate a plurality of breathing signals; and performing a converting step to drive the processor module to convert the breathing signals to generate a plurality of power spectrums, respectively, wherein the processor module generates an index value by calculating the power spectrums, and the breathing rate is extrapolated from the index value.
2. The contactless breathing detection method of claim 1, wherein a number of the feature points is 7, and the feature points are a midpoint of an inner eye corner, a midpoint of an outer eye corner, a midpoint of an inner-outer right eye corner, a nose-root point, a nose-tip point, a nose-base point and a lower jaw point, respectively.
3. The contactless breathing detection method of claim 1, wherein the calculating step comprises: performing a tracking step to execute the optical flow algorithm through an optical flow unit and tracking the feature points to generate a mixed signal; and performing an analyzing step to process the mixed signal with an analyzing unit to generate the breathing signals.
4. The contactless breathing detection method of claim 3, wherein the optical flow unit comprises a displacement, an X-coordinate of each of the feature points, a Y-coordinate of each of the feature points, a time parameter and the mixed signal, the displacement is expressed as D.sub.i, the X-coordinate is expressed as X.sub.Fi(t), the Y-coordinate is expressed as Y.sub.Fi(t), the time parameter is expressed as t, and the mixed signal is expressed as S and conforms to a following formula:
5. The contactless breathing detection method of claim 1, wherein the converting step comprises: performing a Fourier transform step to provide a Fourier transform unit to process the breathing signals to generate a plurality of frequency domain signals, respectively; and performing a power converting step to process the frequency domain signals through a power converting unit to generate the power spectrums.
6. The contactless breathing detection method of claim 5, wherein the power converting unit comprises a power, a real part, a variable, and an imaginary part, the power is expressed as P.sub.i, the real part is expressed as R.sub.i, the variable is expressed as u, and the imaginary part is expressed as I.sub.i and conforms to a following formula:
P.sub.i(u)=R.sub.i.sup.2(u)+I.sub.i.sup.2(u), i=1,2, . . . ,n.
7. The contactless breathing detection method of claim 5, wherein each of the frequency domain signals has a frequency, and the converting step further comprises: performing a filtering step to provide a filtering unit to filter out each of the frequency domain signals having the frequency between 0.15 Hz and 0.35 Hz.
8. A contactless breathing detection system for detecting a breathing rate of a subject, the contactless breathing detection system comprising: a camera photographing the subject to generate a facial image; and a processor module electrically connected to the camera and receiving the facial image, wherein the processor module comprises: a capturing sub-module capturing the facial image to generate a plurality of feature points; a calculating sub-module connected to the capturing sub-module and receiving the feature points, wherein the calculating sub-module calculates the feature points according to an optical flow algorithm to generate a plurality of breathing signals; and a converting sub-module connected to the calculating sub-module and receiving the breathing signals, wherein the converting sub-module converts the breathing signals to generate a plurality of power spectrums, respectively, the converting sub-module generates an index value by calculating the power spectrums, and the breathing rate is extrapolated from the index value.
9. The contactless breathing detection system of claim 8, wherein the calculating sub-module comprises an optical flow unit, the optical flow unit executes the optical flow algorithm and comprises a displacement, an X-coordinate of each of the feature points, a Y-coordinate of each of the feature points, a time parameter and a mixed signal, the displacement is expressed as D.sub.i, the X-coordinate is expressed as X.sub.Fi(t), the Y-coordinate is expressed as Y.sub.Fi(t), the time parameter is expressed as t, and the mixed signal is expressed as S and conforms to a following formula:
10. The contactless breathing detection system of claim 8, wherein the converting sub-module comprises a power converting unit, the power converting unit comprises a power, a real part, a variable, and an imaginary part, the power is expressed as P.sub.i, the real part is expressed as R.sub.i, the variable is expressed as u, and the imaginary part is expressed as I.sub.i and conforms to a following formula:
P.sub.i(u)=R.sub.i.sup.2(u)+I.sub.i.sup.2(u), i=1,2, . . . ,n.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0007] The present disclosure can be more fully understood by reading the following detailed description of the embodiment, with reference made to the accompanying drawings.
DETAILED DESCRIPTION
[0015] The embodiment will be described with reference to the drawings. For clarity, some practical details are described below. However, it should be noted that the present disclosure is not limited by these practical details; that is, in some embodiments, these practical details are unnecessary. In addition, to simplify the drawings, some conventional structures and elements are illustrated schematically, and repeated elements may be denoted by the same reference labels.
[0016] It will be understood that when an element (or device) is referred to as being “connected to” another element, it can be directly connected to the other element, or it can be indirectly connected to the other element, that is, intervening elements may be present. In contrast, when an element is referred to as being “directly connected to” another element, there are no intervening elements present. In addition, although the terms first, second, third, etc. are used herein to describe various elements or components, these elements or components should not be limited by these terms. Consequently, a first element or component discussed below could also be termed a second element or component.
[0018] The camera 110 is for photographing the face of the subject in front view to generate a facial image 111. The processor module 120 is electrically connected to the camera 110 and receives the facial image 111. The processor module 120 includes a capturing sub-module 121, a calculating sub-module 122 and a converting sub-module 123. The capturing sub-module 121 captures the facial image 111 to generate a plurality of feature points 112. The calculating sub-module 122 is connected to the capturing sub-module 121 and receives the feature points 112. The calculating sub-module 122 calculates the feature points 112 according to an optical flow algorithm (e.g., the Lucas-Kanade method) to generate a plurality of breathing signals 113. The converting sub-module 123 is connected to the calculating sub-module 122 and receives the breathing signals 113. The converting sub-module 123 converts the breathing signals 113 to generate a plurality of power spectrums, respectively. The converting sub-module 123 generates an index value by calculating the power spectrums, and the breathing rate BR is extrapolated from the index value.
[0019] Therefore, the present disclosure tracks the feature points 112 of the facial image 111 by the optical flow algorithm and converts them into the power spectrums, and the breathing rate BR is then extrapolated from the index value of the maximum peak of each of the power spectrums, so that the contactless breathing detection system 100 can measure the breathing rate BR of the subject in a contactless way.
[0024] Therefore, the present disclosure captures the feature points 112 of the facial image 111 with the processor module 120 and tracks the variation of the feature points 112 with the optical flow algorithm. Finally, the present disclosure finds the index value 113d so as to estimate the breathing rate BR of the subject.
[0025] In detail, the contactless breathing detection method S100 can be divided into two stages: the first stage includes the image capture of the face and the capture of the feature points 112 (that is, the photographing step S110 and the capturing step S120); the second stage includes the calculation and the conversion of the breathing rate BR (that is, the calculating step S130 and the converting step S140).
[0027] In detail, in the tracking step S131, a total of seven facial feature points 112 are extracted, so that n=7 in the formula (1). Using the tracking characteristics of the optical flow algorithm, the optical flow unit 1221 finds the variation (i.e., the displacement D.sub.i) of the seven feature points 112 between each previous frame and next frame in the time sequence to obtain the mixed signal 112a. The mixed signal 112a is a superposition of several different signals, including signals of the body motion, the heart rate, and the breathing rate BR.
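By way of a non-limiting illustration, the frame-to-frame displacement D.sub.i described above can be sketched as follows. The function name and the array layout are illustrative assumptions; in practice, the tracked coordinates X.sub.Fi(t) and Y.sub.Fi(t) would come from an optical flow tracker such as cv2.calcOpticalFlowPyrLK, which is not invoked here:

```python
import numpy as np

def feature_displacements(xs, ys):
    """Frame-to-frame displacement D_i for each tracked feature point.

    xs, ys: arrays of shape (T, n) holding the X- and Y-coordinates of
    n feature points over T frames (here n = 7), e.g. as returned by an
    optical flow tracker such as cv2.calcOpticalFlowPyrLK.
    Returns an array of shape (T-1, n): the Euclidean displacement of
    each point between consecutive frames, whose time series forms the
    mixed signal S.
    """
    dx = np.diff(xs, axis=0)  # X-coordinate change between frames
    dy = np.diff(ys, axis=0)  # Y-coordinate change between frames
    return np.hypot(dx, dy)   # sqrt(dx**2 + dy**2), elementwise
```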
[0028] Successively, the analyzing step S132 processes the mixed signal 112a with the analyzing unit 1222 to generate the breathing signals 113. Particularly, in order to find the frequency band matching the breathing rate BR, the analyzing unit 1222 separates the mixed signal 112a through independent component analysis (ICA) to obtain the seven separated breathing signals 113. In detail, because the human head (or face) exhibits many subtle movements, the displacement D.sub.i must be calculated to obtain the mixed signal 112a. Then, according to the principle of blind signal separation, ICA is used for a preliminary separation that decomposes the independent signal sources hidden in the mixed signal 112a, so that the signals matching the breathing rate BR can be selected.
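As a non-limiting sketch of the blind signal separation described above, a minimal FastICA (tanh nonlinearity, symmetric decorrelation) is shown below. This is a generic stand-in (the text does not specify which ICA variant the analyzing unit 1222 uses; library routines such as sklearn.decomposition.FastICA would serve the same role), and, as with any ICA, the sources are recovered only up to permutation, sign, and scale:

```python
import numpy as np

def fastica(X, n_iter=200, seed=0):
    """Minimal FastICA for blind source separation.

    X: (n_signals, n_samples) array of mixed observations.
    Returns an (n_signals, n_samples) array of estimated independent
    sources, up to permutation, sign, and scale.
    """
    X = X - X.mean(axis=1, keepdims=True)
    # Whitening: decorrelate and normalize via the covariance eigendecomposition.
    d, E = np.linalg.eigh(np.cov(X))
    K = E @ np.diag(1.0 / np.sqrt(d)) @ E.T
    Z = K @ X
    n, T = Z.shape
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((n, n))
    for _ in range(n_iter):
        # Fixed-point update: w <- E[z g(w.z)] - E[g'(w.z)] w, with g = tanh.
        G = np.tanh(W @ Z)
        Gprime = 1.0 - G**2
        W = (G @ Z.T) / T - np.diag(Gprime.mean(axis=1)) @ W
        # Symmetric decorrelation: W <- (W W^T)^(-1/2) W.
        u, s, vt = np.linalg.svd(W)
        W = u @ vt
    return W @ Z
```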
[0030] Furthermore, each of the frequency domain signals 113a has a corresponding frequency. The filtering step S142 includes providing the filtering unit 1232 to extract, from each of the frequency domain signals 113a, the band between 0.15 Hz and 0.35 Hz. The filtering unit 1232 can be a Butterworth filter that extracts the section of interest from the frequency domain signals 113a. Since the frequency of breathing lies between 0.15 Hz and 0.35 Hz (about 9 to 21 breaths per minute), the filtering unit 1232 filters out frequencies outside this range, and the remaining frequency domain signals 113b constitute the section of interest.
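As a non-limiting sketch of the band selection in the filtering step S142, the version below zeroes every frequency bin outside the 0.15 Hz to 0.35 Hz breathing band using an FFT mask. This is a self-contained stand-in; a practical implementation would instead use the Butterworth filter named above (e.g., via scipy.signal.butter):

```python
import numpy as np

def bandpass_fft(signal, fs, lo=0.15, hi=0.35):
    """Keep only spectral content between lo and hi (in Hz).

    signal: 1-D array of samples; fs: sampling rate in Hz.
    An FFT-mask stand-in for the Butterworth band-pass filter:
    every frequency bin outside [lo, hi] is zeroed, then the
    signal is transformed back to the time domain.
    """
    F = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    F[(freqs < lo) | (freqs > hi)] = 0.0  # suppress out-of-band bins
    return np.fft.irfft(F, n=len(signal))
```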
[0031] Moreover, the power converting step S143 includes processing the frequency domain signals 113b through the power converting unit 1233 to generate the power spectrums 113c, respectively. In detail, according to Fourier analysis, any physical signal can be decomposed into a discrete or continuous spectrum, and the total energy of the signal over a limited period of time is finite, so that the power spectrums 113c can be calculated from this property. Specifically, after the signal is subjected to the fast Fourier transform (FFT), the square of the real part and the square of the imaginary part of each frequency domain signal 113b are added together to obtain the power spectrums 113c.
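The power conversion described above amounts to squaring and summing the real and imaginary parts of each FFT bin, i.e. the squared magnitude of the bin, as in formula (2). A minimal sketch:

```python
import numpy as np

def power_spectrum(signal):
    """Power spectrum of formula (2): P_i(u) = R_i^2(u) + I_i^2(u).

    R_i and I_i are the real and imaginary parts of the signal's FFT,
    so each bin's power is its squared magnitude.
    """
    F = np.fft.rfft(signal)
    return F.real**2 + F.imag**2
```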
[0032] In more detail, the converting sub-module 123 can include the power converting unit 1233, an index calculating unit 1234 and a breathing rate calculating unit 1235. The power converting unit 1233 comprises a power, a real part, a variable, and an imaginary part; the power is expressed as P.sub.i, the real part is expressed as R.sub.i, the variable is expressed as u, and the imaginary part is expressed as I.sub.i, conforming to the following formula (2):
P.sub.i(u)=R.sub.i.sup.2(u)+I.sub.i.sup.2(u), i=1,2, . . . ,n (2).
[0033] Successively, in the converting step S140, a maximum power and an average power are extrapolated from the power spectrums 113c by the index calculating unit 1234. The average power is subtracted from the maximum power, the channel with the largest result is selected as the index value 113d for calculating the breathing rate BR, and the index value 113d is then imported into the breathing rate calculating unit 1235, which uses the formula (4) of the breathing rate BR to obtain the breathing rate BR of the subject, conforming to the following formulas (3) and (4):

I=argmax.sub.i(P.sub.i.sup.max−P.sub.i.sup.avg) (3).

[0034] The index value 113d is expressed as I, the breathing rate BR is expressed as Breathing Rate, P.sub.i.sup.max is the maximum power, P.sub.i.sup.avg is the average power, argmax is the function that finds the value of the variable at which the expression reaches its maximum, a denotes the difference between the maximum power P.sub.i.sup.max and the average power P.sub.i.sup.avg, and u is the variable.
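As a non-limiting sketch of the index selection described above, the channel whose maximum power exceeds its average power by the largest margin is chosen as the index value. Because formula (4) is not reproduced in the text above, the final conversion BR = 60 × f.sub.peak (breaths per minute) shown below is an assumption:

```python
import numpy as np

def breathing_rate_bpm(power_spectra, freqs):
    """Select the index channel and convert its spectral peak to a rate.

    power_spectra: (n, m) array, one power spectrum per separated channel.
    freqs: (m,) frequency axis in Hz shared by all spectra.
    The channel maximizing (max power - average power) is the index
    value I, per formula (3). The final conversion BR = 60 * f_peak
    (breaths per minute) is an assumption, since formula (4) is not
    reproduced in the source text.
    """
    score = power_spectra.max(axis=1) - power_spectra.mean(axis=1)
    I = int(np.argmax(score))                         # index value, formula (3)
    f_peak = freqs[int(np.argmax(power_spectra[I]))]  # dominant frequency (Hz)
    return 60.0 * f_peak                              # assumed conversion to BPM
```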
[0035] In summary, the present disclosure has the following advantages: First, the breathing rate of the subject can be measured in a contactless way. Second, no contact-type wearable device is needed, which reduces the cost of the detecting device.
[0036] Although the present disclosure has been described in considerable detail with reference to certain embodiments thereof, other embodiments are possible. Therefore, the spirit and scope of the appended claims should not be limited to the description of the embodiments contained herein.
[0037] It will be apparent to those skilled in the art that various modifications and variations can be made to the structure of the present disclosure without departing from the scope or spirit of the disclosure. In view of the foregoing, it is intended that the present disclosure cover modifications and variations of this disclosure provided they fall within the scope of the following claims.