SYSTEMS AND METHODS FOR EVALUATING RESPIRATORY FUNCTION USING A SMARTPHONE
20210369232 · 2021-12-02
CPC classification
A61B8/5223
HUMAN NECESSITIES
A61B8/085
HUMAN NECESSITIES
A61B5/7475
HUMAN NECESSITIES
A61B5/08
HUMAN NECESSITIES
A61B5/7264
HUMAN NECESSITIES
A61B5/6898
HUMAN NECESSITIES
International classification
A61B5/00
HUMAN NECESSITIES
Abstract
A method of estimating a number of lung function indices of an individual. The method includes: transmitting an ultrasound signal toward a chest of the individual from a speaker of a smartphone while the individual is holding the smartphone in a hand of the individual; receiving in a microphone of the smartphone a reflected signal reflected from the chest of the individual in response to the ultrasound signal; extracting a number of features from the reflected signal; and providing the number of features to a neural network regression model, wherein the neural network regression model estimates the number of lung function indices based on the number of features and based on a non-linear correlation between chest wall motion and human lung function.
Claims
1. A method of estimating a number of lung function indices of an individual, comprising: transmitting an ultrasound signal toward a chest of the individual from a speaker of a smartphone while the individual is holding the smartphone in a hand of the individual; receiving in a microphone of the smartphone a reflected signal reflected from the chest of the individual in response to the ultrasound signal; extracting a number of features from the reflected signal; and providing the number of features to a neural network regression model, wherein the neural network regression model estimates the number of lung function indices based on the number of features and based on a non-linear correlation between chest wall motion and human lung function.
2. The method of claim 1, wherein the number of features are extracted from a number of exhalation portions identified from the reflected signal.
3. The method of claim 1, wherein extracting the number of features from the reflected signal comprises: converting the reflected signal to a number of I/Q traces on a complex plane to quantify an impact of random motion of the smartphone on the reflected signal; and adaptively removing the impact by correcting for distortion caused by the random motion and calibrating the reflected signal as if it was produced by a stationary device.
4. A non-transitory computer readable medium storing one or more programs, including instructions, which when executed by a processor, causes the processor to perform the method of claim 1.
5. An apparatus for estimating a number of lung function indices of an individual, the apparatus comprising: a speaker; a microphone; and a controller coupled to the speaker and the microphone and structured and configured for: transmitting an ultrasound signal toward a chest of the individual from the speaker while the individual is holding the apparatus in a hand of the individual; receiving in the microphone a reflected signal reflected from the chest of the individual in response to the ultrasound signal; extracting a number of features from the reflected signal; and providing the number of features to a neural network regression model implemented in the apparatus, wherein the neural network regression model estimates the number of lung function indices based on the number of features and a non-linear correlation between chest wall motion and human lung function.
6. The apparatus of claim 5, wherein the number of features are extracted from a number of exhalation portions identified from the reflected signal.
7. The apparatus of claim 5, wherein extracting the number of features from the reflected signal comprises: converting the reflected signal to a number of I/Q traces on a complex plane to quantify an impact of random motion of the apparatus on the reflected signal, and adaptively removing the impact by correcting for distortion caused by the random motion and calibrating the reflected signal as if it was produced by a stationary device.
8. The apparatus of claim 5, wherein the apparatus is a smartphone.
9. A method of calculating airway mechanics and/or detecting airway obstructions in an individual, comprising: directing an ultrasound signal generated by a smartphone into the airways of the individual through an interface attached to the smartphone and coupled to a speaker of the smartphone, wherein the interface has a mouthpiece structured to be received in a mouth of the individual; receiving in a microphone of the smartphone and through the interface a reflected signal reflected from the airways of the individual in response to the ultrasound signal; filtering and denoising the reflected signal to produce an adjusted reflected signal; and isolating a mixture of signals in the adjusted reflected signal and calculating from the isolated mixture of signals the airway mechanics and/or detecting from the isolated mixture of signals the airway obstructions.
10. A non-transitory computer readable medium storing one or more programs, including instructions, which when executed by a processor, causes the processor to perform the method of claim 9.
11. An apparatus for calculating airway mechanics and/or detecting airway obstructions in an individual, the apparatus comprising: a speaker; a microphone; an interface sized and structured to convey an ultrasound signal between the speaker and the microphone and a mouth of the individual; and a controller coupled to the speaker and the microphone and structured and configured for: transmitting an ultrasound signal from the speaker into the airways of the individual through the interface; receiving from the microphone a reflected signal from the interface reflected from the airways of the individual in response to the ultrasound signal; filtering and denoising the reflected signal to produce an adjusted reflected signal; and isolating a mixture of signals in the adjusted reflected signal and calculating from the isolated mixture of signals the airway mechanics and/or detecting from the isolated mixture of signals the airway obstructions.
12. The apparatus of claim 11, wherein the apparatus is a smartphone.
13. The apparatus of claim 12, wherein the interface comprises: an interface tube having a first end and a second end opposite the first end, the second end being sized and configured to be placed in the mouth of the individual; and an adaptor having a first portion selectively engaged on and around an end of the smartphone so as to encompass the speaker and the microphone and a second portion coupled to the first end of the interface tube.
14. The apparatus of claim 13, wherein the interface tube further includes a stopper portion structured to position the second end of the interface tube a predetermined distance from a front tooth of the individual.
15. The apparatus of claim 13, wherein the interface tube includes an opening defined therein between the first end and the second end thereof.
16. The apparatus of claim 13, wherein the adaptor comprises one or more of an auxiliary speaker and an auxiliary microphone in communication with the controller.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0022] A full understanding of the invention can be gained from the following description of the preferred embodiments when read in conjunction with the accompanying drawings.
DETAILED DESCRIPTION OF EXEMPLARY EMBODIMENTS
[0044] As used herein, the singular form of “a”, “an”, and “the” include plural references unless the context clearly dictates otherwise.
[0045] As used herein, the statement that two or more parts or components are “coupled” shall mean that the parts are joined or operate together either directly or indirectly, i.e., through one or more intermediate parts or components, so long as a link occurs.
[0046] As used herein, “directly coupled” shall mean that two elements are directly in contact with each other.
[0047] As used herein, the term “number” shall mean one or an integer greater than one (i.e., a plurality).
[0048] As used herein, the term “connected” shall mean that elements are electrically connected such that signals may pass from one of the elements to the other.
[0049] As used herein, the term “controller” shall mean a programmable analog and/or digital device (including an associated memory part or portion) that can store, retrieve, execute and process data (e.g., software routines and/or information used by such routines), including, without limitation, a field programmable gate array (FPGA), a complex programmable logic device (CPLD), a programmable system on a chip (PSOC), an application specific integrated circuit (ASIC), a microprocessor, a microcontroller, a programmable logic controller, or any other suitable processing device or apparatus. The memory portion can be any one or more of a variety of types of internal and/or external storage media such as, without limitation, RAM, ROM, EPROM(s), EEPROM(s), FLASH, and the like that provide a storage register, i.e., a non-transitory machine readable medium, for data and program code storage such as in the fashion of an internal storage area of a computer, and can be volatile memory or nonvolatile memory.
[0050] As used herein, the terms “component” and “system” are intended to refer to a computer related entity, either hardware, a combination of hardware and software, software, or software in execution. For example, a component can be, but is not limited to being, a process running on a processor, a processor, an object, an executable, a thread of execution, a program, and/or a computer. By way of illustration, both an application running on a server and the server can be a component. One or more components can reside within a process and/or thread of execution, and a component can be localized on one computer and/or distributed between two or more computers. While certain ways of displaying information to users are shown and described with respect to certain figures or graphs as screenshots, those skilled in the relevant art will recognize that various other alternatives can be employed.
[0051] Directional phrases used herein, such as, for example and without limitation, top, bottom, left, right, upper, lower, front, back, and derivatives thereof, relate to the orientation of the elements shown in the drawings and are not limiting upon the claims unless expressly recited therein.
[0052] The disclosed concept will now be described, for purposes of explanation, in connection with numerous specific details in order to provide a thorough understanding of the subject invention. It will be evident, however, that the disclosed concept can be practiced without these specific details without departing from the spirit and scope of this innovation.
[0053] Embodiments of the present invention utilize an electronic apparatus 10 such as shown schematically in
[0054] Referring now to
[0055] In order to ensure accurate spirometry tests, the error of measuring the chest wall motion by apparatus 10 should be at most a few millimeters. Current acoustic sensing systems achieve such accuracy using an apparatus that remains stationary, e.g., when monitoring humans' breathing during sleep or tracking the motion of small targets (e.g., human fingers). In contrast to such approaches, embodiments of the present invention enable daily spirometry tests anytime and anywhere by a user (i.e., the patient) holding apparatus 10 in one or both of their hands. When hand-held, the received ultrasound signal can be easily affected by random motions of the apparatus 10. As discussed in detail below, to identify the impact of these random motions, the received signal from microphone 16 is converted by controller 12 to I/Q traces on the complex plane, so as to quantify such impact as geometric distortions of the I/Q trace. Then, such impact is adaptively removed by correcting such distortions and calibrating the received signal as if it were produced by a stationary apparatus.
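The conversion of the received signal into I/Q traces described above can be sketched as quadrature demodulation of a single continuous tone. The function below is an illustrative assumption (the text does not disclose the demodulation details); it uses a simple moving-average low-pass filter to suppress the double-frequency mixing product.

```python
import numpy as np

def to_iq(rx, f, fs, win=101):
    """Demodulate a received tone at frequency f (Hz) into a complex I/Q trace.

    rx: received microphone samples; fs: sampling rate (Hz).
    A moving-average low-pass filter of length `win` removes the 2f
    component left after mixing, leaving the slowly varying baseband phase.
    """
    t = np.arange(len(rx)) / fs
    i = rx * np.cos(2 * np.pi * f * t)    # in-phase mixing
    q = -rx * np.sin(2 * np.pi * f * t)   # quadrature mixing
    kernel = np.ones(win) / win
    i_lp = np.convolve(i, kernel, mode="same")
    q_lp = np.convolve(q, kernel, mode="same")
    return i_lp + 1j * q_lp               # I/Q trace on the complex plane
```

For a received tone with a fixed phase offset, the angle of the demodulated trace recovers that offset (up to residual ripple from the crude filter).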
[0056] A major challenge in translating chest wall motion into lung function indices, on the other hand, is the heterogeneous human factors that may impair the data quality in spirometry tests. For example, patients may fail to follow the spirometry protocol when using apparatus 10 without guidance from clinicians. To eliminate the impact of these human factors, apparatus 10 avoids estimating lung function indices directly from chest wall motion. Instead, specific features of the chest wall motion are extracted and used as the input to a neural network regression model provided as a portion of, or as a separate component in communication with, controller 12. In particular, these motion features are extracted only from the exhalation stage of spirometry, and multiple criteria are applied to ensure that such exhalation stage can be appropriately identified. From the descriptions provided herein, it is to be appreciated that apparatus 10 provides a convenient yet cost-free tool for continuous tracking and evaluation of pulmonary diseases, which are crucial to patients' wellbeing. Apparatus 10 can also contribute to early-stage diagnosis of COVID-19 outside the clinic, and help reduce the burden on public healthcare systems in a pandemic. Some key characteristics of apparatus 10 are as follows:
[0057] Apparatus 10 is accurate. Its error in measuring chest wall motion is constrained within 4 mm. When evaluated among healthy humans, its error in lung function monitoring is always lower than 3%.
[0058] Apparatus 10 is adaptive. It can precisely remove the impact of various environmental and human factors, and allows flexible variations of its position (up to 20 cm) and orientation (up to 30° of tilting) during spirometry tests without impairing accuracy. It also adapts well to humans' body conditions, as well as to different types of clothes being worn.
[0059] Apparatus 10 is lightweight. It is contactless and does not require any extra hardware. When embodied as a smartphone, less than 15% of the smartphone's battery is consumed by one hour of usage as described herein.
[0060] Apparatus 10 is easy to use. The programming utilized by controller 12 may be implemented as an Android or iPhone app, with the spirometry tests fully automated, thus requiring minimal involvement from users.
[0061] Having thus provided a general overview of a first arrangement of the present invention, a more detailed description of the steps carried out by controller 12 in performing methods in accordance with embodiments of the present invention, as well as the overall functionality of apparatus 10 will now be provided. As an initial matter, the clinical background of spirometry will be introduced, and the correlation between lung function and chest wall positioning/movement that is utilized by the present invention will be explained.
[0062] As is known, spirometry measures how fast and how much air a patient can breathe out. Before a test starts, the patient exhales all air from their lungs. A spirometry test consists of two stages: 1) the patient first takes a full inhalation and then 2) exhales as hard and as fast as possible, until no more air can be exhaled. As shown in
[0067] In clinical practice, PEF measurements are highly variable, and clinicians mainly use the other three indices to evaluate lung function. To ensure accuracy, a patient usually completes multiple (e.g., 3-8) spirometry tests, and the maximum difference of FEV1 and FVC readings in these tests should be <0.15 L. In this way, measurement error of in-clinic spirometry is around 5%, which is the baseline to evaluate the performance of apparatus 10.
[0068] Since lung function greatly varies from one patient to another, the raw values of lung function indices are seldom used in clinic. Instead, clinicians usually categorize patients into subgroups according to their demographics (e.g., age, gender, race, etc.), and then convert the raw values of lung function indices into percentiles (% pred) over healthy people's data in the subgroup, provided by the Global Lung Function Initiative (GLI). A percentile lower than 70% can indicate the presence of COPD or other lung diseases. Changes in the percentile of FEV1 or other indices can also indicate the presence of asthma or other lung diseases. Such percentiles are used by embodiments of the present invention as indicators to measure human lung function.
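As a concrete illustration of the percent-predicted (% pred) conversion and the 70% screening threshold described above, the helpers below are hypothetical; real GLI reference values depend on the patient's demographic subgroup and are assumed here to be supplied externally.

```python
def percent_predicted(measured, gli_predicted):
    """Convert a raw lung-function index (e.g., FEV1 in liters) to a
    percentage of the GLI-predicted value for the patient's subgroup."""
    return 100.0 * measured / gli_predicted

def flags_obstruction(pct, threshold=70.0):
    """A value below ~70% of predicted can indicate COPD or another lung
    disease (a screening heuristic per the text, not a diagnosis)."""
    return pct < threshold
```

For example, a measured FEV1 of 3.0 L against a predicted 4.0 L gives 75% pred, which would not trip the 70% screening threshold.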
[0069] Correlation between lung function and chest wall motion has been clinically validated. Such correlation, as generally shown in
[0070] As shown in
[0071] Some examples of displays provided via touchscreen 18 are discussed further below in conjunction with
[0072] Since humans' chest wall displacement in spirometry is usually lower than 60 mm, the error of chest motion tracking should be at most 3-4 millimeters, so that the error of lung function estimation is within 5%. To achieve such accuracy, apparatus 10 measures the phase change between the transmitted and received ultrasound signals. Considering the transmitted signal as A cos(2πft), the phase of the received signal, after being reflected by the chest wall, is:
φ(t) = 4πf d(t)/c,
where c is the speed of sound, f is the frequency of the transmitted signal, and d(t) is the distance between the chest wall and apparatus 10 at time t. When the chest wall moves during a time period [t₀, t₁], its displacement during this period is:
Δd = d(t₁) − d(t₀) = (c/(4πf))(φ(t₁) − φ(t₀))
[0073] When the ultrasound signal's frequency ranges between 17 kHz and 24 kHz, a 2 mm displacement causes the signal path length to change by 4 mm and corresponds to a phase change between 0.4π and 0.56π, large enough to be detected. Such detectability also allows for the use of multiple signals with different frequencies in this range to further improve the tracking accuracy.
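The displacement relation above can be checked numerically; the helper names and the 343 m/s value for the speed of sound are assumptions for illustration.

```python
import math

C = 343.0  # speed of sound in air (m/s), assumed room temperature

def phase_change(displacement_m, f_hz, c=C):
    """Phase change of the reflected signal for a chest-wall displacement d:
    delta_phi = 4*pi*f*d/c (the round-trip path length changes by 2d)."""
    return 4 * math.pi * f_hz * displacement_m / c

def displacement(delta_phi, f_hz, c=C):
    """Invert the relation: delta_d = c * delta_phi / (4*pi*f)."""
    return c * delta_phi / (4 * math.pi * f_hz)
```

At 17 kHz a 2 mm displacement yields a phase change of about 0.40π, and at 24 kHz about 0.56π, consistent with the range stated in the paragraph above.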
[0074] Since the chest wall's motion is measured as its relative displacement from apparatus 10, it can easily be affected by random movements of apparatus 10 (e.g., the patient cannot keep the hand-held apparatus 10 completely stationary, and the patient's body may unconsciously lean forward or backward when exhaling hard). To remove such irrelevant movements from the measured chest wall motion, an intuitive method in embodiments in which apparatus 10 is embodied as a smartphone is to measure the smartphone's movements using the built-in accelerometers of the smartphone. However, such an approach is inaccurate due to the error accumulation when converting accelerometer readings into displacement via double integration. Instead, controller 12 investigates the abnormal characteristics of the measured chest wall motion itself. As shown in
[0075] Similarly, according to the spirometry protocol previously discussed, the patient's chest wall should be at the same position before and after a spirometry test, if the patient's body does not move during the test. The impact of body movements on the measured chest wall motion, as shown in
[0076] As shown in
[0080] Controller 12 quantifies such correlation using a neural network regression model, which is built with data from clinical studies. Since such clinical data from patients is usually low in volume and could hence result in overfitting when training the model, in one example embodiment a Bayesian regularized neural network, which generalizes well and thereby avoids overfitting, has been utilized. In a clinical study, patients performed spirometry tests using apparatus 10 and clinical-grade spirometers at the same time. As shown in
[0081] Both the accuracy and overhead of such inference depend on the complexity of the neural network. A neural network with more hidden layers and more neurons improves the inference accuracy, but also increases the computation overhead. In an example embodiment of apparatus 10, three hidden layers are used empirically in the neural network associated with controller 12; a balance between these two aspects is then struck by tuning the number of neurons in each layer. In general, it is ensured that the number of neurons per hidden layer decreases as the network becomes deeper, and details of such tuning are described further below. Another challenge of such lung function estimation is the heterogeneous human factors, which may impair the data quality in spirometry tests and make it difficult to correctly identify the exhalation stage. Ideally, the measured chest wall motion, as shown in
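The three-hidden-layer sizing with decreasing widths described above can be sketched as follows. The specific widths (32, 16, 8), the tanh activations, and the four regression outputs are illustrative assumptions, and the Bayesian regularized training procedure is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)

def init_mlp(n_features, widths=(32, 16, 8), n_out=4):
    """Three hidden layers with decreasing widths, as described in the text;
    the four outputs stand in for FVC, FEV1, FEV1/FVC and PEF estimates."""
    sizes = [n_features, *widths, n_out]
    return [(rng.normal(0.0, 0.1, (a, b)), np.zeros(b))
            for a, b in zip(sizes[:-1], sizes[1:])]

def forward(params, x):
    """Forward pass: tanh hidden activations, linear output (regression)."""
    for W, b in params[:-1]:
        x = np.tanh(x @ W + b)
    W, b = params[-1]
    return x @ W + b
```

With zero-initialized biases, a zero feature vector maps to zero outputs, which makes the shape of the network easy to sanity-check.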
[0082] Removal of irrelevant smartphone motions and patient body motions from the measured chest wall motion will now be discussed. As shown in
[0083] Segmentation: One intuitive method is to divide the I/Q trace into segments with an equal number of signal samples, but such an approach is ineffective when chest motion is measured as a phase change: as shown in
[0084] Random signal noise may be produced by hardware imperfections of smartphones or by surrounding signal sources (e.g., spinning fans), and temporarily perturbs the signal phase as shown in
[Î_c, Q̂_c, θ̂]ᵀ = (HᵀH)⁻¹HᵀY
[0085] where H is an N×3 matrix whose n-th row is [2I_n, 2Q_n, 1] and Y is an N×1 vector whose n-th entry is I_n² + Q_n²,
[0086] N is the segment's number of samples, and θ = p_c² − Î_c² − Q̂_c², with p_c being the radius of the fitted arc.
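The closed-form least-squares fit above matches the standard Kasa circle-fitting formulation, under the assumption (not spelled out in the text) that the n-th row of H is [2Iₙ, 2Qₙ, 1] and the n-th entry of Y is Iₙ² + Qₙ²; a minimal sketch:

```python
import numpy as np

def fit_arc(iq):
    """Least-squares circle fit to complex I/Q samples (Kasa method).

    Solves [Ic, Qc, theta]^T = (H^T H)^{-1} H^T Y with
    H[n] = [2*I_n, 2*Q_n, 1] and Y[n] = I_n**2 + Q_n**2, where
    theta = r**2 - Ic**2 - Qc**2. Returns the center (Ic, Qc) and radius r.
    """
    i, q = iq.real, iq.imag
    H = np.column_stack([2 * i, 2 * q, np.ones(len(i))])
    Y = i**2 + q**2
    ic, qc, theta = np.linalg.lstsq(H, Y, rcond=None)[0]
    r = np.sqrt(theta + ic**2 + qc**2)
    return ic, qc, r
```

Note that the fit needs samples spanning a non-degenerate arc; a single point or a tight cluster makes the normal equations ill-conditioned.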
[0087] Each sample is then mapped to the arc individually. The effectiveness of such correction is evaluated, as shown in
[0088] The calibration builds on the fact that the patient's chest wall should be at the same position before and after a spirometry test if there is no extra body motion during the test, because the patient is supposed to have full inhalation and exhalation in spirometry. To verify this, a digital accelerometer was attached to the patients' upper body to measure their body motions in spirometry, and the results shown in
Algorithm 1: Calibration against body movements
Input: D(t): the received ultrasound signal, t = 1 ... T; ΔD = D_after − D_before: difference of chest wall position
Output: D′(t): the calibrated chest wall motion, t = 1 ... T
1: Initialize t_start ← exhaling starts, t_end ← exhaling ends
2: Δd ← ΔD/(t_end − t_start)
3: for t_start < t ≤ t_end do
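Algorithm 1 can be sketched in code as follows. The loop body is truncated in the text, and it is assumed here that it subtracts the linearly interpolated drift Δd·(t − t_start), which makes the chest-wall positions before and after the test coincide.

```python
import numpy as np

def calibrate(D, t_start, t_end, delta_D):
    """Sketch of Algorithm 1: remove unidirectional body drift.

    delta_D = D_after - D_before is the residual chest-wall offset after
    the test. The drift is assumed to accrue linearly over the exhalation
    window (t_start, t_end] and is subtracted sample by sample; the tail
    after t_end is shifted by the full offset for consistency.
    """
    D_cal = np.asarray(D, dtype=float).copy()
    dd = delta_D / (t_end - t_start)          # per-sample drift (step 2)
    for t in range(t_start + 1, t_end + 1):   # t_start < t <= t_end (step 3)
        D_cal[t] -= dd * (t - t_start)
    D_cal[t_end + 1:] -= delta_D              # keep samples after the test aligned
    return D_cal
```

Applied to a motion trace whose only content is a linear drift, the calibration returns a flat curve, as the protocol expects.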
[0089] In some other cases, the patient's body may move both back and forth during exhalation. When such bidirectional body motion is small, controller 12 removes this motion through adaptive smoothing: it adapts the smoothing window (W) to the momentary chest motion speed (S) as W = (1 − |S/S_max|)·f_s, where S_max is the maximum chest motion speed and f_s is the ultrasound signal's sampling rate. In this way, slower motion leads to a larger window that produces a smoother motion curve, while rapid motion results in a smaller window to avoid missing details in the motion pattern. Large bidirectional body motions, on the other hand, indicate that the patient does not follow the spirometry protocol, and controller 12 will instead judge the corresponding spirometry test as invalid. Details of such judgment are described immediately below, and an evaluation of the effectiveness of such body motion removal is discussed further below.
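The adaptive smoothing rule W = (1 − |S/S_max|)·f_s can be sketched as below; the per-sample speed estimate via np.gradient and the centered-mean window are illustrative choices, not details given in the text.

```python
import numpy as np

def adaptive_smooth(motion, fs):
    """Adaptive smoothing: the window W = (1 - |S/S_max|) * fs shrinks as
    the momentary chest-motion speed S grows, so slow motion is smoothed
    heavily while fast motion keeps its detail."""
    motion = np.asarray(motion, dtype=float)
    speed = np.gradient(motion)              # momentary speed per sample
    s_max = np.max(np.abs(speed)) or 1.0     # guard against all-zero speed
    out = np.empty_like(motion)
    for n, s in enumerate(speed):
        w = max(1, int((1 - abs(s / s_max)) * fs))   # window in samples
        lo, hi = max(0, n - w // 2), min(len(motion), n + w // 2 + 1)
        out[n] = motion[lo:hi].mean()        # centered moving average
    return out
```

A constant trace passes through unchanged, which is a quick sanity check that the windowed averaging preserves steady segments.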
[0090] The exhalation stage in a spirometry test is indicated by a starting point (p_start) and an ending plateau (P_end). A valid p_start should be a local minimum on the curve of chest wall displacement, and a valid P_end should correspond to a period of sufficiently small chest motion. However, in practice, as shown in
[0094] Based on such decision, the motion features S_max, D_1s and D_max, such as shown and previously discussed in regard to
[0095]
[0096] The app having the displays shown in
[0097] Further evaluations of the example embodiment of the present invention previously discussed can be found in "SpiroSonic: Monitoring Human Lung Function via Acoustic Sensing on Commodity Smartphones," MobiCom 2020, the 26th Annual International Conference on Mobile Computing and Networking, 21-25 Sep. 2020, London, United Kingdom, the contents of which are incorporated herein by reference.
[0098] Referring now to
[0099] As noted above, smartphone speakers emit low-intensity ultrasonic signals at 17-25 kHz, which are imperceptible to the human ear and have negligible penetration into human tissues. The disclosed concept in this aspect directs the ultrasonic signal into the user's airway via interface 50, and more particularly via adaptor 52, which is coupled to apparatus 10 as shown in
[0100] In the claims, any reference signs placed between parentheses shall not be construed as limiting the claim. The word "comprising" or "including" does not exclude the presence of elements or steps other than those listed in a claim. In a device claim enumerating several means, several of these means may be embodied by one and the same item of hardware. The word "a" or "an" preceding an element does not exclude the presence of a plurality of such elements. The mere fact that certain elements are recited in mutually different dependent claims does not indicate that these elements cannot be used in combination.
[0101] Although the invention has been described in detail for the purpose of illustration based on what is currently considered to be the most practical and preferred embodiments, it is to be understood that such detail is solely for that purpose and that the invention is not limited to the disclosed embodiments, but, on the contrary, is intended to cover modifications and equivalent arrangements that are within the spirit and scope of the appended claims. For example, it is to be understood that the present invention contemplates that, to the extent possible, one or more features of any embodiment can be combined with one or more features of any other embodiment.