Apparatus and method for diagnosing sleep quality
11633150 · 2023-04-25
Assignee
- BEN GURION UNIVERSITY OF THE NEGEV RESEARCH AND DEVELOPMENT AUTHORITY (Beersheva, IL)
- Mor Research Applications Ltd. (Tel Aviv, IL)
Inventors
CPC classification
G16H50/20
PHYSICS
A61B5/7264
HUMAN NECESSITIES
G16H10/60
PHYSICS
A61B5/0816
HUMAN NECESSITIES
G16H50/30
PHYSICS
A61B5/4809
HUMAN NECESSITIES
G16H15/00
PHYSICS
G16H50/00
PHYSICS
A61B5/7275
HUMAN NECESSITIES
A61B5/7278
HUMAN NECESSITIES
International classification
A61B5/00
HUMAN NECESSITIES
A61B5/08
HUMAN NECESSITIES
G16H10/60
PHYSICS
G16H15/00
PHYSICS
G16H50/00
PHYSICS
G16H50/20
PHYSICS
Abstract
A method of distinguishing sleep period states that a person experiences during a sleep period, the method comprising: using a non-contact microphone to acquire a sleep sound signal representing sounds made by a person during sleep; segmenting the sleep sound signals into epochs; generating a sleep sound feature vector for each epoch; providing a first model that gives a probability that a given sleep period state experienced by the person in a given epoch exhibits a given sleep sound feature vector; providing a second model that gives a probability that a first sleep period state associated with a first epoch transitions to a second sleep period state associated with a subsequent second epoch; and processing the feature vectors using the first and second models to determine a sleep period state of the person from a plurality of possible sleep period states for each of the epochs.
Claims
1. A contactless method for determining levels of one or more sleep disorder factors, the method comprising: using a non-contact microphone, acquiring a real-time sleep sound signal representing sounds made by a person during sleep, and conveying the signal to a processor; segmenting by the processor the sleep sound signals into a plurality of epochs; generating by the processor at least one sleep sound feature vector for each of the epochs in the plurality of epochs, wherein for each of the epochs, the generation of the at least one sleep sound feature vector for the respective epoch includes determining an autocorrelation function Rτ as a function of time displacement τ of sleep sounds that occurred during the respective epoch, and specifying that the at least one sleep sound feature vector for the respective epoch is equal to a value at a time displacement τ=τ.sub.1 for which the autocorrelation function reaches a first maximum after a maximum of the autocorrelation function at τ=0; for each of the epochs in the plurality of epochs, determining by the processor a set of first probabilities using a first model and based on the at least one sleep sound feature vector generated for the respective epoch, said set of first probabilities indicating probabilities as to whether the respective epoch is associated with respective sleep period states from a plurality of predefined sleep period states; associating each epoch in the plurality of epochs with one of the sleep period states from the plurality of predefined sleep period states based on the set of first probabilities; determining by the processor a second probability for each set of two adjacent epochs in the plurality of epochs, the second probability indicating a probability of transitioning from the sleep period state associated with the preceding epoch in the respective set to the sleep period state associated with the succeeding epoch in the respective set; wherein the processor is configured to determine the
sleep period state of each of the epochs in the plurality of epochs based on the set of first probabilities and the second probability, wherein the sleep period state of each of the epochs in the plurality of epochs is determined during the respective epoch; and utilizing said determined sleep period states, determining levels of sleep disorder factors selected from: total sleep time (TST)—a sum of the durations of sleep states in a sleep period; sleep latency (SL)—an elapsed time to falling asleep from a time of lying down to go to sleep; sleep efficiency (SE)—a ratio between TST and total time spent lying down to sleep during the sleep period; wake-time after sleep onset (WASO)—a sum of the durations of awake states during the sleep period; and an awakening index (AI)—equal to an average number of times per hour a person awakes from sleep during the sleep period; wherein the first model comprises a Gaussian mixture model (GMM), and the second probability for each set of two adjacent epochs in the plurality of epochs is determined using a second model, and the second model comprises a hidden Markov model (HMM).
2. The method according to claim 1, wherein the set of first probabilities includes a probability p(A) that the person is experiencing an awake state during the respective epoch and a probability p(S) that the person is experiencing a sleep state during the respective epoch.
3. The method according to claim 2, further comprising, for each epoch, determining a value for a classification metric, CM, based on p(A) and p(S) for the respective epoch.
4. The method according to claim 3, further comprising, for each epoch, determining that the person is experiencing an awake state or a sleep state during the respective epoch based on the respective value of the CM and a classifier threshold for the CM value.
5. The method according to claim 1, further comprising determining a sleep quality parameter (SQP) indicative of a quality of sleep for the person based on the determined sleep period state of the epochs.
6. The method according to claim 5, wherein the determination of the SQP comprises determining a value for each of a total sleep time (TST); sleep latency (SL); sleep efficiency (SE); wake-time after sleep onset (WASO); and/or an awakening index (AI).
7. The method according to claim 1, wherein the at least one sleep sound feature vector further comprises a value for a respiration rate intensity (RRI); a snore likelihood; and/or at least one lability feature.
8. The method according to claim 7, wherein a value for RRI is determined by determining a line tangent to a maximum of the autocorrelation function for a time displacement equal to zero and a first maximum of the autocorrelation function for a time displacement greater than zero.
9. The method according to claim 8, further comprising determining a value for an area factor based on an area between the tangent line and the autocorrelation function, and determining the value for RRI based on the area factor and the magnitude of the first maximum.
10. The method according to claim 7, wherein a lability feature comprises a measure of respiration rate variability (RRV), variability of time delay (VOD) between a breath inhale and a breath exhale, variability in RRI, and/or snores duration.
11. The method according to claim 1, further comprising: for at least one of the epochs in the plurality of epochs, identifying a portion of the sleep sound signal having an energy greater than a threshold energy and duration greater than a minimum duration; determining a snore feature vector for the portion; determining a probability, p(snore), that the portion exhibits a snore and a probability, p(noise), that the portion exhibits noise rather than a snore based on the snore feature vector; determining a snore likelihood for the sound feature vector of the at least one epoch based on p(snore) and p(noise).
12. The method according to claim 11, wherein determining the snore likelihood comprises determining the snore likelihood to be equal to a snore likelihood score (SLS) that is a function of p(snore) and p(noise).
13. The method according to claim 12, further comprising determining the SLS, wherein determining SLS includes determining an event score equal to (log p(snore)−log p(noise)) for the portion.
14. The method according to claim 13, wherein determining the SLS comprises determining SLS to be equal to a maximum of event scores for portions of the sleep sound signal in the epoch.
15. Apparatus for determining levels of one or more sleep disorder factors, the apparatus comprising: at least one non-contact microphone configured to acquire, in real-time, a sleep sound signal representing sounds made by a person during sleep; and a processor having an executable instruction set configured to: segment the sleep sound signals into a plurality of epochs; generate at least one sleep sound feature vector for each epoch in the plurality of epochs, wherein for each of the epochs, the generating of the at least one sleep sound feature vector for the respective epoch includes determining an autocorrelation function Rτ as a function of time displacement τ of sleep sounds that occurred during the respective epoch and specifying that the at least one sleep sound feature vector for the respective epoch is equal to a value at a time displacement τ=τ.sub.1 for which the autocorrelation function reaches a first maximum after a maximum of the autocorrelation function at τ=0; for each of the epochs in the plurality of epochs, determine a set of first probabilities using a first model and based on the at least one sleep sound feature vector generated for the respective epoch, the set of first probabilities indicating probabilities as to whether the respective epoch is associated with respective sleep period states from a plurality of predefined sleep period states; associate each epoch in the plurality of epochs with one of the sleep period states from the plurality of predefined sleep period states based on the set of first probabilities; determine a second probability for each set of two adjacent epochs in the plurality of epochs, the second probability indicating a probability of transitioning from the sleep period state associated with the preceding epoch in the respective set to the sleep period state associated with the succeeding epoch in the respective set; determine the sleep period state of each of the epochs in the plurality of epochs based on the set 
of first probabilities and the second probability, wherein the sleep period state of each of the epochs in the plurality of epochs is determined during the respective epoch; and utilizing said determined sleep period states, determine levels of sleep disorder factors selected from: total sleep time (TST)—a sum of the durations of sleep states in a sleep period; sleep latency (SL)—an elapsed time to falling asleep from a time of lying down to go to sleep; sleep efficiency (SE)—a ratio between TST and total time spent lying down to sleep during the sleep period; wake-time after sleep onset (WASO)—a sum of the durations of awake states during the sleep period; and an awakening index (AI)—equal to an average number of times per hour a person awakes from sleep during the sleep period; wherein the first model comprises a Gaussian mixture model (GMM), and the second probability for each set of two adjacent epochs in the plurality of epochs is determined using a second model, and the second model comprises a hidden Markov model (HMM).
16. Apparatus according to claim 15, wherein the at least one non-contact microphone comprises a plurality of non-contact microphones.
17. Apparatus according to claim 15, wherein at least a portion of the apparatus is housed in a smartphone, PC, laptop, and/or a work book.
18. Apparatus according to claim 15, wherein the processor is further configured to: for at least one epoch, identify a portion of the sleep sound signal having an energy greater than a threshold energy and a duration greater than a minimum duration; determine a snore feature vector for the portion; determine a probability, p(snore), that the portion exhibits a snore and a probability, p(noise), that the portion exhibits noise rather than a snore based on the snore feature vector; and determine a snore likelihood for the sound feature vector of the at least one epoch based on p(snore) and p(noise).
Description
BRIEF DESCRIPTION OF FIGURES
(1) Non-limiting examples of embodiments of the invention are described below with reference to figures attached hereto that are listed following this paragraph. Identical structures, elements or parts that appear in more than one figure are generally labeled with a same numeral in all the figures in which they appear. A label labeling an icon representing a given feature of an embodiment of the invention in a figure may be used to reference the given feature. Dimensions of components and features shown in the figures are chosen for convenience and clarity of presentation and are not necessarily shown to scale.
DETAILED DESCRIPTION
(11) Microphone 22 registers sleep sounds made by person 100 during the person's nighttime sleep period, and sounds that are not made by the person that reach the microphone during the sleep period. Sounds that are made by the person comprise, for example, breathing sounds, snoring sounds, coughing and voice sounds, and motion sounds that are produced by motion of the person, such as bed creaking and blanket rustling sounds. Sounds that are not made by the person may comprise street sounds and sounds originating in other rooms of the person's house that reach the bedroom, and sounds made by appliances, such as a whining sound made by an overhead fan 106 in bedroom 102. Sounds not made by the person may also include sounds made by another person (not shown) in the bedroom.
(12) For convenience of presentation, sounds that are registered by microphone 22 that are not sleep sounds made by person 100 are referred to as background noise, or noise. Microphone 22 transmits the sounds that it registers as signals schematically represented by a waveform 23, also referred to as signal 23, optionally in real time directly to computer system 30 and/or to an interim memory for later transmittal to the computer system. Signal 23 generally comprises sleep sound signals mixed with varying amounts of noise signals, also referred to simply as noise, responsive to background noise.
(13) Computer system 30 processes signal 23 using a method in accordance with an embodiment of the invention discussed below, to identify different sleep period states that person 100 exhibits during sleep and process characteristics of the sleep period states to provide a set of SQPs usable to indicate quality of sleep that person 100 experiences. Computer system 30 may comprise a memory 32 for storing signal 23 that it receives from microphone 22 and is optionally configured having a computer executable instruction set that may have a preprocessor 34, a feature extractor 36, a sleep period state classifier 38, and a SQP generator 40.
(14) Preprocessor 34 processes signal 23 stored in memory 32 to increase signal to noise and reduce adulteration of sleep sounds such as snoring and breathing sounds, and optionally motion sounds, in signal 23 by noise. Feature extractor 36 processes preprocessed signal 23 in accordance with an embodiment of the invention to determine sleep sound features and generate sleep sound feature vectors that may be used to distinguish states of sleep from awake states during the nighttime sleep of person 100. In an embodiment of the invention, the sleep sound features are determined for and define a sleep sound feature vector for each of a series of sequential time segments of signal 23. The sleep sound feature vector for a given time segment may comprise measures of respiratory rhythm period (RRP), respiratory rhythm intensity (RRI), and snore likelihood score (SLS) determined for the segment.
(15) Sleep period state classifier 38 operates on the sleep sound feature vectors determined for the segments to determine for each of the segments whether person 100 is in a sleep state or an awake state. Sleep period state classifier 38 is configured to make the determinations using models that provide transition probabilities between sleep and awake states and a probability that a given sleep sound feature vector is generated by a given sleep or awake state. In an embodiment of the invention, sleep period state classifier 38 is trained on a training set of sleep period time segments for which sleep period states are determined in accordance with a suitable gold standard procedure, such as a PSG study in a sleep laboratory involving human classification of sleep period states. SQP generator 40 processes data that characterizes the sleep period states determined for segments of the nighttime sleep period of person 100 to provide SQPs that may be used to provide an assessment of the sleep quality of the sleep period.
(16) Computer system 30 may be comprised in or comprise any real or virtual computer system or communication device having access to suitable computer resources. For example, the computer system may comprise or be comprised in a smartphone, PC, a laptop, and/or a work book. Computer system 30 may be a distributed system having components and executable instruction sets located in different servers, and may be partially or completely based on access to servers via the internet, that is partially or completely “cloud based”. For example, memory 32 may be located close to microphone 22 and directly coupled to the microphone by a wire or wireless communication channel to receive and store sleep sound signal 23. Preprocessor 34, feature extractor 36, sleep period state classifier 38, and SQP generator 40 may be connected to memory 32 and each other by the internet and reside and function in different internet servers. And whereas microphone 22 is shown separate from computer system 30 it may be comprised as a component in apparatus, for example a smartphone, housing at least a portion of computer system 30.
(17) Aspects of SleepDetective 20 and the configuration and functioning of preprocessor 34, feature extractor 36, sleep period state classifier 38, and SQP generator 40 are discussed below with reference to flow diagram 200.
(18) In a block 202 SleepDetective 20 is turned on and microphone 22 registers sounds made in or reaching room 102 and transmits, optionally analog, electronic signals that form sleep sound signal 23 to computer system 30. The computer system may convert sleep sound signal 23 from an analog signal to a digital signal and optionally stores the digital sleep sound signal in memory 32. Hereinafter, unless otherwise specified, reference to sleep sound signal 23 is assumed to reference the digital form of the sleep sound signal. Sleep sound signal 23 includes background sounds, such as the background sounds noted above, and respiratory sounds made by person 100 during a sleep period. The sleep sound signal may include electromagnetic interference from power lines and appliances in a neighborhood of SleepDetective 20. A sleep period, for which an associated sleep sound signal 23 is acquired, may have different durations, and may, for example, have the duration of a nominal full night's sleep of 6-8 hours.
(19) In a block 204, preprocessor 34 processes signal 23 stored in memory 32 to increase the signal to noise ratio (SNR) of signal 23 and reduce vitiation of breathing and snore sounds by noise, to provide a SNR enhanced signal 23-SNR. Optionally, preprocessor 34 employs a noise reduction algorithm that operates to reduce noise due to stationary processes and emphasize non-stationary events such as snores and inhale breaths to generate signal 23-SNR. In an embodiment of the invention, the algorithm is based on a Wiener filter and a decision-directed approach such as that proposed by Scalart P, Filho J V (1996); "Speech Enhancement Based on A Priori Signal to Noise Estimation"; Conf Proc IEEE International Conference on Acoustics, Speech, and Signal Processing 2: 629-632.
(20) In a block 206 feature extractor 36 optionally processes signal 23-SNR to segment the signal into a sequence of time segments and generate an energy signal e(n) for each segment, where n refers to a sequential integer index labeling the segments. In an embodiment of the invention, the energy signal e(n) for a given n-th segment is equal or proportional to a sum of squared amplitudes of signal 23-SNR in the segment, or an average of the squared amplitudes in the segment, weighted by a suitable window function. Optionally, the segments are 60 ms (milliseconds) long with an overlap of about 75% providing an energy value at 15 ms time intervals of the energy signal e(n), and the window function is a Gaussian window function. In an embodiment of the invention, the energy values are provided and stored in memory 32 in units of dB (decibels).
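The windowed energy computation of block 206 can be sketched as follows. The 60 ms window, 75% overlap (one value per 15 ms), Gaussian weighting, and dB units come from the text; the function name, the window's sigma, and the small floor that avoids log(0) are illustrative assumptions.

```python
import math

def energy_signal(x, fs, win_s=0.060, hop_s=0.015):
    """Windowed log-energy e(n) of signal x sampled at fs Hz.

    60 ms Gaussian-weighted windows with 75% overlap yield one energy
    value every 15 ms; values are returned in dB.
    """
    win = int(round(win_s * fs))
    hop = int(round(hop_s * fs))
    sigma = win / 6.0  # window width: an assumption, not from the text
    g = [math.exp(-0.5 * ((i - (win - 1) / 2.0) / sigma) ** 2) for i in range(win)]
    e = []
    for start in range(0, len(x) - win + 1, hop):
        seg = x[start:start + win]
        energy = sum((w * s) ** 2 for w, s in zip(g, seg))
        e.append(10.0 * math.log10(energy + 1e-12))  # dB; floor avoids log(0)
    return e
```

For a 1 kHz sampling rate this produces one energy value per 15 samples, which is the e(n) grid assumed by the later RRP computation.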
(21) In a block 208 feature extractor 36 processes the energy signal e(n) and/or signal 23-SNR to determine values of sleep sound features for a sleep period vector that may be used to distinguish sleep states from awake states. In an embodiment, the sleep sound features comprise a respiratory rhythm period, RRP, a respiratory rhythm intensity, RRI, and a snore likelihood score (SLS) for the given period of time.
(22) To determine a RRP, feature extractor 36 segments the energy signal e(n) into a sequence of time segments and for each time segment processes the energy signal e(n) in the segment optionally to determine an autocorrelation function R(τ) for the segment as a function of time displacement τ. Autocorrelation function R(τ) is used to determine periodicity of the sleep sounds for the segment and a RRP for person 100 during the segment. In an embodiment of the invention, RRP is determined to be equal to a value at a time displacement τ=τ.sub.1 for which R(τ) reaches a first maximum after a maximum of the autocorrelation function at τ=0. It is noted that whereas extractor 36 is described as determining RRP using an autocorrelation function, an embodiment of the invention is not limited to autocorrelating e(n) to determine RRP, and any of various other methods such as a fast Fourier transform (FFT) may be used to determine RRP.
(23) By way of numerical example, in an embodiment of the invention, feature extractor 36 segments energy signal e(n) into 24 s (second) time segments overlapping by 19 seconds and τ.sub.1 is a time displacement τ between 1 sec and 10 sec at which R(τ) peaks.
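Under the numbers above, the RRP search amounts to finding the highest autocorrelation peak of e(n) at a lag between 1 and 10 seconds. A minimal sketch, assuming e(n) is supplied as a list with a known sample spacing dt (15 ms per the earlier example); the function name and the mean-removal/normalization details are assumptions:

```python
def rrp_from_energy(e, dt, tau_min=1.0, tau_max=10.0):
    """Respiratory rhythm period from the autocorrelation of e(n).

    dt is the spacing of e(n) samples in seconds.  R(tau) is searched
    for its highest peak at a lag between tau_min and tau_max seconds
    (1-10 s in the numerical example), i.e. after the trivial maximum
    at tau = 0.  Returns (RRP in seconds, R(tau1)).
    """
    n = len(e)
    mean = sum(e) / n
    d = [v - mean for v in e]                 # remove DC so R(0) dominates less
    r0 = sum(v * v for v in d) or 1.0         # R(0), used for normalization
    lo, hi = int(tau_min / dt), int(tau_max / dt)
    best_lag, best_r = lo, float("-inf")
    for lag in range(lo, min(hi, n - 1) + 1):
        r = sum(d[i] * d[i + lag] for i in range(n - lag)) / r0
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag * dt, best_r
```

A periodic breathing-like energy envelope with a 4 s period yields an RRP near 4 s, with R(tau1) well above zero.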
(24) In an embodiment of the invention, RRI is determined to be equal to a value of R(τ.sub.1) times an area factor "AF", in symbols RRI=R(τ.sub.1)AF. Area factor AF may be determined responsive to an area A between peaks 131 and 132 of R(τ) at τ=0 and τ=τ.sub.1 respectively and a straight line tangent to the peaks.
(26) The larger RRI is, the more dominant a characteristic of the time dependence of energy function e(n) the associated RRP is, and the closer e(n) is to resembling a harmonic function with frequency 1/RRP.
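The RRI of paragraph (24) can be sketched as below. Paragraph (24) only states that AF is responsive to the area A between R(τ) and the line through the peaks at τ=0 and τ=τ.sub.1; the chord approximation of that line and the specific mapping AF = 1/(1+A) are assumptions made here so the sketch is concrete.

```python
def rri(R, tau1_idx):
    """Illustrative RRI = R(tau1) * AF.

    R is a list of autocorrelation values; tau1_idx is the index of the
    first post-zero maximum.  The tangent line through the two peaks is
    approximated by the chord joining them; AF shrinks toward 0 as the
    area A between that line and R(tau) grows (assumed mapping).
    """
    r0, r1 = R[0], R[tau1_idx]
    area = 0.0
    for i in range(tau1_idx + 1):
        line = r0 + (r1 - r0) * i / tau1_idx   # chord between the two peaks
        area += max(line - R[i], 0.0)          # count only where R dips below
    area /= tau1_idx                           # normalize by the lag span
    af = 1.0 / (1.0 + area)                    # larger area -> smaller AF
    return r1 * af
```

A nearly harmonic e(n) gives a shallow dip between the peaks, a small A, and hence an RRI close to R(τ.sub.1), consistent with paragraph (26).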
(27) Feature extractor 36 generates a snore likelihood score SLS as a measure of snore likelihood. To generate the SLS, feature extractor 36 optionally processes each time segment into which energy signal e(n) is segmented, and the corresponding time segment of signal 23-SNR, to determine if the time segment of e(n) (and corresponding time segment of 23-SNR) contains an audio event that is a candidate for classification as a snoring sound or snore. Hereinafter, reference to the time segment of e(n) may be considered to include reference to the corresponding time segment of 23-SNR. Any of various snore detection algorithms may be used to determine if a given time segment of energy signal e(n) exhibits an audio event that may be a candidate for being a snoring sound. Optionally, the time segments used to identify snore candidate audio events have a same duration as the time segments used to determine RRP and RRI.
(28) In an embodiment of the invention, feature extractor 36 identifies a portion of a time segment of energy signal e(n) as a snore candidate audio event, if the portion exhibits energy greater than a suitable threshold energy E.sub.th and has a duration, τ.sub.d, greater than a suitable minimum duration τ.sub.dmin. For each snore candidate audio event, feature extractor 36 processes e(n) and/or 23-SNR to generate a snore feature set (optionally referred to as a snore feature vector) that may be used to determine whether to classify the audio event as a snore. Optionally, feature extractor 36 uses a snore model, represented by λ.sub.S to determine a probability that a snore candidate audio event is a snore, and a noise model, represented by λ.sub.N to determine a probability that a snore candidate audio event is noise. If x.sub.i represents a snore feature vector for an i-th snore candidate audio event that occurs at a time t.sub.i, a probability that the event is a snore may be written p(x.sub.i|λ.sub.S) and the probability that the event is noise may be written p(x.sub.i|λ.sub.N).
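The candidate-detection rule of paragraph (28), energy above E.sub.th for longer than τ.sub.dmin, is a simple run-length test on e(n). A minimal sketch (function and parameter names are assumptions):

```python
def snore_candidates(e, dt, e_th, dmin):
    """Find portions of e(n) with energy > e_th lasting longer than
    dmin seconds.  dt is the spacing of e(n) samples in seconds.
    Returns (start_index, end_index) pairs, end exclusive.
    """
    events, start = [], None
    # A sentinel value below any energy closes a run at the end of e(n).
    for i, v in enumerate(e + [float("-inf")]):
        if v > e_th and start is None:
            start = i                           # a run begins
        elif v <= e_th and start is not None:
            if (i - start) * dt > dmin:         # keep only long-enough runs
                events.append((start, i))
            start = None
    return events
```

Each returned portion would then be passed to the snore/noise models λ.sub.S and λ.sub.N for scoring.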
(29) In an embodiment of the invention, feature extractor 36 determines a value for SLS for each of a sequence of snore detection time segments of e(n) having duration τ.sub.SLS and segment overlap (τ.sub.SLS−Δτ.sub.SLS). Feature extractor 36 therefore provides a value of SLS for energy signal e(n) at Δτ.sub.SLS time intervals, that is, at a resolution of Δτ.sub.SLS seconds.
(30) In an embodiment of the invention, feature extractor 36 determines an event score, s(x.sub.i), responsive to p(x.sub.i|λ.sub.S) and p(x.sub.i|λ.sub.N) for a snore candidate audio event, and a value for SLS responsive to s(x.sub.i). Optionally, the event score s(x.sub.i) is a function of the ratio p(x.sub.i|λ.sub.S)/p(x.sub.i|λ.sub.N). In an embodiment s(x.sub.i) is determined in accordance with an expression,
s(x.sub.i)=log p(x.sub.i|λ.sub.S)−log p(x.sub.i|λ.sub.N)
and a value for SLS for a given snore detection time segment is determined equal to a maximum of event scores s(x.sub.i) for snore candidate audio events that occur at corresponding times t.sub.i during the snore detection time segment. In symbols,
SLS=max{s(x.sub.i):t.sub.iϵτ.sub.SLS}.
In an embodiment of the invention, λ.sub.S and λ.sub.N may be Gaussian mixture models or Adaboost classifiers, τ.sub.SLS has a duration of 60 seconds, and Δτ.sub.SLS a duration of 5 seconds.
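The event score and SLS definitions above translate directly to code. In this sketch the model probabilities p(x.sub.i|λ.sub.S) and p(x.sub.i|λ.sub.N) are assumed to have been evaluated already and are passed in as plain numbers; returning negative infinity for a segment with no candidate events is an assumption, not stated in the text.

```python
import math

def event_score(p_snore, p_noise):
    """s(x_i) = log p(x_i | lam_S) - log p(x_i | lam_N)."""
    return math.log(p_snore) - math.log(p_noise)

def sls(events):
    """SLS for one snore-detection segment: the maximum event score over
    candidate events falling in the segment.  events is a list of
    (p_snore, p_noise) pairs; -inf stands in for 'no events' (assumed).
    """
    if not events:
        return float("-inf")
    return max(event_score(ps, pn) for ps, pn in events)
```

A candidate judged nine times more likely to be a snore than noise contributes an event score of log 9 ≈ 2.20, and the segment's SLS is the largest such score.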
(31) In a block 210 sleep state classifier 38 uses sleep sound features RRP, RRI and SLS determined by feature extractor 36 to determine sleep states for each of a sequence of time segments of duration T.sub.S, hereinafter also referred to for convenience as “epochs”, during person 100's nighttime sleep. Optionally, the first epoch of the sequence occurs at a time t.sub.o substantially at a time at which person 100 lies down to go to sleep and a last epoch in the sequence occurs at a time t.sub.J substantially at a time at which the person awakes and rises from sleep. If any of sleep sound features RRP, RRI and SLS were determined by feature extractor 36 for time periods having duration different from T.sub.S, sleep state classifier 38 averages or otherwise appropriately processes the sleep sound features to provide sleep sound features that correspond to the durations of the epochs.
(32) Let a sleep sound feature vector having values for RRP, RRI and SLS for a given epoch in the sequence at a time t be referred to as an "epoch feature vector" and be represented by X.sub.E(t). Let a sleep period state which person 100 experiences during an epoch at time t be represented by ST(t) and be referred to as an epoch state. In an embodiment of the invention, sleep state classifier 38 processes epoch feature vectors X.sub.E(t.sub.o), X.sub.E(t.sub.1) . . . X.sub.E(t.sub.J) for the sequence of epochs at times t.sub.j, 0≤j≤J, of the nighttime sleep period of person 100 using an optionally second order GMM and an optionally two state HMM to determine whether an epoch state "ST(t.sub.j)" for an epoch at time t.sub.j is a sleep state "S" or an awake state "A".
(33) For a given epoch feature vector X.sub.E, the GMM provides a probability that X.sub.E is generated by a sleep state "S" or an awake state "A". If the parameters that define the GMM are represented by {circumflex over (λ)}, the probability of a given X.sub.E being generated by a sleep state S may be written p(X.sub.E|{circumflex over (λ)}, S), and the probability that X.sub.E is generated by an awake state A is given by p(X.sub.E|{circumflex over (λ)}, A). The GMM parameters represented by {circumflex over (λ)} include an average μ and standard deviation σ for each of the components RRP, RRI and SLS of the vectors X.sub.E and a correlation matrix Σ for the components.
(34) The HMM provides a transition matrix that provides a probability that a sleep state S or awake state A for an epoch at time t.sub.j, remains the same for the next epoch at time t.sub.j+1 or transitions to an awake state A or a sleep state S respectively. If the transition matrix for the two state HMM is represented by “TM”, the transition probabilities may be represented by T(S.fwdarw.S), T(A.fwdarw.A), T(S.fwdarw.A), and T(A.fwdarw.S), where the arguments indicate the transitions to which the probabilities refer.
(35) The GMM parameters represented by {circumflex over (λ)} and the transition probabilities in the HMM matrix are determined in a training procedure using a training set of epochs for which sleep and awake states have been determined using an appropriate gold standard such as PSG and optionally human observation and discrimination.
(36) In terms of the GMM and HMM, a probability of person 100 being in an epoch state ST(t.sub.j) for an epoch at time t.sub.j and SleepDetective 20 registering an epoch vector X.sub.E(t.sub.j) if person 100 is in an epoch state ST(t.sub.j−1) at time t.sub.j−1 may be given by an expression,
p(X.sub.E(t.sub.j)|{circumflex over (λ)},ST(t.sub.j))T(ST(t.sub.j−1).fwdarw.ST(t.sub.j)).
(37) Given the sequence of J+1 epoch feature vectors X.sub.E(t.sub.o), X.sub.E(t.sub.1) . . . X.sub.E(t.sub.J) determined by SleepDetective 20 for person 100, a probability P(J) that the sequence was generated by a corresponding sequence of epoch states ST(t.sub.o), ST(t.sub.1) . . . ST(t.sub.J) may be expressed as,
P(J)=p.sub.oΠ.sub.1.sup.Jp(X.sub.E(t.sub.j)|{circumflex over (λ)},ST(t.sub.j))T(ST(t.sub.j−1).fwdarw.ST(t.sub.j)),
where the probability p.sub.o=p(X.sub.E(t.sub.o)|{circumflex over (λ)}, ST(t.sub.o)) of a first state in the sequence is considered to have a known value.
(38) In an embodiment of the invention, sleep state classifier 38 determines a sequence, hereinafter referred to as a "most probable sequence (MPS)", of epoch states ST(t.sub.o)*, ST(t.sub.1)* . . . ST(t.sub.J)* that maximizes P(J), optionally using a Viterbi algorithm. Sleep state classifier 38 may use the MPS and the probability that the MPS determines for an epoch state being a sleep state or an awake state to calculate a sleep state classification metric (CM) for the epoch that is advantageous in discriminating sleep states from awake states. If p(t.sub.j,A) is a probability provided by the MPS that person 100 is in an awake state during the epoch at a time t.sub.j and p(t.sub.j,S) is a probability that the person is in a sleep state, the classification metric CM(t.sub.j) may be determined by an expression,
CM(t.sub.j)=α log [p(t.sub.j,A)/p(t.sub.j,S)],
where α is a normalizing constant and log may be the natural logarithm, that is, the logarithm to base e. In an embodiment, sleep period state classifier 38 determines that person 100 is in an awake state if CM(t.sub.j) is less than a classifier threshold, CT, and in a sleep state if CM(t.sub.j) is greater than the CT threshold.
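The maximization of P(J) in paragraph (38) is a standard Viterbi decode over the two states S and A. A minimal sketch in log space, assuming the GMM emission probabilities p(X.sub.E|{circumflex over (λ)}, ST) have already been evaluated per epoch and are passed in as dictionaries; all names here are illustrative:

```python
import math

def viterbi_sleep(emis, trans, p0):
    """Most probable sequence of epoch states.

    emis:  list over epochs of {'S': p(X|lam,S), 'A': p(X|lam,A)}
    trans: {('S','S'): T(S->S), ('S','A'): T(S->A), ...}
    p0:    {'S': ..., 'A': ...} initial state probabilities
    Log space avoids underflow over a full night of epochs.
    """
    states = ('S', 'A')
    score = {st: math.log(p0[st]) + math.log(emis[0][st]) for st in states}
    back = []
    for obs in emis[1:]:
        new, ptr = {}, {}
        for st in states:
            # best predecessor for state st at this epoch
            prev = max(states, key=lambda pr: score[pr] + math.log(trans[(pr, st)]))
            new[st] = score[prev] + math.log(trans[(prev, st)]) + math.log(obs[st])
            ptr[st] = prev
        back.append(ptr)
        score = new
    last = max(states, key=lambda st: score[st])
    path = [last]
    for ptr in reversed(back):       # backtrack through the pointers
        path.append(ptr[path[-1]])
    return list(reversed(path))
```

With sticky transition probabilities, an epoch whose emission strongly favors A still flips the decoded state, while weakly ambiguous epochs inherit the neighboring state, which is the smoothing effect the HMM contributes over per-epoch GMM decisions.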
(40) Data used to provide graph 300 and the CM(t.sub.j) curve 302 shown in the graph was acquired in an experiment conducted with a real person by a SleepDetective in accordance with an embodiment of the invention similar to SleepDetective 20 during a nighttime sleep period of the person. The data for graph 300 was acquired simultaneously with control data acquired using PSG apparatus. The control data was used to distinguish awake states and various sleep states of the person during the nighttime sleep period. Among the sleep states distinguished by the control data are REM sleep and NREM sleep states S1, S2, S3 and S4.
(41) Analysis of the data and curves shown in graphs 300 and 320, together with similar data acquired for sleep periods of other people, indicates that a SleepDetective similar to SleepDetective 20 distinguishes sleep states and awake states with about 82% accuracy.
(42) In a block 212 of algorithm 200, sleep and awake states determined by sleep period state classifier 38 responsive to the classification metric CM(t_j) and classification threshold CT are used by SQP generator 40 to calculate values for at least one of various SQPs that may be used to indicate quality of sleep for person 100. By way of example, an SQP that may be used to indicate a person's quality of sleep may be: total sleep time (TST), a sum of the durations of sleep states in a sleep period; sleep latency (SL), an elapsed time to falling asleep from a time of lying down to go to sleep; sleep efficiency (SE), a ratio between TST and total time spent lying down to sleep during the sleep period; wake-time after sleep onset (WASO), a sum of the durations of awake states during the sleep period; or an awakening index (AI), equal to an average number of times per hour the person awakes from sleep during the sleep period.
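Given a per-epoch sequence of sleep/awake labels and the epoch duration, these SQPs are straightforward to compute. The sketch below assumes the labels span the full time spent lying down to sleep and treats each sleep-to-awake transition as one awakening; function and variable names are illustrative:

```python
def sleep_quality_params(labels, epoch_sec=30.0):
    """Compute SQPs from per-epoch labels, 'S' (sleep) or 'A' (awake).

    Returns TST, SL, WASO in minutes, SE in percent, AI in awakenings/hour.
    """
    n = len(labels)
    tib_min = n * epoch_sec / 60.0                   # total time in bed
    tst_min = labels.count("S") * epoch_sec / 60.0   # total sleep time (TST)
    # sleep latency (SL): epochs elapsed before the first sleep epoch
    first_sleep = labels.index("S") if "S" in labels else n
    sl_min = first_sleep * epoch_sec / 60.0
    # wake-time after sleep onset (WASO): awake epochs after sleep onset
    waso_min = labels[first_sleep:].count("A") * epoch_sec / 60.0
    # sleep efficiency (SE): ratio of TST to total time lying down
    se_pct = 100.0 * tst_min / tib_min if tib_min else 0.0
    # awakening index (AI): sleep -> awake transitions per hour
    awakenings = sum(1 for a, b in zip(labels, labels[1:]) if a == "S" and b == "A")
    ai = awakenings / (tib_min / 60.0) if tib_min else 0.0
    return {"TST": tst_min, "SL": sl_min, "SE": se_pct, "WASO": waso_min, "AI": ai}
```

For example, six one-minute epochs labeled A, A, S, S, A, S give TST = 3 min, SL = 2 min, WASO = 1 min, SE = 50%, and AI = 10 awakenings/hour.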
(43) The following SleepDetective SQP table shows values and standard deviations for the SQPs listed above that were acquired for sleep periods of 95 people by a PSG apparatus and a SleepDetective in accordance with an embodiment of the invention similar to SleepDetective 20.
(44) TABLE-US-00001 SleepDetective SQP

    SQP          PSG           SleepDetective
    SL (min)     64.3 ± 69.0   54.8 ± 59.2
    SE (%)       65 ± 13       69 ± 16
    TST (min)    290 ± 58      309 ± 68
    WASO (min)   43 ± 31       52 ± 54
    AI (e/hr)    4.7 ± 3.3     5.3 ± 5.1
(45) The SleepDetective SQP table shows that values for SQPs acquired by the SleepDetective in accordance with an embodiment of the invention and by the PSG apparatus are in substantial agreement and are well within standard deviations of each other.
(46) Whereas in the above example an HMM model was configured having only two sleep period states, an awake state and a sleep state, an embodiment of the invention is not limited to distinguishing two states, one of which is a sleep state. For example, in an embodiment of the invention, sleep period feature vectors in accordance with an embodiment of the invention may be used to distinguish REM sleep states and NREM sleep states as well as awake states. Optionally, the sleep vectors used to distinguish REM and NREM sleep states include at least one feature, a "lability feature", that provides a measure of lability of activity of a person during a sleep period. The at least one feature may comprise a feature or any combination of features chosen from the group of features comprising a measure of respiration rate variability (RRV), variability of time delay (VOD) between a breath inhale and a breath exhale, variability in RRI, and snore durations.
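Lability features such as RRV and VOD can be sketched as simple dispersion statistics over breath timings detected in the sleep sound signal. The computation below is illustrative only, not the patent's exact definition, and assumes breath-onset, inhale, and exhale times (in seconds) have already been extracted from the audio:

```python
import statistics

def respiration_rate_variability(breath_onsets):
    """RRV sketch: standard deviation of successive breath-to-breath intervals (s)."""
    intervals = [b - a for a, b in zip(breath_onsets, breath_onsets[1:])]
    return statistics.stdev(intervals)

def delay_variability(inhale_times, exhale_times):
    """VOD sketch: standard deviation of inhale-to-exhale delays within breaths (s)."""
    delays = [e - i for i, e in zip(inhale_times, exhale_times)]
    return statistics.stdev(delays)
```

Perfectly regular breathing yields an RRV of zero, while the labile breathing characteristic of REM sleep yields a larger value, which is what makes such features useful for separating REM from NREM epochs.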
(48) In the description and claims of the present application, each of the verbs "comprise", "include" and "have", and conjugates thereof, is used to indicate that the object or objects of the verb are not necessarily a complete listing of components, elements or parts of the subject or subjects of the verb.
(49) Descriptions of embodiments of the invention in the present application are provided by way of example and are not intended to limit the scope of the invention. The described embodiments comprise different features, not all of which are required in all embodiments of the invention. Some embodiments utilize only some of the features or possible combinations of the features. Variations of embodiments of the invention that are described, and embodiments of the invention comprising different combinations of features noted in the described embodiments, will occur to persons of the art. The scope of the invention is limited only by the claims.