SYSTEMS AND METHODS FOR IDENTIFYING FETAL MOVEMENTS IN AN AUDIO/VISUAL DATA FEED AND USING THE SAME TO ASSESS FETAL WELL-BEING
20170360378 · 2017-12-21
Inventors
CPC classification
G16H50/20
PHYSICS
A61B5/7271
HUMAN NECESSITIES
International classification
A61B5/00
HUMAN NECESSITIES
A61B5/11
HUMAN NECESSITIES
Abstract
There are provided systems and methods for quantitatively assessing fetal well-being based on observing fetal movement activity in an audio/visual data feed. The system includes a method that detects and quantifies fetal movements in an audio/visual data feed using audio/visual-processing motion-estimation techniques. The system also includes a method that captures metrics relating to fetal movements sensed by the mother or another independent party, herein termed "maternal perception". The system further includes a method that cross-validates the fetal movement detected by the system against fetal movement sensed by "maternal perception". The system further includes a method that generates output summarizing fetal movement activity over a recorded time period. This output may be reviewed by a third party to assist in determining whether further intervention is needed.
Claims
1. A system, comprising: a recording device configured to obtain audio and/or visual data when directed toward a womb of a pregnant woman and further configured to transmit the audio and/or visual data; a motion processing engine configured to receive the audio and/or visual data, detect motion within the audio and/or visual data, and generate motion data based upon the motion detected within the audio and/or visual data; an analytics engine configured to receive the motion data from the motion processing engine, analyze the motion data to determine if the motion data corresponds to one or more of fetal motion, maternal motion, or other motion, and generate analyzed motion data corresponding to the one or more of fetal motion, maternal motion, or other motion; and a data depot configured to receive the analyzed motion data from the analytics engine, store the same as stored data, and transmit the stored data.
2. The system of claim 1, wherein the analytics engine is further configured to receive the audio and/or visual data from the recording device.
3. The system of claim 1, wherein the data depot is further configured to receive the audio and/or visual data from the recording device and to store the same as stored data.
4. The system of claim 1, further comprising: a user interaction module configured to receive analyzed motion data from the analytics engine, to receive maternal perception input data synchronized with the audio and/or visual data, and to display at least one of the analyzed motion data and/or the maternal perception input data.
5. The system of claim 4, wherein the user interaction module is an input/output device.
6. The system of claim 4, further comprising: a validate component configured to receive the stored data from the data depot, compare the stored data with the maternal perception input data to generate validated data, and to transmit the validated data to the data depot to be stored as additional stored data.
7. The system of claim 1, configured for operation upon a single device, the single device comprising: an input/output interface; a processor/memory storage/network interface; and the recording device.
8. The system of claim 7, wherein the single device is selected from the group consisting of a smartphone, a tablet computer, and a laptop computer.
9. The system of claim 1, configured for operation upon a first device and a second device, the first device comprising an input/output interface and a processor/memory storage/network interface, and the second device comprising the recording device, wherein the first device is configured to receive the audio and/or visual data from the second device.
10. The system of claim 9, wherein the first device is selected from the group consisting of a smartphone, a tablet computer, and a laptop computer.
11. The system of claim 1, configured for operation upon a first device, a second device, and a third device, the first device comprising an input/output interface, the second device comprising the recording device, and the third device comprising a processor/memory storage/network interface.
12. The system of claim 11, wherein the third device is configured to receive the audio and/or visual data from the second device, store the audio and/or visual data as the stored data, and transmit the stored data to the first device.
13. The system of claim 11, wherein the first device comprises a smartphone or a tablet, and wherein the third device comprises a laptop computer or a desktop computer.
14. The system of claim 1, wherein the stored data is indicative of fetal well-being.
15. The system of claim 1, wherein the stored data is indicative of a diagnosis of a fetal condition.
16. A system, comprising: a recording device configured to obtain audio and/or visual data when directed toward a womb of a pregnant woman and further configured to transmit the audio and/or visual data; a motion processing engine configured to receive the audio and/or visual data, detect motion within the audio and/or visual data, and generate motion data based upon the motion detected within the audio and/or visual data; an analytics engine configured to receive the motion data from the motion processing engine, analyze the motion data to determine if the motion data corresponds to one or more of fetal motion, maternal motion, or other motion, and generate analyzed motion data corresponding to the one or more of fetal motion, maternal motion, or other motion; a data depot configured to receive the analyzed motion data from the analytics engine, store the same as stored data, and transmit the stored data; and a validate component configured to receive the stored data from the data depot, compare the stored data with maternal perception input data obtained by a user interaction module to generate validated data, and to transmit the validated data to the data depot to be stored as additional stored data.
17. The system of claim 16, configured for operation upon a single device, the single device comprising: an input/output interface; a processor/memory storage/network interface; and the recording device; wherein the single device is selected from the group consisting of a smartphone, a tablet computer, and a laptop computer.
18. The system of claim 16, configured for operation upon a first device and a second device, the first device comprising an input/output interface and a processor/memory storage/network interface, and the second device comprising the recording device, wherein the first device is configured to receive the audio and/or visual data from the second device, and wherein the first device is selected from the group consisting of a smartphone, a tablet computer, and a laptop computer.
19. The system of claim 16, configured for operation upon a first device, a second device, and a third device, the first device comprising an input/output interface, the second device comprising the recording device, and the third device comprising a processor/memory storage/network interface, wherein the third device is configured to receive the audio and/or visual data from the second device, store the audio and/or visual data as the stored data, and transmit the stored data to the first device, and wherein the first device comprises a smartphone or a tablet, and wherein the third device comprises a laptop computer or a desktop computer.
20. A method of determining fetal well-being, comprising the steps of: operating a recording device configured to obtain audio and/or visual data when directed toward a womb of a pregnant woman and further configured to transmit the audio and/or visual data; operating a motion processing engine configured to receive the audio and/or visual data, detect motion within the audio and/or visual data, and generate motion data based upon the motion detected within the audio and/or visual data; operating an analytics engine configured to receive the motion data from the motion processing engine, analyze the motion data to determine if the motion data corresponds to one or more of fetal motion, maternal motion, or other motion, and generate analyzed motion data corresponding to the one or more of fetal motion, maternal motion, or other motion; and operating a data depot configured to receive the analyzed motion data from the analytics engine, store the same as stored data, and transmit the stored data, wherein the stored data is indicative of fetal well-being.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0030] The disclosed embodiments and other features, advantages, and disclosures contained herein, and the matter of attaining them, will become apparent and the present disclosure will be better understood by reference to the following description of various exemplary embodiments of the present disclosure taken in conjunction with the accompanying drawings, wherein:
[0038] An overview of the features, functions and/or configurations of the components depicted in the various figures will now be presented. It should be appreciated that not all of the features of the components of the figures are necessarily described. Some of these non-discussed features, such as various couplers, etc., as well as discussed features are inherent from the figures themselves. Other non-discussed features may be inherent in component geometry and/or configuration.
DETAILED DESCRIPTION
[0039] For the purposes of promoting an understanding of the principles of the present disclosure, reference will now be made to the embodiments illustrated in the drawings, and specific language will be used to describe the same. It will nevertheless be understood that no limitation of the scope of this disclosure is thereby intended.
[0041] Exemplary systems 100 of the present disclosure can use one or more audio/visual recording devices 104, configured as video cameras that obtain video and/or audio, microphones, and the like. An audio/visual recording device 104 that can obtain only video data would stream its audio/visual data stream 105 as video only; one that can obtain only audio data would stream its audio/visual data stream 105 as audio only; and one that can obtain both audio and video data would stream its audio/visual data stream 105 as combined audio and video/visual data.
[0042] As noted above, exemplary system 100 embodiments include a motion processing engine 106. Said component processes an input audio/visual data stream 105a frame-by-frame, in at least one embodiment, or as otherwise may be desired. Motion processing engine 106 is configured to detect any motion, and in particular motion on the surface of the womb.
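The frame-by-frame processing described for motion processing engine 106 can be sketched as a simple frame-differencing pass. This is a minimal illustrative stand-in, not the algorithm of the disclosure; the function name `detect_motion` and the threshold value are assumptions for illustration only.

```python
import numpy as np

def detect_motion(frames, threshold=10.0):
    """Flag each frame whose mean absolute pixel difference from the
    previous frame exceeds a threshold -- a minimal sketch of the
    motion processing engine's frame-by-frame motion detection."""
    flags = [False]  # the first frame has no predecessor to compare against
    for prev, curr in zip(frames, frames[1:]):
        # cast to a signed type so the subtraction does not wrap around
        diff = np.abs(curr.astype(np.int16) - prev.astype(np.int16))
        flags.append(float(diff.mean()) > threshold)
    return flags
```

In a full system, per-frame flags like these would be grouped into movement events before being passed downstream for classification.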
[0043] Exemplary systems 100 of the present disclosure further comprise/include an analytics engine 108. When motion has been detected on the surface of the womb, such as by motion processing engine 106, analytics engine 108 distinguishes that movement as fetal movement, maternal movement, or unknown, as analytics engines 108 of the present disclosure are configured to distinguish between or among said movements. Analytics engines 108 are further configured to then generate a representation of the movement (also referred to herein as a movement representation, which can be or comprise at least part of analyzed motion data 109) and transmit the same along to data depot 110 for logging.
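The fetal/maternal/unknown distinction drawn by analytics engine 108 can be illustrated with a toy heuristic over event duration and amplitude. The cut-off values and feature choices below are illustrative assumptions and do not come from the disclosure, which does not specify a particular classification rule.

```python
def classify_movement(duration_s, amplitude):
    """Heuristic labelling of a detected movement event as fetal,
    maternal, or unknown. All thresholds are illustrative placeholders."""
    if duration_s < 0.2 or amplitude <= 0:
        return "unknown"       # too brief or degenerate to attribute
    if duration_s < 3.0 and amplitude < 0.5:
        return "fetal"         # brief, localized surface motion
    return "maternal"          # longer or larger whole-abdomen motion
```

A production analytics engine would more plausibly use learned models over richer features, but the interface, event in and label out, is the same.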
[0044] As referenced above, exemplary systems 100 of the present disclosure comprise/include a data depot 110. Data depots 110, in various embodiments, are configured to accept a raw input audio/visual data feed (such as audio/visual data streams 105, 105a, 105b, etc.) from audio/visual recording device 104 and store the same, as may be desired, for later retrieval. Data depots 110, in various embodiments, are also configured to accept movement representation(s) (analyzed motion data 109) from analytics engine 108 and store them for later retrieval, as may be desired. Data depots 110, in various embodiments, are also configured to receive and/or record input from a user interaction module 114 as a record of maternal perception, for example. Data depots 110, in various embodiments, are also configured to store validation data from a validate component 112, as referenced in further detail herein.
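The data depot's role, accepting records from several producers and returning them for later retrieval, can be sketched as a small keyed store. The class name and method names are illustrative assumptions; any database or file store could fill this role.

```python
from collections import defaultdict

class DataDepot:
    """Minimal sketch of data depot 110: accepts records from the
    recording device, analytics engine, user interaction module, and
    validate component, keyed by source, for later retrieval."""

    def __init__(self):
        self._store = defaultdict(list)

    def put(self, source, record):
        """Append a record under the named source."""
        self._store[source].append(record)

    def get(self, source):
        """Return a copy of all records stored under the named source."""
        return list(self._store[source])
```

For example, analyzed motion data 109 might be logged with `depot.put("analytics", record)` and retrieved later by the validate component with `depot.get("analytics")`.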
[0045] Exemplary system 100 embodiments, such as shown in the accompanying figures, can comprise the foregoing components in communication with one another.
[0046] Various system 100 embodiments can also include/comprise a user interaction module 114 component. User interaction module 114 acts as an input and output interface for the user 102, as referenced in further detail herein. As an output interface, it is configured to display system 100 output, such as representations of system 100 detected movement. As an input interface, for example, it can allow a user 102 to manually record perceptions, such as that of a fetal movement.
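The input side of user interaction module 114, manually recording a perceived fetal movement, amounts to logging a timestamped entry that can later be synchronized with the audio/visual stream. The sketch below is an assumption about one way to implement this; the injectable clock exists only to make the logger testable.

```python
import time

class PerceptionLog:
    """Sketch of the user interaction module's input path: records a
    timestamp each time the mother (or another party) indicates a
    perceived movement."""

    def __init__(self, clock=time.monotonic):
        self._clock = clock      # injectable for testing
        self.entries = []        # list of (timestamp, note) pairs

    def mark(self, note="perceived movement"):
        """Log one perceived-movement event at the current time."""
        self.entries.append((self._clock(), note))
```

Entries recorded this way supply the "maternal perception" record that the validate component compares against system-detected movements.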
[0047] In view of the foregoing, and for example, an exemplary audio/visual recording device 104 can record a womb surface and generate an audio/visual data stream 105, 105a, 105b, and/or 105c containing said recorded information. Each of these streams can contain the same "raw" audio/visual data streaming from audio/visual recording device 104; the differences in reference numbers indicate the different paths that the audio/visual data stream 105 can take, namely from audio/visual recording device 104 to motion processing engine 106 (via audio/visual data stream 105a), to data depot 110 (via audio/visual data stream 105b), and/or to analytics engine 108 (via audio/visual data stream 105c).
[0048] Stored data 111 can be transmitted to validate component 112 as stored data 111a, whereby validate component 112 is configured to receive stored data 111a and to process it so as to determine whether, and to what extent, it is accurate, thereby generating validated data 113, which can be transmitted back to data depot 110 to be stored itself as stored data 111. Stored data 111 can also be transmitted to user interaction module 114 as stored data 111b, such as to be displayed in one form or another to a user, and can be transmitted back to data depot 110 as the same stored data 111b or as altered stored data 111c, such as where user interaction module 114 modifies stored data 111b in some respect.
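The cross-validation performed by validate component 112 can be sketched as matching system-detected movement times against maternal-perception times within a time window. The function name, result layout, and tolerance value are illustrative assumptions; the disclosure does not prescribe a specific matching rule.

```python
def validate(detected_times, perceived_times, tolerance_s=5.0):
    """Cross-validate detected movements against maternal perception:
    a detection is 'confirmed' when a perception was logged within
    tolerance_s seconds of it. Returns a summary suitable for storage
    back in the data depot as validated data."""
    confirmed = [
        t for t in detected_times
        if any(abs(t - p) <= tolerance_s for p in perceived_times)
    ]
    return {
        "confirmed": confirmed,
        "unconfirmed": [t for t in detected_times if t not in confirmed],
        "agreement": (len(confirmed) / len(detected_times)
                      if detected_times else 0.0),
    }
```

The `agreement` ratio gives one simple measure of how well machine detection and maternal perception corroborate one another over a recording session.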
[0050] Implementation of various system 100 embodiments includes operation of at least three separate hardware components (I/O, P/M/N, VC), where I/O=Input/Output Interface, P/M/N=Processor/Memory Storage/Network Interface, and VC=Audio/Video Recording Device. These hardware components can reside on the same device or on separate devices, as noted below. P/M/N, as referenced herein, comprises a processor (a computer), memory and/or storage (such as RAM, ROM, a hard drive, flash memory, etc., known and used for data storage), and a network interface configured to connect one or more devices 212, 214, 216 and/or a user interaction module 114 of the present disclosure to one another over a network. As shown in the accompanying block component diagram, these hardware components can be distributed across one or more devices in several configurations.
[0051] In a first embodiment, shown in column A of the block component diagram, the I/O, P/M/N, and VC components all reside upon a single device, such as a smartphone, a tablet computer, or a laptop computer.
[0052] In the second embodiment, shown in column B of the block component diagram, the I/O and P/M/N components reside upon a first device and the VC component resides upon a second device, with the first device configured to receive the audio/visual data stream from the second device.
[0053] In the third embodiment, shown in column C of the block component diagram, the I/O component resides upon a first device, the VC component upon a second device, and the P/M/N component upon a third device, with the third device configured to receive the audio/visual data from the second device, store it, and transmit the stored data to the first device.
[0054] In view of the foregoing, exemplary systems 100 of the present disclosure can use any number of devices 212, 214, 216, etc., which can individually or collectively perform each of the I/O, P/M/N, and VC functions. Columns A, B, and C of the block component diagram thus depict the single-device, two-device, and three-device configurations, respectively.
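The three deployment configurations can be summarized as a mapping from devices to the hardware functions they host, with a check that every configuration covers all three functions. The dictionary layout is an illustrative assumption, not a structure from the disclosure.

```python
# Hypothetical encoding of the three configurations: each maps
# device names to the set of hardware functions that device hosts.
DEPLOYMENTS = {
    "A": {"device_1": {"I/O", "P/M/N", "VC"}},                          # single device
    "B": {"device_1": {"I/O", "P/M/N"}, "device_2": {"VC"}},            # two devices
    "C": {"device_1": {"I/O"}, "device_2": {"VC"}, "device_3": {"P/M/N"}},  # three devices
}

def components_covered(deployment):
    """True when a deployment collectively provides I/O, P/M/N, and VC."""
    covered = set()
    for functions in deployment.values():
        covered |= functions
    return covered == {"I/O", "P/M/N", "VC"}
```

A check like this could gate system start-up, refusing to run until every required hardware function is accounted for somewhere in the deployment.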
[0057] The various systems 100 herein can be used to determine fetal well-being such as by way of obtaining raw data using an audio/visual recording device (input audio/visual data stream 105, 105a, 105b, and/or 105c), generating motion detection data 107, generating analyzed motion data 109, and/or generating validated data 113, which can be displayed in user interaction module 114 or otherwise be made available to a user of system 100 (or portions thereof). Said data can identify movements that are attributed to the fetus rather than to the mother or to other sources, and said fetal movement data can be analyzed and/or displayed, and potentially compared to benchmarks relating to fetal movement or the lack thereof, to determine fetal well-being. For example, if certain benchmarks identify frequency and/or extent/strength of fetal movement, and data obtained from system 100 identifies fetal movement frequency and/or extent/strength that meets said benchmarks, then a determination could be made, based upon said data from system 100, that the fetus makes appropriate movements. Conversely, if data obtained from system 100 identifies fetal movement frequency and/or extent/strength that does not meet said benchmarks, such as less frequent movement and/or weaker movements, then a determination could be made, based upon said data from system 100, that the fetus may have compromised well-being. Furthermore, should benchmarks identifying frequency and/or extent/strength of fetal movement as being related to one or more fetal conditions be met by data obtained by system 100, diagnoses of one or more fetal conditions could be made based upon said data, and a treatment plan could be generated/determined based upon said diagnoses.
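The benchmark comparison in paragraph [0057] reduces to checking observed movement metrics against threshold values. The sketch below uses placeholder thresholds that are illustrative assumptions only, not clinical figures, and its output strings stand in for whatever reporting the user interaction module would display.

```python
def assess_well_being(movements_per_hour, mean_amplitude,
                      min_rate=10.0, min_amplitude=0.1):
    """Compare observed fetal-movement metrics against benchmark
    thresholds. Default benchmark values are placeholders; real
    thresholds would come from clinical guidance."""
    if movements_per_hour >= min_rate and mean_amplitude >= min_amplitude:
        return "meets benchmarks"
    return "below benchmarks; review recommended"
```

Per the disclosure, an assessment below benchmark would not itself be a diagnosis; it flags the recording for review by a third party, who may determine whether intervention is needed.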
[0059] While various embodiments of systems and devices for identifying fetal movements in an audio/visual data feed and using the same to assess fetal well-being, and other methods of using the same, have been described in considerable detail herein, the embodiments are merely offered as non-limiting examples of the disclosure described herein. It will therefore be understood that various changes and modifications may be made, and equivalents may be substituted for elements thereof, without departing from the scope of the present disclosure. The present disclosure is not intended to be exhaustive or limiting with respect to the content thereof.
[0060] Further, in describing representative embodiments, the present disclosure may have presented a method and/or a process as a particular sequence of steps. However, to the extent that the method or process does not rely on the particular order of steps set forth therein, the method or process should not be limited to the particular sequence of steps described, as other sequences of steps may be possible. Therefore, the particular order of the steps disclosed herein should not be construed as limitations of the present disclosure. In addition, disclosure directed to a method and/or process should not be limited to the performance of their steps in the order written. Such sequences may be varied and still remain within the scope of the present disclosure.