TIME SERIES DATA CONVERSION FOR MACHINE LEARNING MODEL APPLICATION
20230165505 · 2023-06-01
Inventors
CPC classification
G16H50/20
PHYSICS
G16H50/70
PHYSICS
International classification
A61B5/00
HUMAN NECESSITIES
G16H50/20
PHYSICS
Abstract
Techniques are described herein for converting time series data such as electrocardiogram (“ECG”) data into forms suitable for application across machine learning models, and for applying those converted data as input across machine learning models to, for instance, determine health conditions of underlying subjects. In various embodiments, a two-dimensional image may be generated (601) based on vectorcardiography (“VCG”) data, wherein the VCG data is measured directly or is based on electrocardiogram (“ECG”) data measured from a subject. The two-dimensional image may be applied (612) as input across a machine learning model to generate output, wherein the machine learning model is configured for use in processing two-dimensional images. A health condition of the subject may be determined (614) based on the output.
Claims
1. A method implemented using one or more processors, comprising: generating a two-dimensional image based on vectorcardiography (“VCG”) data, wherein the VCG data is recorded directly or is based on electrocardiogram (“ECG”) data measured from a subject; applying the two-dimensional image as input across a machine learning model to generate output, wherein the machine learning model is configured for use in processing two-dimensional images; and determining a health condition of the subject based on the output.
2. The method of claim 1, wherein the ECG data comprises multiple waveforms corresponding to multiple ECG leads.
3. The method of claim 2, further comprising converting each of the multiple waveforms into a respective single representative beat.
4. The method of claim 3, further comprising converting the multiple representative beats into three VCG beats, wherein each VCG beat corresponds to a heart vector in one dimension of three-dimensional (“3D”) space.
5. The method of claim 4, further comprising upsampling (606) the three VCG beats.
6. The method of claim 4, further comprising determining three VCG projections, each VCG projection representing a respective one of the three VCG beats on a spatial plane corresponding to a respective dimension of the 3D space.
7. The method of claim 6, further comprising encoding the three VCG projections into three corresponding layers of the two-dimensional image.
8. The method of claim 7, wherein the three corresponding layers comprise red, green, and blue.
9. The method of claim 1, wherein the ECG data comprises single lead data obtained from a wearable device worn by the subject.
10. A device comprising a processor and memory, wherein the memory stores instructions that, in response to execution of the instructions by the processor, cause the device to: generate a two-dimensional image based on vectorcardiography (“VCG”) data, wherein the VCG data is either measured directly or is based on electrocardiogram (“ECG”) data measured from a subject; apply the two-dimensional image as input across a machine learning model to generate output, wherein the machine learning model is configured for use in processing two-dimensional images; and determine a health condition of the subject based on the output.
11. The device of claim 10, wherein the device comprises a wearable device worn by the subject.
12. The device of claim 10, wherein the ECG data comprises multiple waveforms corresponding to multiple ECG leads.
13. The device of claim 12, further comprising instructions to: convert each of the multiple waveforms into a respective single representative beat; convert the multiple representative beats into a plurality of VCG beats, wherein each VCG beat corresponds to a heart vector in one dimension of multi-dimensional space; and determine a plurality of VCG projections, each VCG projection representing a respective one of the plurality of VCG beats on a spatial plane corresponding to a respective dimension of the multi-dimensional space.
14. The device of claim 13, further comprising instructions to upsample the plurality of VCG beats.
15. At least one non-transitory computer-readable medium comprising instructions that, in response to execution of the instructions by one or more processors, cause the one or more processors to perform the method of claim 1.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0013] In the drawings, like reference characters generally refer to the same parts throughout the different views. Also, the drawings are not necessarily to scale, emphasis instead generally being placed upon illustrating various principles of the embodiments described herein.
DETAILED DESCRIPTION
[0022] Modern artificial intelligence (“AI”) techniques such as deep learning have numerous applications, and image processing may be one of the most developed. While relatively adaptable across domains, these deep learning models may not be configured to process time-series data such as electrocardiogram (“ECG”) waveforms. Moreover, AI models that process time-series data are more complex, less readily available, and, even when available, are not easily adapted for new domains. In view of the foregoing, various embodiments and implementations of the present disclosure are directed to converting time series data, such as ECG waveforms, into forms suitable for application across non-time-series machine learning models.
[0024] In
[0025] Techniques described herein are not limited to twelve-lead ECG data (or to ECG data at all, for that matter). For example, a second subject 100.sub.2 is being monitored by a wearable ECG device in the form of a smart watch 114. This smart watch 114 may provide ECG data of a lesser number of leads, e.g., one lead, to an intermediate computing device, such as a laptop 115 operated by second subject 100.sub.2. Laptop 115 in turn may provide this ECG data to HIS 104 via network(s) 108 (or in some cases smart watch 114 may provide the ECG data directly to HIS 104 via network(s) 108).
[0026] A training system 120 and an inference system 124 may be implemented using any combination of hardware and software in order to create, manage, and/or apply machine learning model(s) stored in a machine learning (“ML”) model database 122. Training system 120 may be configured to apply training data such as two-dimensional images generated using techniques herein as input across one or more of the models in database 122 to generate output. The output generated using the training data may be compared to labels associated with the training data in order to determine error(s) associated with the model(s). A training example’s label may indicate, for instance, the presence and/or probability of a health condition in a subject from which the training example was generated. These error(s) may then be used, e.g., by training system 120, to train the model(s) using techniques such as back propagation and gradient descent (stochastic or otherwise).
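The apply-compare-backpropagate loop that training system 120 performs can be sketched in highly simplified form. The logistic-regression stand-in below is illustrative only (the toy data, names, and single-layer model are assumptions, not part of the disclosure); it shows gradient descent on the error between model output and training labels:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_step(weights, images, labels, lr=0.1):
    """One gradient-descent step: apply the model to the training
    images, compare its output to the labels, and update the weights
    from the resulting error (cross-entropy gradient)."""
    x = images.reshape(len(images), -1)   # flatten each 2D image
    preds = sigmoid(x @ weights)          # model output
    error = preds - labels                # output vs. label
    grad = x.T @ error / len(images)      # gradient of the loss
    return weights - lr * grad            # gradient-descent update

# toy training set: eight 4x4 "images" with binary labels
rng = np.random.default_rng(0)
images = rng.random((8, 4, 4))
labels = rng.integers(0, 2, 8).astype(float)
w = np.zeros(16)
for _ in range(100):
    w = train_step(w, images, labels)
```

A real embodiment would use a deep network and stochastic mini-batches; the compare-then-update structure is the same.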
[0027] Inference system 124 may be configured to use the trained machine learning model(s) in database 122 to infer health conditions of subjects based on two-dimensional imagery generated using techniques described herein. In some embodiments, training system 120 and/or inference system 124 may be implemented as part of a distributed computing system that is sometimes referred to as the “cloud,” although this is not required.
[0029] In some embodiments, the ability to make these inferences may be provided as part of a software application that aids doctor 112 with diagnosis, e.g., a clinical decision support (“CDS”) application. In some such embodiments, doctor 112 may rely on the inference as a “second opinion” to buttress or challenge their own medical opinion. Alternatively, the inferences may be used as ECG screening tests to separate, for instance, normal from abnormal ECG signals in a presumed healthy population (e.g., students, athletes, soldiers, etc.) so that further investigation can be performed on those subjects having abnormal ECG signals. Additionally or alternatively, techniques described herein may be incorporated into medical equipment that also incorporates ECG signals, such as exercise stress testing machines, defibrillators, cardiographs, bedside monitors, and so forth.
[0030] In some embodiments, subjects 100.sub.1-2 themselves may take advantage of disclosed techniques to determine likelihoods that they have health conditions, e.g., whether their heart beats are normal or abnormal. For example, second subject 100.sub.2 may operate laptop 115 in order to interface with, and receive health condition inferences from, inference system 124. In some embodiments, e.g., to preserve privacy and/or respect privacy laws and regulations, one or more trained machine learning model(s) may be distributed from database 122 to remote computing devices (e.g., laptop 115). That way, the remote computing devices can perform selected aspects of the present disclosure on locally-obtained time-series data (e.g., ECG data) without having to transport the ECG data to, for instance, the “cloud.”
[0032] In
[0033] In
[0034]
[0035] In addition, in some embodiments, directions of the individual VCG projections may be preserved, as shown in the arrows depicted as part of each of the projections 248R-B (also visible in the two-dimensional image 250). These directions may be leveraged as additional features that are provided as inputs to a CNN to make inferences about medical conditions of subjects.
[0036] By implementing the conversions depicted in
[0037] For example, some machine learning models may already be trained, e.g., with millions of images, to classify images into thousands of object categories (such as keyboard, coffee mug, pencil, various animals, etc.). Such a network may have learned rich feature representations for a wide range of images. Accordingly, such a network can be commandeered and fine-tuned to, for instance, classify an ECG signal as “normal” or “abnormal.”
[0038] An example of this is depicted in
[0039] As shown in the middle of
[0040] The following describes one non-limiting example of how transfer learning may be applied to leverage a preexisting model to classify ECG data (converted into two-dimensional imagery as described herein) as, for instance, normal or abnormal, or to infer other health conditions. These other health conditions may include, for instance, atrial fibrillation, heart murmur, hypertrophy, etc.
[0041] First, a pre-trained neural network 560 may be obtained. As shown in
[0042] In some embodiments, the training options and/or parameters may be set such that the new layers 564 may learn much faster than the transferred layers 562A. This may be achieved, for instance, by setting the initial learning rate for the transferred layers 562A to a smaller value compared to the newly added last three layers 564. When performing transfer learning, there may not be a need to train for as many epochs (an epoch is a full training cycle on the entire training dataset). This is because most of the network (transferred layers 562A) is already trained using a much larger training dataset.
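The layer-wise learning-rate scheme described above can be sketched as follows. The dictionary-of-layers representation and the specific rates are illustrative assumptions; the point is that transferred (pretrained) parameters receive a much smaller learning rate than the newly added layers:

```python
import numpy as np

# Assumed, illustrative rates: pretrained layers update slowly,
# newly added layers learn fast, as described above.
layer_lrs = {"transferred": 1e-4, "new": 1e-2}

def apply_updates(params, grads, layer_lrs):
    """Scale each layer's gradient by that layer's own learning rate."""
    return {name: params[name] - layer_lrs[name] * grads[name]
            for name in params}

params = {"transferred": np.ones((4, 4)), "new": np.zeros((4, 2))}
grads = {"transferred": np.full((4, 4), 0.5), "new": np.full((4, 2), 0.5)}
params = apply_updates(params, grads, layer_lrs)
```

In a deep-learning framework the same effect is typically achieved with per-parameter-group learning rates rather than a hand-rolled update.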
[0044] At 601, which includes blocks 602-610, the system may generate a two-dimensional image based on ECG data measured from a subject. In some embodiments, the operations of block 601 may be performed by inference system 124 or training system 120 depending on the circumstances. In other embodiments, they may be performed by a separate component, not depicted in
[0045] At block 602, the system may convert each of some number of waveforms of multi-lead ECG data into a respective single representative beat, as shown in
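The representative-beat conversion of block 602 can be sketched as averaging time-aligned beats of a single lead. The beat-segmentation inputs here are assumptions for illustration; a real embodiment would first detect and align beats (e.g., on the R peak):

```python
import numpy as np

def representative_beat(waveform, beat_starts, beat_len):
    """Average aligned beats of one ECG lead into a single
    representative beat (illustrative; real beat detection and
    alignment are more involved)."""
    beats = np.stack([waveform[s:s + beat_len] for s in beat_starts])
    return beats.mean(axis=0)

# toy lead: three identical "beats" back to back
beat = np.array([0.0, 1.0, 0.5, 0.0])
sig = np.concatenate([beat, beat, beat])
rep = representative_beat(sig, [0, 4, 8], 4)
```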
[0046] In some embodiments, at block 606, the system may upsample the (e.g., three) VCG beats converted at block 604, e.g., to improve spatial resolution. In other embodiments, this upsampling may be omitted. At block 608, the system may determine some number (e.g., three) of VCG projections based on the VCG beats. Each VCG projection may represent a respective one of the VCG beats on a spatial plane corresponding to a respective dimension of the 3D space. This was demonstrated in
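The upsampling of block 606 and the plane projections of block 608 can be sketched as below. The linear interpolation and the plane names are illustrative choices, not mandated by the disclosure:

```python
import numpy as np

def upsample(beat, factor):
    """Linearly interpolate a VCG beat to a higher sample count,
    improving the spatial resolution of the eventual 2D image."""
    n = len(beat)
    new_t = np.linspace(0, n - 1, n * factor)
    return np.interp(new_t, np.arange(n), beat)

def vcg_projections(vx, vy, vz):
    """Pair up the three VCG beats (heart-vector components) to form
    projections onto the three spatial planes of 3D space."""
    return {
        "frontal": np.column_stack([vx, vy]),     # x-y plane
        "transverse": np.column_stack([vx, vz]),  # x-z plane
        "sagittal": np.column_stack([vy, vz]),    # y-z plane
    }
```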
[0047] At block 610, the system may encode the three VCG projections into three corresponding layers or channels of a two-dimensional image. For example, one VCG projection in one spatial plane may be encoded in a red channel of an RGB digital image, another may be encoded in a green channel of the RGB image, and another may be encoded in the blue channel of the RGB image.
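The channel encoding of block 610 amounts to stacking three single-channel rasters into one RGB image. The sketch below assumes each VCG projection has already been rasterized to a common 2D grid:

```python
import numpy as np

def encode_rgb(proj_red, proj_green, proj_blue):
    """Stack three rasterized VCG projections into the red, green,
    and blue channels of a single two-dimensional image."""
    return np.stack([proj_red, proj_green, proj_blue], axis=-1)

# toy 32x32 rasters standing in for the three projections
image = encode_rgb(np.zeros((32, 32)), np.ones((32, 32)), np.full((32, 32), 0.5))
```

The resulting array has the height-by-width-by-3 shape that image-processing models such as CNNs conventionally expect.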
[0048] After blocks 602-610, the system is ready to use the generated multi-layered two-dimensional image to either make an inference (if the model is already trained), or to train the model. For example, at block 612, the system, e.g., by way of inference system 124 or training system 120, may apply the multi-layered two-dimensional image as input across the machine learning model to generate output. At block 614 of
[0049] Alternatively, the machine learning model may provide more than binary output. For example, in some embodiments, the machine learning model may generate, as output, a plurality of probabilities, each associated with a different health condition. In some such embodiments, the n (n≥1) health conditions having the highest probabilities may be presented to medical personnel, e.g., as part of a graphical user interface, as natural language output through a speaker, as part of a report on the underlying subject, etc.
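Selecting the n most probable health conditions from the model's output vector can be sketched as follows; the condition names and probabilities below are illustrative only:

```python
import numpy as np

def top_n_conditions(probs, condition_names, n=1):
    """Return the n health conditions with the highest predicted
    probabilities, highest first."""
    order = np.argsort(probs)[::-1][:n]
    return [(condition_names[i], float(probs[i])) for i in order]

condition_names = ["normal", "atrial fibrillation", "heart murmur", "hypertrophy"]
probs = np.array([0.10, 0.60, 0.20, 0.10])
top2 = top_n_conditions(probs, condition_names, n=2)
```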
[0050] In other embodiments where the machine learning model is being trained, the output generated at block 612 may be compared to a label associated with the input data. For example, the input data may be ECG data that was previously labeled by medical personnel as being, for instance, “normal,” “abnormal,” or as exhibiting one or more health conditions. The difference, or error, between the output generated using the machine learning model and the label may then be used, e.g., by training system 120, to train the machine learning model.
[0052] User interface input devices 722 may include a keyboard, pointing devices such as a mouse, trackball, touchpad, or graphics tablet, a scanner, a touchscreen incorporated into the display, audio input devices such as voice recognition systems, microphones, and/or other types of input devices. In general, use of the term “input device” is intended to include all possible types of devices and ways to input information into computing device 710 or onto a communication network.
[0053] User interface output devices 720 may include a display subsystem, a printer, a fax machine, or non-visual displays such as audio output devices. The display subsystem may include a cathode ray tube (CRT), a flat-panel device such as a liquid crystal display (LCD), a projection device, or some other mechanism for creating a visible image. The display subsystem may also provide non-visual display such as via audio output devices. In general, use of the term “output device” is intended to include all possible types of devices and ways to output information from computing device 710 to the user or to another machine or computing device.
[0054] Storage subsystem 724 stores programming and data constructs that provide the functionality of some or all of the modules described herein. For example, the storage subsystem 724 may include the logic to perform selected aspects of the method of
[0055] These software modules are generally executed by processor 714 alone or in combination with other processors. Memory 725 used in the storage subsystem 724 can include a number of memories including a main random access memory (RAM) 730 for storage of instructions and data during program execution and a read only memory (ROM) 732 in which fixed instructions are stored. A file storage subsystem 726 can provide persistent storage for program and data files, and may include a hard disk drive, a floppy disk drive along with associated removable media, a CD-ROM drive, an optical drive, or removable media cartridges. The modules implementing the functionality of certain implementations may be stored by file storage subsystem 726 in the storage subsystem 724, or in other machines accessible by the processor(s) 714.
[0056] Bus subsystem 712 provides a mechanism for letting the various components and subsystems of computing device 710 communicate with each other as intended. Although bus subsystem 712 is shown schematically as a single bus, alternative implementations of the bus subsystem may use multiple busses.
[0057] Computing device 710 can be of varying types including a workstation, server, computing cluster, blade server, server farm, or any other data processing system or computing device. Due to the ever-changing nature of computers and networks, the description of computing device 710 depicted in
[0059] The column on the far right of the matrix shows the percentages of all the examples predicted to belong to each class that are correctly (bolded) and incorrectly (italicized) classified. These metrics are often called the precision (or positive predictive value) and false discovery rate, respectively. The row at the bottom of the plot shows the percentages of all the examples belonging to each class that are correctly and incorrectly classified. These metrics are often called the recall (or true positive rate or sensitivity) and false negative rate, respectively. The cell in the bottom right of the plot shows the overall accuracy.
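The per-class metrics read off the confusion matrix above can be computed directly. The sketch below assumes (as a convention, not taken from the figure) that rows index the true class and columns index the predicted class, so precision is computed column-wise and recall row-wise:

```python
import numpy as np

def confusion_metrics(cm):
    """Per-class precision and recall plus overall accuracy from a
    confusion matrix whose rows are true classes and whose columns
    are predicted classes (an assumed convention)."""
    tp = np.diag(cm).astype(float)
    precision = tp / cm.sum(axis=0)  # correct / all predicted as class
    recall = tp / cm.sum(axis=1)     # correct / all truly in class
    accuracy = tp.sum() / cm.sum()
    return precision, recall, accuracy

# toy binary (e.g., normal vs. abnormal) confusion matrix
cm = np.array([[8, 2],
               [1, 9]])
precision, recall, accuracy = confusion_metrics(cm)
```

The false discovery rate and false negative rate mentioned above are simply one minus the precision and recall, respectively.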
[0060] The confusion matrix of
[0061] While several inventive embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the inventive embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the inventive teachings is/are used. Those skilled in the art will recognize, or be able to ascertain using no more than routine experimentation, many equivalents to the specific inventive embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, inventive embodiments may be practiced otherwise than as specifically described and claimed. Inventive embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the inventive scope of the present disclosure.
[0062] All definitions, as defined and used herein, should be understood to control over dictionary definitions, definitions in documents incorporated by reference, and/or ordinary meanings of the defined terms. The indefinite articles “a” and “an,” as used herein in the specification and in the claims, unless clearly indicated to the contrary, should be understood to mean “at least one.”
[0063] The phrase “and/or,” as used herein in the specification and in the claims, should be understood to mean “either or both” of the elements so conjoined, i.e., elements that are conjunctively present in some cases and disjunctively present in other cases. Multiple elements listed with “and/or” should be construed in the same fashion, i.e., “one or more” of the elements so conjoined. Other elements may optionally be present other than the elements specifically identified by the “and/or” clause, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, a reference to “A and/or B”, when used in conjunction with open-ended language such as “comprising” can refer, in one embodiment, to A only (optionally including elements other than B); in another embodiment, to B only (optionally including elements other than A); in yet another embodiment, to both A and B (optionally including other elements); etc.
[0064] As used herein in the specification and in the claims, “or” should be understood to have the same meaning as “and/or” as defined above. For example, when separating items in a list, “or” or “and/or” shall be interpreted as being inclusive, i.e., the inclusion of at least one, but also including more than one, of a number or list of elements, and, optionally, additional unlisted items. Only terms clearly indicated to the contrary, such as “only one of” or “exactly one of,” or, when used in the claims, “consisting of,” will refer to the inclusion of exactly one element of a number or list of elements. In general, the term “or” as used herein shall only be interpreted as indicating exclusive alternatives (i.e. “one or the other but not both”) when preceded by terms of exclusivity, such as “either,” “one of,” “only one of,” or “exactly one of.” “Consisting essentially of,” when used in the claims, shall have its ordinary meaning as used in the field of patent law.
[0065] As used herein in the specification and in the claims, the phrase “at least one,” in reference to a list of one or more elements, should be understood to mean at least one element selected from any one or more of the elements in the list of elements, but not necessarily including at least one of each and every element specifically listed within the list of elements and not excluding any combinations of elements in the list of elements. This definition also allows that elements may optionally be present other than the elements specifically identified within the list of elements to which the phrase “at least one” refers, whether related or unrelated to those elements specifically identified. Thus, as a non-limiting example, “at least one of A and B” (or, equivalently, “at least one of A or B,” or, equivalently “at least one of A and/or B”) can refer, in one embodiment, to at least one, optionally including more than one, A, with no B present (and optionally including elements other than B); in another embodiment, to at least one, optionally including more than one, B, with no A present (and optionally including elements other than A); in yet another embodiment, to at least one, optionally including more than one, A, and at least one, optionally including more than one, B (and optionally including other elements); etc.
[0066] It should also be understood that, unless clearly indicated to the contrary, in any methods claimed herein that include more than one step or act, the order of the steps or acts of the method is not necessarily limited to the order in which the steps or acts of the method are recited.
[0067] In the claims, as well as in the specification above, all transitional phrases such as “comprising,” “including,” “carrying,” “having,” “containing,” “involving,” “holding,” “composed of,” and the like are to be understood to be open-ended, i.e., to mean including but not limited to. Only the transitional phrases “consisting of” and “consisting essentially of” shall be closed or semi-closed transitional phrases, respectively, as set forth in the United States Patent Office Manual of Patent Examining Procedures, Section 2111.03. It should be understood that certain expressions and reference signs used in the claims pursuant to Rule 6.2(b) of the Patent Cooperation Treaty (“PCT”) do not limit the scope.