MEDICAL PROCEDURE DOCUMENTATION SYSTEM AND METHOD
20230073975 · 2023-03-09
Inventors
CPC classification
H04N23/66
ELECTRICITY
H04N7/18
ELECTRICITY
G16H20/40
PHYSICS
G16H15/00
PHYSICS
G16H10/60
PHYSICS
H04N9/8205
ELECTRICITY
International classification
G16H10/60
PHYSICS
H04N7/18
ELECTRICITY
Abstract
The present invention relates to a system and method for documenting medical procedures. The system can include an audio recording system and an image recording system. An actuator can be configured to actuate each of the audio recording system and the image recording system, wherein the audio recording system is configured to be actuated after the image recording system. A control system can be configured to receive input from each of the audio recording system and image recording system.
Claims
1. A medical procedure documentation system for capturing audio and images, comprising: an audio recording system; an image recording system; an actuator configured to actuate each of the audio recording system and the image recording system; and a control system configured to receive input from each of the audio recording system and the image recording system.
2. The medical procedure documentation system of claim 1, wherein the audio recording system includes an audio input, a processor, a memory device, and an audio activator.
3. The medical procedure documentation system of claim 2, wherein the audio input is a clinical documentation client electronic device.
4. The medical procedure documentation system of claim 3, wherein the clinical documentation client electronic device includes at least one of a lapel microphone, an embedded microphone, and an audio recording device.
5. The medical procedure documentation system of claim 1, wherein the image recording system includes an image input, a control device, a storage device, and an image activator.
6. The medical procedure documentation system of claim 5, wherein the image recording system further includes an image output.
7. The medical procedure documentation system of claim 6, wherein the image output is a monitor configured to display real time images.
8. The medical procedure documentation system of claim 5, wherein the image input is a video camera configured to capture video and still images.
9. The medical procedure documentation system of claim 5, wherein the image input is a camera configured to capture still images.
10. The medical procedure documentation system of claim 1, wherein the actuator includes a first switch configured to actuate the audio recording system and a second switch configured to actuate the image recording system.
11. The medical procedure documentation system of claim 10, wherein the actuator further includes a third switch configured to simultaneously actuate the audio recording system and the image recording system.
12. The medical procedure documentation system of claim 1, wherein the control system is configured to translate the audio from the audio recording system into text.
13. The medical procedure documentation system of claim 1, wherein the control system utilizes a machine learning process to associate related audio with a related image.
14. The medical procedure documentation system of claim 1, wherein the control system labels a procedure or a procedural step.
15. A method for documenting medical procedure information via audio and images, the method comprising steps of: providing a medical procedure documentation system including an audio recording system; an image recording system; an actuator configured to actuate each of the audio recording system and the image recording system, wherein the audio recording system is configured to be actuated after the image recording system; and a control system configured to receive input from each of the audio recording system and the image recording system; performing a medical procedure; depressing the actuator to engage the image recording system to create an image; releasing the actuator to engage the audio recording system; dictating the medical procedure information to the audio recording system to create a procedural note; signaling to the audio recording system to end recording; and processing the image and the procedural note with the control system, wherein the image and the procedural note are aligned.
16. The method for documenting medical procedures of claim 15, wherein the audio recording system includes an audio input, a processor, a memory device, and an audio activator.
17. The method for documenting medical procedures of claim 15, wherein the image recording system includes an image input, a control device, a storage device, and an image activator.
18. The method for documenting medical procedures of claim 15, wherein the actuator includes a first switch configured to actuate the audio recording system and a second switch configured to actuate the image recording system.
19. The method for documenting medical procedures of claim 18, wherein the actuator further includes a third switch configured to simultaneously actuate the audio recording system and the image recording system and further comprising pushing the third switch to simultaneously actuate the audio recording system and the image recording system.
20. The method for documenting medical procedures of claim 15, wherein the control system utilizes a machine learning process to associate related audio with a related image and further comprising associating the related audio with the related image using the machine learning process.
Description
DRAWINGS
[0022] The drawings described herein are for illustrative purposes only of selected embodiments and not all possible implementations, and are not intended to limit the scope of the present disclosure.
DETAILED DESCRIPTION
[0027] The following description of technology is merely exemplary in nature of the subject matter, manufacture and use of one or more inventions, and is not intended to limit the scope, application, or uses of any specific invention claimed in this application or in such other applications as may be filed claiming priority to this application, or patents issuing therefrom. Regarding methods disclosed, the order of the steps presented is exemplary in nature, and thus, the order of the steps can be different in various embodiments, including where certain steps can be simultaneously performed, unless expressly stated otherwise. “A” and “an” as used herein indicate “at least one” of the item is present; a plurality of such items may be present, when possible. Except where otherwise expressly indicated, all numerical quantities in this description are to be understood as modified by the word “about” and all geometric and spatial descriptors are to be understood as modified by the word “substantially” in describing the broadest scope of the technology. “About” when applied to numerical values indicates that the calculation or the measurement allows some slight imprecision in the value (with some approach to exactness in the value; approximately or reasonably close to the value; nearly). If, for some reason, the imprecision provided by “about” and/or “substantially” is not otherwise understood in the art with this ordinary meaning, then “about” and/or “substantially” as used herein indicates at least variations that may arise from ordinary methods of measuring or using such parameters.
[0028] Although the open-ended term “comprising,” as a synonym of non-restrictive terms such as including, containing, or having, is used herein to describe and claim embodiments of the present technology, embodiments may alternatively be described using more limiting terms such as “consisting of” or “consisting essentially of.” Thus, for any given embodiment reciting materials, components, or process steps, the present technology also specifically includes embodiments consisting of, or consisting essentially of, such materials, components, or process steps excluding additional materials, components or processes (for consisting of) and excluding additional materials, components or processes affecting the significant properties of the embodiment (for consisting essentially of), even though such additional materials, components or processes are not explicitly recited in this application. For example, recitation of a composition or process reciting elements A, B and C specifically envisions embodiments consisting of, and consisting essentially of, A, B and C, excluding an element D that may be recited in the art, even though element D is not explicitly described as being excluded herein.
[0029] As referred to herein, disclosures of ranges are, unless specified otherwise, inclusive of endpoints and include all distinct values and further divided ranges within the entire range. Thus, for example, a range of “from A to B” or “from about A to about B” is inclusive of A and of B. Disclosure of values and ranges of values for specific parameters (such as amounts, weight percentages, etc.) are not exclusive of other values and ranges of values useful herein. It is envisioned that two or more specific exemplified values for a given parameter may define endpoints for a range of values that may be claimed for the parameter. For example, if Parameter X is exemplified herein to have value A and also exemplified to have value Z, it is envisioned that Parameter X may have a range of values from about A to about Z. Similarly, it is envisioned that disclosure of two or more ranges of values for a parameter (whether such ranges are nested, overlapping or distinct) subsume all possible combination of ranges for the value that might be claimed using endpoints of the disclosed ranges. For example, if Parameter X is exemplified herein to have values in the range of 1-10, or 2-9, or 3-8, it is also envisioned that Parameter X may have other ranges of values including 1-9, 1-8, 1-3, 1-2, 2-10, 2-8, 2-3, 3-10, 3-9, and so on.
[0030] When an element or layer is referred to as being “on,” “engaged to,” “connected to,” or “coupled to” another element or layer, it may be directly on, engaged, connected or coupled to the other element or layer, or intervening elements or layers may be present. In contrast, when an element is referred to as being “directly on,” “directly engaged to,” “directly connected to” or “directly coupled to” another element or layer, there may be no intervening elements or layers present. Other words used to describe the relationship between elements should be interpreted in a like fashion (e.g., “between” versus “directly between,” “adjacent” versus “directly adjacent,” etc.). As used herein, the term “and/or” includes any and all combinations of one or more of the associated listed items.
[0031] Although the terms first, second, third, etc. may be used herein to describe various elements, components, regions, layers and/or sections, these elements, components, regions, layers and/or sections should not be limited by these terms. These terms may be only used to distinguish one element, component, region, layer or section from another region, layer or section. Terms such as “first,” “second,” and other numerical terms when used herein do not imply a sequence or order unless clearly indicated by the context. Thus, a first element, component, region, layer or section discussed below could be termed a second element, component, region, layer or section without departing from the teachings of the example embodiments.
[0032] Spatially relative terms, such as “inner,” “outer,” “beneath,” “below,” “lower,” “above,” “upper,” and the like, may be used herein for ease of description to describe one element or feature's relationship to another element(s) or feature(s) as illustrated in the figures. Spatially relative terms may be intended to encompass different orientations of the device in use or operation in addition to the orientation depicted in the figures. For example, if the device in the figures is turned over, elements described as “below” or “beneath” other elements or features would then be oriented “above” the other elements or features. Thus, the example term “below” can encompass both an orientation of above and below. The device may be otherwise oriented (rotated 90 degrees or at other orientations) and the spatially relative descriptors used herein interpreted accordingly.
[0033] The present technology relates to a medical procedure documentation system 100, and method 200 for documenting medical procedure information, shown generally in the accompanying figures.
[0034] As shown in
[0035] The audio recording system 102 can be configured to record audio from an operator 103. The audio can include dictations regarding various types of information. More specifically, the information can include information collected or related to triage, pathology, treatment, and/or testing, as based on what the operator 103 is experiencing as well as visualizing during the procedure. The audio can also include descriptions of treatments or interventions performed during a procedure. With reference to
[0036] With reference to
[0037] As a non-limiting example, the audio input 110 can include: one or more clinical documentation client electronic devices (e.g., clinical documentation client electronic device, examples of which may include but are not limited to a handheld microphone, a lapel microphone, an embedded microphone (such as those embedded within eyeglasses, smart phones, tablet computers and/or watches) and an audio recording device). A skilled artisan can select other suitable audio inputs 110 within the scope of the present disclosure.
[0038] As shown in
[0039] The memory device 118 can be configured to store audio dictated by the operator during a procedure, and the audio can later be translated by the control system 108 and coupled and temporally aligned with an image recorded by an image recording system 104. As a non-limiting example, the memory device 118 can include any combination of one or more random access memories (RAMs), read-only memories (ROMs) (which may be programmable), flash memory, and/or other similar storage devices. A skilled artisan can select any combination of memory device 118 within the scope of the present disclosure.
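As one non-limiting illustration, the buffering of dictated audio into the memory device 118 with capture timestamps (for later temporal alignment by the control system 108) can be sketched as follows. The class names, data structures, and method names below are hypothetical and are provided only to illustrate one possible embodiment; they do not form part of the claimed system.

```python
import time
from dataclasses import dataclass, field
from typing import Optional


@dataclass
class AudioSegment:
    """A dictated segment with capture timestamps for later alignment."""
    start_time: float
    samples: list = field(default_factory=list)
    end_time: Optional[float] = None


class MemoryDevice:
    """Hypothetical in-memory store standing in for the RAM/flash
    memory device 118 described above."""

    def __init__(self):
        self._segments = []
        self._active = None

    def begin_segment(self, now=None):
        # Open a new dictation segment, timestamped at capture time.
        self._active = AudioSegment(start_time=now if now is not None else time.time())

    def append(self, chunk):
        # Buffer raw audio chunks into the currently open segment.
        if self._active is None:
            raise RuntimeError("no active segment; call begin_segment() first")
        self._active.samples.append(chunk)

    def end_segment(self, now=None):
        # Close the segment and retain it for later translation/alignment.
        self._active.end_time = now if now is not None else time.time()
        self._segments.append(self._active)
        seg, self._active = self._active, None
        return seg

    @property
    def segments(self):
        return list(self._segments)
```

In this sketch, the stored start and end timestamps are what would later allow the control system to temporally align a dictated note with the image captured during the same interval.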
[0040] With reference to
[0041] The image recording system 104 can be configured to record a portion of the procedure. The image recording system 104 can also record a fluoroscopic segment or run, depending on the type of procedure. The image recording system 104 can provide the ability to capture live video and video frames as still images. With reference to FIG. XXX, the image recording system 104 can include an image input 120, a board 122, and an image activator 124.
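As one non-limiting illustration, the ability of the image recording system 104 to capture live video and to return a video frame as a still image can be sketched as follows. The class and method names below are hypothetical and illustrate only one possible embodiment of the image input 120.

```python
class ImageInput:
    """Hypothetical image input that buffers live video frames and can
    return the current frame as a still image, as described above for
    the image recording system 104."""

    def __init__(self):
        self._frames = []

    def push_frame(self, frame):
        # Append the latest live-video frame to the buffer.
        self._frames.append(frame)

    def capture_still(self):
        # Return the most recent video frame as a still image.
        if not self._frames:
            raise RuntimeError("no video frames buffered yet")
        return self._frames[-1]

    @property
    def video(self):
        # The full buffered video as a sequence of frames.
        return list(self._frames)
```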
[0042] As shown in
[0043] With reference to
[0044] The storage device 130, shown in
[0045] With reference to
[0046] Each of the audio recording system 102 and the image recording system 104 can be coupled to and activated by the actuator 106. The actuator 106 can be a physical actuator as shown in
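As one non-limiting illustration, the routing of the first, second, and third switches recited in claims 10 and 11 to the respective recording systems can be sketched as follows. The class names and switch labels below are hypothetical and illustrate only one possible embodiment of the actuator 106.

```python
class RecordingSystem:
    """Minimal stand-in for the audio/image recording systems."""

    def __init__(self, name):
        self.name = name
        self.active = False

    def activate(self):
        self.active = True


class Actuator:
    """Hypothetical actuator per claims 10-11: a first switch actuates
    the audio recording system, a second switch actuates the image
    recording system, and a third switch actuates both simultaneously."""

    def __init__(self, audio_system, image_system):
        self._targets = {
            "first": [audio_system],
            "second": [image_system],
            "third": [audio_system, image_system],
        }

    def press(self, switch):
        # Activate every system wired to the pressed switch.
        for system in self._targets[switch]:
            system.activate()
```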
[0047] With reference to
[0048] With reference to
[0049] It should be appreciated that the actuation of the audio recording system 102 can function in a manner referred to as a “reverse dead man switch,” which can promote privacy of a patient undergoing a procedure. Advantageously, the system 100 of the present disclosure does not rely on ambient listening to actuate the system 100. Undesirably, such ambient listening systems can capture patient information that was not meant to be recorded and is private to the patient. In contrast, the present system 100 can allow the operator 103 to start capturing images while providing a stopgap before the dictated notes are recorded, allowing the operator 103 to minimize any unnecessary recordings.
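As one non-limiting illustration, the “reverse dead man switch” behavior described above, in which depressing the actuator captures an image, releasing it begins dictation, and an explicit end signal stops recording, can be sketched as a small state machine. All names below are hypothetical and illustrate only one possible embodiment.

```python
class EventLog:
    """Records the order in which the recording systems are engaged."""

    def __init__(self):
        self.events = []


class ImageSystem:
    def __init__(self, log):
        self.log = log

    def capture(self):
        self.log.events.append("image_captured")


class AudioSystem:
    def __init__(self, log):
        self.log = log

    def start(self):
        self.log.events.append("audio_started")

    def stop(self):
        self.log.events.append("audio_stopped")


class ReverseDeadManSwitch:
    """Sketch of the reverse-dead-man behavior: no audio is captured
    outside the explicit dictation window, which can promote patient
    privacy as described above."""

    IDLE, DEPRESSED, DICTATING = "idle", "depressed", "dictating"

    def __init__(self, image_system, audio_system):
        self.image_system = image_system
        self.audio_system = audio_system
        self.state = self.IDLE

    def depress(self):
        # Image recording system engages first, upon depressing.
        self.image_system.capture()
        self.state = self.DEPRESSED

    def release(self):
        # Dictation begins only after the actuator is released.
        if self.state == self.DEPRESSED:
            self.audio_system.start()
            self.state = self.DICTATING

    def end_signal(self):
        # An explicit signal from the operator ends the recording.
        if self.state == self.DICTATING:
            self.audio_system.stop()
            self.state = self.IDLE
```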
[0050] The recorded images from the image recording system 104 and the dictated notes recorded by the audio recording system 102 can be processed by the control system 108. The control system 108 can be configured to decouple or otherwise separate the recorded audio and the recorded images during processing, while also appropriately associating the recorded audio with images so the data can be temporally aligned or correlated.
[0051] The control system 108 can be coupled to the audio recording system 102 and the image recording system 104 and can be configured to translate the spoken audio into text. The control system 108 can also be configured to produce still images from the captured images where video was recorded. The control system 108 can associate the still images and the written text such that the written text is temporally aligned with the still image related to the written text as dictated during the procedure. As a non-limiting example, the control system 108 can utilize a machine learning process to associate related text with related images. In other embodiments, the control system 108 can also label particular procedures, procedural steps, or runs, for example fluoroscopic runs performed by a radiologist, and can further militate against the need for technologist intervention during the procedure.
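As one non-limiting illustration, the temporal alignment of transcribed notes with still images can be sketched with a simple timestamp heuristic. This sketch is only a stand-in for the machine learning process the control system 108 may use; the function name and the (timestamp, payload) pair convention are hypothetical.

```python
def align_notes_to_images(notes, images):
    """Associate each transcribed note with the most recent still image
    captured at or before the note's timestamp. Both inputs are lists
    of (timestamp, payload) pairs."""
    images = sorted(images)
    pairs = []
    for note_time, text in sorted(notes):
        # Latest image whose capture time does not exceed the note time.
        candidates = [(t, img) for t, img in images if t <= note_time]
        matched = candidates[-1][1] if candidates else None
        pairs.append((text, matched))
    return pairs
```

In practice, such a heuristic could serve as a fallback or as training supervision for the learned association described above.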
[0052] Advantageously, the system 100 can allow the operator 103 to record dictated notes contemporaneously with the performed procedure. This can allow for more accurate notes while also eliminating additional documentation steps the operator 103 typically performs after the procedure is complete. Advantageously, the documentation system 100 can allow an operator 103 to dictate notes during a medical procedure which can later be coupled and temporally aligned with an image recorded by an image recording system 104. The temporally aligned notes and images can then be used for medical documentation purposes, such as billing.
[0053] The present disclosure further contemplates a method 200 for documenting a medical procedure, shown in
[0054] The method can further include a medical documentation system 100 having a third switch 136 configured to simultaneously actuate the audio recording system 102 and the image recording system 104. In a step 216, the third switch 136 can be pushed to simultaneously actuate the audio recording system 102 and the image recording system 104.
[0055] The method can also further include a medical documentation system 100 wherein the control system utilizes a machine learning process to associate related audio with a related image. In a step 218, the control system 108 can associate the related audio with the related image using the machine learning process.
[0056] Advantageously, the documentation system 100 can allow an operator 103 to dictate notes during a medical procedure which can later be coupled and temporally aligned with an image recorded by an image recording system 104. The temporally aligned notes and images can then be used for medical documentation purposes, such as billing.
[0057] Example embodiments are provided so that this disclosure will be thorough, and will fully convey the scope to those who are skilled in the art. Numerous specific details are set forth such as examples of specific components, devices, and methods, to provide a thorough understanding of embodiments of the present disclosure. It will be apparent to those skilled in the art that specific details need not be employed, that example embodiments may be embodied in many different forms, and that neither should be construed to limit the scope of the disclosure. In some example embodiments, well-known processes, well-known device structures, and well-known technologies are not described in detail. Equivalent changes, modifications and variations of some embodiments, materials, components, and methods can be made within the scope of the present technology, with substantially similar results.