REMOTE MEDICAL EXAMINATION
20220104688 · 2022-04-07
CPC classification
A61B5/7282
HUMAN NECESSITIES
A61B1/04
HUMAN NECESSITIES
G16H50/20
PHYSICS
A61B1/07
HUMAN NECESSITIES
A61B5/0084
HUMAN NECESSITIES
G16H20/10
PHYSICS
A61B1/00108
HUMAN NECESSITIES
A61B5/6898
HUMAN NECESSITIES
International classification
A61B1/00
HUMAN NECESSITIES
A61B1/04
HUMAN NECESSITIES
A61B1/07
HUMAN NECESSITIES
A61B5/00
HUMAN NECESSITIES
Abstract
A platform, tips, and otoscope systems are described herein that can aid in the evaluation of human ears (specifically children's ears), diagnose middle ear disease, and suggest appropriate treatments. The platform can provide end-to-end evaluation, treatment, and delivery of treatment to a user without requiring an office visit, and improves the accuracy and ease of use of such systems.
Claims
1. A system for at-home imaging of a child's ear canal and diagnosis of middle ear disease, the system comprising: a light source configured to illuminate the ear canal; a camera configured to provide downward-and-forward facing images of the ear canal; an otoscope; and a web-based application configured to: receive the downward-and-forward facing images from the camera; autonomously capture the downward-and-forward facing images depicting at least a portion of a tympanic membrane; and autonomously classify the at least portion of the tympanic membrane from the captured downward-and-forward facing images to indicate the status of the middle ear.
2. The system of claim 1, wherein the camera is located within a smartphone otoscope attachment.
3. The system of claim 1, wherein the web-based application is configured to provide an output indicative of a normal ear, middle ear fluid (“otitis media with effusion”), middle ear infection (“acute otitis media”), or insufficient image capture.
4. The system of claim 1, wherein the web-based application comprises a machine learning software component that has been trained with images of the same resolution as the camera.
5. The system of claim 4, wherein the software has been trained with images that include a rim of an at-home otoscope tip.
6. The system of claim 4, wherein the software has been trained with images taken at a variety of angles and orientations.
7. The system of claim 4, wherein the software has been trained with images that include cerumen at least partially blocking the view of the tympanic membrane.
8. The system of claim 1, further comprising a coating applied to the at least one otoscope tip, the coating configured to dissolve, displace, and repel obstructing cerumen.
9. The system of claim 1, wherein the at least one otoscope tip can be used for both ears.
10. The system of claim 1, wherein the at least one otoscope tip includes an otoscope tip configured for use with a left ear and an otoscope tip configured for use with a right ear.
11. The system of claim 1, wherein the at least one otoscope tip includes a plurality of otoscope tips.
12. The system of claim 1, wherein the otoscope tip accommodates the mean angle of the anterior ear canal (148 degrees) and the mean angle of the inferior ear canal (146 degrees).
13. The system of claim 1, wherein the otoscope tip is configured to suspend the otoscope in the ear canal.
14. An otoscope tip that is mountable onto an otoscope, the tip comprising: a distal end configured to receive an otoscope; a substantially conical portion that narrows as it extends from the distal end; and a substantially cylindrical portion coupled to the substantially conical portion and extending to a proximal end, wherein the otoscope tip comprises an optical pathway extending through the substantially conical portion and the substantially cylindrical portion, wherein an anterior angle formed between the substantially conical portion and the substantially cylindrical portion is between about 146 and about 148 degrees, and wherein an interior angle of the optical pathway between the substantially conical portion and the substantially cylindrical portion is between about 165 and about 170 degrees.
15. The otoscope tip of claim 14, further comprising a coating configured to dissolve cerumen.
16. An automated method for at-home imaging of the ear canal and diagnosis of middle ear disease, the method comprising: attaching an otoscope tip to an otoscope, the otoscope tip comprising: a distal end configured to receive an otoscope; a substantially conical portion that narrows as it extends from the distal end; and a substantially cylindrical portion coupled to the substantially conical portion and extending to a proximal end, wherein the otoscope tip comprises an optical pathway extending through the substantially conical portion and the substantially cylindrical portion, wherein an anterior angle formed between the substantially conical portion and the substantially cylindrical portion is between about 146 and about 148 degrees, and wherein an interior angle of the optical pathway between the substantially conical portion and the substantially cylindrical portion is between about 165 and about 170 degrees; guiding the otoscope into the ear canal; illuminating the ear canal with a light source; obtaining images of the ear canal via the otoscope; providing a web-based application configured to: receive the downward-and-forward facing images from the camera; autonomously capture the downward-and-forward facing images depicting at least a portion of a tympanic membrane; and autonomously classify the at least portion of the tympanic membrane from the captured downward-and-forward facing images to indicate the status of the middle ear.
17. The automated method of claim 16, wherein the web-based application is configured to provide an output indicative of a normal ear, middle ear fluid (“otitis media with effusion”), middle ear infection (“acute otitis media”), or insufficient image capture.
18. The automated method of claim 16, wherein the web-based application comprises a machine learning software component that has been trained with images of the same resolution as the camera.
19. The automated method of claim 18, wherein the software has been trained with images that include a rim of an at-home otoscope tip.
20. The automated method of claim 18, wherein the software has been trained with images taken at a variety of angles and orientations.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE DRAWINGS
[0020] In general, described herein are apparatuses and methods for the visual detection of the tympanic membrane (TM) and diagnosis of the middle ear, specifically that of a child. These apparatuses and methods are configured for use by a non-healthcare professional, such as a parent or caregiver. One such system described herein comprises a camera, light, otoscope, otoscope tip, and web-based application. The otoscope may be an otoscope typically used in a clinical setting or a smartphone otoscope attachment. The camera and/or light may be part of the traditional otoscope or may be the smartphone's camera and light. Various embodiments of the otoscope tip exist, including (but not limited to) those illustrated in the drawings.
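The autonomous capture recited in the claims (keeping only frames that depict at least a portion of the tympanic membrane) can be pictured as a quality-gated loop. The sketch below is illustrative only: the `Frame` fields, scoring, and thresholds are hypothetical placeholders, not values or methods from this disclosure.

```python
from dataclasses import dataclass
from typing import Iterable, List


@dataclass
class Frame:
    """A single video frame from the otoscope camera (pixel data elided)."""
    sharpness: float            # 0..1, hypothetical focus score
    tm_visible_fraction: float  # 0..1, fraction of tympanic membrane in view


def auto_capture(frames: Iterable[Frame],
                 min_sharpness: float = 0.6,
                 min_tm_fraction: float = 0.25,
                 needed: int = 3) -> List[Frame]:
    """Stream frames until `needed` frames show enough of the TM in focus.

    Thresholds are illustrative placeholders, not values from the patent.
    """
    kept: List[Frame] = []
    for frame in frames:
        if (frame.sharpness >= min_sharpness
                and frame.tm_visible_fraction >= min_tm_fraction):
            kept.append(frame)
            if len(kept) >= needed:
                break
    return kept
```

Frames failing either gate are discarded, so a downstream classifier only sees images that plausibly depict the ear drum.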
[0021] One embodiment of otoscope tip 100 is shown in the drawings.
[0022] In general, the most useful information for identifying an ear infection comes from “downward-and-forward facing” image capture. By “forward,” this disclosure refers to photographs facing toward the face from the inside of the ear canal. By “downward,” this disclosure refers to photographs facing toward the bottom of the patient's ear when the patient is upright/standing. Some specific examples of sizes and angles that can be used to obtain such images are described with respect to the drawings.
[0025] Rather than bending around the anterior and inferior angles of the ear canal, alternative embodiments of the otoscope tip position the otoscope in an optimal position, as shown in the drawings.
[0029] Instructions referring to the body's anatomy or general spatial directions may be provided on the surface of the device to orient it to the appropriate position in the ear. Instructions may be written and/or pictorial. General spatial directions can include written words or symbols. There can be one or more sets of orienting images, symbols, or directions. Each set can be color coded such that one color corresponds to instructions for one ear and a separate color to the opposite ear. Providing instructions increases user ease and comfort as well as aiding the success of auto-capture and visual classification.
[0030] Prior to using the otoscope, the parent (or other user) completes a medical history questionnaire, as indicated by the flow diagram in the drawings.
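The end-to-end flow described here and in the abstract (history questionnaire first, then imaging, classification, and a suggested next step) can be outlined as a small pipeline. The disclosure specifies only the ordering; every step implementation passed in below is a hypothetical stand-in.

```python
from typing import Callable, Dict, List


def run_visit(
    get_history: Callable[[], Dict[str, bool]],
    capture: Callable[[], List[bytes]],
    classify: Callable[[List[bytes]], str],
    recommend: Callable[[str, Dict[str, bool]], str],
) -> str:
    """One at-home 'visit': questionnaire, imaging, classification, suggestion.

    The step functions are placeholders supplied by the caller; only the
    ordering reflects the flow described in the text.
    """
    history = get_history()       # parent completes the medical history first
    images = capture()            # otoscope images of the ear canal
    diagnosis = classify(images)  # e.g. "normal", "OME", "AOM", "cannot assess"
    return recommend(diagnosis, history)
```

Structuring the platform as injected steps keeps the questionnaire, capture, and classifier components independently replaceable.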
[0031] Machine learning-enabled home diagnostics for middle ear disease are novel, transformative, and disruptive. The current state of the art for at-home diagnostics consists of healthcare providers struggling to see the ear drum through telemedicine, or on-call providers prescribing antibiotics for a presumed infection without having examined the ear drum.
[0032] The success of a platform such as the one disclosed herein hinges on its accuracy and its usability. Accurate outputs depend on accurate inputs. Accurate labeling of training images only occurs when the middle ear status is defined by findings made when a myringotomy (an incision in the ear drum) is performed or when the middle ear space is aspirated with a needle through the ear drum. For example, this can be achieved by photographing the ear drum directly before an incision is made in it for placing ear tubes. Once the incision is made, the contents of the middle ear space come through and are visible to the ENT surgeon. This allows for 100% accurate labeling of the image as being normal, having fluid, or having infection in the middle ear space. The presence of fluid is the definition of “otitis media with effusion,” and the presence of infected fluid is the definition of “acute otitis media.” The latter is treated with antibiotics while the former is not. It is believed that misdiagnosis of infection and overprescription of antibiotics are significant contributors to antibiotic resistance within society.
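The treatment logic in this paragraph (antibiotics for acute otitis media but not for otitis media with effusion, and a referral when images cannot be assessed) can be restated as a simple lookup over the four output classes recited in the claims. The enum and function names below are ours, not from the disclosure.

```python
from enum import Enum


class EarStatus(Enum):
    NORMAL = "normal ear"
    OME = "otitis media with effusion"  # middle ear fluid
    AOM = "acute otitis media"          # infected middle ear fluid
    CANNOT_ASSESS = "cannot assess"     # insufficient image capture


def next_step(status: EarStatus) -> str:
    """Restates the text: AOM is treated with antibiotics, OME is not,
    and unassessable images trigger a referral to a healthcare provider."""
    if status is EarStatus.AOM:
        return "antibiotics"
    if status is EarStatus.OME:
        return "observe; no antibiotics"
    if status is EarStatus.CANNOT_ASSESS:
        return "refer to healthcare provider"
    return "no treatment needed"
```

Keeping “cannot assess” as a first-class label, rather than forcing a diagnosis, is what routes poorly captured images to a provider instead of the classifier.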
[0033] The training images should include “real world” images of fidelity comparable to what a parent can achieve at home, in addition to images that physicians can achieve in the operating room or in the office. Manipulating images to replicate various angles, out-of-focus capture, partial views of ear drums (rather than the entire ear drum), and views with wax partially obstructing the ear drum are all ways that the training images can be made to replicate the real-world images that parents can achieve at home. Multiple photographs can be taken of each ear drum to build the training image set. They can be taken before any ear wax is removed so that the native state of the ear canal is captured. If no portion of the ear drum can be seen, these images can be labeled as “cannot assess,” so that when interacting with the platform a child will be referred appropriately to their healthcare provider for an assessment rather than the algorithm attempting to apply a diagnosis. An image can also be taken after the ear is cleaned to reveal the entire ear canal and ear drum. The images can be taken with high-definition surgical instruments such as endoscopic cameras, or with a commercially available smartphone otoscope attachment. The latter provides an image quality similar to what parents could achieve in the home setting. If there is reasonable fidelity between future home images and the images that the algorithm is trained and tested with, the accuracy of the algorithm found in testing should translate to home use.
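The image manipulations described above (partial views, wax occlusion, and the like) can be sketched as toy augmentation functions. These operate on a nested-list grayscale image purely for illustration; a real pipeline would use an image library, and the fractions and "wax" value here are arbitrary.

```python
import random
from typing import List

Image = List[List[float]]


def occlude(img: Image, frac: float = 0.3, value: float = 1.0) -> Image:
    """Simulate cerumen partially blocking the view: overwrite the bottom
    `frac` of rows with a constant 'wax' value."""
    cut = len(img) - int(len(img) * frac)
    return [row[:] if y < cut else [value] * len(row)
            for y, row in enumerate(img)]


def crop(img: Image, frac: float = 0.5) -> Image:
    """Simulate seeing only a portion of the ear drum: keep a corner region."""
    h = int(len(img) * frac)
    w = int(len(img[0]) * frac)
    return [row[:w] for row in img[:h]]


def augment(img: Image, rng: random.Random) -> Image:
    """Apply one distortion at random, as a stand-in for the variations
    (angles, focus, partial views, wax) described in the text."""
    return rng.choice([occlude, crop])(img)
```

Applying several such distortions to each operating-room photograph multiplies the labeled set with variants closer to what a parent captures at home.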
[0034] It should be understood that various aspects disclosed herein may be combined in different combinations than the combinations specifically presented in the description and accompanying drawings. It should also be understood that, depending on the example, certain acts or events of any of the processes or methods described herein may be performed in a different sequence, may be added, merged, or left out altogether (e.g., all described acts or events may not be necessary to carry out the techniques). In addition, while certain aspects of this disclosure are described as being performed by a single module or unit for purposes of clarity, it should be understood that the techniques of this disclosure may be performed by a combination of units or modules associated with, for example, a medical device.
[0035] In one or more examples, the described techniques may be implemented in hardware, software, firmware, or any combination thereof. If implemented in software, the functions may be stored as one or more instructions or code on a computer-readable medium and executed by a hardware-based processing unit. Computer-readable media may include non-transitory computer-readable media, which corresponds to a tangible medium such as data storage media (e.g., RAM, ROM, EEPROM, flash memory, or any other medium that can be used to store desired program code in the form of instructions or data structures and that can be accessed by a computer).
[0036] Instructions may be executed by one or more processors, such as one or more digital signal processors (DSPs), general purpose microprocessors, application specific integrated circuits (ASICs), field-programmable gate arrays (FPGAs), or other equivalent integrated or discrete logic circuitry. Accordingly, the term “processor” as used herein may refer to any of the foregoing structure or any other physical structure suitable for implementation of the described techniques. Also, the techniques could be fully implemented in one or more circuits or logic elements.