STONE IDENTIFICATION METHODS AND SYSTEMS
20220133255 · 2022-05-05
Assignee
Inventors
- Peter J. Pereira (Mendon, MA, US)
- Michael S. H. Chu (Brookline, MA)
- Elizabeth Stokley (Baltimore, MD, US)
- David Salto (Hopedale, MA, US)
- Candace Rhodes (Walpole, MA, US)
CPC classification
A61B8/12
HUMAN NECESSITIES
A61B17/2255
HUMAN NECESSITIES
G06F3/0488
PHYSICS
A61B5/1076
HUMAN NECESSITIES
A61B5/20
HUMAN NECESSITIES
A61B1/307
HUMAN NECESSITIES
A61B17/2256
HUMAN NECESSITIES
A61B6/5217
HUMAN NECESSITIES
A61B8/085
HUMAN NECESSITIES
G06F3/0484
PHYSICS
A61B18/26
HUMAN NECESSITIES
A61B1/0005
HUMAN NECESSITIES
A61B2018/00982
HUMAN NECESSITIES
International classification
A61B6/00
HUMAN NECESSITIES
A61B1/00
HUMAN NECESSITIES
A61B1/307
HUMAN NECESSITIES
A61B17/225
HUMAN NECESSITIES
A61B18/26
HUMAN NECESSITIES
A61B34/00
HUMAN NECESSITIES
A61B5/107
HUMAN NECESSITIES
A61B5/20
HUMAN NECESSITIES
A61B8/00
HUMAN NECESSITIES
A61B8/12
HUMAN NECESSITIES
G06F3/0484
PHYSICS
G06F3/0488
PHYSICS
Abstract
Aspects of stone identification methods and systems are described. According to one aspect, an exemplary method comprises: transmitting to a processing unit, with an imaging element mounted on a distal end of a scope, image data about a stone object inside a body cavity; generating from the image data, with the processing unit, a visual representation of the stone object and the body cavity; establishing from a user input, with the processing unit, a scale for the visual representation; determining from the visual representation, with the processing unit, a size of the stone object on the scale; comparing, with the processing unit, the size of the stone object with a predetermined maximum size to determine a removal status; and augmenting, with the processing unit, the visual representation to include an indicator responsive to the removal status. Associated systems are also described.
Claims
1-20. (canceled)
21. A method comprising: receiving, at a processor, image data of an object inside a body cavity; generating from the image data, with the processor, a representation of the object and the body cavity by transmitting at least a portion of the image data to an interface device; determining from the representation, with the processor, a size of the object on a scale for the representation, wherein determining the size of the object includes establishing, with the interface device, a first reference point and a second reference point on the representation of the object; comparing, with the processor, the size of the object with a predetermined maximum size to determine a removal status of the object by: determining a first removal status when the size of the object is greater than the predetermined maximum size; and determining a second removal status when the size of the object is less than the predetermined maximum size; and augmenting, with the processor, the representation to include an indicator responsive to the removal status.
22. The method of claim 21, wherein the interface device is a touchscreen display, and wherein the first and second reference points are established by touching the touchscreen display.
23. The method of claim 21, wherein augmenting the representation includes overlaying either a first indicator onto the representation based on the first removal status or a second indicator onto the representation based on the second removal status.
24. The method of claim 23, wherein determining the size of the object comprises: calculating, with the processor, a reference measurement between the first and second reference points; and determining from the reference measurement, with the processor, the size of the object using the scale.
25. The method of claim 21, wherein the scale is established by positioning a reference element adjacent the object and comparing a marker on the reference element to the object to determine a reference measurement.
26. The method of claim 25, wherein positioning the reference element comprises: moving an optical fiber until a distal portion of the optical fiber is located inside the body cavity; and positioning one or more indicators on the distal portion of the optical fiber adjacent the object.
27. The method of claim 21, wherein determining the size of the object comprises: obtaining from the image data, with an image analyzer, a reference measurement of the object within a first image frame included within the image data; and determining from the reference measurement, with the processor, a two-dimensional size of the object in the first image frame.
28. The method of claim 27, wherein the reference measurement includes a plurality of reference measurements, and wherein determining the size of the object comprises: determining from the plurality of reference measurements, with the processor, a cross-sectional area of the object within the first image frame.
29. The method of claim 28, wherein the image data is received from a camera mounted on a distal end of a scope, the method further comprising: moving the camera or the object to determine a depth of the object; and determining from the cross-sectional area and depth, with the processor, a volume of the object.
30. The method of claim 29, wherein the distal end of the scope includes a wave energy transducer, and determining the size of the object comprises: directing, with the processor, a wave energy from the wave energy transducer toward the object; receiving, with the transducer, a reflected portion of the wave energy; defining from the reflected portion of the wave energy, with the processor, a depth of the object; and determining from the cross-sectional area and depth, with the processor, a volume of the object.
31. The method of claim 30, wherein determining the size comprises: determining from the reflected portion of the wave energy, with the processor, a density of the object.
32. The method of claim 31, further comprising augmenting, with the processor, the representation to include an indicator responsive to at least one of the cross-sectional area of the object, the volume of the object, a surface area of the object, or the density of the object.
33. A method comprising: receiving, at a processor, image data of an object inside a body cavity; generating from the image data, with the processor, a representation of the object and the body cavity by transmitting at least a portion of the image data to an interface device; and determining from the representation, with the processor, a size of the object on a scale for the representation by: establishing, with the processor, the scale for the representation, wherein establishing the scale comprises positioning a reference element adjacent the object and comparing a marker on the reference element to the object to determine a reference measurement; and determining from the reference measurement, with the processor, the size of the object using the scale.
34. The method of claim 33, further comprising: comparing, with the processor, the size of the object with a predetermined maximum size to determine a removal status of the object by: determining a first removal status when the size of the object is greater than the predetermined maximum size; and determining a second removal status when the size of the object is less than the predetermined maximum size.
35. The method of claim 34, further comprising: augmenting, with the processor, the representation to include one or more indicators responsive to the removal status.
36. The method of claim 34, wherein the image data is received from a camera mounted on a distal end of a scope, and wherein the object is a stone object.
37. A method comprising: positioning a distal end of a scope inside a body cavity, wherein the distal end of the scope includes a camera; receiving, at a processor, image data of an object inside the body cavity from the camera; generating from the image data, with the processor, a representation of the object and the body cavity; determining from the representation, with the processor, a size of the object on a scale for the representation, wherein determining the size of the object includes establishing, with an interface device, a first reference point and a second reference point on the representation of the object; comparing, with the processor, the size of the object with a predetermined maximum size to determine a removal status of the object by: determining a first removal status when the size of the object is greater than the predetermined maximum size; and determining a second removal status when the size of the object is less than the predetermined maximum size; and generating, with the processor, a representation to include one or more indicators responsive to the removal status by overlaying either a first indicator onto the representation based on the first removal status or a second indicator onto the representation based on the second removal status.
38. The method of claim 37, wherein the interface device is a touchscreen display, and wherein the first and second reference points are established by touching the touchscreen display.
39. The method of claim 37, wherein the distal end of the scope includes a wave energy transducer, and wherein determining the size of the object further comprises: directing, with the processor, a wave energy from the wave energy transducer toward the object; receiving, with the transducer, a reflected portion of the wave energy; defining from the reflected portion of the wave energy, with the processor, a depth of the object; and determining from the size and depth of the object, with the processor, a volume of the object.
40. The method of claim 37, further comprising: moving the camera or the object to determine a depth of the object; and determining from the size and depth of the object, with the processor, a volume of the object.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0015] The accompanying drawings are incorporated in and constitute a part of this specification. These drawings illustrate aspects of the present disclosure that, together with the written descriptions herein, serve to explain this disclosure. Each drawing depicts one or more exemplary aspects according to this disclosure, as follows:
DETAILED DESCRIPTION
[0022] Aspects of the present disclosure are now described with reference to exemplary stone identification methods and systems. Some aspects are described with reference to medical procedures where a scope is guided through a body until a distal end of the scope is located in a body cavity including one or more stone objects. For example, the scope may include an elongated sheath that is guided through a urethra, a bladder, and a ureter until a distal end of the sheath is located in a calyx of a kidney, adjacent one or more kidney stones. References to a particular type of procedure, such as medical; body cavity, such as a calyx; and stone object, such as a kidney stone, are provided for convenience and not intended to limit the present disclosure unless claimed. Accordingly, the concepts described herein may be utilized for any analogous device or method—medical or otherwise, kidney-specific or not.
[0023] Numerous axes are described. Each axis may be transverse, or even perpendicular, to the next so as to establish a Cartesian coordinate system with an origin point O. One axis may extend along a longitudinal axis of an element or body path. The directional terms “proximal” and “distal,” and their respective initials “P” and “D,” may be utilized to describe relative components and features in relation to these axes. Proximal refers to a position closer to the exterior of the body or a user, whereas distal refers to a position closer to the interior of the body or further away from the user. Appending the initials “P” or “D” to an element number signifies a proximal or distal location. Unless claimed, these terms are provided for convenience and not intended to limit the present disclosure to a particular location, direction, or orientation.
[0024] As used herein, the terms “comprises,” “comprising,” or like variation, are intended to cover a non-exclusive inclusion, such that a device or method that comprises a list of elements does not include only those elements, but may include other elements not expressly listed or inherent thereto. Unless stated otherwise, the term “exemplary” is used in the sense of “example” rather than “ideal.” Conversely, the terms “consists of” and “consisting of” are intended to cover an exclusive inclusion, such that a device or method that consists of a list of elements includes only those elements.
[0025] An exemplary system 100 is now described with reference to
[0026] Scope 10 of
[0027] As shown in
[0028] Power and signal cord 26 is depicted in
[0029] As shown in
[0030] In
[0031] Imaging element 40 is mounted on distal end 30D of sheath 30, and operable to generate image data. Imaging element 40 may include any imaging technology. In
[0032] As shown in
[0033] An exemplary processing unit 60 is depicted in
[0034] Image analyzer 66 of
[0035] Transceiver 68 may include any wired or wireless transmission means configured to place the processing unit 60 in communication with the other elements of system 100 described herein. In
[0036] Aspects of visual representation 80 are now described. One aspect is a method 200 for generating and/or augmenting representation 80 that, as shown in
[0037] Transmitting step 210 may include any intermediate steps required to generate and transmit image data. Step 210 may include activating components of imaging element 40, such as digital camera circuit 42 and/or light emitting diode 44. For example, step 210 may comprise: generating, with imaging element 40, a video feed of stone object 5 at a predetermined frame rate; and transmitting, with cord 26, the video feed to processing unit 60. The video may be generated manually or automatically in step 210. For example, step 210 may comprise: placing system 100 in a manual mode, wherein the video feed is generated responsive to second actuator 24; or an automatic mode, wherein the feed is generated automatically responsive to targeting criteria established within memory 64. For example, in the automatic mode, image analyzer 66 may be configured to continually scan body cavity 3 and deliver an activation signal to camera circuits 42 and/or diodes 44 whenever stone object 5 has a minimum two-dimensional size, such as a minimum stone width (e.g., 2 mm).
[0038] Additional positioning steps may be performed to generate additional image data. For example, transmitting step 210 may comprise: moving imaging element 40 to a plurality of different positions about stone object 5, and generating image data at each of the different positions. Step 210 may comprise selecting one or more image frames from the video feed. For example, step 210 may comprise selecting a first image frame including stone object 5, selecting one or more second frames including the object 5, and transmitting the first and second frames to processing unit 60 along with positional data concerning the location and orientation of the first frame relative to the second frame and/or stone object 5.
[0039] Generating step 220 may comprise any intermediate step for generating visual representation 80 from image data. An exemplary visual representation 80 is depicted in
[0040] Establishing step 230 may include automatically or manually defining a scale for visual representation 80, and calibrating system 100 according thereto. In some aspects, the manufacturer may define the scale and calibrate system 100 based upon a predetermined distance between imaging element 40 and stone object 5, at which the output of image analyzer 66 comports with the actual size of object 5. For example, the predetermined distance may be proportionate to a focal length of digital camera circuit 42, allowing the actual size to be determined when circuit 42 is focused accordingly. Because the size of stone object 5 may be small (e.g., 5 mm or less), the scale may not need to be re-defined, even if the distance between imaging element 40 and stone object 5 varies slightly (e.g., +/−10%) from the predetermined distance. The calibration of system 100 may be affected prior to use (e.g., by shipping conditions). Accordingly, step 230 may comprise utilizing a reference element (e.g., a circle of known diameter) to re-define the scale and re-calibrate system 100 ex vivo, prior to use.
[0041] To accommodate a greater range of motion within body cavity 3 and/or improve the image data, the scale of visual representation 80 also may be defined and/or re-defined in vivo, during use. For example, the diameter of fiber 36 may be known, such that step 230 comprises positioning fiber 36 adjacent stone object 5 in visual representation 80; comparing the known diameter of fiber 36 with a portion of stone object 5 to determine a reference measurement; and defining the scale based on the reference measurement. As noted above, fiber 36 may include one or more markers 37, shown in
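The in-vivo scale definition of paragraph [0041] can be sketched as follows. This is an illustrative example only, not the claimed implementation: the function names, the 0.2 mm fiber diameter, and the pixel counts are hypothetical values chosen for the sketch.

```python
# Hypothetical sketch: derive a scale (mm per pixel) from a reference
# element of known physical size visible in the image, then apply that
# scale to a pixel measurement of the stone. All names and values are
# illustrative assumptions, not taken from the disclosure.

def define_scale(known_diameter_mm: float, measured_px: float) -> float:
    """Return the scale in millimeters per pixel from a reference element."""
    if measured_px <= 0:
        raise ValueError("reference element must span at least one pixel")
    return known_diameter_mm / measured_px

def stone_size_mm(stone_px: float, scale_mm_per_px: float) -> float:
    """Convert a pixel measurement of the stone to millimeters."""
    return stone_px * scale_mm_per_px

# Example: a 0.2 mm fiber spans 40 pixels; the stone spans 380 pixels.
scale = define_scale(0.2, 40.0)     # 0.005 mm per pixel
size = stone_size_mm(380.0, scale)  # 1.9 mm
```

The same two functions would serve for the ex-vivo re-calibration of paragraph [0040], with the known-diameter circle standing in for the fiber.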
[0042] Aspects of establishing step 230 may be performed by user 2 and/or processing unit 60. For example, user 2 may perform the comparing steps, and the defining steps may comprise inputting the scale to processing unit 60 (e.g., with one or more interface devices 12). One or more markers 37 may be shaped and spaced apart to provide a reference measurement readable by user 2 (e.g., like tick marks), and at least one marker 37 may include a computer-readable code (e.g., a QR code) that is readable by imaging element 40 to determine characteristics of fiber 36, allowing for automated and/or manual determinations of scale. For example, image analyzer 66 may determine the diameter of fiber 36 from the QR code, and automatically determine the scale therefrom, as described above. Establishing step 230 may be performed once within method 200 (e.g., ex vivo, in the factory or in the operating room), or repeatedly (in vivo, during a procedure, whenever imaging element 40 is moved).
[0043] Sizing step 240 may include any intermediate steps for determining a two- or three-dimensional size of stone object 5. Numerous automated and manual aspects of sizing step 240 are now described. Manual aspects of step 240 are shown in
[0044] However determined, manually or automatically, processing unit 60 may receive the reference measurement 12M between reference points 12A and 12B (e.g.,
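The two-reference-point measurement of paragraphs [0043]–[0044] reduces to a scaled distance between the user-set points. A minimal sketch, assuming the reference points arrive as pixel coordinates from the touchscreen and the scale was established as above (the coordinate values are hypothetical):

```python
import math

def reference_measurement_mm(p1, p2, scale_mm_per_px: float) -> float:
    """Distance between two user-established reference points (pixel
    coordinates), converted to millimeters by the established scale."""
    dx = p2[0] - p1[0]
    dy = p2[1] - p1[1]
    return math.hypot(dx, dy) * scale_mm_per_px

# Example: two touches 300 px apart on the display, at 0.01 mm/px.
stone_width = reference_measurement_mm((100, 100), (400, 100), 0.01)  # 3.0 mm
```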
[0045] A three-dimensional size of stone object 5 may be determined in sizing step 240 by tracking characteristics of stone object 5 on a frame by frame basis. For example, as noted above, transmitting step 210 may include moving imaging element 40 to generate image data at different positions relative to stone object 5. Accordingly, sizing step 240 may comprise: determining a two-dimensional size of stone object 5 at each different position, and determining a three-dimensional size of object 5 based on the two-dimensional sizes. For example, sizing step 240 may comprise: determining a first size (e.g., a cross-sectional stone area) of stone object 5 in a first image frame; determining a second size (e.g., a stone width) of stone object 5 in a second image frame arranged transversely with the first imaging frame; and determining a three-dimensional size (e.g., a stone volume) for object 5 based on the first and second sizes (e.g., as a product of the cross-sectional stone area multiplied by the stone width). Other three-dimensional sizes (e.g., a surface area) may be determined using similar techniques. An indicator of each size may be output to indicator layer 83, as before.
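The frame-by-frame volume estimate of paragraph [0045] — a cross-sectional area from a first frame multiplied by a width from a transverse frame — can be expressed directly. The function name and numeric values below are illustrative assumptions:

```python
# Sketch of the three-dimensional sizing in paragraph [0045]: the stone
# volume is approximated as the product of a cross-sectional area
# (first image frame) and a transverse stone width (second image frame).

def stone_volume_mm3(cross_section_mm2: float, width_mm: float) -> float:
    """Approximate stone volume as cross-sectional area times width."""
    return cross_section_mm2 * width_mm

# Example: a 3.1 mm^2 cross-section and a 2.0 mm transverse width.
volume = stone_volume_mm3(3.1, 2.0)  # 6.2 mm^3
```

The same product form applies when the stone, rather than the camera, is moved between measurements (paragraph [0046]).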
[0046] Sizing step 240 (or transmitting step 210) also may comprise moving stone object 5. For example, step 240 may comprise: determining a first size (e.g., a cross-sectional stone area) of stone object 5 in a first position; moving stone object 5 to a second position; determining a second size (e.g., a stone width) of stone object 5 in the second position; and determining a three-dimensional size (e.g., a stone volume) for object 5 based on the first and second sizes (e.g., as a product of the cross-sectional stone area multiplied by the stone width). The distal end 36D of optical fiber 36 may be used to move stone object 5. For example, distal end 36D may rotate stone object 5 differently in each of the first and second positions.
[0047] In some aspects, sizing step 240 includes a treatment step. For example, sizing step 240 may comprise: determining a first size (e.g., a cross-sectional stone area) of stone object 5 in a first condition; applying a treatment energy configured to place stone object 5 into a second condition; determining a second size (e.g., a stone width) of stone object 5 in the second condition; and/or determining a three-dimensional size (e.g., a stone volume) for object 5 based on the first and second sizes (e.g., as a product of the cross-sectional stone area multiplied by the stone width). The treatment energy may be laser energy that is discharged from distal end 36D of fiber 36 to break stone object 5 into a plurality of stone fragments, each of which may move (e.g., revolve) relative to the next. Aspects of sizing step 240 may be used to track and size each stone fragment frame-by-frame within the video feed.
[0048] Comparing step 250 may include any intermediate steps for determining the removal status of stone object 5. For example, step 250 may comprise: comparing, with processing unit 60, a size of stone object 5 (e.g., a maximum stone width) with a predetermined maximum size (e.g., a maximum width of working channel 34, a maximum capture width of a retrieval basket extendable therefrom, and/or a maximum width of a ureter or an access sheath). The predetermined maximum size may be relative to a maximum post-treatment width of stone object 5 so that the removal status may be utilized to determine whether object 5 need be further treated and/or removed. Once determined, an indicator of the removal status also may be output to indicator layer 83. Any number of removal statuses may be determined in this manner. For example, comparing step 250 may comprise: determining a first removal status if the size of stone object 5 is greater than the predetermined maximum width (e.g., greater than 2 mm), and determining a second removal status if the size of stone object 5 is less than said maximum width (e.g., less than 2 mm).
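The two-status comparison of paragraph [0048] can be sketched as a simple threshold test, using the 2 mm example threshold given above. The status labels are hypothetical; the disclosure only distinguishes a first and a second removal status:

```python
# Minimal sketch of comparing step 250: a first removal status when the
# stone size exceeds the predetermined maximum, a second status when it
# does not. The 2.0 mm default and the labels are illustrative.

def removal_status(size_mm: float, max_size_mm: float = 2.0) -> str:
    """Return the removal status for a measured stone size."""
    if size_mm > max_size_mm:
        return "first"   # e.g., stone requires further treatment
    return "second"      # e.g., stone may be removed as-is
```

An indicator responsive to the returned status would then be output to indicator layer 83, per augmenting step 260.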
[0049] Augmenting step 260 may include any intermediate steps for providing visual representation 80 with at least one indicator responsive to a characteristic of stone object 5. Numerous indicators have been described herein. For example, augmenting step 260 may comprise: overlaying portions of indicator layer 83 onto portions of image data layer 81. The overlaid portions of indicator layer 83 may include any of the indicators described above. For example, as shown in
[0050] Once augmented in step 260, visual representation 80 and/or other notification technologies may be used to alert user 2 regarding the removal status of stone object 5. For example, if a plurality of stone objects 5 are depicted in visual representation 80, then augmenting step 260 may comprise highlighting, in visual representation 80 (e.g.,
[0051] Visual representation 80 may be further augmented in step 260 to include indicators responsive to the scale and/or sizes described above. For example, as shown in
[0052] As described above, imaging element 40 may be moved relative to each of the one or more stone objects 5 to enhance the imaging data. Method 200 may be further modified to leverage these movements. For example, method 200 may comprise: identifying a reference identifier for stone object 5 (an identifying step 280); associating the reference identifier with characteristics of the object 5 (an associating step 282); tracking the characteristics during a procedure (a tracking step 284); and/or further augmenting visual representation 80 responsive to the characteristics (a further augmenting step 286). The reference identifier may be a fingerprint for each stone object 5 that is determined, with image analyzer 66, based on unique physical characteristics of object 5. An exemplary fingerprint may be based upon any aspect of the two- or three-dimensional sizes described herein.
[0053] Associating step 282 may comprise: linking each reference identifier with characteristics of stone object 5. The linked characteristic may include any two or three-dimensional size described herein, as well as any other information specific to stone object 5. Step 282 may be performed on a frame by frame basis whenever the fingerprint of stone object 5 is identified in step 280, even if the location of imaging element 40 is unknown. For example, identifying step 280 may be automatically performed by image analyzer 66 whenever stone object 5 is moved into view of imaging circuits 42, allowing user 2 to move imaging element 40 freely within body cavity 3.
[0054] Tracking step 284 may be used to continuously update visual representation 80. For example, tracking step 284 may comprise: identifying a first or initial size of stone object 5; determining a second or subsequent size of the object 5; and calculating a difference between the first and second sizes. If a difference of sufficient magnitude is detected (e.g., +/−5%), then further augmenting step 286 may comprise: updating indicator layer 83 to account for the difference; and updating the removal status of stone object 5.
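The change-detection test of tracking step 284 — comparing an initial size against a subsequent size and acting when the difference is of sufficient magnitude (the +/−5% example above) — can be sketched as:

```python
# Sketch of the difference check in tracking step 284. When the relative
# change between the initial and subsequent stone sizes meets the
# threshold (e.g., +/-5%), the indicator layer and removal status would
# be updated. Names and the default threshold are illustrative.

def size_changed(initial_mm: float, current_mm: float,
                 threshold: float = 0.05) -> bool:
    """Return True when the relative size difference meets the threshold."""
    return abs(current_mm - initial_mm) / initial_mm >= threshold
```

For example, a stone tracked from 2.0 mm to 2.2 mm (a 10% change) would trigger an update, while a drift to 2.05 mm (2.5%) would not.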
[0055] Aspects of method 200 have been described with reference to a single stone object 5; however, as shown in
[0056] Aspects of system 100 and method 200 have also been described with reference to one or more interface devices 12. Any number of interface devices 12 may be used. For example, as shown in
[0057] Numerous means for determining a stone depth for stone object 5 have been described above, including moving imaging element 40 and/or moving object 5. Alternate means are contemplated. For example, digital camera circuit 42 may be capable of focusing upon stone object 5, allowing stone depth to be determined relative to a focal length. Camera circuit 42 may alternatively include a plurality of cameras, allowing stone depth to be determined from binocular cues. Alternatively still, imaging element 40 may include a wave energy transducer 46 configured to generate wave energy images of body cavity 3 and/or stone object 5. Any type of wave energy can be used, including light or sound. As shown in
[0058] Aspects of method 200 may be modified for use with transducer 46. For example, sizing step 240 may comprise: directing, with processing unit 60, a wave energy from the wave energy transducer 46 toward a stone object 5; receiving, with transducer 46, a reflected portion of the wave energy; defining from the reflected portion of the wave energy, with the processing unit 60, a stone depth of stone object 5 in a direction transverse with an imaging plane of the image data; and determining the stone volume of object 5 based on the depth. According to some aspects, an average stone depth of stone object 5 may be determined responsive to a wave energy shadow created by stone object 5, and then multiplied with a cross-sectional stone area from step 240 to determine the stone volume. Because of transducer 46, the average stone depth may be determined without moving imaging element 40. Similar techniques may be utilized to determine a surface area and/or density of stone object 5; or to verify any two or three-dimensional size determined without the aid of transducer 46. As before, an indicator for these sizes may be output to indicator layer 83.
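For an acoustic transducer 46, the depth determination of paragraph [0058] could follow a time-of-flight calculation, and the volume would again be a product of area and depth. This is a hedged sketch: the echo-timing approach and the nominal sound speed in soft tissue (~1540 m/s) are assumptions, not details taken from the disclosure.

```python
# Hypothetical time-of-flight sketch for the wave-energy depth of
# paragraph [0058]. Assumes an acoustic transducer and a nominal speed
# of sound in soft tissue of 1540 m/s; both are assumptions.

def echo_depth_mm(round_trip_s: float, speed_m_per_s: float = 1540.0) -> float:
    """One-way depth, in mm, from a round-trip echo time in seconds."""
    return (speed_m_per_s * round_trip_s / 2.0) * 1000.0

def volume_from_depth_mm3(cross_section_mm2: float, avg_depth_mm: float) -> float:
    """Volume estimate: cross-sectional stone area times average depth."""
    return cross_section_mm2 * avg_depth_mm

# Example: a 2.6 microsecond round trip corresponds to ~2.0 mm of depth.
depth = echo_depth_mm(2.6e-6)
```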
[0059] As shown in
[0060] Method 200 may be modified for use with locator device 50. For example, identifying step 280 may include locating imaging element 40 from the location signal, with processing unit 60, in a body information model generated according to the exemplary systems and methods described in U.S. Provisional Patent Application No. 62/420,981, filed Nov. 11, 2016, the entirety of which is incorporated by reference herein. In this example, the reference identifier for each stone object 5 may include a reference location for each stone object 5 within the body information model, such that identifying step 280 comprises locating each stone object 5 within the body information model; and associating step 282 comprises linking each reference location with characteristics of stone object 5. Tracking step 284 may be further configured to track the location of device 50 relative to each reference location, allowing visual representation 80 to be generated and augmented responsive to the movements of imaging element 40 within body cavity 3.
[0061] While principles of the present disclosure are described herein with reference to illustrative aspects for particular applications, the disclosure is not limited thereto. Those having ordinary skill in the art and access to the teachings provided herein will recognize that additional modifications, applications, aspects, and substitutions of equivalents all fall within the scope of the aspects described herein. Accordingly, the present disclosure is not to be considered as limited by the foregoing description.