Manual calibration of imaging system
10942022 · 2021-03-09
Assignee
Inventors
Cpc classification
G01B9/02091
PHYSICS
G06T7/80
PHYSICS
A61B5/7475
HUMAN NECESSITIES
A61B2560/0223
HUMAN NECESSITIES
A61B5/0073
HUMAN NECESSITIES
G06T2207/10101
PHYSICS
G06T2207/20101
PHYSICS
International classification
A61B5/00
HUMAN NECESSITIES
G06T7/80
PHYSICS
Abstract
The invention generally relates to methods for manually calibrating imaging systems such as optical coherence tomography systems. In certain aspects, an imaging system displays an image showing a target and a reference item. A user looks at the image and indicates a point within the image near the reference item. A processor detects an actual location of the reference item within an area around the indicated point. The processor can use an expected location of the reference item with the detected actual location to calculate a calibration value and provide a calibrated image. In this way, a user can identify the actual location of the reference point and a processing algorithm can give precision to the actual location.
Claims
1. A method of calibrating an imaging system, the method comprising: obtaining, by an imaging device of the imaging system, an image showing a target and a reference item, wherein the reference item comprises a structure of the imaging device disposed within the target; displaying the image showing the target and the reference item; receiving a user input indicating a point on or near the reference item, wherein the reference item and the point are located within the target on the image; detecting, via a morphological image processing operation, a portion of the reference item within an area around the indicated point, wherein the morphological image processing operation is selected from the group consisting of erosion, dilation, opening, closing, and combinations thereof; in response to detecting the portion of the reference item: detecting, via the morphological image processing operation, a remaining portion of the reference item using an expanded area around the indicated point; and determining a radial distance of the reference item from the imaging device; calculating a calibration value based on the radial distance of the reference item; and providing, based on the calibration value, a calibrated image of the target at a known scale.
2. The method of claim 1, wherein the imaging system is an optical coherence tomography system, and wherein providing the calibrated image comprises changing a reference path length of the imaging system to match a sample path length according to the calibration value.
3. The method of claim 1, wherein the reference item comprises a catheter sheath.
4. The method of claim 1, further comprising moving a component of the imaging system based on the calculated calibration value and performing a scan to provide the calibrated image.
5. The method of claim 1, further comprising digitally transforming image data to provide the calibrated image.
6. The method of claim 1, wherein the user input is a single mouse click or a single touch of a touchscreen.
7. The method of claim 1, wherein the area comprises a predetermined polygon or circle around the indicated point.
8. The method of claim 7, wherein the area is within the expanded area.
9. The method of claim 1, wherein the calibration value represents a z-offset associated with an interferometric device.
10. The method of claim 1, wherein the imaging device comprises a transducer.
11. An imaging system comprising: a processor and a computer-readable storage medium having instructions therein which, when executed by the processor, cause the system to: receive, from an imaging device of the imaging system, an image showing a target and a reference item, wherein the reference item comprises a structure of the imaging device disposed within the target; display the image showing the target and the reference item; receive a user input indicating a point on or near the reference item, wherein the reference item and the point are located within the target on the image; detect, via a morphological image processing operation, a portion of the reference item within an area around the indicated point, wherein the morphological image processing operation is selected from the group consisting of erosion, dilation, opening, closing, and combinations thereof; in response to detecting the portion of the reference item: detect, via the morphological image processing operation, a remaining portion of the reference item using an expanded area around the indicated point; and determine a radial distance of the reference item from the imaging device; calculate a calibration value based on the radial distance of the reference item; and provide, based on the calibration value, a calibrated image of the target at a known scale.
12. The system of claim 11, wherein the imaging system is an optical coherence tomography system, and wherein the computer-readable storage medium further has instructions therein which, when executed by the processor, cause the system to change a reference path length of the imaging system to match a sample path length according to the calibration value.
13. The system of claim 11, wherein the reference item comprises a catheter sheath.
14. The system of claim 12, further comprising a motor to move a component of the imaging system based on the calculated calibration value, wherein moving the component alters the reference path length.
15. The system of claim 11, wherein the system further digitally transforms image data to provide the calibrated image.
16. The system of claim 11, wherein the user input is a single mouse click or a single touch of a touchscreen.
17. The system of claim 11, wherein the area comprises a predetermined polygon or circle around the indicated point.
18. The system of claim 11, wherein the computer-readable storage medium has stored therein an expected location of the reference item.
19. The system of claim 11, wherein the calibration value represents a z-offset associated with an interferometric device.
20. The system of claim 11, further comprising the imaging device, wherein the imaging device comprises a transducer.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(25) The invention provides systems and methods for calibrating an imaging system. Systems and methods of the invention have application in imaging systems that require calibration to provide a scale. Exemplary systems include imaging and sensing systems based on principles of time-of-flight or coherent interference. In some embodiments, systems and applications contemplated for use with the invention include optical coherence tomography (OCT), time-of-flight cameras such as the CamCube 3.0 TOF camera sold under the trademark PDM[VISION] by PMDTechnologies GmbH (Siegen, Germany), or time-of-flight positron emission tomography (PET) technologies. See, e.g., Placht et al., 2012, Fast time-of-flight camera based surface registration for radiotherapy patient positioning, Med Phys 39:4-17; Karp et al., 2009, The benefit of time-of-flight in PET imaging, J Nucl Med 49:462-470. Other imaging systems for use with the invention include, for example, gated viewing, radiotherapy, intra-vascular ultrasound, magnetic resonance imaging, elastographic techniques such as magnetic resonance elastography or transient elastography systems such as FibroScan by Echosens (Paris, France), and electrical impedance tomography, as well as other applications in computer vision, robotics, art restoration, laser speed enforcement, and vision aids with security and military applications.
(26) In OCT systems, a light source is used to provide a beam of coherent light. The light source can include an optical gain medium (e.g., laser or optical amplifier) to produce coherent light by stimulated emission. In some embodiments, the gain medium is provided by a semiconductor optical amplifier. A light source may further include other components, such as a tunable filter that allows a user to select a wavelength of light to be amplified. Wavelengths commonly used in medical applications include near-infrared light, for example between about 800 nm and about 1700 nm.
(27) Generally, there are two types of OCT systems, common beam path systems and differential beam path systems, that differ from each other based upon the optical layout of the systems. A common beam path system sends all produced light through a single optical fiber to generate a reference signal and a sample signal, whereas a differential beam path system splits the produced light such that a portion of the light is directed to the sample and the other portion is directed to a reference surface. Common beam path systems are further described, for example, in U.S. Pat. Nos. 7,999,938; 7,995,210; and 7,787,127, the contents of each of which are incorporated by reference herein in their entirety.
(28) In a differential beam path system, the coherent light from the light source is input into an interferometer and split into a reference path and a sample path. The sample path is directed to the target and used to image the target. Reflections from the sample path are joined with the reference path, and the combination of the reference-path light and the sample-path light produces interference patterns in the resulting light. The light, and thus the patterns, are converted to electric signals, which are then analyzed to produce depth-resolved images of the target tissue on a micron scale. Exemplary differential beam path interferometers are Mach-Zehnder interferometers and Michelson interferometers. Differential beam path interferometers are further described, for example, in U.S. Pat. Nos. 7,783,337; 6,134,003; and 6,421,164, the contents of each of which are incorporated by reference herein in their entirety.
(29) Commercially available OCT systems are employed in diverse applications, including art conservation and diagnostic medicine, notably in ophthalmology where OCT can be used to obtain detailed images from within the retina. The detailed images of the retina allow one to identify diseases and trauma of the eye. Other applications of imaging systems of the invention include, for example, dermatology (e.g., to image subsurface structural and blood flow formations), dentistry (to image teeth and gum line), gastroenterology (e.g., to image the gastrointestinal tract to detect polyps and inflammation), and cancer diagnostics (for example, to discriminate between malignant and normal tissue).
(30) In certain embodiments, systems and methods of the invention image within a lumen of tissue. Various lumens of biological structures may be imaged including, for example, blood vessels, including, but not limited to, vasculature of the lymphatic and nervous systems, various structures of the gastrointestinal tract including lumen of the small intestine, large intestine, stomach, esophagus, colon, pancreatic duct, bile duct, hepatic duct, lumen of the reproductive tract including the vas deferens, vagina, uterus and fallopian tubes, structures of the urinary tract including urinary collecting ducts, renal tubules, ureter, and bladder, and structures of the head and neck and pulmonary system including sinuses, parotid, trachea, bronchi, and lungs. Systems and methods of the invention have particular applicability in imaging veins and arteries such as, for example, the arteries of the heart. Since an OCT system can be calibrated to provide scale information, intravascular OCT imaging of the coronary arteries can reveal plaque build-up over time, change in dimensions of features, and progress of thrombotic elements. The accumulation of plaque within the artery wall over decades is the setup for vulnerable plaque which, in turn, leads to heart attack and stenosis (narrowing) of the artery. OCT images, if scaled or calibrated, are useful in determining both plaque volume within the wall of the artery and/or the degree of stenosis of the artery lumen. Intravascular OCT can also be used to assess the effects of treatments of stenosis such as with hydraulic angioplasty expansion of the artery, with or without stents, and the results of medical therapy over time.
(33) As shown in
(37) Light source 827, as discussed above, may use a laser or an optical amplifier as a source of coherent light. Coherent light is transmitted to interferometer 831.
(39) An image is captured by introducing imaging catheter 826 into a target within a patient, such as a lumen of a blood vessel. This can be accomplished by using standard interventional techniques and tools such as a guide wire, guide catheter, or angiography system. Suitable imaging catheters and their use are discussed in U.S. Pat. Nos. 8,116,605 and 7,711,413, the contents of which are incorporated by reference in their entirety for all purposes.
(44) Looking back at
(45) After combining light from the sample, and reference paths, the combined light from splitter 919 is split into orthogonal polarization states, resulting in RF-band polarization-diverse temporal interference fringe signals. The interference fringe signals are converted to photocurrents using PIN photodiodes 929a, 929b, . . . on the OCB 851 as shown in
(46) Data is collected from A-scans A.sub.11, A.sub.12, . . . , A.sub.N, as shown in
(54) In some embodiments, an OCT system is operated with interchangeable, replaceable, or single-use catheters. Each catheter 826 may provide a different length to sample path 913. For example, catheters may be used that are designed to be of different lengths, like-manufactured catheters may be subject to imperfect manufacturing tolerances, or catheters may stretch during use. However, to provide a calibrated or scaled image, the z-offset must be known (for post-imaging processing) or set to zero. A z-offset can be known directly (e.g., numerically) or can be known by reviewing an image and determining an apparent difference in an actual location of an element within the image and an expected location of the element within the image.
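The relationship between an expected and an actual location can be sketched numerically. In this illustrative sketch, the function name and the millimeters-per-pixel scale are assumptions for illustration, not values from the specification; the z-offset is simply the apparent displacement of the reference element scaled into physical units:

```python
def z_offset_mm(expected_row_px, actual_row_px, mm_per_pixel):
    """Apparent z-offset: how far the reference element appears in the
    image from where a zero-offset system would place it, in millimeters.
    All names and the scale factor are illustrative assumptions."""
    return (actual_row_px - expected_row_px) * mm_per_pixel

# A sheath surface expected at row 120 but detected at row 132,
# at an assumed 0.005 mm per pixel, implies a 0.06 mm z-offset.
offset = z_offset_mm(120, 132, 0.005)
```

A zero result indicates the sample and reference paths already match; a nonzero result is the correction the system must apply, either physically or digitally.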
(55) In some embodiments, the z-offset is calibrated by inspecting an image being captured while the system is running in live mode, and adjusting the actual length of reference path 915 to match the length of sample path 913.
(56) VDL 925 on reference path 915 uses an adjustable fiber coil to match the length of reference path 915 to the length of sample path 913. The length of reference path 915 is adjusted by a stepper motor translating a mirror on a translation stage under the control of firmware or software. The free-space optical beam on the inside of the VDL 925 experiences more delay as the mirror moves away from the fixed input/output fiber. As VDL 925 is adjusted, a length of reference path 915 is known (based, for example, on manufactured specifications of the system).
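Assuming the free-space beam reflects off the VDL mirror and returns (a double-pass geometry, consistent with the delay growing as the mirror moves away from the fixed input/output fiber), translating the mirror by a distance d lengthens the reference path by 2d. A minimal sketch of the corresponding stepper-motor calculation, with an illustrative motor resolution:

```python
def vdl_steps(path_change_mm, steps_per_mm=1000):
    """Stepper-motor steps to change the reference path by path_change_mm.
    The beam travels to the mirror and back (double pass), so the optical
    path changes by twice the mirror travel. The steps_per_mm resolution
    is an illustrative assumption, not a value from the specification."""
    mirror_travel_mm = path_change_mm / 2.0
    return round(mirror_travel_mm * steps_per_mm)

# Lengthening the reference path by 0.06 mm needs 0.03 mm of mirror
# travel, i.e. 30 steps at the assumed 1000 steps/mm.
steps = vdl_steps(0.06)
```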
(57) In some embodiments, the known length of reference path 915 is used to display a calibration mark on a display. If the calibration mark is displayed at a position corresponding to a distal point on reference path 915, and if sample path 913 is the same length as reference path 915 (e.g., when z-offset is zero), it may be expected that a ring in a tomographic view that represents an outer surface of a catheter sheath will lie along the calibration mark.
(58) When a display includes a calibration mark and a ring-like element representing an outer surface of the catheter sheath separated from one another, an operator has a visual indication that the display is not calibrated.
(60) As shown in
(61) In some embodiments, z-offset calibration involves precisely determining the position of ring 211 (or lines 217) in display 237 so that the system can calculate a z-offset based on a known position of calibration mark 215. Systems of the invention can determine the position of ring 211 or any other calibration element based on user input and an image processing operation. Any suitable user input can be used. In some embodiments discussed below, user input is a click and drag operation to move ring 211 to a calibration mark. In certain embodiments, user input is accepted in the form of a single click, a single touch of a touch screen, or some other simple gesture.
(63) The system can additionally use a processor to perform an image processing operation to detect sheath 211. In some embodiments, user input indicates a single point 221 on the screen. The system then defines an area around point 221.
(65) The system searches for the sheath within area 227 by performing a processing operation on the corresponding data. The processing operation can be any suitable search algorithm known in the art.
(66) In some embodiments, a morphological image processing operation is used. Morphological image processing includes operations such as erosion, dilation, opening, and closing, as well as combinations thereof. In some embodiments, these operations involve converting the image data to binary data, giving each pixel a binary value. With pixels within area 227 converted to binary, each pixel of catheter sheath 211 will be black, and the background pixels will predominantly be white. In erosion, every object pixel that touches a background pixel is changed into a background pixel. In dilation, every background pixel that is adjacent to an object pixel is changed into an object pixel. Opening is an erosion followed by a dilation, and closing is a dilation followed by an erosion. Morphological image processing is discussed in Smith, The Scientist and Engineer's Guide to Digital Signal Processing, 1997, California Technical Publishing, San Diego, Calif., pp. 436-442.
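These operations can be sketched in a few lines of pure Python on a binary image (object pixels as 1, background as 0). The 3×3 cross-shaped structuring element used here is one common choice, not one mandated by the description:

```python
def erode(img):
    """Binary erosion with a 3x3 cross: an object pixel that touches any
    background pixel in its 4-neighborhood becomes background. Pixels on
    the image border are treated as touching background."""
    h, w = len(img), len(img[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            if img[r][c] and all(
                0 <= r + dr < h and 0 <= c + dc < w and img[r + dr][c + dc]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            ):
                out[r][c] = 1
    return out

def dilate(img):
    """Binary dilation with a 3x3 cross: a background pixel adjacent to
    an object pixel becomes an object pixel."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for r in range(h):
        for c in range(w):
            if not img[r][c] and any(
                0 <= r + dr < h and 0 <= c + dc < w and img[r + dr][c + dc]
                for dr, dc in ((-1, 0), (1, 0), (0, -1), (0, 1))
            ):
                out[r][c] = 1
    return out

def opening(img):
    """Opening = erosion then dilation: removes speckle smaller than the
    structuring element while roughly preserving larger shapes."""
    return dilate(erode(img))
```

In practice, opening is useful here because isolated speckle noise around the sheath is eliminated by the erosion pass, while the sheath ring itself, being thicker than the structuring element, survives and is restored by the dilation pass.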
(67) If sheath 211 is not found within area 227, area 227 can be increased and the increased area can be searched. This strategy can exploit the statistical properties of signal-to-noise ratio (SNR) by which the ability to detect an object is proportional to the square root of its area. See Smith, Ibid., pp. 432-436.
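The search-then-expand strategy can be sketched as follows; the initial radius, growth factor, and cap are illustrative assumptions, and the "object" pixels would in practice come from the morphological processing described above:

```python
def find_object(img, point, radius=5, max_radius=40, grow=2):
    """Collect object pixels (value 1) within a disc around the
    user-indicated point; if none are found, enlarge the disc and retry.
    Returns a list of (row, col) hits, or [] if the cap is exceeded.
    The radii and growth factor are illustrative, not from the patent."""
    r0, c0 = point
    while radius <= max_radius:
        hits = [
            (r, c)
            for r in range(len(img))
            for c in range(len(img[0]))
            if img[r][c] and (r - r0) ** 2 + (c - c0) ** 2 <= radius ** 2
        ]
        if hits:
            return hits
        radius *= grow  # expand the search area and try again
    return []
```

Growing the area geometrically keeps the number of retries small, while starting small keeps the first search tight around the user's click.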
(68) With continued reference to
(69) While described above as detecting a reference item (e.g., catheter sheath 211) by receiving user input followed by using a processor to detect a location of the sheath, the steps can be performed in other orders. For example, the system can apply morphological processing operations to an entire image and detect every element, or every element that satisfies a certain quality criterion. Then the system can receive user input that indicates a point within an image and the user can then choose the pre-detected element that is closest to that point within the image. Similarly, the steps can be performed simultaneously.
(70) Using the methodologies herein, systems of the invention can detect an element within an image of an imaging system, such as an OCT system, with great precision, based on human input that need not be precise and computer processing that need not on its own be accurate. Based on this detection, an actual location of a catheter sheath is determined and thus a precise z-coordinate Z.sub.s for the catheter sheath (e.g., within a B-scan) is known. Where an expected z-coordinate Z.sub.c for the catheter sheath is known, based on information provided extrinsically, the z-offset, and thus a calibration value, can be determined. For example, in
(71) In some embodiments, the system calculates or uses the mean, median, or root-mean-squared distance of the sheath from the calibration mark to compute the calibration value. This may be advantageous in the event of interfering speckle noise, rough or acylindrical sheaths, non-uniform rotational distortion (NURD), angular displacement of a transducer within the sheath, off-center positioning of the transducer within the sheath, or a combination thereof. In certain embodiments, only a subset of the detected points is used, for example, for efficiency or performance optimization.
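A sketch of those summary statistics over the detected sheath pixels; the coordinate convention (sheath points and imaging-core center in the same units) and all names are illustrative:

```python
import math
from statistics import mean, median

def radial_distances(points, center):
    """Distance of each detected sheath pixel from the given center
    (e.g., the position of the imaging core)."""
    cx, cy = center
    return [math.hypot(x - cx, y - cy) for x, y in points]

def sheath_radius(points, center, method="median"):
    """Summarize the detected sheath as a single radius. The median is
    robust to speckle outliers; RMS weights large deviations more
    heavily; the mean sits in between."""
    d = radial_distances(points, center)
    if method == "mean":
        return mean(d)
    if method == "median":
        return median(d)
    if method == "rms":
        return math.sqrt(mean(x * x for x in d))
    raise ValueError(method)
```

For a rough or off-center sheath the three summaries diverge, which is why a robust choice such as the median can matter in noisy intravascular images.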
(73) It will be appreciated that the foregoing description is applicable in live mode or review mode. If the imaging system is operating in live mode, capturing an image of tissue, the calibration can be put into effect either by changing the length of reference path 915 so that the z-offset is zero or by transforming the dataset or on-screen image. The length of reference path 915 can be changed through the operation of the motor in the VDL. The distance Z.sub.c-Z.sub.s is converted into millimeters and a command is sent to move the VDL to a new position.
(74) If the dataset is to be transformed, either in live mode or while the system is operating in review mode, the system is digitally shifted, stretched, or a combination thereof.
(75) In another aspect, the invention provides a method for calibrating an imaging system based on receipt of user input that indicates a motion, such as a click-and-drag operation on a computer screen.
(77) This method allows a user to manually calibrate or apply any offset using a drag-and-drop operation on the tomographic view or on the ILD. While dragging, the distance between the grab point and the current point, represented by the tip of the mouse pointer (or the analogous finger-touch point on touchscreens), may be continuously calculated. In live mode, the image may be shifted digitally or by moving the VDL, and in review mode the image is transformed digitally, as discussed above.
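The continuously recalculated drag offset reduces to simple vector arithmetic; a minimal sketch with illustrative names:

```python
import math

def drag_offset(grab_point, current_point):
    """Offset implied by a drag gesture: the signed shift along each
    axis plus the Euclidean distance, recomputed on every pointer-move
    event while the drag is in progress."""
    (x0, y0), (x1, y1) = grab_point, current_point
    return x1 - x0, y1 - y0, math.hypot(x1 - x0, y1 - y0)

# Dragging from (100, 100) to (103, 104) is a (3, 4) shift of length 5.
dx, dy, dist = drag_offset((100, 100), (103, 104))
```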
(79) While discussed above using a surface of a catheter sheath as a reference item which is used as a basis for calibration, other reference items are suitable. For example, any item that can be depicted such that its expected location and actual location can be compared in a display of an imaging system may be used. In some embodiments, a fiducial marker or calibration bar is introduced into the imaging target having a known dimension (e.g., 1 nm, 1 mm, 1 cm). The system operates to display a scale or a grid based on an expected appearance of the known dimension. The user then gives input indicating a point in the display near the reference item and the system also detects a location of the reference item in an area around the indicated point. Based on the expected and actual locations or dimensions of the reference item, a calibration value is calculated and a calibrated image is provided. User input, displays, and methods of receiving user input and performing calculations may be provided by one or more computers.
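For a fiducial or calibration bar of known dimension, the calibration reduces to a ratio between physical size and apparent size; a minimal sketch (function names and the example numbers are illustrative):

```python
def scale_mm_per_px(known_length_mm, measured_length_px):
    """Calibration scale from a fiducial of known physical size whose
    apparent length has been measured in the image."""
    return known_length_mm / measured_length_px

def calibrate_length(length_px, scale):
    """Convert an on-screen measurement to physical units."""
    return length_px * scale

# A 1 mm calibration bar spanning 200 pixels gives 0.005 mm/px; a
# feature spanning 340 pixels in the same image is then 1.7 mm.
s = scale_mm_per_px(1.0, 200)
```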
(80) In certain embodiments, display 237 is rendered within a computer operating system environment, such as Windows, Mac OS, or Linux, or within a display or GUI of a specialized system. Display 237 can include any standard controls associated with a display (e.g., within a windowing environment) including minimize and close buttons, scroll bars, menus, and window resizing controls. Elements of display 237 can be provided by an operating system, windows environment, application programming interface (API), web browser, program, or combination thereof (for example, in some embodiments a computer includes an operating system in which an independent program such as a web browser runs, and the independent program supplies an API to render elements of a GUI). Display 237 can further include any controls or information related to viewing images (e.g., zoom, color controls, brightness/contrast) or handling files comprising three-dimensional image data (e.g., open, save, close, select, cut, delete, etc.). Further, display 237 can include controls (e.g., buttons, sliders, tabs, switches) related to operating a three-dimensional image capture system (e.g., go, stop, pause, power up, power down).
(81) In certain embodiments, display 237 includes controls related to three-dimensional imaging systems that are operable with different imaging modalities. For example, display 237 may include start, stop, zoom, save, etc., buttons, and be rendered by a computer program that interoperates with OCT or IVUS modalities. Thus display 237 can display an image derived from a three-dimensional data set with or without regard to the imaging mode of the system.
(83) A computer generally includes a processor for executing instructions and one or more memory devices for storing instructions, data, or both. Processors suitable for the execution of methods and operations described herein include, by way of example, both general and special purpose microprocessors (e.g., an Intel chip, an AMD chip, an FPGA). Generally, a processor will receive instructions or data from read-only memory, random access memory, or both. Generally, a computer will also include, or be operatively coupled to, one or more mass storage devices for storing data that represent a target such as bodily tissue. Any suitable computer-readable storage device may be used such as, for example, solid-state, magnetic, magneto-optical disks, or optical disks. Information carriers suitable for embodying computer program instructions and data include all forms of non-volatile memory, particularly tangible, non-transitory memory, including by way of example semiconductor memory devices (e.g., EPROM, EEPROM, NAND-based flash memory, solid state drive (SSD), and other flash memory devices); magnetic disks (e.g., internal hard disks or removable disks); magneto-optical disks; and optical disks (e.g., CD and DVD disks).
INCORPORATION BY REFERENCE
(84) References and citations to other documents, such as patents, patent applications, patent publications, journals, books, papers, and web contents, have been made throughout this disclosure. All such documents are hereby incorporated herein by reference in their entirety for all purposes.
EQUIVALENTS
(85) Various modifications of the invention and many further embodiments thereof, in addition to those shown and described herein, will become apparent to those skilled in the art from the full contents of this document, including references to the scientific and patent literature cited herein. The subject matter herein contains important information, exemplification and guidance that can be adapted to the practice of this invention in its various embodiments and equivalents thereof.