ESTIMATING A POSITION OF AN ENDOSCOPE IN A MODEL OF THE HUMAN AIRWAYS
20230233098 · 2023-07-27
Assignee
Inventors
- Andreas Härstedt JØRGENSEN (Rødovre, DK)
- Finn SONNENBORG (Frederikssund, DK)
- Dana Marie YU (Ballerup, DK)
- Lee Herluf Lund LASSEN (Måløv, DK)
- Alejandro ALONSO DÍAZ (Ballerup, DK)
- Josefine Dam GADE (Frederiksberg, DK)
CPC classification
A61B5/066
HUMAN NECESSITIES
International classification
A61B5/06
HUMAN NECESSITIES
A61B1/00
HUMAN NECESSITIES
Abstract
Disclosed is an image processing device for estimating a position of an endoscope in a model of the human airways using a first machine learning data architecture trained to determine a set of anatomic reference positions, said image processing device comprising a processing unit operationally connectable to an image capturing device of the endoscope, wherein the processing unit is configured to: obtain a stream of recorded images; continuously analyse the recorded images of the stream of recorded images using the first machine learning data architecture to determine if an anatomic reference position of a subset of anatomic reference positions, from the set of anatomic reference positions, has been reached; and, where it is determined that the anatomic reference position has been reached, update the endoscope position based on the anatomic reference position. Also disclosed are an endoscope system comprising an endoscope and an image processing device, a display unit comprising an image processing device, and a computer program product.
Claims
1. An image processing device for estimating a position of an endoscope, said image processing device comprising: a processing unit operationally connectable to an image capturing device of the endoscope; a first machine learning data architecture trained to determine a set of anatomic reference positions; and a model of human airways, wherein the processing unit is configured to: obtain from the image capturing device of the endoscope a stream of recorded images during an endoscopic procedure; continuously analyse the recorded images of the stream of recorded images using the first machine learning data architecture to determine if the endoscope reached an anatomic reference position of a subset of anatomic reference positions from the set of anatomic reference positions, the subset comprising a plurality of anatomic reference positions; and where it is determined that the anatomic reference position has been reached, update the endoscope position based on the anatomic reference position and update the subset of anatomic reference positions.
2. (canceled)
3. (canceled)
4. The image processing device of claim 1, wherein the updated subset of anatomic reference positions comprises at least one anatomic reference position from the subset of anatomic reference positions.
5. The image processing device of claim 1, wherein the anatomic reference position comprises two or more lumens of a branching structure.
6. The image processing device of claim 1, further comprising a second machine learning architecture trained to detect lumens in an endoscope image, wherein the image processing device is configured to determine if two or more lumens are present in the at least one recorded image using the second machine learning architecture.
7. The image processing device of claim 5, wherein the image processing device is further configured to, where it is determined that the anatomic reference position has been reached, estimate a position of the two or more lumens in the model of the human airways.
8. The image processing device according to claim 5, wherein the image processing device is configured to, where it is determined that the anatomic reference position has been reached, estimate a position of the two or more lumens in the model of the human airways using the first machine learning architecture.
9. The image processing device of claim 7, wherein the image processing device is configured to determine whether one or more lumens are present in at least one subsequent recorded image and, where it is determined that one or more lumens are present in the at least one subsequent recorded image, determine the position of the one or more lumens in the model of the human airways based at least in part on a previously estimated position of the two or more lumens and/or a previous estimated endoscope position.
10. The image processing device of claim 7, wherein the image processing device is further configured to, in response to determining that the anatomic reference position has been reached: determine which one of the two or more lumens the endoscope enters; and update the endoscope position based on the determined one of the two or more lumens.
11. The image processing device of claim 10, wherein the image processing device is configured to determine which one of the two or more lumens the endoscope enters by analysing, in response to a determination that two or more lumens are present in the at least one recorded image, a plurality of the recorded images to determine a movement of the endoscope.
12. The image processing device of claim 10, wherein the anatomic reference position is a branching structure comprising a plurality of branches, and wherein the image processing device is further configured to: determine which branch from the plurality of branches the endoscope enters; and update the endoscope position based on the determined branch.
13. The image processing device of claim 10, wherein the processing unit is further configured to: where it is determined that the anatomic reference position has been reached, store a part of the stream of recorded images.
14. The image processing device of claim 1, wherein the processing unit is further configured to: subsequent to updating the subset of anatomic reference positions, update the model of the human airways based on the reached anatomic reference position.
15. The image processing device of claim 14, wherein the model of the human airways is a schematic model based on images from a magnetic resonance (MR) scan output and/or a computed tomography (CT) scan output.
16. The image processing device of claim 1, wherein the processing unit is further configured to: subsequent to the step of updating the endoscope position, perform a mapping of the endoscope position to the model of the human airways and display the endoscope position on a view of the model of the human airways.
17. The image processing device of claim 1, wherein the processing unit is further configured to: store at least one previous endoscope position and display on the model of the human airways the at least one previous endoscope position.
18. The image processing device of claim 1, further comprising input means for receiving a predetermined desired position in the lung tree, the processing unit being further configured to: indicate on the model of the human airways the predetermined desired position.
20. The image processing device of claim 1, wherein the first machine learning data architecture is trained by: determining a plurality of anatomic reference positions of the body cavity, obtaining a training dataset for each of the plurality of anatomic reference positions based on a plurality of endoscope images, and training the first machine learning data architecture using said training dataset.
20. The image processing device of claim 1, wherein the first machine learning data architecture is trained by: determining a plurality of anatomic reference positions of the body cavity, obtaining a training dataset for each of the plurality of anatomic reference positions based on a plurality of endoscope images, and training the first machine learning model using said training dataset.
21. An endoscope system comprising an endoscope and an image processing device according to claim 1.
22. An endoscope system according to claim 21, further comprising a display unit, wherein the display unit is operationally connectable to the image processing device, and wherein the display unit is configured to display at least a view of the model of the human airways.
23. (canceled)
24. (canceled)
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0161] The image processing devices, endoscope systems, and methods will now be described in greater detail based on non-limiting exemplary embodiments and with reference to the drawings.
[0173] Similar reference numerals are used for similar elements across the various embodiments and figures described herein.
DETAILED DESCRIPTION
[0176] The monitor 11 shown in
[0178] In the first step 61, a stream of images is obtained from an image capture device, such as a camera unit, of an endoscope. In step 62, an image from the stream of images is analysed to determine whether an anatomic reference position has been reached. In other embodiments, a plurality of images from the stream of images may be analysed sequentially or simultaneously in step 62.
[0179] Where it is determined in step 62 that an anatomic reference position has not been reached, the processing unit returns to step 61 as indicated by decision 62a. The step 61 of obtaining images from the image capturing unit as well as the step 62 of analysing the images may be carried out simultaneously and/or may be carried out sequentially.
[0180] An anatomic reference position is a position at which a furcation occurs. In step 62, the processing unit determines whether an anatomic reference position has been reached by determining whether a furcation is seen in an image from the obtained stream of images using a machine learning data architecture. The machine learning data architecture is trained to detect a furcation in images from an endoscope. In other embodiments, the anatomic reference positions may be other positions, potentially showing features different from or similar to that of a furcation.
[0181] Where it is determined in step 62 that an anatomic reference position has been reached, the processing unit is configured to proceed to step 63 as indicated by decision 62b. In step 63, an endoscope position is updated in a model of the human airways based on the determined anatomic reference position. This may comprise generating an endoscope position and/or removing a previous endoscope position and inserting a new endoscope position.
[0182] The endoscope position is determined in step 63 as one of a plurality of predetermined positions present in the model, based on the determination that an anatomic reference position has been reached and on a previous position, e.g. a position previously determined by the processing unit.
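As a non-limiting illustration, the loop of steps 61 to 63 may be sketched as follows. The function `classify_image` and the toy `ANATOMIC_POSITIONS` mapping are assumptions introduced for this example; they stand in for the first machine learning data architecture and the model of the human airways, respectively.

```python
# Illustrative sketch only: `classify_image` and `ANATOMIC_POSITIONS`
# are toy stand-ins, not the disclosed machine learning data
# architecture or airway model.
from typing import Iterable, Optional

# Toy "model of the human airways": each anatomic reference position
# maps to the positions reachable from it.
ANATOMIC_POSITIONS = {
    "trachea": ["main_bifurcation"],
    "main_bifurcation": ["left_main_bronchus", "right_main_bronchus"],
}

def classify_image(image, candidates) -> Optional[str]:
    """Stand-in for the machine learning data architecture: report a
    reached anatomic reference position from `candidates`, or None."""
    return image if image in candidates else None

def track_endoscope(image_stream: Iterable, start: str = "trachea") -> str:
    position = start
    for image in image_stream:                       # step 61: obtain images
        candidates = ANATOMIC_POSITIONS.get(position, [])
        reached = classify_image(image, candidates)  # step 62: analyse
        if reached is not None:                      # decision 62b
            position = reached                       # step 63: update position
    return position
```

Note how the position is only ever updated to one of the predetermined positions reachable from the previous position, mirroring paragraph [0182].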
[0184] In step 70 of the flow chart, a view of a model of the human airways is displayed.
[0185] In step 70, a predetermined desired position on the model is furthermore input. The predetermined desired position can, e.g., be a bronchus and/or a bronchiole.
[0187] In step 71, a route from a starting point, e.g. an entry into the human airways, and/or a part of the human airways such as the trachea, to the predetermined desired position is determined throughout the model. The route may comprise a number of predetermined positions in the human airways, potentially corresponding to potential endoscope positions and/or to anatomic reference positions. In some embodiments, a plurality of predetermined desired positions may be provided, and individual routes and/or a total route may be provided.
[0188] In step 71, the determined route is furthermore shown in the view of the model displayed in step 70. The route may be shown by a marking, e.g. as illustrated in the model view of
[0189] In step 72, a stream of images is obtained from an image capture device, such as a camera unit, of an endoscope. The endoscope may be an endoscope as shown in and described with reference to
[0190] In step 73, an image from the stream of images is analysed to determine whether an anatomic reference position has been reached. In step 73 the analysis is carried out using a machine learning data architecture. In other embodiments, a plurality of images from the stream of images may be analysed sequentially or simultaneously in step 73. In step 73, either a decision 73a is taken that it is determined that an anatomic reference position has not been reached, or a decision 73b is taken that it is determined that an anatomic reference position has been reached.
[0191] Where decision 73a is taken, the processing unit is configured to return to step 72, in which a stream of images is obtained from an endoscope, i.e. from an image capture unit of an endoscope. Step 72 and step 73 may be performed simultaneously or sequentially, and a stream of images may be obtained whilst the processing unit is determining whether an anatomic reference position has been reached. Steps 72, 73 and 73a correspond to steps 61, 62, and 62a, respectively, of the flow chart shown in
[0192] Where decision 73b is taken, the processing unit goes to step 74, corresponding to step 63 of the flow chart shown in
[0193] In step 75, the updated endoscope position is shown in the view of the model generated in step 71. The updated endoscope position is shown by a marker arranged at a position in the model corresponding to the updated endoscope position. The updated endoscope position replaces the previous endoscope position in the model. Alternatively, one or more previous positions may remain shown on the view of the model, potentially indicated such that the updated position is visually distinguishable from the previous position(s). For instance, markers indicating a previous endoscope position may be altered to be of a different type or colour than the marker indicating an updated endoscope position.
[0194] In step 75, the updated position may furthermore be stored. The updated position is stored in a local non-transitory storage of the image processing device. The updated position may alternatively or subsequently be transmitted to an external non-transitory storage.
[0195] A number of images from the stream of images, in which an anatomic reference position was detected in step 73, may furthermore be stored in step 75. The images may be stored with a reference to the updated reference position in local non-transitory storage and/or in external non-transitory storage. The stored images may furthermore be used by the machine learning data architecture, e.g. to improve the detection of anatomic reference positions. For example, one or more of the stored image(s) and/or the reached anatomic reference position may be introduced into a dataset of the machine learning data architecture.
[0196] In step 76, the processing unit determines whether the updated endoscope position is on the route determined in step 71 by determining whether the updated endoscope position corresponds to one of the predetermined positions in the human airways included in the route. In step 76, one of two decisions may be taken: decision 76a, that the updated endoscope position is on the route, or decision 76b, that the updated endoscope position is not on the determined route.
[0197] Where decision 76a is taken, the processing unit returns to step 72.
[0198] Where decision 76b is taken, the processing unit proceeds to step 77, in which an indication that the updated position is not on the route determined in step 71, is provided to a user, i.e. medical personnel. The indication may be a visual indication on a display unit and/or on the view of the model, and/or may be an auditory cue, such as a sound played back to the user, or the like.
[0199] Subsequent to providing the indication in step 77, the processing unit returns to step 71 and determines a new route to the predetermined desired position from the updated endoscope position.
[0200] It should be noted that it will be understood that steps 72 and 73 may run in parallel with steps 71, 74-77 and/or that decision 73b may interrupt steps 74-77 and 71.
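The route logic of steps 71, 76 and 77 can be illustrated with a breadth-first search over a small airway graph. The `AIRWAY_GRAPH` and its branch labels below are assumptions made for the example, not taken from the disclosure.

```python
# Illustrative sketch of steps 71 and 76; the airway graph and the
# branch labels are assumptions for this example.
from collections import deque

AIRWAY_GRAPH = {
    "trachea": ["main_bifurcation"],
    "main_bifurcation": ["left_main_bronchus", "right_main_bronchus"],
    "left_main_bronchus": [],
    "right_main_bronchus": ["RB2", "RB3"],
    "RB2": [],
    "RB3": [],
}

def determine_route(start, goal, graph=AIRWAY_GRAPH):
    """Step 71: breadth-first search for a route of predetermined
    positions from `start` to the predetermined desired position."""
    queue = deque([[start]])
    while queue:
        path = queue.popleft()
        if path[-1] == goal:
            return path
        for nxt in graph.get(path[-1], []):
            queue.append(path + [nxt])
    return None  # desired position not reachable in the model

def is_on_route(position, route):
    """Step 76: does the updated endoscope position correspond to one
    of the predetermined positions included in the route?"""
    return position in route
```

If `is_on_route` returns False (decision 76b), an indication is provided to the user (step 77) and `determine_route` may be called again from the updated position to re-plan, as described in paragraph [0199].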
[0202] The view of the schematic model may be generated in step 70 of the flow chart of
[0211] The endoscope 90 has an image capturing device 91, and the processing unit of the image processing device 92 is operationally connectable to the image capturing device 91 of the endoscope 90. In this embodiment, the image processing device 92 is integrated in a display unit 93. In this embodiment, the image processing device 92 is configured to estimate a position of the endoscope 90 in a model of the human airways using a machine learning data architecture trained to determine a set of anatomic reference positions. In this embodiment, the processing unit of the image processing device 92 is configured to:
[0212] obtain a stream of recorded images;
[0213] continuously analyse the recorded images of the stream of recorded images using the machine learning data architecture to determine if an anatomic reference position of a subset of anatomic reference positions, from the set of anatomic reference positions, has been reached; and
[0214] where it is determined that the anatomic reference position has been reached, update the endoscope position based on the anatomic reference position.
[0216] The image 100 shows a branching, i.e. a bifurcation 101, of the trachea into a left primary bronchus 102 and a right primary bronchus 103. The bifurcation 101 is a predetermined anatomic reference position, and the image processing device determines based on the image 100 that the bifurcation 101 has been reached and updates the position of the endoscope in the model of the human airways (not shown).
[0217] In the image 100, the image processing device determines the two branches of the bifurcation 101 as the left main bronchus 102 and the right main bronchus 103 using the machine learning data architecture of the image processing device. The image processing device provides a first overlay 104 on the image 100 indicating to the operator, e.g. the medical personnel, the left main bronchus 102 and a second overlay 105 indicating to the operator the right main bronchus 103. The first 104 and second overlays 105 are provided on the screen in response to the user pushing a button (not shown). The first 104 and second overlays 105 may be removed by pushing the button again.
[0218] Where the operator navigates the endoscope into either the left main bronchus 102 or the right main bronchus 103, the image processing device determines which of the left main bronchus 102 and the right main bronchus 103 the endoscope has entered. The image processing device updates the estimated endoscope position based on the determined one of the left main bronchus 102 or the right main bronchus 103. When a subsequent branching is encountered, the image processing device determines the location of the branching in the model of the human airways based on the information regarding which of the main bronchi 102, 103 the endoscope has entered.
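One simple way to decide which lumen the endoscope enters, in the spirit of analysing a plurality of recorded images to determine a movement of the endoscope (claim 11), is to compare how each detected lumen evolves across frames. The area-growth heuristic below is an assumption introduced for illustration only, not the claimed method.

```python
# Hypothetical heuristic, not the disclosed method: the lumen the
# endoscope enters is taken to be the one whose detected area grows
# the most across successive recorded images as the tip advances.

def entered_lumen(areas_over_time):
    """`areas_over_time` maps each lumen label to the list of its
    detected areas in successive images; returns the label whose
    area grew the most between the first and last image."""
    return max(areas_over_time,
               key=lambda lumen: areas_over_time[lumen][-1]
                                 - areas_over_time[lumen][0])
```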
[0220] In step 200, the image processing device obtains a first image from a stream of images from an image capturing device of an endoscope. In other embodiments, the step may comprise obtaining a first plurality of images.
[0221] In step 202, the image processing device analyses the first image to identify and detect any lumen in the first image. In step 202, the image processing device moreover determines and locates, in the image, a centre point of any identified lumen. Moreover, the image processing device in step 202 determines an extent of each of the lumens in the first image by determining a bounding box for each identified lumen. In step 202, the processing unit uses a second machine learning data architecture trained to detect a lumen.
[0222] In step 204, the processing unit determines if two or more lumens are present in the first image. If it is not determined that there are two or more lumens present in the first image, the processing unit returns 204a to step 200 of obtaining a first image again.
[0223] If, on the other hand it is determined in step 204 that two or more lumens are present in the first image, the processing unit continues 204b to step 206. In step 206, the processing unit identifies and estimates a position of the two or more lumens in the model of the human airways. In step 206, the processing unit uses a first machine learning architecture to identify and estimate a position of the two or more lumens in the model of the human airways.
[0224] In step 208, the processing unit obtains a second image from the stream of images.
[0225] In step 210, the image processing device analyses the second image to identify and detect any lumens in the second image using the second machine learning data architecture. The image processing device carries out this step similar to step 202, however for the second image rather than the first image.
[0226] If one or more lumens are detected in the second image, the processing unit in step 212 determines a position in the model of the human airways of the one or more lumens in the second image based at least in part on the identification and estimated positions of the two or more lumens in step 206.
[0227] In step 214, the processing unit determines if only one lumen is present in the second image. If two or more lumens are present in the second image, the processing unit stores the classification, i.e. identification and position determination, made in step 212 and returns 214a to step 208 of obtaining another second image. The classification made in step 212 may then be used in a later classification when the processing unit reaches step 212 again.
[0228] The classification made in step 212 is further based on the bounding boxes and centre points determined in steps 202 and 210.
[0229] If, in step 214, it is determined that only one lumen is present in the second image, the processing unit proceeds 214b to step 216, in which the endoscope position is updated. In step 216, the endoscope position is updated to an anatomic reference position corresponding to the position of the one lumen in the model of the human airways. Thereby, in step 216, the processing unit determines that the endoscope has entered the one lumen.
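The flow of steps 200 to 216 may be summarised in a short Python sketch. Here `detect_lumens` and `identify_lumens` are placeholders for the second and first machine learning data architectures respectively, and the image representation (a list of lumen labels) is a toy assumption made for the example.

```python
# Toy sketch of steps 200-216; `detect_lumens` and `identify_lumens`
# are placeholders, and images are modelled as lists of lumen labels.

def detect_lumens(image):
    """Placeholder for the second machine learning data architecture
    (steps 202/210): return the lumens detected in the image."""
    return list(image)

def identify_lumens(lumens):
    """Placeholder for the first machine learning data architecture
    (step 206): map each lumen to a position in the model of the
    human airways (identity mapping in this toy example)."""
    return {lumen: lumen for lumen in lumens}

def update_position(image_stream):
    stream = iter(image_stream)
    for first_image in stream:                      # step 200
        lumens = detect_lumens(first_image)         # step 202
        if len(lumens) < 2:                         # step 204 / 204a
            continue
        positions = identify_lumens(lumens)         # step 206
        for second_image in stream:                 # step 208
            visible = detect_lumens(second_image)   # step 210
            classified = {l: positions.get(l) for l in visible}  # step 212
            if len(visible) == 1:                   # step 214 / 214b
                return classified[visible[0]]       # step 216: entered lumen
        # stream exhausted without a single-lumen image
    return None
```

The return value in step 216 corresponds to updating the endoscope position to the anatomic reference position of the one remaining lumen.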
[0231] The image 110 shows two lumens 112a, 112b of a branching, i.e. a bifurcation 111, of the right main bronchus into a first secondary right bronchus having lumen 112a and a second secondary right bronchus having lumen 112b.
[0232] The bifurcation 111 is a predetermined anatomic reference position.
[0233] In the image 110, the image processing device identifies the first 112a and second lumens 112b. The image processing device further determines a centre point 113a of the first lumen 112a and a centre point 113b of the second lumen 112b. The image processing device moreover visually indicates a relative size on the image by indicating a circumscribed circle 114a of the first lumen 112a and a circumscribed circle 114b of the second lumen 112b.
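The centre point and circumscribed circle of a detected lumen can be computed from a binary segmentation mask. The pure-Python sketch below is an assumption for illustration (a centroid-centred enclosing circle, not necessarily the minimal one); a real system would operate on the segmentation output of the second machine learning architecture.

```python
# Illustrative geometry only: centre point and a centroid-centred
# circumscribed circle for a binary lumen mask (a list of 0/1 rows).
# Assumes the mask contains at least one lumen pixel.
import math

def lumen_geometry(mask):
    """Return ((row, col) centre point, radius) such that a circle of
    that radius centred at the centre point encloses every lumen pixel."""
    pixels = [(r, c) for r, row in enumerate(mask)
                     for c, v in enumerate(row) if v]
    n = len(pixels)
    centre = (sum(r for r, _ in pixels) / n,
              sum(c for _, c in pixels) / n)
    radius = max(math.dist(centre, p) for p in pixels)
    return centre, radius
```

A larger radius for one lumen than the other gives the relative size indication described for the circumscribed circles 114a and 114b.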
[0235] In the image 110′, the image processing device identifies the first 112a and second lumens 112b. In the image 110′, the image processing device determines a bounding box 115a of the first lumen 112a and a bounding box 115b of the second lumen 112b. The image processing device moreover estimates a position of the first lumen 112a as a right secondary bronchus, branch 3 (RB3) and a position of the second lumen 112b as a right secondary bronchus, branch 2 (RB2). The image processing device indicates this with a text overlay 116a indicating the estimated position of the first lumen 112a and a text overlay 116b indicating the estimated position of the second lumen 112b. When it is determined that the endoscope enters the first lumen 112a, the endoscope position is updated to correspond to RB3, or when it is determined that the endoscope enters the second lumen 112b, the endoscope position is updated to correspond to RB2.
[0236] Although some embodiments have been described and shown in detail, the invention is not restricted to them, but may also be embodied in other ways within the scope of the subject matter defined in the following claims. In particular, it is to be understood that other embodiments may be utilised and structural and functional modifications may be made without departing from the scope of the present invention.
[0237] In device claims enumerating several means, several of these means can be embodied by one and the same item of hardware. The mere fact that certain measures are recited in mutually different dependent claims or described in different embodiments does not indicate that a combination of these measures cannot be used to advantage.
[0238] It should be emphasized that the term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.