Method and system for optimizing distance estimation

20220343586 · 2022-10-27

    Abstract

    Distance estimation is optimized in virtual or augmented reality. A distance map of a surgical instrument to a region of interest is determined, at least at the beginning and when a position of the surgical instrument has changed. A render-image is rendered based on a medical 3D image and the position of the surgical instrument, at least at the beginning and when the position of the surgical instrument has changed. At least the region of interest and those parts of the surgical instrument positioned in the volume of the render-image are shown in the render-image. Based on the distance map, at least for a predefined area of the region of interest, visible, acoustic, and/or haptic distance-information is added.

    Claims

    1. A method for optimizing distance estimation in virtual or augmented reality, the method comprising: a) providing a medical three-dimensional (3D) image of a volume comprising a segmented anatomical object within a region of interest, b) providing 3D information of a position of a surgical instrument, c) determining a distance map of the surgical instrument to the region of interest, at least at a beginning of the method and in the case the position of the surgical instrument has changed, d) rendering a render-image based on the 3D image and the information of the position of a surgical instrument, at least at the beginning of the method and in the case the position of the surgical instrument has changed, wherein at least the region of interest is shown and those parts of the surgical instrument positioned in a volume of the render-image, and wherein based on the distance map, at least for a predefined area of the region of interest, visible, acoustic, and/or haptic distance-information is added to the render-image, e) outputting the render-image and repeating at least acts b) to e).

    2. The method according to claim 1, wherein the medical 3D image of the volume is provided by: selecting the region of interest as a predefined region of interest comprising an organ in the 3D medical image, and segmenting an anatomical structure within the region of interest.

    3. The method according to claim 1, wherein in a course of rendering the render-image, a shadow of the surgical instrument is calculated or shadows for a number of the surgical instruments are calculated.

    4. The method according to claim 3, wherein the distance-information is distance-dependent tinting of shadows.

    5. The method according to claim 4, wherein different tinting is used for the shadows of different surgical instruments, and wherein tinting is only applied in the region of interest and/or on a predefined segmented organ.

    6. The method according to claim 1, wherein the distance-information is distance-dependent contour lines.

    7. The method according to claim 6, wherein the distance-information further comprises numerical values (NV) of distances, wherein a modulo operator is used on the distance in order to create bands, and, for the contour lines, these bands are compressed into anti-aliased lines.

    8. The method according to claim 1, wherein the position of more than one surgical instrument is provided and, in course of the rendering, the visible distance-information is visualized differently for different surgical instruments, wherein for the different surgical instruments different colors, textures, and/or line-styles are used.

    9. The method according to claim 1, wherein a stochastic visibility map is computed as the distance map.

    10. The method according to claim 9, wherein the stochastic visibility map is updated in the case that a view on the rendering-image has been changed, a number of artificial light-sources has been changed, and/or a position of an artificial light-source has been changed.

    11. The method according to claim 1, wherein, based on the position of a surgical instrument, an additional artificial light-source is positioned behind the surgical instrument and shining on the region of interest.

    12. The method according to claim 1, wherein in determining the distance map of the surgical instrument to the region of interest, the region of interest is recomputed according to the position of the surgical instrument.

    13. The method according to claim 12, wherein the region of interest is recomputed according to the position of the surgical instrument and a position of the segmented anatomical object in the region of interest.

    14. The method according to claim 1, wherein path tracing or shadow mapping for shadows or for ambient lighting, and/or a computation of the visibility map are restricted to the region of interest.

    15. The method according to claim 14, wherein the path tracing or shadow mapping and/or computation of the visibility map are restricted to mesh-to-mesh visibility so that only a mesh of the surgical instrument and a mesh of the segmentation are taken into account while the volume itself is ignored.

    16. A system for optimizing distance estimation in virtual or augmented reality, the system comprising: a first data interface configured to receive a medical three-dimensional (3D) image of a volume comprising a segmented anatomical object within a region of interest, and to receive 3D information of a position of a surgical instrument, a processor configured to determine a distance map of the surgical instrument to the region of interest, a renderer configured to render a render-image based on the 3D image and the 3D information of the position of the surgical instrument, wherein at least the region of interest and those parts of the surgical instrument positioned in a volume of the render-image are shown in the render-image, and wherein, based on the distance map, at least for a predefined area of the region of interest, visible, acoustic, and/or haptic distance-information is added, and a second data interface configured to output the render-image.

    17. A non-transitory computer-readable medium on which is stored programming that can be read and executed by a computer to optimize distance estimation in virtual or augmented reality, the programming including instructions to: a) determine a distance map of a surgical instrument to a region of interest, at least at a beginning of the virtual or augmented reality and in the case the position of the surgical instrument has changed, b) render a render-image based on a 3D image and a position of a surgical instrument, at least at the beginning and in the case the position of the surgical instrument has changed, wherein at least the region of interest and parts of the surgical instrument positioned in a volume of the render-image are shown, and wherein, based on the distance map, at least for a predefined area of the region of interest, visible, acoustic, and/or haptic distance-information is added, c) output the render-image, and d) repeat acts a) to c).

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0118] Other objects and features of the present invention will become apparent from the following detailed descriptions considered in conjunction with the accompanying drawings. It is to be understood, however, that the drawings are designed solely for the purposes of illustration and not as a definition of the limits of the invention.

    [0119] FIG. 1 shows an example of a system according to an embodiment.

    [0120] FIG. 2 shows a simplified CT system with a system according to an embodiment.

    [0121] FIG. 3 shows a block diagram of the process flow of a preferred method according to an embodiment.

    [0122] FIG. 4 shows a block diagram of the process flow of a preferred special method according to an embodiment.

    [0123] FIG. 5 shows an example scene with a region of interest and a surgical instrument with photorealistic shadows according to the state of the art.

    [0124] FIG. 6 shows another example scene with a region of interest and a surgical instrument with photorealistic shadows according to the state of the art.

    [0125] FIG. 7 shows another example scene with a region of interest and a surgical instrument with photorealistic shadows according to the state of the art.

    [0126] FIG. 8 shows an example scene with a region of interest and a surgical instrument with non-photorealistic tinted shadows.

    [0127] FIG. 9 shows another example scene with a region of interest and a surgical instrument with non-photorealistic tinted shadows.

    [0128] FIG. 10 shows another example scene with a region of interest and a surgical instrument with non-photorealistic tinted shadows.

    [0129] FIG. 11 shows an example scene with a surgical instrument and a region of interest with contour lines.

    [0130] FIG. 12 shows another example scene with a surgical instrument and a region of interest with contour lines.

    [0131] FIG. 13 shows another example scene with a surgical instrument and a region of interest with contour lines.

    [0132] FIG. 14 shows an example scene with a surgical instrument and a region of interest with contour lines and photorealistic shadows.

    [0133] FIG. 15 shows another example scene with a surgical instrument and a region of interest with contour lines and photorealistic shadows.

    [0134] FIG. 16 shows another example scene with a surgical instrument and a region of interest with contour lines and photorealistic shadows.

    [0135] FIG. 17 shows an example scene with a surgical instrument and a region of interest with contour lines and numerical values.

    [0136] FIG. 18 shows an example region of interest of the human body with contour lines.

    [0137] In the diagrams, like numbers refer to like objects throughout. Objects in the diagrams are not necessarily drawn to scale.

    DETAILED DESCRIPTION

    [0138] FIG. 1 shows an example of a system 6 for optimizing distance estimation in virtual or augmented reality, designed to perform the method according to an embodiment (see FIGS. 3 and 4). The system 6 includes a data interface 20, a determination unit 21, and a rendering unit 22. The data interface 20 receives a medical 3D image D of a volume including a segmented anatomical object O within a region of interest R, and 3D information of the position P of a surgical instrument SI. In this example, the surgical instrument SI could be an endoscope in the patient.

    [0139] It should be noted that the scene could be a scene of an operation with a real patient and a real surgical instrument SI. However, the invention is also applicable to simulations of an operation, with a virtual avatar of a patient or images of a real patient and a virtual representation of the surgical instrument SI.

    [0140] The determination unit 21 determines a distance map M of the surgical instrument SI to the region of interest R, at least at the beginning of the method and in the case the position P of the surgical instrument SI has changed (see e.g., FIGS. 3 and 4 referring to the method).

    [0141] The rendering unit 22 renders a render-image RI based on the 3D image D and the information of the position P of a surgical instrument SI, at least at the beginning of the method and in the case the position of the surgical instrument SI has changed, wherein at least the region of interest R and those parts of the surgical instrument SI positioned in the volume of the render-image RI are shown, and wherein, based on the distance map M, at least for a predefined area of the region of interest R, visible distance-information in the form of tinting T or contour lines C is added.

    [0142] In this example, the data could be provided, e.g., via a data network 23, e.g., a PACS (Picture Archiving and Communication System), to the data interface 20, which could also be used for outputting the render-image RI.

    [0143] FIG. 2 shows a simplified computed tomography system 1 with a control device 5 including a system 6 for optimizing distance estimation in virtual or augmented reality, designed to perform the method according to an embodiment (see FIGS. 3 and 4). The computed tomography system 1 has, in the usual way, a scanner 2 with a gantry, in which an x-ray source 3 with a detector 4 rotates around a patient and records raw data RD that is later reconstructed into images by the control device 5.

    [0144] An object O that could, e.g., be the liver (see the following figures) is indicated in the patient. It could also be understood as a region of interest R (see also the following figures). In the object O, a surgical instrument SI is indicated. In contrast to FIG. 1, the surgical instrument is shown here inside the patient to indicate that the surgical instrument could be completely inside the body (e.g., like a stent or a robot).

    [0145] It is pointed out that the exemplary embodiment according to this figure is only an example of an imaging system, and the invention can also be used with theoretically any imaging system used in medical and non-medical environments, e.g., for medical imaging systems such as ultrasound systems or magnetic resonance imaging systems to provide images. It should be noted that the embodiment as described can be used for surgical planning as well as in an interventional setting. In the case of an interventional setting, the typical modality that is used is either CT (as shown) or X-ray.

    [0146] In this figure, only those components are shown that are essential for explaining the embodiment. In principle, such imaging systems and associated control devices are known to the person skilled in the art and therefore do not need to be explained in detail.

    [0147] The imaging system (here the CT system 1) records images that can be used for the system 6 according to an embodiment, and images of the imaging system are processed by the system 6 according to an embodiment.

    [0148] A user can interact with the computed tomography system 1 by using a terminal 7 that is able to communicate with the control device 5. This terminal 7 can also be used to examine results of the system 6 according to an embodiment or to provide data for this system 6.

    [0149] The system 6 includes the following components:

    [0150] A data interface 20 designed to receive a reconstructed medical 3D image D of a volume including a segmented anatomical object O within a region of interest R, and 3D information of the position P of a surgical instrument SI.

    [0151] A determination unit 21 designed for determining a distance map M of the surgical instrument SI to the region of interest R, at least at the beginning of the method and in the case the position P of the surgical instrument SI has changed.

    [0152] A rendering unit 22, designed for rendering a render-image RI based on the reconstructed 3D image D and the information of the position P of a surgical instrument SI, at least at the beginning of the method and in the case the position of the surgical instrument SI has changed, wherein at least the region of interest R and those parts of the surgical instrument SI positioned in the volume of the render-image RI are shown, and wherein, based on the distance map M, at least for a predefined area of the region of interest, visible distance-information T, C is added.

    [0153] The shown data interface 20 is in this example also designed for outputting the render-image RI.

    [0154] The components of the system preferably are software modules.

    [0155] FIG. 3 shows a block diagram of the process flow of a preferred method for optimizing distance estimation in virtual or augmented reality according to an embodiment.

    [0156] In act I, a (especially reconstructed) medical 3D image D of a patient (i.e., of a volume of this patient) is provided. This 3D image D includes a segmented anatomical object O within a region of interest R (see e.g., following figures).

    [0157] Also in act I, 3D information of the position P of a surgical instrument SI is provided.

    [0158] In act II, a distance map M of the surgical instrument SI to the region of interest R is determined. This is done at the beginning of the method and in the case the position P of the surgical instrument SI has changed. Other cases in which the distance map is updated may also exist.

    [0159] In act III, a render-image RI is rendered based on the 3D image D and the information of the position P of a surgical instrument SI. In this example, rendering may be performed every time anything in the scene has changed. In the course of rendering, the region of interest R and those parts of the surgical instrument SI positioned in the volume of the render-image RI are shown (see, e.g., the following figures). Based on the distance map M, for the region of interest, visible distance-information in the form of tinting T or contour lines C is added (see the following figures).

    [0160] In act IV, the render-image RI is outputted and acts I to IV are repeated.

    [0161] Thus, the method uses non-photorealistic shadow rendering methods in order to enhance the shadow rendering with additional information. One preferred approach to encode distance information into the shadow is tinting the shadow color according to the distance from the surgical instrument SI. For this, the information on where the surgical instrument SI is located and where the shadow S lies is used. The distance of the surgical instrument SI is then encoded into a distance map M that should be updated every time the surgical instrument SI is moved. The information on whether something is shadowed by the surgical instrument SI can be obtained by either computing shadow rays or by performing a lookup into a shallow stochastic visibility map VM.
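
    The distance-dependent tinting described above can be sketched as follows; this is a minimal NumPy illustration, in which the function name, the linear falloff, the 50 mm range, and the red tint color are assumptions for illustration, not part of the disclosed method:

```python
import numpy as np

def tint_shadow(base_color, in_shadow, distance, max_dist=50.0,
                tint=(1.0, 0.0, 0.0)):
    """Blend a tint color into shadowed fragments; the nearer the
    surgical instrument (per the distance map M), the stronger the tint.

    base_color: (..., 3) RGB array; in_shadow: boolean mask;
    distance: per-fragment distance with the same leading shape.
    """
    # Tint strength falls off linearly with distance, clamped to [0, 1].
    strength = np.clip(1.0 - distance / max_dist, 0.0, 1.0)
    # Only fragments inside the shadow S are tinted at all.
    strength = np.where(in_shadow, strength, 0.0)[..., None]
    tint = np.asarray(tint)
    return (1.0 - strength) * base_color + strength * tint
```

    A fragment directly under the instrument (distance 0) would be fully tinted, while fragments at `max_dist` or beyond, or outside the shadow, keep their original color.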

    [0162] FIG. 4 shows a block diagram of the process flow of a preferred special method according to an embodiment.

    [0163] At the box “start” 30, the algorithm starts. If the process flow returns to start, it is repeated until a user stops the procedure. At the start, the image can be preprocessed. This preprocessing could be repeated anytime the routine returns to start, but it could also be performed only at the beginning. The preprocessing here includes a segmentation of an organ O (e.g., the liver shown in the following figures) or a region of interest R, in order to only change the shadow S on the segmentation or inside the region of interest. Another possibility is to define the region of interest R using an eye tracker, similarly to the idea behind foveated rendering.

    [0164] After that, it is examined 31 whether the position of the surgical instrument SI has changed. If yes, the distance map M is updated 32 and preferably (dashed box) the region of interest R is recomputed 33 according to the position of the surgical instrument SI and the segmented organ O. If the optional act is not performed, the procedure follows the dashed line. In the case that the examination is negative (no movement of the surgical instrument SI), the procedure advances without updating or recomputing.

    [0165] Then, it is examined 34 whether the view has changed. If yes, the stochastic visibility map VM is updated 35. If the examination 34 is negative (no change of view), the procedure advances without updating. A (shallow) stochastic visibility map VM is a two-dimensional map that indicates whether a position A is visible from a position B. This visibility map VM can be computed, for example, by computing a shadow map SM. Unless the shadow map SM belongs to a point light source, however, it does not convey exact shadowing information. For area light sources, several views can be computed and averaged in order to give a probability of visibility or an approximate percentage of occlusion.
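
    The averaging of several light samples into a visibility probability can be illustrated with a small sketch; the spherical occluder, the sampled light strip, and all names are assumptions chosen only to make the example self-contained:

```python
import numpy as np

def stochastic_visibility(points, light_samples, blocks_ray):
    """Fraction of area-light samples visible from each point (one entry
    of a 'shallow' stochastic visibility map VM).

    blocks_ray(p, l) -> True if the segment from p to l is occluded.
    """
    vis = np.empty(len(points))
    for i, p in enumerate(points):
        clear = sum(not blocks_ray(p, l) for l in light_samples)
        vis[i] = clear / len(light_samples)
    return vis

def sphere_blocks(p, l, center=np.array([0.0, 0.0, 1.0]), radius=0.5):
    """True if the segment from p to l intersects the occluding sphere."""
    p, l = np.asarray(p, float), np.asarray(l, float)
    d, f = l - p, p - center
    a, b, c = d @ d, 2.0 * (f @ d), f @ f - radius ** 2
    disc = b * b - 4.0 * a * c
    if disc < 0.0:
        return False
    t1 = (-b - disc ** 0.5) / (2.0 * a)
    t2 = (-b + disc ** 0.5) / (2.0 * a)
    return 0.0 <= t1 <= 1.0 or 0.0 <= t2 <= 1.0

# Area light sampled as a small strip above the sphere; averaging the
# per-sample results approximates the percentage of occlusion.
light = [np.array([x, 0.0, 2.0]) for x in np.linspace(-0.4, 0.4, 9)]
vis = stochastic_visibility(np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0]]),
                            light, sphere_blocks)
```

    The point directly under the sphere sees none of the light samples, while the far point sees all of them, giving visibility probabilities of 0 and 1 respectively.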

    [0166] After that, it is examined 36 whether rendering is necessary. If yes, a loop 37 is then performed for each fragment. In the loop 37, it is examined 38 whether the fragment is in the shadow S or not. If yes, the fragment is tinted 39; if not, there is no tinting. The tinting 39 can be performed according to the distance (e.g., the nearer, the brighter the color). It can be distinguished whether the fragment is on the segmented organ (tinting) or not (no tinting). It can also be distinguished whether the fragment is inside the region of interest (tinting) or not (no tinting).
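
    The control flow of FIG. 4 can be sketched as follows; all callables are placeholders for the real renderer hooks, and their names are assumptions:

```python
def render_pass(fragments, instrument_moved, view_changed,
                update_distance_map, update_visibility_map,
                in_shadow, in_roi, tint_by_distance):
    """Sketch of the FIG. 4 flow: the maps are refreshed only when
    needed, then each fragment is examined and possibly tinted."""
    if instrument_moved:
        update_distance_map()      # act 32 (optionally recompute R, act 33)
    if view_changed:
        update_visibility_map()    # act 35
    shaded = []
    for frag in fragments:         # loop 37
        # acts 38/39: tint only shadowed fragments inside the ROI / organ
        if in_shadow(frag) and in_roi(frag):
            shaded.append(tint_by_distance(frag))
        else:
            shaded.append(frag)
    return shaded
```

    In a real renderer this loop would run on the GPU per fragment; the Python list is only a stand-in to show the branching.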

    [0167] Instead of tinting, also contour lines C (see FIGS. 11 to 17) or a banding could be used for distance encoding. This approach allows a more precise visualization of the distance than simple tinting. Users can also transfer their knowledge of topographic map visualization to this new visualization.

    [0168] Using a distance map M and a (shallow) stochastic visibility map VM as explained above, this method is different only at the level of the actual fragment shading. Preferably, instead of tinting 39 the shadow S with a color, a modulo operator is used on the distance in order to create bands. For the contour lines C, these bands are compressed into lines, possibly anti-aliased.
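
    The modulo banding and its compression into anti-aliased contour lines can be sketched as follows; the 5 mm spacing, line width, and smoothstep falloff are illustrative assumptions:

```python
import numpy as np

def distance_bands(distance, spacing=5.0, line_width=0.5):
    """Encode distance as contour lines C: a modulo operator creates
    repeating bands, which are then compressed into soft-edged lines.

    Returns a line intensity in [0, 1]: 1.0 on a contour, 0.0 between.
    """
    band = np.mod(distance, spacing)        # sawtooth in [0, spacing)
    d = np.minimum(band, spacing - band)    # distance to nearest contour
    t = np.clip(d / line_width, 0.0, 1.0)
    # Smoothstep falloff gives a cheap anti-aliased line edge.
    return 1.0 - t * t * (3.0 - 2.0 * t)
```

    A fragment exactly on a multiple of the spacing yields full line intensity, while fragments midway between contours yield zero, so only thin bands remain visible.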

    [0169] The algorithm could easily be altered in the loop, by including contour lines C instead of tinting 39 or in addition to tinting 39. For each fragment it is examined whether it is in the shadow S or not. If yes, a modulo operator could be applied as described above, and contour lines C could be drawn according to the distance. It is preferred to show single line bands of the contour lines.

    [0170] Regarding coloring (in the course of tinting 39 or of colored contour lines C), it is preferred to disambiguate between shadows S by different instruments. In a scenario where surgery with multiple instruments is simulated or performed, it is preferred to disambiguate the depth information between the different surgical instruments SI. This can be achieved by using different approaches or colors for different surgical instruments SI. For example, the shadow of one surgical instrument SI could be indicated by tinting 39 and that of another with contour lines C. However, both surgical instruments SI could also be disambiguated by tinting 39 or contour lines C having different colors. By associating different tinting colors with different surgical instruments SI, it becomes clear which surgical instrument SI is associated with which shadow S. Also, the banded or contour line C depiction of shadows S can be extended with different colors for these bands and possibly combined with transparency, especially in the umbra of several surgical instruments SI. This makes it clear which shadow S is cast by which surgical instrument SI. More importantly, this clearly displays the distances of the different surgical instruments SI without guesswork as to which distance belongs to which surgical instrument SI.
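
    A per-instrument color assignment with blending in a shared umbra can be sketched as follows; the palette and the simple averaging rule are assumptions, not part of the disclosure:

```python
# Assumed per-instrument tint palette (RGB), e.g., red and blue.
INSTRUMENT_TINTS = {0: (1.0, 0.0, 0.0), 1: (0.0, 0.4, 1.0)}

def blended_tint(casting_instruments):
    """Tint for a fragment given the instruments whose shadows cover it.

    A single caster keeps its own color, so each shadow S remains
    attributable to one surgical instrument SI; in the shared umbra of
    several instruments, the tints are averaged (blended).
    """
    if not casting_instruments:
        return None  # fragment is not shadowed: no tint
    tints = [INSTRUMENT_TINTS[i] for i in casting_instruments]
    return tuple(sum(channel) / len(tints) for channel in zip(*tints))
```

    Transparency could be layered on top of this in the umbra, as described above, without changing the per-instrument color assignment.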

    [0171] In order to disambiguate more easily and to make sure that a shadow S is visible at the correct spot, an additional artificial light source can be placed behind a surgical instrument SI, along the vector from the tip to the end of the device. This ensures a shadow S in the direction where the tip of the surgical instrument SI is moving.
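
    Placing such a light source can be sketched in a few lines; the function name and the fixed offset are assumptions for illustration:

```python
import numpy as np

def backlight_position(tip, end, offset=10.0):
    """Place an additional artificial light source behind the surgical
    instrument SI: on the line from the tip through the end of the
    device, 'offset' units beyond the end, so that its shadow S is cast
    in the direction the tip is moving."""
    tip, end = np.asarray(tip, float), np.asarray(end, float)
    axis = end - tip
    length = np.linalg.norm(axis)
    if length == 0.0:
        raise ValueError("tip and end must differ")
    return end + axis / length * offset
```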

    [0172] In order to speed up the effects discussed above, the methods can be accelerated at the cost of slightly lower visual quality in the following ways. The more expensive path tracing for accurate shadows S as well as correct ambient lighting can be restricted to regions of interest R, whether they are defined by segmented organs O, distance maps M of surgical instruments SI or the penumbra or umbra regions provided by shadow mapping.

    [0173] Shadow computation can be restricted according to the visibility map VM or region of interest R. During rendering, for each computation it is checked whether the interaction point lies within the region of interest R. If it is outside, the renderer can revert to less expensive shadow computation, for example using shadow maps SM, or possibly to no shadow computation at all. Whenever a light interaction within the region of interest R is computed, more expensive shadow computation methods are preferably chosen, whether adding ambient occlusion or a higher number of shadow rays. This way, the expensive and accurate shadow computations are only performed in the regions of interest R. The same can be done for ambient lighting or ambient occlusion effects. They are only performed within a region of interest R.
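
    This region-of-interest-gated selection of the shadow algorithm can be sketched as a per-point decision; all callables, the ray counts, and the early-out rule are assumptions:

```python
def choose_shadow(point, in_roi, shadow_map_visibility, path_trace_shadow,
                  n_rays_roi=64, n_rays_outside=1):
    """Sketch of ROI-restricted shadow computation.

    A cheap shadow-map SM lookup first classifies the point; fully lit
    points skip shadow rays entirely, and expensive path tracing with
    many rays is reserved for points inside the region of interest R.
    Returns a visibility value in [0, 1] (1.0 = fully lit).
    """
    v = shadow_map_visibility(point)   # cheap lookup, 1.0 = fully lit
    if v >= 1.0:
        return 1.0                     # no shadow rays needed
    rays = n_rays_roi if in_roi(point) else n_rays_outside
    return path_trace_shadow(point, rays)
```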

    [0174] Shadow maps SM are a fast but reasonably high-quality method of computing shadows S, with methods like percentage-closer soft shadows even producing soft shadows S. Instead of using the shadow maps SM directly for shadow computations, this method preferably uses the cheap shadow map SM lookup to determine the course of the shadow computation. The shadow map SM is then used as a first act towards determining which shadow algorithm is used where in the visualization.

    [0175] If ambient lighting is desired in order to more clearly show the regions within the shadow S, instead of using expensive ray tracing, a non-physical light source can be added under the instrument that only illuminates the region in the shadow S cast by the surgical instrument SI. This again, can be determined by a lookup into the visibility map VM or by computing shadow rays.

    [0176] In order to speed up the computation of the visibility map VM, the visibility computation can be restricted to mesh-to-mesh visibility, so that only the mesh of the instrument and the mesh of the segmentation are taken into account while the volume itself is ignored. Another option to accelerate the visibility computation is using ray casting instead of path tracing to generate the visibility map VM.

    [0177] FIGS. 5, 6, and 7 show a scene with a region of interest R and a surgical instrument SI with photorealistic shadows S according to the state of the art. In these figures and in the following figures, the region of interest shows a part of the liver. Thus, here the region of interest (the volume) represents the anatomical object O, i.e., the segmentation O. A shape (representing a surgical instrument SI) is shown getting closer to a segmented object O (the liver) with a shadow S. The closer the shape gets, the stronger the shadow color becomes.

    [0178] In FIG. 5, the surgical instrument SI is relatively far from the region of interest R and its shadow S is relatively faint.

    [0179] In FIG. 6, the surgical instrument SI is nearer to the region of interest R and its shadow S is more defined.

    [0180] In FIG. 7, the surgical instrument SI is relatively near, nearly touching the region of interest R and its shadow S is strong.

    [0181] FIGS. 8, 9 and 10 show a scene similar to FIGS. 5 to 7 with a tinting T added to the shadows S in the region of interest R. A shape (again representing a surgical instrument SI) is shown getting closer to a segmented object O (again the liver) with a shadow S tinted in a color, e.g., red. The closer the shape gets, the stronger the shadow color and the tinting become.

    [0182] FIGS. 11, 12 and 13 show a scene similar to FIGS. 5 to 7 with contour lines C instead of shadows S in the region of interest R. The contour lines C are used to indicate the distance of the surgical instrument SI to the object O. There are more dashed contour lines C when the surgical instrument SI is closer. There are two types of contour lines C, C1, one type of contour lines C (dashed) represents the distance of the surgical instrument SI, and the other type of contour line C1 (solid) represents the shape of the object O and is not aligned with the distance of the surgical instrument SI. Here contour lines C replace the shadow S.

    [0183] FIGS. 14, 15 and 16 show the scene from FIGS. 11 to 13 with additional photorealistic shadows S. In these images, the contour lines C are, thus, used in addition to the shadow S to indicate distance between the surgical instrument SI and the segmented object O.

    [0184] FIG. 17 shows a scene similar to FIG. 11 with contour lines C and numerical values N shown besides the respective contour lines C. Thus, in this image the contour lines C are enhanced with text that indicates the distance in mm.

    [0185] FIG. 18 shows a region of interest R of the human body with contour lines C rendered on the liver (the object O in the region of interest R). Such special rendering can be easily implemented for virtual reality simulators or any other sort of rendering. In the case of augmented reality views, the real-world objects would have to be stitched together from additional camera views though.

    [0186] Although the present invention has been disclosed in the form of preferred embodiments and variations thereon, it will be understood that numerous additional modifications and variations could be made thereto without departing from the scope of the invention. For the sake of clarity, it is to be understood that the use of “a” or “an” throughout this application does not exclude a plurality, and “comprising” does not exclude other acts or elements. The mention of a “unit” or a “device” does not preclude the use of more than one unit or device.