Volume rendering using surface guided cropping
11631211 · 2023-04-18
CPC classification: A61C13/0004, A61B6/5235, A61B6/5247 (HUMAN NECESSITIES)
International classification: A61C9/00 (HUMAN NECESSITIES)
Abstract
Disclosed is surface guided cropping, in volume rendering of 3D volumetric data, of intervening anatomical structures in the patient's body. A digital 3D representation expressing the topography of a first anatomical structure is used to define a clipping surface or a bounding volume, which is then used in the volume rendering to exclude data from an intervening structure when generating a 2D projection of the first anatomical structure.
Claims
1. A method for selective volume rendering of 3D volumetric data from a patient, the method comprising: obtaining a 3D volumetric data set comprising data for a first and a second anatomical structure in the patient's body, wherein an occlusal surface of the first anatomical structure has a topography surface; obtaining a first digital 3D representation comprising a first portion of a surface scan of the topography surface of the occlusal surface of the first anatomical structure; defining a bounding box having a first clipping surface and a second clipping surface, the bounding box defining a portion of the 3D volumetric data set to be used in volume rendering; replacing the first clipping surface at least partly with the topography surface of the occlusal surface of the first anatomical structure, wherein the topography surface that replaced the first clipping surface is generated only from the surface scan of the first digital 3D representation; and generating a 2D projection of the first anatomical structure by volume rendering of the set of 3D volumetric data defined by the bounding box, where the first clipping surface is applied to exclude 3D volumetric data relating to the second anatomical structure, wherein the 2D projection is generated only from the 3D volumetric data set.
2. The method according to claim 1, wherein the first anatomical structure comprises dental structures in a first one of the patient's jaws and the second anatomical structure comprises dental structures in the opposing second one of the patient's jaws.
3. The method according to claim 2, wherein the first digital 3D representation expresses the topography of one or more teeth in the first one of the patient's jaws.
4. The method according to claim 1, wherein the anatomical structures comprise a jaw bone or at least part of teeth of the first one of the anatomical structures.
5. The method according to claim 1, wherein the method comprises creating a bounding volume arranged to enclose the 3D volumetric data included in the volume rendering, where at least part of one surface of the bounding volume is formed by the first clipping surface.
6. The method according to claim 1, wherein the volume rendering at least partially is based on ray tracing.
7. The method according to claim 1, wherein an offset is provided between the first clipping surface and the 3D volumetric data such that the first clipping surface is displaced away from the first anatomical structure.
8. The method according to claim 1, wherein the first anatomical structure comprises a plurality of dental structures in a first one of the patient's jaws and the second anatomical structure comprises a plurality of dental structures in the opposing second one of the patient's jaws.
9. The method according to claim 1, wherein an offset is provided between the first clipping surface and the 3D volumetric data such that the first clipping surface is displaced away from teeth in the first anatomical structure.
10. The method according to claim 1, wherein the 3D volumetric data set is obtained from a first data set and the first digital 3D representation is obtained from a second data set.
11. The method of claim 1, further comprising defining a first clipping surface at least partly from the first portion of the surface scan of the topography surface of the occlusal surface of the first anatomical structure.
12. The method of claim 11, further comprising orienting and resizing the defined first clipping surface to have a same scale and orientation as the 3D volumetric data set.
13. A method for selective volume rendering of 3D volumetric data from a patient, the method comprising: obtaining a 3D volumetric data set comprising data for a first and a second anatomical structure in the patient's body, wherein an occlusal surface of the first anatomical structure includes a topography surface; subsequently obtaining a first digital 3D representation comprising a first portion of a surface scan of the topography surface of the occlusal surface of the first anatomical structure; defining a bounding volume using at least the topography surface of the occlusal surface of the first digital 3D representation, wherein the topography surface is generated only from the surface scan of the first digital 3D representation; and generating a 2D projection of the first anatomical structure by volume rendering of the set of 3D volumetric data, where the bounding volume is applied to exclude 3D volumetric data relating to the second anatomical structure, wherein the 2D projection is generated only from the 3D volumetric data set.
14. The method according to claim 13, wherein the first anatomical structure comprises a plurality of dental structures in a first one of the patient's jaws and the second anatomical structure comprises a plurality of dental structures in the opposing second one of the patient's jaws.
15. The method according to claim 13, wherein the 3D volumetric data set is obtained from a first data set and the first digital 3D representation is obtained from a second data set.
16. The method of claim 13, further comprising defining a first clipping surface at least partly from the first portion of the surface scan of the topography surface of the occlusal surface of the first anatomical structure.
17. The method of claim 16, further comprising orienting and resizing the defined first clipping surface to have a same scale and orientation as the 3D volumetric data set.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The above and/or additional objects, features and advantages of the present disclosure will be further elucidated by the following illustrative and non-limiting detailed description of embodiments of the present disclosure, with reference to the appended drawings.
DETAILED DESCRIPTION
(10) In the following description, reference is made to the accompanying figures, which show by way of illustration how the disclosure may be practiced.
(12) The 3D volumetric data 100 illustrated in the Figure comprise voxels, where the value of each voxel is expressed as a CT number:
CT number = K · (μ_voxel − μ_water) / μ_water

where μ_voxel and μ_water are the calculated voxel attenuation coefficient and the attenuation coefficient of water, respectively, and K is an integer constant. The 2D projection is generated using ray tracing, where rays are traced from the chosen viewpoint through the 3D volumetric data for each pixel in a virtual screen. The final pixel color is the result of accumulating (front to back) the color from each voxel that the ray intersects when moving through the volume. To determine the color of each voxel a color function is used, which translates the voxel "intensity" to a color. Using such a color function allows for air voxels to be regarded as (semi-)transparent, as well as assigning the desired colors to, e.g., skin, bone and teeth.
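As an illustration, the CT-number computation and the front-to-back accumulation described above can be sketched as follows; the transfer-function thresholds and the constant K = 1000 are illustrative assumptions, not values prescribed by the disclosure:

```python
import numpy as np

K = 1000  # illustrative scaling constant; K = 1000 yields Hounsfield-like units

def ct_number(mu_voxel, mu_water):
    """CT number = K * (mu_voxel - mu_water) / mu_water."""
    return K * (mu_voxel - mu_water) / mu_water

def transfer_function(ct):
    """Map a CT number to (r, g, b, alpha). Thresholds are illustrative only."""
    if ct < -500:                     # air: fully transparent
        return (0.0, 0.0, 0.0, 0.0)
    if ct < 300:                      # soft tissue: faint red, semi-transparent
        return (0.8, 0.5, 0.4, 0.05)
    return (1.0, 1.0, 0.9, 0.6)      # bone/teeth: near-white, more opaque

def composite_ray(ct_samples):
    """Front-to-back accumulation of color and opacity along one ray."""
    color = np.zeros(3)
    alpha = 0.0
    for ct in ct_samples:
        r, g, b, a = transfer_function(ct)
        color += (1.0 - alpha) * a * np.array([r, g, b])
        alpha += (1.0 - alpha) * a
        if alpha >= 0.99:             # early ray termination
            break
    return color, alpha
```

A ray passing only through air accumulates nothing, while a ray hitting bone quickly becomes opaque, which is exactly what makes the intervening structures in the volume visible or hideable.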
(13) The CT data may e.g. be acquired for planning the position of an implant in the patient's lower jaw and the operator wishes to view a volume rendering of the data from teeth and jaw bone in this jaw only. Commercially available software for handling volume rendering of CT scan data often allow the operator to select a volume for the rendering. This volume can be indicated by clipping planes clipping through the scan data relating to the upper and lower jaws and/or a simple bounding box enclosing the relevant volume.
(14) When the CT data are recorded while the patient's teeth are in occlusion, such a bounding box or clipping plane cannot provide the correct separation of data for many patients. This may be the case at the anterior teeth, where the upper anterior teeth extend below the occlusal surfaces of several teeth in the lower jaw, or at the occlusal surfaces of several of the pre-molar or molar teeth, where often no clipping plane can be defined which fully separates the occlusal surfaces of the teeth in the upper and lower jaws.
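This geometric limitation can be made concrete with a small sketch: a single horizontal clipping plane exists only when every point of the lower dentition lies below every point of the upper dentition, which fails as soon as cusps interdigitate. The function name and the z-up convention are assumptions for illustration:

```python
import numpy as np

def separating_plane_z(lower_pts, upper_pts):
    """Return a z level for a horizontal plane separating lower-jaw points
    from upper-jaw points, or None when no such plane exists.

    Points are (N, 3) arrays with z as the vertical axis (assumed z-up).
    """
    top_of_lower = lower_pts[:, 2].max()
    bottom_of_upper = upper_pts[:, 2].min()
    if top_of_lower < bottom_of_upper:
        # place the plane halfway through the gap
        return 0.5 * (top_of_lower + bottom_of_upper)
    return None  # cusps interdigitate: no single plane separates the jaws
```

When the teeth are in occlusion the returned value is typically None, which is precisely the situation the surface guided clipping of the disclosure addresses.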
(19) The surface scan 311 illustrated in the Figure expresses the topography of the teeth.
(22) The first clipping surface is planar and is arranged like the first clipping plane 205a illustrated in the Figure.
(24) The bounding volume 418 with the tooth-structured first clipping surface is also depicted in the Figure.
(25) The structured first clipping surface, which at least in one region is shaped according to the topography of the first portion of the teeth, has the advantage that the volume rendering can select the appropriate 3D volumetric data more precisely, as described below in relation to the following Figures.
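A minimal sketch of how such a structured clipping surface might be applied, assuming the surface scan has already been aligned and resampled onto the voxel grid as a height field; the function names, the fill value, and the z-up orientation are illustrative assumptions:

```python
import numpy as np

def clip_mask(volume_shape, height_map, offset=0):
    """Boolean mask selecting voxels at or below a topography surface.

    height_map[x, y] gives, for each voxel column, the z index of the
    occlusal topography surface (resampled from the surface scan onto
    the voxel grid). `offset` displaces the clipping surface away from
    the teeth, as in the offset embodiment of the claims.
    """
    nx, ny, nz = volume_shape
    z = np.arange(nz)
    # broadcast: a voxel is kept if its z index lies at or below surface + offset
    return z[None, None, :] <= (height_map + offset)[:, :, None]

def apply_clip(volume, height_map, offset=0, fill=-1000.0):
    """Replace voxels above the clipping surface with air (CT ≈ −1000)."""
    clipped = volume.copy()
    clipped[~clip_mask(volume.shape, height_map, offset)] = fill
    return clipped
```

Because the mask follows the tooth topography column by column, data from the opposing jaw is excluded even where no single plane could separate the two jaws.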
(29) The improvement is also clearly seen in the 2D projection 527 of the Figure.
(32) The 2D projection 624 is generated by using the improved first clipping plane in the volume rendering to select 3D volumetric data relating to dental structures in the lower jaw only. In addition to displaying the 2D projection, the user interface also shows a panoramic view 631 and three 2D slices providing an axial view 632, an orthogonal view 633 and a tangential view 634 of the 3D volumetric data set. This Figure illustrates that the disclosed method provides the advantage that all 3D volumetric data are maintained and can be represented in different views 631, 632, 633 and 634 together with the generated 2D projection 624.
(34) In step 741 a 3D volumetric data set of the patient's teeth and jaw bones is obtained. The 3D volumetric data may be provided by X-ray Computed Tomography scanning and loaded into a microprocessor of a data processing system configured for implementing the method.
(35) In step 742 a surface scan of the teeth in the lower jaw is obtained, e.g. by intra-oral scanning using a TRIOS scanner supplied by 3Shape A/S, and loaded into the microprocessor. The surface scan comprises data expressing the topography of the teeth in the lower jaw.
(36) In step 743 a first clipping surface is defined from the obtained surface scan. The first clipping surface can be defined as part of a bounding volume, formed by replacing a portion of a bounding box with the surface scan as illustrated in the Figures.
(37) In step 744 a 2D projection of the 3D volumetric data of the lower jaw is generated by applying the first clipping surface in a volume rendering of the 3D volumetric data. When the first clipping surface is part of a bounding volume, the bounding volume is arranged such that the first clipping surface follows the teeth while the second clipping surface of the volume is located opposite the volumetric data of the lower jaw.
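Steps 743 and 744 can be condensed into a toy sketch; for brevity it substitutes a maximum-intensity projection along the viewing axis for the full ray-traced compositing, and the height-field representation and all names are illustrative assumptions:

```python
import numpy as np

def project_2d(volume, height_map, fill=-1000.0):
    """Clip the volume above the occlusal height field (step 743), then
    produce a 2D projection by maximum intensity along z (step 744,
    simplified: MIP instead of full front-to-back compositing)."""
    nx, ny, nz = volume.shape
    z = np.arange(nz)
    mask = z[None, None, :] <= height_map[:, :, None]
    clipped = np.where(mask, volume, fill)   # voxels above the surface become air
    return clipped.max(axis=2)
```

Bright voxels belonging to the opposing jaw lie above the height field and are replaced by air before projection, so they cannot dominate the rendered pixel.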
(39) The computer device 851 can receive both a surface scan and a 3D volumetric data set of the patient's teeth, which both can be stored in the computer readable medium 852 and loaded to the microprocessor 853 for processing. The surface scan can be obtained as a digital 3D representation of the teeth recorded for example using an intraoral scanner 857, such as the TRIOS 3 intra-oral scanner manufactured by 3Shape TRIOS A/S. The 3D volumetric data set can be recorded using e.g. a cone beam CT scanner 858.
(40) A computer program product with computer instructions for causing the microprocessor to perform several of the steps of the inventive method is stored on the non-transitory computer readable medium 852. For example, the computer program product can have algorithms for manipulating and aligning the surface scan and the 3D volumetric data set, and for performing the ray tracing used in the volume rendering to produce the 2D projection. The computer system provides for the execution of the method steps, either automatically or in response to operator commands.
(41) In case of a user assisted alignment of the surface scan and the 3D volumetric data, the system 850 provides that an operator can arrange the surface scan and the 3D volumetric data according to the spatial arrangement which best reflects the anatomically correct arrangement, using e.g. a computer mouse to drag or rotate visualizations of the surface scan and the 3D volumetric data on the visual display unit 856. When the operator is satisfied with the relative arrangement, he activates a virtual push button in the user interface and the spatial relationship is stored in the computer readable medium 852. The computer readable medium 852 can also have instructions for performing the alignment automatically, e.g. using ICP-based algorithms.
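A compact sketch of such an automatic ICP-based alignment, using brute-force nearest neighbours for clarity (a production implementation would use a spatial index); all names are illustrative:

```python
import numpy as np

def best_rigid_transform(src, dst):
    """Least-squares rotation R and translation t mapping src onto dst (Kabsch)."""
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    H = (src - c_src).T @ (dst - c_dst)      # cross-covariance of centered sets
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                 # avoid reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    return R, c_dst - R @ c_src

def icp(src, dst, iterations=20):
    """Iteratively align surface-scan points `src` to volume-derived points `dst`."""
    current = src.copy()
    for _ in range(iterations):
        # nearest-neighbour correspondences (brute force for clarity)
        d = np.linalg.norm(current[:, None, :] - dst[None, :, :], axis=2)
        matched = dst[d.argmin(axis=1)]
        R, t = best_rigid_transform(current, matched)
        current = current @ R.T + t
    return current
```

Given a reasonable initial arrangement (e.g. the operator's manual placement), the iteration refines the pose so that the surface scan and the 3D volumetric data share the same scale and orientation before the clipping surface is defined.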
(42) Although some embodiments have been described and shown in detail, the disclosure is not restricted to them, but may also be embodied in other ways within the scope of the subject matter defined in the following claims. In particular, it is to be understood that other embodiments may be utilized and structural and functional modifications may be made without departing from the scope of the present disclosure.
(43) A claim may refer to any of the preceding claims, and “any” is understood to mean “any one or more” of the preceding claims.
(44) It should be emphasized that the term “comprises/comprising” when used in this specification is taken to specify the presence of stated features, integers, steps or components but does not preclude the presence or addition of one or more other features, integers, steps, components or groups thereof.
(45) The features of the method described above and in the following may be implemented in software and carried out on a data processing system or other processing means caused by the execution of computer-executable instructions. The instructions may be program code means loaded in a memory, such as a RAM, from a storage medium or from another computer via a computer network. Alternatively, the described features may be implemented by hardwired circuitry instead of software or in combination with software.