System and method for handling image data
10467761 · 2019-11-05
Abstract
A data processing unit receives a reference image (IMG1.sub.3D) of a deformable physical entity, a target image (IMG2.sub.3D) of said physical entity, and a first region of interest (ROI1.sub.3D) defining a first volume in the reference image (IMG1.sub.3D) representing a reference image element. The reference image (IMG1.sub.3D), the target image (IMG2.sub.3D) and the first region of interest (ROI1.sub.3D) all contain 3D datasets. In response to user commands (c1; c2), the data processing unit defines a first contour (C1.sub.2D) in a first plane through the target image (IMG2.sub.3D), which is presented to a user via a display unit together with graphic data reflecting the reference image (IMG1.sub.3D), the target image (IMG2.sub.3D) and the first region of interest (ROI1.sub.3D). The first contour (C1.sub.2D) is aligned with at least a portion of a first border (IEB1) of a target image element (IE.sub.3D) in the target image (IMG2.sub.3D). The target image element (IE.sub.3D) corresponds to the reference image element in the reference image (IMG1.sub.3D). Based on the first contour (C1.sub.2D), the target image (IMG2.sub.3D) and the first region of interest (ROI1.sub.3D), the data processing unit determines a second region of interest (ROI2.sub.3D) defining a second volume in the target image (IMG2.sub.3D).
Claims
1. An image handling system, comprising: a data processing unit configured to receive: a reference image of a deformable physical entity, a target image of said physical entity, and a first region of interest defining a first volume in the reference image, which first volume represents a reference image element, the reference image, the target image, and the first region of interest containing a respective three-dimensional dataset; at least one data input unit configured to receive user commands; and a display unit configured to present graphic data reflecting the reference image, the target image, and the first region of interest, wherein the data processing unit is further configured to: define, in response to the user commands, a first contour in a first plane through the target image, the first contour being aligned with at least a portion of a first border of a target image element in the target image, the target image element corresponding to the reference image element in the reference image; and determine a second region of interest defining a second volume in the target image, the second region of interest being determined based on the first contour, the target image, and the first region of interest.
2. The image handling system according to claim 1, wherein the data processing unit is further configured to compute a vector field describing a relationship between the first region of interest and the second region of interest, the second region of interest being obtainable by transforming the first region of interest via the vector field.
3. The image handling system according to claim 2, wherein the data processing unit is further configured to: generate the second region of interest based on the first region of interest and the vector field; and produce graphic data for presentation on the display unit, which graphic data reflect the second region of interest overlaid on the target image.
4. The image handling system according to claim 1, wherein the data processing unit is further configured to: receive additional user commands, and in response thereto, define a second contour in a second plane through the target image, the second contour being aligned with at least a portion of a second border of the target image element in the target image; and determine the second region of interest on the further basis of the second contour.
5. The image handling system according to claim 1, wherein the data processing unit is configured to determine the second region of interest based on a non-linear optimization algorithm applied to the first contour and an intersection between the second region of interest and the first plane, the non-linear optimization algorithm being configured to penalize deviation of the second region of interest from the first contour.
6. The image handling system according to claim 5, wherein the second region of interest is represented by a triangular mesh, and the non-linear optimization algorithm involves: computing a set of intersection points between the second region of interest and the first plane, each intersection point in the set of intersection points being computed by means of a convex combination of eight voxel centers being adjacent to the intersection point using mean value coordinates; and applying a two-dimensional distance transform on a Euclidean distance between each computed intersection point and the first contour.
7. The image handling system according to claim 5, wherein the second region of interest is represented by a triangular mesh, and the non-linear optimization algorithm involves: computing a set of intersection points between the second region of interest and the first plane, and for each intersection point in the set of intersection points: determining a normal projection from the second region of interest towards the first plane, the normal projection extending in an interval of predetermined length, and if within the predetermined length the normal intersects with the first contour at a juncture, the juncture is included as a tentative delimitation point of an updated second region of interest; and repeating the determining step based on the updated second region of interest until a stop criterion is fulfilled.
8. A method of handling images, the method comprising: receiving a reference image of a deformable physical entity; receiving a target image of said physical entity; receiving a first region of interest defining a first volume in the reference image, which first volume represents a reference image element, the reference image, the target image, and the first region of interest containing a respective three-dimensional dataset; receiving user commands via at least one data input unit; presenting graphic data on a display unit, the graphic data reflecting the reference image, the target image, and the first region of interest; defining, in response to the user commands, a first contour in a first plane through the target image, the first contour being aligned with at least a portion of a first border of a target image element in the target image, the target image element corresponding to the reference image element in the reference image; and determining a second region of interest defining a second volume in the target image, the second region of interest being determined based on the first contour, the target image, and the first region of interest.
9. The method according to claim 8, further comprising computing a vector field describing a relationship between the first region of interest and the second region of interest, the second region of interest being obtainable by transforming the first region of interest via the vector field.
10. The method according to claim 9, further comprising: generating the second region of interest based on the first region of interest and the vector field; and producing graphic data for presentation on the display unit, which graphic data reflect the second region of interest overlaid on the target image.
11. The method according to claim 8, further comprising: receiving additional user commands, and in response thereto, defining a second contour in a second plane through the target image, the second contour being aligned with at least a portion of a second border of the target image element in the target image; and determining the second region of interest on the further basis of the second contour.
12. The method according to claim 8, comprising determining the second region of interest based on a non-linear optimization algorithm applied to the first contour and an intersection between the second region of interest and the first plane, the non-linear optimization algorithm being configured to penalize deviation of the second region of interest from the first contour.
13. The method according to claim 12, wherein the second region of interest is represented by a triangular mesh, and the method involves: computing a set of intersection points between the second region of interest and the first plane, each intersection point in the set of intersection points being computed by means of a convex combination of eight voxel centers being adjacent to the intersection point using mean value coordinates; and applying a two-dimensional distance transform on a Euclidean distance between each computed intersection point and the first contour.
14. The method according to claim 12, wherein the second region of interest is represented by a triangular mesh, and the method involves: computing a set of intersection points between the second region of interest and the first plane, and for each intersection point in the set of intersection points: determining a normal projection from the second region of interest towards the first plane, the normal projection extending in an interval of predetermined length, and if within the predetermined length the normal intersects with the first contour at a juncture, the juncture is included as a tentative delimitation point of an updated second region of interest; and repeating the determining step based on the updated second region of interest until a stop criterion is fulfilled.
15. A non-transitory processor-readable medium comprising instructions which, when executed by at least one processor, cause the at least one processor to perform the method according to claim 8.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The invention is now to be explained more closely by means of preferred embodiments, which are disclosed as examples, and with reference to the attached drawings.
DESCRIPTION OF PREFERRED EMBODIMENTS OF THE INVENTION
(10) Initially, we refer to
(11) The proposed image handling system 100 includes a data processing unit 110, at least one data input unit 131 and 132 and a display unit 140.
(12) The data processing unit 110 is configured to receive a reference image IMG1.sub.3D of a deformable physical entity, e.g. representing an organ or a body structure of a patient. The reference image IMG1.sub.3D is a 3D dataset, typically containing a relatively large number of voxels, that may have been registered by an X-ray computer tomograph, magnetic resonance equipment (e.g. using magnetic resonance imaging, MRI, nuclear magnetic resonance imaging, NMRI, or magnetic resonance tomography, MRT), an ultrasonic camera or a cone beam computed tomography (CBCT) scanner.
(13) The data processing unit 110 is also configured to receive a target image IMG2.sub.3D of the physical entity. The target image IMG2.sub.3D is likewise a 3D dataset, typically containing a relatively large number of voxels, for example registered by an X-ray computer tomograph, magnetic resonance equipment (e.g. using MRI, NMRI or MRT) or an ultrasonic camera; however, this need not be the same equipment, or even the same type of equipment, as was used to generate the reference image IMG1.sub.3D.
(14) Additionally, the data processing unit 110 is configured to receive a first region of interest ROI1.sub.3D defining a first volume in the reference image IMG1.sub.3D. The first region of interest ROI1.sub.3D represents a reference image element defining a particular region in the reference image IMG1.sub.3D, for example corresponding to the delimitation boundaries of an individual organ, an organ system, a tissue, or some other body structure of a patient. Similar to the reference and target images IMG1.sub.3D and IMG2.sub.3D, respectively, the first region of interest ROI1.sub.3D is a 3D dataset that may be represented by voxels. However, the first region of interest ROI1.sub.3D is normally a dataset that has been manually defined by a human operator, e.g. a radiologist. Irrespective of their specific origins, the first region of interest ROI1.sub.3D, the reference image IMG1.sub.3D and the target image IMG2.sub.3D are fed into the data processing unit 110 via one or more data interfaces.
(15) The display unit 140 is configured to present graphic data GD reflecting the reference image IMG1.sub.3D, the target image IMG2.sub.3D and the first region of interest ROI1.sub.3D. Thus, a user, for example a radiologist, may visually inspect the image data, preferably interactively as seen from selected views, by entering commands via the at least one data input unit 131 and 132, which may be represented by any known input member for generating user commands to a computer, e.g. a keyboard 131 and/or a computer mouse 132.
(16) The at least one data input unit 131 and 132 is configured to receive user commands c1 and c2 respectively. In response to the user commands c1 and/or c2, the data processing unit 110 is configured to define a first contour C1.sub.2D in a first plane through the target image IMG2.sub.3D, preferably corresponding to a view of the target image IMG2.sub.3D presented on the display unit 140. Here, we presume that the user commands c1 and/or c2 are generated such that the first contour C1.sub.2D is aligned with at least a portion of a first border IEB1 of a target image element IE.sub.3D (e.g. the outline of a specific organ) in the target image IMG2.sub.3D. In any case, the target image element IE.sub.3D corresponds to the reference image element in the reference image IMG1.sub.3D.
(17) The data processing unit 110 is further configured to determine a second region of interest ROI2.sub.3D defining a second volume in the target image IMG2.sub.3D. According to the invention, the second region of interest ROI2.sub.3D is determined based on the first contour C1.sub.2D, the target image IMG2.sub.3D and the first region of interest ROI1.sub.3D.
(19) According to one embodiment of the invention, the data processing unit 110 is further configured to compute a vector field VF.sub.1.fwdarw.2 describing a relationship between the first region of interest ROI1.sub.3D and the second region of interest ROI2.sub.3D. The vector field VF.sub.1.fwdarw.2 has such properties that the second region of interest ROI2.sub.3D is obtainable by transforming the first region of interest ROI1.sub.3D via the vector field VF.sub.1.fwdarw.2. In other words, the second region of interest ROI2.sub.3D can be generated by applying the vector field VF.sub.1.fwdarw.2 to the first region of interest ROI1.sub.3D, for example by displacing each point of ROI1.sub.3D by the corresponding displacement vector.
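For illustration only, the transformation just described can be sketched in Python. The sketch assumes the region of interest is given as an array of mesh vertex coordinates and that the vector field stores one displacement vector per voxel; the nearest-voxel lookup and the uniform isotropic spacing are simplifying assumptions, not details taken from the description.

```python
import numpy as np

def transform_roi(vertices, vector_field, spacing=1.0):
    """Displace each ROI vertex by the vector stored at its nearest voxel.

    vertices     -- (N, 3) array of mesh vertex positions (world units)
    vector_field -- (X, Y, Z, 3) array of displacement vectors, one per voxel
    spacing      -- uniform voxel spacing (assumed isotropic here)
    """
    # Map each vertex to the index of its nearest voxel centre.
    idx = np.clip(np.round(vertices / spacing).astype(int),
                  0, np.array(vector_field.shape[:3]) - 1)
    # Look up the displacement for every vertex and apply it.
    displacements = vector_field[idx[:, 0], idx[:, 1], idx[:, 2]]
    return vertices + displacements
```

In practice the field would be interpolated rather than sampled at the nearest voxel, but the sketch shows the essential point: the second region of interest is the first one moved through the vector field.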
(20) Further preferably, the data processing unit 110 is configured to generate the second region of interest ROI2.sub.3D based on the first region of interest ROI1.sub.3D and the vector field VF.sub.1.fwdarw.2. Then, the data processing unit 110 is preferably configured to produce graphic data GD for presentation on the display unit 140 so that the graphic data GD reflect the second region of interest ROI2.sub.3D overlaid on the target image IMG2.sub.3D. Consequently, a user may double-check whether or not the vector field VF.sub.1.fwdarw.2 (and thus also the second region of interest ROI2.sub.3D) is a sufficiently accurate definition of the organ, organ system, tissue, body structure etc. in the target image IMG2.sub.3D. Should the vector field VF.sub.1.fwdarw.2 prove unacceptably imprecise, it is desirable that the user has a means to improve the data quality.
(21) To this aim, according to one embodiment of the invention, the data processing unit 110 is further configured to receive additional user commands c1 and/or c2 via the at least one data input unit 131 and/or 132 respectively. In response thereto, the data processing unit 110 is configured to define a second contour C2.sub.2D in a second plane P2 through the target image IMG2.sub.3D as illustrated in
(22) When correcting/adjusting the second region of interest ROI2.sub.3D as described above, the data processing unit 110 may apply one or more of the strategies that will be described below with reference to
(24) As is common practice in computer graphics as well as in computer-aided image processing of medical data, we presume that the second region of interest ROI2.sub.3D is represented by a triangular mesh. Preferably, the same is true also for the first region of interest ROI1.sub.3D. Of course, regardless of how the above-mentioned first or second plane is oriented, many of the intersection points between the second region of interest ROI2.sub.3D and the first or second plane will occur at points other than the corners of the triangles in the triangular-mesh representation. In other words, the intersection line will miss numerous voxel centers of the vector field describing the second region of interest ROI2.sub.3D. Therefore, the specific intersection points must be calculated.
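As a non-authoritative sketch of the edge/plane intersection computation discussed above, the following Python code finds the points where the edges of a triangular mesh cross a given plane. The function name, the edge-list representation and the plane parameterization (a point p0 and a normal n) are illustrative assumptions.

```python
import numpy as np

def plane_intersections(vertices, edges, p0, n):
    """Return the points where mesh edges cross the plane through p0 with normal n.

    vertices -- (N, 3) array of mesh vertex positions
    edges    -- iterable of (i, j) vertex-index pairs
    """
    n = np.asarray(n, dtype=float)
    points = []
    for i, j in edges:
        a, b = vertices[i], vertices[j]
        da = np.dot(a - p0, n)   # signed distance of each endpoint to the plane
        db = np.dot(b - p0, n)
        if da * db < 0:          # endpoints lie on opposite sides of the plane
            t = da / (da - db)   # linear interpolation parameter along the edge
            points.append(a + t * (b - a))
    return np.array(points)
```

Each crossing edge contributes exactly one intersection point, obtained by linear interpolation between its endpoints.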
(25) According to one embodiment of the invention, this calculation is formulated as a non-linear optimization problem including a term which penalizes deviation from the contour (i.e. C1.sub.2D or C2.sub.2D). Here, a two-dimensional distance transform is used as follows:
(26) We assume that a contour C1.sub.2D or C2.sub.2D has been defined in a plane P1 or P2 for the second region of interest ROI2.sub.3D, which, in turn, is represented by a triangular mesh, and that the plane P1 or P2 intersects the second region of interest ROI2.sub.3D. We define E as the set of edges of the second region of interest ROI2.sub.3D at which this intersection occurs.
(27) For each edge in E, we compute the intersection point with the plane P1 or P2. As mentioned above, the resulting intersection points are typically not located at the voxel centers of the vector field. In order to express the intersection points in terms of the vector field, each intersection point V0 is computed by means of a convex combination of the eight voxel centers V1, V2, V3, V4, V5, V6, V7 and V8 adjacent to the intersection point V0, using mean value coordinates. We call such a point a virtual point v.sub.i, where:
(28) v.sub.i=λ.sub.1V1+λ.sub.2V2+ . . . +λ.sub.8V8, where λ.sub.1, . . . , λ.sub.8 are the mean value coordinates, i.e. λ.sub.k≥0 for each k and λ.sub.1+λ.sub.2+ . . . +λ.sub.8=1.
(29) A distance transform D(x) is computed for the contour C1.sub.2D or C2.sub.2D in the plane P1 or P2, such that D(x)=0 on the contour and D(x)>0 otherwise. D(x) thereby approximates the Euclidean distance to the contour C1.sub.2D or C2.sub.2D.
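The convex-combination and distance-penalty steps above can be sketched as follows. For simplicity, trilinear weights on a unit voxel grid stand in for the mean value coordinates of the description (both yield a convex combination of the eight adjacent voxel centers), and the distance transform is approximated by a brute-force minimum distance to densely sampled contour points; the squared-distance penalty is one plausible form of the deviation term, not necessarily the one used in practice.

```python
import numpy as np

def trilinear_weights(point):
    """Convex weights of the 8 voxel centres surrounding `point` on a unit grid.

    Trilinear weights are used here as a simple stand-in for mean value
    coordinates; both produce a convex combination (weights sum to 1).
    """
    base = np.floor(point).astype(int)
    f = point - base                       # fractional position inside the cell
    corners, weights = [], []
    for dx in (0, 1):
        for dy in (0, 1):
            for dz in (0, 1):
                w = ((f[0] if dx else 1 - f[0]) *
                     (f[1] if dy else 1 - f[1]) *
                     (f[2] if dz else 1 - f[2]))
                corners.append(base + np.array([dx, dy, dz]))
                weights.append(w)
    return corners, np.array(weights)

def contour_penalty(virtual_points, contour_points):
    """Sum of squared distances from each virtual point to the contour,
    with the contour approximated by a dense set of sample points."""
    total = 0.0
    for v in virtual_points:
        d = np.min(np.linalg.norm(contour_points - v, axis=1))  # approx. D(v)
        total += d ** 2
    return total
```

A real implementation would precompute D(x) once on a pixel grid of the plane rather than scanning the contour samples for every point.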
(30) A non-linear term that penalizes deviation from the contour C1.sub.2D or C2.sub.2D may now be written:
(31) Σ.sub.iD(v.sub.i).sup.2, i.e. the sum, over all virtual points v.sub.i, of the squared distance-transform value.
(33) Also in this case, the second region of interest ROI2.sub.3D is represented by a triangular mesh. Here, the non-linear term of the objective function due to the contour C1.sub.2D or C2.sub.2D is unchanged during a major iteration, and updated between major iterations. Using the terminology from the above-described strategy, the difference is that for each intersection point v.sub.i between the second region of interest ROI2.sub.3D and the plane P1 or P2, a normal N.sub.i is computed by interpolation of the vertex normals at the edge corners.
(34) The normal N.sub.i is then projected onto the plane P1 or P2, and a search along the projected normal in an interval of length L is performed. If an intersection point t.sub.i with the contour C1.sub.2D or C2.sub.2D is found, it is added to the non-linear function:
(35) Σ.sub.iw.sub.i∥v.sub.i−t.sub.i∥.sup.2, where the sum runs over the virtual points v.sub.i for which such an intersection point t.sub.i was found.
(36) Here, the weight w.sub.i may either be 1, or the weight w.sub.i may depend on an intersection angle with the contour C1.sub.2D or C2.sub.2D in such a way that an almost orthogonal intersection results in a relatively high weight and an almost parallel intersection results in a relatively low weight.
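A minimal two-dimensional sketch of the projected-normal search and the angle-dependent weight described above, assuming the contour is given as a polyline, the projected normal as a unit direction vector in the plane, and the weight as the magnitude of the cross product between the search direction and the contour tangent (close to 1 for a nearly orthogonal crossing, close to 0 for a nearly parallel one); all names are illustrative.

```python
import numpy as np

def search_along_normal(v, direction, contour, L):
    """Walk from 2-D point `v` along unit vector `direction` for at most
    length L; return (t_i, w_i) for the first crossing with the contour
    polyline, or None if no crossing is found within the interval."""
    for a, b in zip(contour[:-1], contour[1:]):
        seg = b - a
        # Solve v + t*direction = a + s*seg for t (distance) and s (segment param).
        M = np.column_stack([direction, -seg])
        if abs(np.linalg.det(M)) < 1e-12:
            continue                      # direction parallel to segment
        t, s = np.linalg.solve(M, a - v)
        if 0.0 <= t <= L and 0.0 <= s <= 1.0:
            tangent = seg / np.linalg.norm(seg)
            # 2-D cross product magnitude: 1 if orthogonal crossing, 0 if parallel.
            w = abs(direction[0] * tangent[1] - direction[1] * tangent[0])
            return v + t * direction, w
    return None
```

The returned juncture t.sub.i would then be recorded as a tentative delimitation point, and the weight w.sub.i used in the term (35) above.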
(38) A stop criterion for the iteration is defined, which stop criterion preferably is chosen from heuristics. For example, the stop criterion may be considered fulfilled if the number of new intersection points t.sub.i decreases (i.e. is lower in a subsequent iteration i+1), and/or if the number of intersection points t.sub.i begins to remain approximately the same from one iteration to the next.
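The heuristic stop criterion can be sketched as a simple test on the history of per-iteration counts of newly found intersection points t.sub.i; keeping such a history is an assumed bookkeeping detail, not something specified in the description.

```python
def stop_iterating(counts, tol=0):
    """Heuristic stop criterion on per-iteration counts of newly found
    intersection points: stop once the count decreases, or stays within
    `tol` of the previous iteration's count."""
    if len(counts) < 2:
        return False                      # need two iterations to compare
    return counts[-1] <= counts[-2] + tol
```

With `tol=0` this stops as soon as the count no longer grows; a small positive `tol` would tolerate minor fluctuation, matching the "approximately the same" wording.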
(39) The data processing unit 110 preferably contains, or is in communicative connection with, a memory unit 115 storing a computer program product SW, which contains software for making at least one processor in the data processing unit 110 execute the above-described actions when the computer program product SW is run on the at least one processor.
(40) In order to sum up, and with reference to the flow diagram in
(41) A first step 810 checks if a reference image IMG1.sub.3D of a deformable physical entity has been received; and if so, a step 820 follows. Otherwise, the procedure loops back and stays in step 810. The reference image IMG1.sub.3D is a 3D dataset, for example represented by a relatively large number of voxels registered by a computer tomograph or similar equipment.
(42) Step 820 checks if a target image IMG2.sub.3D of the deformable physical entity has been received, i.e. another image of the same object/subject as represented by the reference image IMG1.sub.3D. If, in step 820 a target image IMG2.sub.3D is received, a step 830 follows. Otherwise the procedure loops back and stays in step 820. The target image IMG2.sub.3D is a 3D dataset, for example represented by a relatively large number of voxels registered by a computer tomograph or similar equipment.
(43) Step 830 checks if user commands have been received via one or more data input units (e.g. a computer mouse and/or a keyboard), which user commands are presumed to be entered aiming at defining a first contour C1.sub.2D in a first plane through the target image IMG2.sub.3D. If such user commands are received, a step 840 follows. Otherwise the procedure loops back and stays in step 830.
(44) Step 840 checks if a first region of interest ROI1.sub.3D has been received, and if so a step 850 follows. Otherwise, the procedure loops back and stays in step 840. The first region of interest ROI1.sub.3D defines a first volume in the reference image IMG1.sub.3D, which first volume represents a reference image element, for instance a particular organ/structure in a patient. The first region of interest ROI1.sub.3D is a 3D dataset, preferably represented by voxels that may have been manually defined by an operator.
(45) It should be noted that the exact order of steps 810 to 840 is not critical, and may be varied according to the invention, provided that the user commands are received after the target image IMG2.sub.3D, since the user commands are entered based on the target image IMG2.sub.3D.
(46) In step 850, in response to the user commands, a first contour C1.sub.2D is defined in a first plane through the target image IMG2.sub.3D. The first contour C1.sub.2D is presumed to be aligned with at least a portion of a border IEB1 or IEB2 of a target image element IE.sub.3D in the target image IMG2.sub.3D. The target image element IE.sub.3D corresponds to the reference image element in the reference image IMG1.sub.3D.
(47) Subsequently, in a step 860, a second region of interest ROI2.sub.3D is determined, which defines a second volume in the target image IMG2.sub.3D. The second region of interest ROI2.sub.3D is determined based on the first contour C1.sub.2D, the target image IMG2.sub.3D and the first region of interest ROI1.sub.3D. Preferably, in connection with determining the second region of interest ROI2.sub.3D, graphic data GD are presented on a display unit, which graphic data GD reflect the target image IMG2.sub.3D and the second region of interest ROI2.sub.3D.
(48) Thereafter, the procedure ends. However, according to preferred embodiments of the invention, the user is provided with an input interface via which he/she may enter additional commands for adjusting any mismatching between the second region of interest ROI2.sub.3D and the deformable physical entity in the target image IMG2.sub.3D, for example by defining a second contour C2.sub.2D in a second plane P2 through the target image IMG2.sub.3D.
(49) All of the process steps, as well as any sub-sequence of steps, described with reference to
(50) The term comprises/comprising when used in this specification is taken to specify the presence of stated features, integers, steps or components. However, the term does not preclude the presence or addition of one or more additional features, integers, steps or components or groups thereof.
(51) The invention is not restricted to the described embodiments in the figures, but may be varied freely within the scope of the claims.