SYSTEM AND METHOD FOR NAVIGATING A TOMOSYNTHESIS STACK INCLUDING AUTOMATIC FOCUSING
20230230679 · 2023-07-20
Inventors
- Jin-Long Chen (Santa Clara, CA, US)
- Haili Chui (Fremont, CA, US)
- Kevin Kreeger (Sunnyvale, CA, US)
- Xiangwei Zhang (Fremont, CA, US)
CPC classification
G06F3/04842
PHYSICS
G06F3/167
PHYSICS
A61B6/5205
HUMAN NECESSITIES
A61B6/5211
HUMAN NECESSITIES
G06T3/40
PHYSICS
A61B6/463
HUMAN NECESSITIES
International classification
A61B6/02
HUMAN NECESSITIES
A61B6/00
HUMAN NECESSITIES
G06F3/04842
PHYSICS
G06T3/40
PHYSICS
Abstract
A system and method for reviewing a tomosynthesis image data set comprising volumetric image data of a breast, the method comprising, in one embodiment, causing an image or a series of images from the data set to be displayed on a display monitor and selecting or indicating through a user interface an object or region of interest in a presently displayed image of the data set, thereby causing an image from the data set having a best focus measure of the user selected or indicated object or region of interest to be automatically displayed on the display monitor.
Claims
1. (canceled)
2. A method for navigating and displaying breast tissue images, the method comprising: obtaining a tomosynthesis image data set, the data set comprising volumetric image data of at least a portion of a breast; displaying a series of images from the data set on a display of a computer-controlled workstation; detecting, through a user interface associated with the computer-controlled workstation, an input by a user selecting or indicating a region of interest in an image that is one of the series of images and currently displayed on the display; highlighting on the display, in response to the input by the user, the region of interest with a visual indicia comprising a geometric shape; and displaying on the display other images of the series while continuing to highlight the region of interest with the visual indicia in each of the other images of the series as each of the images is displayed, wherein highlighting the region of interest comprises modifying at least one visual characteristic of the geometric shape of the visual indicia based on respective focus measures of the region of interest in the respective ones of the series of images being displayed.
3. The method of claim 2, wherein the respective focus measures comprise respective single focus measure values calculated for the region of interest in the respective images.
4. The method of claim 3, wherein each of the respective focus measures is computed based upon one or more of: a sharpness of detected edges of the region of interest, a contrast of the region of interest, and a ratio between a measured magnitude of one or more high frequency components and a measured magnitude of one or more low frequency components.
5. The method of claim 2, wherein each of the displayed images of the image data set is a tomosynthesis reconstructed image, detecting user input comprises detecting user input selecting or indicating the region of interest in a currently displayed tomosynthesis reconstructed image, and each additional image that is displayed while continuing to highlight the region of interest in respective images is a tomosynthesis reconstructed image of the image data set.
6. The method of claim 2, wherein the at least one visual characteristic of the geometric shape is a size of the geometric shape.
7. The method of claim 3, wherein the at least one visual characteristic of the geometric shape is a size of the geometric shape, and wherein the size of the geometric shape decreases with increasing focus measure value.
8. The method of claim 2, wherein the geometric shape comprises a rectangular shape.
9. The method of claim 2, wherein the user selected or indicated region of interest is highlighted in a manner indicating that the region includes a specified type of tissue structure.
10. The method of claim 2, wherein displaying the images of the series of images comprises displaying the images in succession.
11. A system for navigating and displaying breast tissue images, the system comprising: an operatively associated user interface; a display; at least one processor; and a memory coupled to the at least one processor, the memory comprising computer executable instructions that, when executed by the at least one processor, perform a method comprising: obtaining a tomosynthesis image data set, the data set comprising volumetric image data of at least a portion of a breast; displaying a series of images from the data set on the display; detecting, through the user interface, an input by a user selecting or indicating a region of interest in an image that is one of the series of images and currently displayed on the display; highlighting on the display, in response to the input by the user, the region of interest with a visual indicia comprising a geometric shape; and displaying on the display other images of the series while continuing to highlight the region of interest with the visual indicia in each of the other images of the series as each of the images is displayed, wherein highlighting the region of interest comprises modifying at least one visual characteristic of the geometric shape of the visual indicia based on respective focus measures of the region of interest in the respective ones of the series of images being displayed.
12. The system of claim 11, wherein the respective focus measures comprise respective single focus measure values calculated for the region of interest in the respective images.
13. The system of claim 12, wherein each of the focus measures is computed based upon one or more of: a sharpness of detected edges of the region of interest, a contrast of the region of interest, and a ratio between a measured magnitude of one or more high frequency components and a measured magnitude of one or more low frequency components.
14. The system of claim 13, wherein the user selected region of interest is highlighted in a manner indicating that the region includes a specified type of tissue structure.
15. The system of claim 13, wherein the images of the series of images are displayed in succession.
16. The system of claim 11, wherein the geometric shape comprises a rectangular shape.
17. The system of claim 11, wherein the at least one visual characteristic of the geometric shape is a size of the geometric shape.
18. An automated method employing a computer-controlled workstation for navigating and displaying breast tissue images, the workstation comprising an operatively associated user interface and display, the method comprising: obtaining a tomosynthesis image data set, the data set comprising volumetric image data of at least a portion of a breast; displaying a series of images from the data set on the display; detecting, through the user interface, an input by a user selecting or indicating a region of interest in an image that is one of the series of images and currently displayed on the display; highlighting on the display, in response to the input by the user, the region of interest with a visual indicia comprising a geometric shape; and displaying on the display other images of the series while continuing to highlight the region of interest with the visual indicia in each of the other images of the series as each of the images is displayed, wherein highlighting the region of interest comprises modifying at least one visual characteristic of the geometric shape of the visual indicia based on respective focus measures of the region of interest in the respective ones of the series of images being displayed.
19. The method of claim 18, wherein the respective focus measures are computed based upon one or more of: a sharpness of detected edges of the region of interest, a contrast of the region of interest, and a ratio between a measured magnitude of one or more high frequency components and a measured magnitude of one or more low frequency components.
20. The method of claim 19, wherein: the respective focus measures comprise respective single focus measure values calculated for the region of interest in the respective images; and the modifying at least one visual characteristic of the geometric shape of the visual indicia comprises making the at least one visual characteristic of the geometric shape for the image having the highest focus measure value different from the at least one visual characteristic of the geometric shape for all other images.
Description
BRIEF DESCRIPTION OF FIGURES
DETAILED DESCRIPTION OF THE ILLUSTRATED EMBODIMENTS
[0029] In describing the depicted embodiments of the disclosed inventions illustrated in the accompanying figures, specific terminology is employed for the sake of clarity and ease of description. However, the disclosure of this patent specification is not intended to be limited to the specific terminology so selected, and it is to be understood that each specific element includes all technical equivalents that operate in a similar manner. It is to be further understood that the various elements and/or features of different illustrative embodiments may be combined with each other and/or substituted for each other wherever possible within the scope of this disclosure and the appended claims.
Tomosynthesis Imaging Acquisition and Computation
[0030] In order to provide additional background information, reference is made to
[0033] Also illustrated in
[0034] In theory, the number of different predetermined heights h.sub.m for which distinct two-dimensional tomosynthesis reconstructed images Tr.sub.m(x,y) can be generated is arbitrarily large, because h.sub.m is simply a selectable parameter fed to the reconstruction (backprojection) algorithm. In practice, because the ultimate amount of useful information is limited by the finite count of “N” projection images, the tomosynthesis reconstruction geometry is usually limited to a predetermined number “M” of reconstructed image arrays Tr.sub.m(x,y). Preferably, the number “M” is selected such that the reconstructed image arrays Tr.sub.m(x,y) uniformly fill out the vertical extent of the imaged breast volume between the lower and upper compression plates, at a vertical spacing (such as 1 mm) that is small enough to capture smaller-sized micro-calcifications.
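The level geometry described above can be illustrated with a minimal sketch (the helper name and the 50 mm plate gap are assumptions for illustration, not details from the specification), computing the number of levels "M" and the heights h.sub.m that uniformly fill the volume at a 1 mm spacing:

```python
import numpy as np

def reconstruction_heights(plate_gap_mm: float, spacing_mm: float = 1.0) -> np.ndarray:
    """Return the M reconstruction heights h_m that uniformly fill the
    volume between the compression plates at the given vertical spacing."""
    m = int(np.floor(plate_gap_mm / spacing_mm)) + 1  # number of levels M
    return np.arange(m) * spacing_mm

heights = reconstruction_heights(50.0, 1.0)   # e.g., a 50 mm compressed breast
print(len(heights), heights[0], heights[-1])  # 51 levels, from 0.0 to 50.0 mm
```

Each entry of `heights` is simply the selectable parameter h.sub.m fed to the backprojection algorithm for one reconstructed array Tr.sub.m(x,y).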
[0035] The lateral extent of each tomosynthesis reconstructed image Tr.sub.m(x,y) can be similar to that of each projection image Tp.sub.φn(x,y), i.e., the number of pixels and the spatial resolution of the tomosynthesis reconstructed images Tr.sub.m(x,y) can be similar to those of the projection images Tp.sub.φn(x,y). However, such correspondence is not required, with supersampling, subsampling, or other resampling being available for various reasons. For example, the particular geometries of different tomosynthesis reconstruction algorithms could differ from each other, in which case such resampling is incorporated as needed to bring the resultant arrays that will be compared, added, multiplied, mapped, or otherwise jointly processed into registration with each other. Depending on the particular tomosynthesis reconstruction algorithm being used, the lateral resolution of the different tomosynthesis reconstructed images Tr.sub.m(x,y) can be different for different levels; for example, the uppermost level could be 95 μm per pixel while the lowermost level could be 108 μm per pixel.
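The resampling-into-registration step might be sketched as follows, assuming SciPy's bilinear `zoom` as the resampler (the field-of-view size, array shapes, and helper name are illustrative assumptions; the 95 μm and 108 μm pitches follow the example above):

```python
import numpy as np
from scipy.ndimage import zoom

def resample_to_pitch(img: np.ndarray, pitch_um: float, target_um: float) -> np.ndarray:
    """Resample a slice from its native pixel pitch to a common target pitch
    (bilinear), so differently sampled levels can be jointly processed."""
    return zoom(img, pitch_um / target_um, order=1)

# Two levels covering the same lateral field of view (10260 um on a side)
# but reconstructed at different pixel pitches:
top = np.random.rand(108, 108)     # uppermost level, 95 um per pixel
bottom = np.random.rand(95, 95)    # lowermost level, 108 um per pixel

top_r = resample_to_pitch(top, 95.0, 95.0)        # already on the target grid
bottom_r = resample_to_pitch(bottom, 108.0, 95.0)  # upsampled to 95 um
print(top_r.shape, bottom_r.shape)  # both (108, 108): arrays in registration
```

With both arrays on the same 95 μm grid, per-pixel comparison, addition, or mapping between levels becomes well defined.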
[0036] As used herein, three-dimensional geometry of the imaged breast volume refers to a space-limited three-dimensional grid having a defined number of levels that extends at least throughout a clinically relevant portion of the breast (for example, including the breast parenchyma but excluding the skin and the empty space around the breast between the compression plates). In the event only a single predefined tomosynthesis reconstruction algorithm is involved, the three-dimensional geometry of the imaged breast volume can be based upon the number of levels in that predefined tomosynthesis reconstruction algorithm. In the event multiple predefined tomosynthesis reconstruction algorithms are involved having different geometries, the three-dimensional geometry of the imaged breast volume can be based on one of them, with resampling being incorporated into the others to result in appropriate registration. Alternatively, the three-dimensional geometry of the imaged breast volume could be based on the tomosynthesis reconstruction algorithms that were, or will be, used to generate the “for presentation” tomosynthesis reconstructed images.
Navigation and Review of Displayed Tomosynthesis Image Set
[0037] Preferred embodiments of a tomosynthesis workstation employing an automated focus capability according to the presently disclosed inventions will now be described. It is to be appreciated by those skilled in the art that the particular components of the review workstation are described in a very basic (generic) fashion, and that the inventions disclosed herein may be practiced on any of a wide number, type and variety of computer (processor) controlled workstations, which are commonplace at this time. As used herein, the terms “user” and “reviewer” are used interchangeably.
[0038] In particular, an exemplary system for navigating and reviewing a tomosynthesis image data set includes an image processor (e.g., a computer), an image display monitor operatively associated with the image processor, and a user interface operatively coupled to the image processor and display monitor, wherein the user interface may in fact comprise, in whole or in part, the display monitor itself (i.e., in a touch screen device such as a “tablet”, “pod” or other “smart” device). The image processor is configured to display user-selected image slices from a tomosynthesis data set on the display monitor in response to one or more user commands received through the user interface. The image processor is further configured to detect through the user interface a user selection or indication of an object or region of interest in a then-displayed image from the data set (e.g., when the user positions a graphic arrow controlled by a “mouse device” over the respective object or region for a certain amount of time, and/or affirmatively actuates (e.g., by clicking) same while in that position).
[0039] For purposes of more specific illustration,
[0040] Notably, a round tissue mass 38 is visible in each of the L.sub.MLO and L.sub.CC image slices. It will be appreciated that the particular views of the tissue mass in the respective L.sub.MLO and L.sub.CC image slices differ in both clarity and orientation, since the image slices are taken along different (orthogonal) image planes, i.e., with the L.sub.MLO slice 28 comprising a cross-section taken along the z-axis of a “side view”, and the L.sub.CC slice 24 comprising a cross-section taken along the z-axis of a top-down view that is orthogonal to the z-axis of the L.sub.MLO image set.
[0041] For purposes of simplifying the discussion, the remainder of the present specification refers just to the L.sub.CC (left breast craniocaudal) tomo stack, although the inventive concepts and features described apply equally to the navigation and review of any tomo image stack, as well as for other, non-breast, body tissue image volumes.
[0043] In order to provide the perspective of the reviewer,
[0044] In one embodiment, the system detects through the user interface a user selection or indication of an object or region of interest in a then-displayed image from the tomo stack, and in response, displays an image from the data set having a best focus measure of the user selected or indicated object or region of interest. For example,
[0045] In another embodiment, the system detects through the user interface a user selection or indication of an object or region of interest in a then-displayed image from the tomo stack, and in response, displays a series of near-focus images from the data set on the display monitor, the series of near focus images comprising images of the data set having computed focus measure values within a predetermined range of, and including, a best focus measure value computed for any image of the data set depicting the user selected or indicated object or region of interest. For example,
[0046] In one embodiment, such as depicted in
[0047] In another embodiment, the series of near-focus images is displayed in succession, so as to allow for a dynamic review and comparison of the user selected or indicated object or region of interest in each image of the near-focus series. For example, the image slices 25, 27, 29 and 31 of
[0048] In some embodiments, in order to assist the reviewer, the system employs known image processing techniques to identify different breast tissue objects and structures in the various source images, and the reviewer may (optionally) cause the system to highlight such objects and structures in the respective best focus image and/or near-focus images, in particular, tissue structures comprising or related to abnormal objects, such as micro-calcification clusters, round-or-lobulated masses, spiculated masses, architectural distortions, etc.; as well as benign tissue structures comprising or related to normal breast tissues, such as linear tissues, cysts, lymph nodes, blood vessels, etc. For example, a user selected or indicated object or region of interest is highlighted by a contour line representing a boundary of the highlighted object or region. Furthermore, objects or regions of interest consisting of or including differing types of tissue structures may be highlighted in different manners when they are a respective subject of a focusing process.
[0049] By way of non-limiting illustration,
[0051] At step 1004, the reviewer selects or otherwise indicates an interest (i.e., for clinical evaluation) in an object or region of tissue in a then-displayed image slice of the tomo stack. Upon detecting the user selection or indication of an object or region of interest (hereinafter collectively referred to as “ROI” for purposes of describing the processes in
[0052] As explained in greater detail below in conjunction with
[0053]
[0054] In accordance with the presently disclosed inventions, a determination of a focus score and, thus, a best focus score, may be accomplished in a number of ways, including, at step 1104A, wherein the focus measure is computed according to known image processing techniques based on a sharpness of detected edges of the object or region of interest. For example, the total gradient magnitude of detected edges inside the region of interest is a known measure of the sharpness of detected edges. Alternatively and/or additionally, at step 1104B, the focus measure may be computed according to known image processing techniques based on a computed contrast of the object or region of interest, wherein the contrast can be defined as the absolute difference of a pixel with its eight neighbors, summed over all the pixels in the region of interest. Alternatively and/or additionally, at step 1104C, the focus measure may be computed according to known image processing techniques based on a ratio between a measured magnitude of one or more high frequency components and a measured magnitude of one or more low frequency components, wherein high frequency components correspond to sharp edges of the object in the foreground, while low frequency components correspond to blurred areas in the background. A high value of this ratio indicates that the object or region of interest is in focus.
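The three focus measures of steps 1104A through 1104C can be sketched with NumPy as follows (a simplified illustration only; the function names, the choice of gradient operator, and the spectral cutoff value are assumptions rather than details taken from the specification):

```python
import numpy as np

def sharpness(roi: np.ndarray) -> float:
    """Step 1104A: total gradient magnitude inside the ROI (edge sharpness)."""
    gy, gx = np.gradient(roi.astype(float))
    return float(np.sum(np.hypot(gx, gy)))

def contrast(roi: np.ndarray) -> float:
    """Step 1104B: absolute difference of each pixel with its eight
    neighbors, summed over all pixels in the ROI (wrap-around borders)."""
    r = roi.astype(float)
    total = 0.0
    for dy in (-1, 0, 1):
        for dx in (-1, 0, 1):
            if dy == 0 and dx == 0:
                continue
            total += np.sum(np.abs(r - np.roll(np.roll(r, dy, 0), dx, 1)))
    return float(total)

def frequency_ratio(roi: np.ndarray, cutoff: float = 0.25) -> float:
    """Step 1104C: ratio of high- to low-frequency spectral magnitude;
    a larger value indicates the ROI is in focus."""
    f = np.abs(np.fft.fftshift(np.fft.fft2(roi.astype(float))))
    ny, nx = roi.shape
    yy, xx = np.mgrid[0:ny, 0:nx]
    # Normalized radial distance from the DC component (spectrum center)
    rdist = np.hypot(yy - ny / 2, xx - nx / 2) / (min(ny, nx) / 2)
    low = f[rdist <= cutoff].sum()
    high = f[rdist > cutoff].sum()
    return float(high / max(low, 1e-12))
```

Any one of these (or a combination) yields a single focus measure value per slice for a given ROI; a featureless ROI scores near zero on all three, while a sharply resolved structure scores high.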
[0055] Once the focus measure for each slice has been determined, at step 1106, the system may cause to be displayed the image slice having the best focus measure score and/or a series of images having focus measures within a predetermined range, e.g., within 5% of the best focus measure. This process is illustrated for the above-described and illustrated L.sub.CC tomo stack in
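The selection at step 1106 might be sketched as follows (illustrative only; the helper name is an assumption, and the 5% tolerance follows the example above):

```python
def best_and_near_focus(scores, tolerance=0.05):
    """Given one focus measure value per slice, return the index of the
    best-focus slice and the indices of all slices whose score is within
    `tolerance` (e.g., 5%) of the best value."""
    best = max(scores)
    best_idx = scores.index(best)
    near = [i for i, s in enumerate(scores) if s >= best * (1 - tolerance)]
    return best_idx, near

# One focus score per slice of the tomo stack for the selected ROI:
scores = [0.20, 0.55, 0.97, 1.00, 0.96, 0.40]
print(best_and_near_focus(scores))  # (3, [2, 3, 4])
```

Slice 3 would be displayed as the best-focus image, and slices 2 through 4 would form the near-focus series for successive (cine) review.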
[0056] Having described exemplary embodiments, it can be appreciated that the examples described above and depicted in the accompanying figures are only illustrative, and that other embodiments and examples also are encompassed within the scope of the appended claims. For example, while the flow diagrams provided in the accompanying figures are illustrative of exemplary steps, the overall process may be achieved in a variety of manners using other methods known in the art. The system block diagrams are similarly representative only, illustrating functional delineations that are not to be viewed as limiting requirements of the disclosed inventions. Thus, the above specific embodiments are illustrative, and many variations can be introduced on these embodiments without departing from the scope of the appended claims.