SYSTEMS AND METHODS FOR GENERATING ENHANCED DIAGNOSTIC IMAGES FROM 3D MEDICAL IMAGE DATA
20220005258 · 2022-01-06
Inventors
- Benoit Jean-Dominique Bertrand Maurice Mory (Medford, MA, US)
- Emmanuel Moce Serge Attia (Paris, FR)
- Jean-Michel Rouet (Paris, FR)
CPC classification
A61B8/463
HUMAN NECESSITIES
A61B8/5261
HUMAN NECESSITIES
A61B8/5246
HUMAN NECESSITIES
G01S7/52074
PHYSICS
A61B6/5223
HUMAN NECESSITIES
A61B8/523
HUMAN NECESSITIES
G01S15/8925
PHYSICS
G01S7/52071
PHYSICS
A61B8/483
HUMAN NECESSITIES
International classification
A61B8/00
HUMAN NECESSITIES
Abstract
The present disclosure describes a medical imaging and/or visualization system and method that provide a user interface enabling a user to visualize (e.g., via a volume rendering) a three-dimensional (3D) dataset, manipulate the rendered volume to select a slice plane, and generate a diagnostic image at the selected slice plane, enhanced by depth-colorized background information. The background image is produced by blending, preferably based on the depth of structures in the volume, two differently colorized volume renderings; the background image is then fused with a foreground diagnostic image to produce the enhanced diagnostic image.
Claims
1. A medical image data visualization system comprising: an input device connected to a source of 3D medical imaging data; an output device connectable to a display; and a processor connected to the input and the output devices and configured to: receive a 3D dataset representative of a volume of imaged biological tissue; crop the volume at a selected slice plane; generate a foreground image comprising a 2D image of bodily structures at the selected slice plane; generate first and second color volume renderings of the cropped volume from a same viewing perspective, wherein the first and second color volume renderings are associated with respective first and second different color maps; blend the first and second color volume renderings to produce a background image; and combine the foreground image and the background image to produce a composite medical image.
2. The system of claim 1, wherein the processor is configured to produce a first single-channel image and a second single-channel image, and to map, using a 2D color map, pixel values of the first and second single-channel images to a multi-channel image corresponding to the second colored volume rendering.
3. The system of claim 2, wherein the processor is configured to estimate, for each pixel in the first single-channel image, a distance between a viewing plane and a first encountered anatomical structure of the cropped volume, and to encode the estimated distances as the respective pixel values.
4. The system of claim 3, wherein the processor is configured to blend pixel values of each pair of corresponding pixels of the first and second color volume renderings as a function of the estimated distance associated with a given pair of pixels.
5. The system of claim 4, wherein the processor is configured to normalize the estimated distances to a range of values between 0 and 1 prior to blending the first and second color volume renderings.
6. The system of claim 1, wherein the processor is configured to determine a first depth value corresponding to a distance between a viewing plane and a forward most portion of an imaged anatomical structure, a second depth value corresponding to a distance between the viewing plane and an aft most portion of the cropped volume, and to blend the first and second color volume renderings using a blending function based, at least in part, on the first and second depth values.
7. The system of claim 6, wherein the processor is configured to combine the foreground and background images using pixel-wise summation of corresponding pixel values of the foreground and background images.
8. The system of claim 7, wherein the processor is configured to receive user input for adjusting an amount of background and foreground information in the composite medical image, and to scale the pixel-wise summation based on the user input.
9. The system of claim 1, wherein the input device, the output device, and the processor are components of an ultrasound scanner configured to acquire the 3D dataset.
10. A method of visualizing 3D medical image data, the method comprising: displaying a volume rendering of a 3D dataset representative of a volume of imaged biological tissue; cropping the volume responsive to an indication of a selected slice plane; generating a foreground image comprising a 2D slice image at the selected slice plane; generating a first color volume rendering of the cropped volume based on a first color map; generating a second color volume rendering of the cropped volume based on a second different color map; blending the first and second color volume renderings to produce a background image, wherein the blending is based at least in part on estimated depth of structures in the cropped volume; and combining the foreground image and the background image to produce a composite medical image.
11. The method of claim 10, wherein the generating the first color volume rendering comprises applying a physical model of light propagation through biological tissue to the cropped volume and encoding output values from the model in a first multi-channel image.
12. The method of claim 11, wherein the generating the second color volume rendering comprises: producing a single-channel image encoding the estimated depth of structures in the cropped volume; producing a second single-channel image representing a grayscale volume rendering of the cropped volume; and mapping, using a 2D color map, pixel values of the first and second single-channel images to a second multi-channel image that represents the second colored volume rendering.
13. The method of claim 12, wherein producing the second single-channel image includes storing one of the multiple channels of the first color volume rendering as the second single-channel image.
14. The method of claim 10, wherein the displaying a volume rendering of the 3D dataset includes positioning the volume in a virtual 3D space in relation to a viewing plane, the method further comprising: determining a first depth value corresponding to a distance between the viewing plane and a forward most portion of an imaged anatomical structure; determining a second depth value corresponding to a distance between the viewing plane and an aft most portion of the cropped volume; and blending the first and second color volume renderings using a blending function based on the first and second depth values.
15. The method of claim 14, wherein the combining the foreground image and the background image includes summing respective pixel values of the foreground and the background image.
16. The method of claim 15, further comprising applying a scaling factor when summing the respective pixel values of the foreground and the background image.
17. The method of claim 16, further comprising receiving user input for adjusting an amount of background and foreground information in the composite medical image, and adjusting the scaling factor based on the user input.
18. The method of claim 10, further comprising acquiring the 3D dataset of medical imaging data using an ultrasound scanner, and wherein the generating the foreground and background images is performed by one or more processors of the ultrasound scanner.
19. (canceled)
20. The method of claim 10, wherein the generating the foreground image comprises averaging imaging data from a plurality of adjacent imaging planes including the slice plane.
21. A non-transitory computer-readable medium comprising executable instructions, which when executed cause one or more processors of a medical imaging system to perform the method of claim 10.
Description
DETAILED DESCRIPTION OF EMBODIMENTS
[0023] The following description of certain exemplary embodiments is merely exemplary in nature and is in no way intended to limit the invention or its applications or uses. In the following detailed description of embodiments of the present systems and methods, reference is made to the accompanying drawings which form a part hereof, and in which are shown by way of illustration specific embodiments in which the described systems and methods may be practiced. These embodiments are described in sufficient detail to enable those skilled in the art to practice the presently disclosed systems and methods, and it is to be understood that other embodiments may be utilized and that structural and logical changes may be made without departing from the spirit and scope of the present system. Moreover, for the purpose of clarity, certain features will not be described in detail when they would be apparent to those with skill in the art, so as not to obscure the description of the present system. The following detailed description is therefore not to be taken in a limiting sense, and the scope of the present system is defined only by the appended claims.
[0024] Displaying images of 3D volumes (or 3D datasets) on a 2D screen involves either slicing (e.g., generating one or more MPR views at a specified slice plane through the volume) or displaying a rendering of the volume (also referred to as a volume rendering). 2D slice images extracted from a 3D volume can show small details and subtle variations of tissue texture that may be difficult to provide in a 3D rendered image. However, because slice images are generally produced from the image data associated only with the slice plane, or by averaging image data from a small number of neighboring planes, they do not provide depth information. Conventional volume renderings, on the other hand, provide depth information and can therefore enable the visualization and understanding of the 3D shape of an anatomical structure, but may not be sufficiently detailed or accurate for diagnostic measurements. In accordance with the principles of the present disclosure, a system and method for a 3D visualization mode that shows structures at different depths on a single image, including a 3D rendered background and a 2D diagnostic foreground image, are described. In some examples, the techniques described herein may involve enriching a photo-realistic rendering with artificial depth colorization and MPR fusion to provide anatomical context to diagnostic images.
[0025] In accordance with principles of the present invention, a medical visualization and/or imaging system (e.g., an ultrasound system) is configured to produce a diagnostic image (e.g., a multiplanar reconstruction (MPR) image) with depth-colorized photo-realistic context, which may also be referred to herein as an enhanced MPR image.
[0027] In some embodiments, the source of 3D medical image data 210 may be an ultrasound scanner or a medical scanner of a different modality (e.g., MRI, CT, PET, etc.). In some such embodiments, some or all of the components of system 200 may be incorporated into a medical imaging system 240, for example an ultrasound imaging system, as described further below.
[0028] The memory 225 may be configured to store one or more intermediate images 227 produced by processor 220 and used in generating the final composite image. The memory 225 may also store executable instructions and/or parameters 229 (e.g., color maps, scaling factors, etc.) for volume rendering and/or blending image data produced by processor 220. The user interface 230 may include a display device 232 operable to display the image 204 and a user-input device 234 configured to receive user input(s), e.g., for manipulation of the 3D image data and/or image 204. The components and the arrangement thereof described here are illustrative only, and variations, such as combining, rearranging, adding, or omitting components, are possible.
[0030] As shown in block 324, the processor 304 may receive user input 312 for manipulating the volume 401 within the virtual 3D space, e.g., to reposition the volume 401 (e.g., adjust the distance and/or orientation of the volume) with respect to the viewing plane 405. As is known, volume rendering algorithms may utilize a physical model of how light intersects and/or reflects from structures in the volume (represented by non-zero data within the 3D dataset) and output this information as either single-channel or multi-channel pixel data for producing a grayscale or color image, respectively. In addition to manipulating the volume 401 to reposition it, the processor 304 may be further configured to receive user input 312 for cropping the volume at a selected slice plane 406. Upon receipt of a selected slice plane, the processor 304 may in effect remove the voxels in front of the slice plane (i.e., between the slice plane 406 and the viewing plane 405) and produce a volume rendering of the cropped volume 408 including pixel data representative only of the voxels at and behind the slice plane. The rendering may be updated following any further manipulation of the volume and/or rendering parameters (e.g., light source position, intensity, etc.) by the user.
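For illustration, the voxel-removal step can be sketched as follows, assuming an orthographic setup in which the volume is a NumPy array and the slice plane is given by a point and a normal pointing away from the viewing plane; the function name and array conventions are illustrative and not taken from the disclosure.

```python
import numpy as np

def crop_volume_at_plane(volume, plane_point, plane_normal):
    """Zero out voxels lying in front of the slice plane (viewing-plane side).

    volume:       3D array of scalar voxel values, indexed (z, y, x).
    plane_point:  a point (x, y, z) on the selected slice plane.
    plane_normal: plane normal (x, y, z), pointing away from the viewing plane.
    """
    z, y, x = np.indices(volume.shape)
    coords = np.stack([x, y, z], axis=-1).astype(float)
    # Signed distance of each voxel center from the slice plane; negative
    # values lie between the viewing plane and the slice plane.
    dist = (coords - np.asarray(plane_point, float)) @ np.asarray(plane_normal, float)
    cropped = volume.copy()
    cropped[dist < 0] = 0  # remove voxels in front of the slice plane
    return cropped
```

The cropped array can then be passed unchanged to whatever rendering engine produces the volume rendering of the cropped volume 408.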
[0031] Once an indication of a slice plane 406 has been received, the processor 304 is programmed to produce a 2D image at the slice plane, as shown in block 332. This may be done using any known technique for generating 2D medical images, such as multiplanar reformatting or reconstruction. The 2D image 334 (e.g., an MPR image) is considered a diagnostic image in that it is generally configured to convey a sufficient level of detail of medical imaging information as may be needed by a clinician to make diagnostic decisions. In contrast, the volume rendering produced at block 322 would not typically be considered a diagnostic image, as it would not typically provide a sufficient level of detail for diagnostic purposes. In some embodiments, the 2D image 334 may be a thick-slice image produced by averaging imaging data from a plurality of adjacent imaging planes including the slice plane.
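A minimal sketch of this slice-image step, assuming an axis-aligned slice plane through a NumPy volume; the `thickness` parameter implements the thick-slice averaging variant noted above. An oblique plane would additionally require resampling (e.g., trilinear interpolation), which is omitted here.

```python
import numpy as np

def extract_slice(volume, index, axis=0, thickness=1):
    """Extract a 2D slice image; average `thickness` adjacent planes
    (including the slice plane) to produce a thick-slice image."""
    half = thickness // 2
    lo = max(index - half, 0)
    hi = min(index + half + 1, volume.shape[axis])
    sl = [slice(None)] * volume.ndim
    sl[axis] = slice(lo, hi)
    # Averaging over the slab of neighboring planes yields the thick slice.
    return volume[tuple(sl)].mean(axis=axis)
```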
[0032] In addition to the 2D diagnostic image, the processor 304 is further programmed to produce a number of additional images (e.g., images 510, 512, 514, and 518, described further below), including two differently colorized volume renderings of the cropped volume, which are blended based at least in part on the estimated depth of structures in the volume to produce a blended color volume rendering 338.
[0033] As shown in block 340, the processor 304 is configured to combine the 2D image 334, which provides the foreground information, with the blended color volume rendering 338, which provides the background information, to produce a composite medical image 342. The processor 304 may be configured to receive user input, as shown in block 314, to adjust one or more parameters of the combining process. For example, the processor may be configured to provide a user control via the user interface 306 for receiving user input to control the amount of foreground and background information included in the combined image 342. The combined image 342 may be output by processor 304 for storage (e.g., block 318) or display (e.g., block 316), after which the process may terminate (at block 348).
[0035] The color image 518 is produced by colorizing a volume rendering of the input dataset 408 according to a first colorization scheme (e.g., 2D color map 516). To produce color image 518, the volume-rendering engine first generates a luminance image 514, which may be a grayscale volume rendering of the 3D dataset. This grayscale volume rendering is stored as a single-channel image 514. The grayscale rendering is then colorized based on the depth of the structures represented therein. This step adds depth cues to the rendering 514. To colorize the rendering 514, another grayscale image is output by the rendering engine—a grayscale depth map image 512, which encodes and stores the estimated depths of the structures in the image in a single-channel image. That is, the depth map image 512 may be generated by estimating, for each pixel in an image, a depth z (or distance from the viewing plane) to the first encountered anatomical structure (or non-zero value in the 3D dataset) along a ray passing through the given pixel. The estimated depth is then encoded and stored as the grayscale value of the given pixel, with darker pixels indicating structures closer to the viewing plane and conversely, lighter pixels indicating structures farther away from the viewing plane.
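As a sketch of this depth-estimation step under a simplified orthographic model (rays parallel to one array axis, a threshold standing in for "non-zero data"), the per-pixel depth can be taken as the index of the first voxel above threshold along each ray; this simplification is an assumption, not the disclosed renderer.

```python
import numpy as np

def depth_map(volume, threshold=0.0):
    """Per-pixel distance (in voxels) from the viewing plane to the first
    structure encountered along each ray (rays run along axis 0)."""
    hit = volume > threshold
    # argmax over a boolean array returns the index of the first True.
    first = np.argmax(hit, axis=0).astype(float)
    # Rays that hit nothing are assigned the maximum depth (back of volume).
    first[~hit.any(axis=0)] = volume.shape[0]
    return first
```

Scaling the result to [0, 1] and storing it as a grayscale image corresponds to the depth map image 512, with darker values nearer the viewing plane.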
[0036] As described, the luminance image 514 may be generated by applying a physical model of how light intersects and/or reflects off the bodily structures represented by the 3D dataset and encoding and storing this information (e.g., the estimated reflected light at each given pixel) as the grayscale value of each pixel. The luminance image may be produced by any suitable volume rendering technique capable of outputting a grayscale rendering of a volumetric dataset. As noted, the luminance image 514 is dependent on the viewing perspective and the location of the virtual light source (e.g., light source 409 in the virtual 3D space).
[0037] As further shown, the luminance image 514 and the depth map image 512 are passed through a 2D color map 516, which maps each pair of corresponding pixel values (luminance, depth) to a multi-channel (e.g., RGB) pixel value, thereby producing the color image 518.
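A sketch of the 2D color-map lookup, assuming luminance and normalized depth values in [0, 1] and a precomputed RGB lookup table standing in for color map 516; the table shape and quantization scheme are assumptions.

```python
import numpy as np

def apply_2d_colormap(luminance, depth_norm, lut):
    """Map per-pixel (luminance, depth) pairs through a 2D color map to RGB.

    luminance, depth_norm: (H, W) arrays with values in [0, 1].
    lut: (L, D, 3) lookup table; rows index luminance, columns index depth.
    """
    li = np.clip((luminance * (lut.shape[0] - 1)).astype(int), 0, lut.shape[0] - 1)
    di = np.clip((depth_norm * (lut.shape[1] - 1)).astype(int), 0, lut.shape[1] - 1)
    return lut[li, di]  # (H, W, 3) multi-channel image, e.g., color image 518
```

A table whose hue shifts (e.g., toward blue) with increasing depth while brightness tracks luminance reproduces the depth-cue effect described above.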
[0039] The blending block 610 may apply a convex blending algorithm to combine the color data of the two input color images (e.g., images 510 and 518). For example, the image data may be combined according to the function I = (1 − Z)·T + Z·D, where T is the color image 510 output natively from the volume rendering engine, D is the color image 518 produced by colorizing the luminance image based on the depth information, and Z is the normalized depth value for the corresponding pixel. By applying a convex combination using (1 − Z) and Z as blending factors, the hue of near-field structures (Z≈0) may be better preserved, gradually becoming more artificial in the far-field (Z≈1), which can improve the color contrast of foreground structures while providing enhanced depth cues to physically-based rendered images.
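Written out directly, with T and D as (H, W, 3) float images and Z as an (H, W) normalized depth map, the convex blend is a one-liner; this sketch assumes images normalized to [0, 1].

```python
import numpy as np

def blend_renderings(T, D, Z):
    """I = (1 - Z)*T + Z*D: near-field pixels (Z ~ 0) keep the native hue
    of T; far-field pixels (Z ~ 1) take the depth-colorized hue of D."""
    Zc = Z[..., None]  # broadcast the depth weight over the RGB channels
    return (1.0 - Zc) * T + Zc * D
```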
[0040] As further illustrated, the processor 600 may normalize, in block 612, the estimated depth values (z) encoded in the depth map image 512 to depth values (Z) in the range Z∈[0,1]. For example, a saturated affine function may be defined in block 612, by which any point in the 3D space for which the volume renderer has estimated a depth below d_min is assigned Z=0, and thus the corresponding RGB values will remain untransformed, while any point that has been estimated to be at a distance beyond d_max will be assigned Z=1, which corresponds to a maximally transformed hue (e.g., blue). Within the range d_min to d_max, the hue is gradually (e.g., linearly) altered according to depth. In some embodiments, the values of d_max and d_min may be user-defined (e.g., depending on the specific imaging application (e.g., the type of tissue or organ being imaged) or user preference).
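The saturated affine mapping of block 612 may be sketched as follows, with clamping implementing the saturation at d_min and d_max:

```python
import numpy as np

def normalize_depth(z, d_min, d_max):
    """Saturated affine map: Z = 0 below d_min, Z = 1 beyond d_max,
    linear in between."""
    return np.clip((z - d_min) / (d_max - d_min), 0.0, 1.0)
```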
[0041] In one preferred embodiment, the processor 600 may automatically define the scalar-valued function that maps the estimated depth to the range Z∈[0,1], as shown in block 612, and more specifically the d_min and d_max values to be mapped to 0 and 1, respectively. Because a change of viewpoint can significantly affect the observed range of depth in the rendered image, the depth range should preferably be adapted to the position of the camera. The depth range may be automatically defined for any given viewpoint as follows. Generally, the depth range (i.e., d_min and d_max) may be defined based on geometry or based on content. For example, a geometry-based definition may set d_min and d_max to the distances between the viewing plane and the front and back faces of the cropped volume, whereas a content-based definition may set d_min and d_max to the distances between the viewing plane and the nearest and farthest imaged structures.
[0042] Each of the two techniques (geometry-based or content-based) has advantages and limitations. For example, adapting the depth estimation to the actual data tends to be less stable under 3D rotation and image noise, which may induce abrupt color changes. Also, opaque but invisible voxels that are occluded can still influence the estimation of the background location, leading to unintuitive effects on colors. Estimations based only on geometry are faster (there is no data to interrogate) and potentially more stable, but in general, especially when portions of the volume are empty, they tend to provide too broad a depth range, resulting in insufficient color contrast between structures at different depths. For instance, 3D rotations can lead to corner cases where the front-plane reference is located quite far from any content.
[0043] In some examples, a hybrid strategy combining the two techniques may be used, e.g., adapting one depth bound to the imaged content while deriving the other from the geometry of the cropped volume, as illustrated in the sketch below.
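The following sketch contrasts the three depth-range definitions, assuming an orthographic setup in which front_face and back_face are distances from the viewing plane to the cropped volume's faces and depth is the per-ray depth map of image 512; assigning the near bound to content and the far bound to geometry in the hybrid case is an illustrative choice, not a prescription from the disclosure.

```python
import numpy as np

def geometry_depth_range(front_face, back_face):
    """Depth range from the bounding geometry of the cropped volume alone."""
    return float(front_face), float(back_face)

def content_depth_range(depth, background_depth):
    """Depth range from the rendered content: nearest and farthest structures.
    Assumes at least one ray hits a structure."""
    hits = depth[depth < background_depth]  # discard rays that hit nothing
    return float(hits.min()), float(hits.max())

def hybrid_depth_range(depth, background_depth, back_face):
    """Near bound adapted to the content, far bound fixed by geometry."""
    d_min, _ = content_depth_range(depth, background_depth)
    return d_min, float(back_face)
```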
[0045] In some embodiments, the processor (e.g., processor 304) may be configured to receive user input 711 for adjusting the amount of background and foreground information in the final combined medical image 706. In such embodiments, the processor may be programmed to provide a user control (e.g., a soft control presented on a touch-screen interface or the like) to allow the user to indicate the amount of desired foreground and background information in the composite image.
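A sketch of the fusion step with a user-adjustable mix, treating the scaled pixel-wise summation of the foreground and background images as a convex mix; the single `mix` slider in [0, 1] is an illustrative stand-in for the user control.

```python
import numpy as np

def fuse_images(foreground, background, mix=0.5):
    """Scaled pixel-wise summation of foreground and background images.
    mix = 1 shows only the 2D diagnostic detail; mix = 0 shows only the
    depth-colorized context."""
    if foreground.ndim == background.ndim - 1:
        foreground = foreground[..., None]  # broadcast a grayscale MPR over RGB
    out = mix * foreground + (1.0 - mix) * background
    return np.clip(out, 0.0, 1.0)
```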
[0047] In some embodiments, a medical image data visualization system in accordance with the principles described herein may be incorporated into an ultrasound scanner or any other type of medical imaging system.
[0048] The ultrasound imaging system 910 may include one or more of the components of the visualization system described above.
[0049] The ultrasound imaging system 910 includes an ultrasound transducer array 914 for transmitting ultrasonic waves and receiving echo information. The transducer array 914 may be coupled to a microbeamformer 916, which may control the transmission and reception of signals by subarrays (patches) of transducer elements. Transmission of ultrasonic pulses from the array 914 is directed by a transmit controller 920.
[0050] One of the functions controlled by the transmit controller 920 is the direction in which beams are steered. Beams may be steered straight ahead from (orthogonal to) the transducer array 914, or at different angles for a wider field of view. The partially beamformed signals produced by the microbeamformer 916 are coupled to a main beamformer 922 where partially beamformed signals from individual patches of transducer elements are combined into a fully beamformed signal. The beamformed signals are coupled to a signal processor 926. The signal processor 926 can process the received echo signals in various ways, such as bandpass filtering, decimation, I and Q component separation, and harmonic signal separation. The signal processor 926 may also perform additional signal enhancement such as speckle reduction, signal compounding, and noise elimination. The processed signals are coupled to a B-mode processor 928, which can employ amplitude detection for the imaging of structures in the body. The signals produced by the B-mode processor 928 are coupled to a scan converter 930 and a multiplanar reformatter 932. The scan converter 930 arranges the echo signals in the spatial relationship from which they were received in a desired image format. For instance, the scan converter 930 may arrange the echo signals into a two dimensional (2D) sector-shaped format, or a pyramidal or otherwise shaped three dimensional (3D) format. The multiplanar reformatter 932 can convert echoes which are received from points in a common plane in a volumetric region of the body into an ultrasonic image (e.g., a B-mode image) of that plane, as described in U.S. Pat. No. 6,443,896 (Detmer). The multiplanar reformatter 932 may thus reconstruct a 2D image (an MPR image) from a 3D (volumetric) dataset. The acquired image data may also be coupled to a volume renderer 934, which can convert the echo signals of a 3D dataset into a projected image of the 3D dataset as viewed from a given reference point (also referred to as a volume rendering), e.g., as described in U.S. Pat. No. 6,530,885 (Entrekin et al.). The volume renderer 934 may be configured to produce volume renderings and output any other intermediate images, as described herein, for the purpose of producing a composite medical image in accordance with the present disclosure. For example, the volume renderer may be configured to output the intermediate images described above (e.g., the luminance image, the depth map image, and the colorized volume renderings).
[0051] In some embodiments, the volume renderer 934 may receive input from the user interface 924. The input may include an indication of a selected slice plane, user input for manipulating the volume, e.g., to reposition the volume and/or the light source within the 3D space, or the like. Additionally, the processor 928, which may include or complement the functionality of the volume renderer 934, may also receive inputs to adjust other parameters of the process, for example for setting blending factors, for invoking automatic definition of blending parameters, and/or for automatic generation of enhanced diagnostic images when in a given visualization (e.g., enhanced volume inspection) mode.
[0052] In some embodiments, the processor 928 may include an image processor 936 configured to perform enhancements to the images output from the scan converter 930, multiplanar reformatter 932, and/or volume renderer 934. Images produced by the scan converter 930, multiplanar reformatter 932, and/or volume renderer 934 may be coupled to the image processor 936 for further enhancement, buffering, and temporary storage prior to display on the display unit 938. In some embodiments, the image processor 936 may implement one or more of the functions of the processor described herein, e.g., the blending and fusing functions described above.
[0054] For example, in some embodiments, generating the first color volume rendering may include applying a physical model of light propagation through biological tissue to the cropped volume (e.g., to assign hues (or pixel values) to each pixel in the rendering) and encoding the output values from the model in a first multi-channel image. This physical model (or volume-rendering engine) may be implemented according to any known technique (e.g., ray casting, splatting, shear-warp rendering, etc.) to natively produce a colored volume rendering of a 3D dataset. In some such embodiments, generating the second color volume rendering may include producing a single-channel image encoding the estimated depth of structures in the cropped volume, producing a second single-channel image representing a grayscale volume rendering of the cropped volume, and mapping, using a 2D color map, pixel values of the first and second single-channel images to a second multi-channel image that represents the second colored volume rendering. In some embodiments, producing the second single-channel image may include storing one of the multiple channels of the first color volume rendering as the second single-channel image.
[0055] In some embodiments, displaying the volume rendering of the 3D dataset may include positioning the volume in a virtual 3D space in relation to a viewing plane, and the method may further include determining a first depth value corresponding to a distance between the viewing plane and a forward most portion of an imaged anatomical structure, determining a second depth value corresponding to a distance between the viewing plane and an aft most portion of the cropped volume, and blending the first and second color volume renderings using a blending function based on the first and second depth values.
[0056] In some embodiments, combining the foreground image and the background image may include summing respective pixel values of the foreground and the background image. In some such embodiments, the method may further include applying at least one scaling factor when summing the respective pixel values of the foreground and the background image, wherein one or more of the scaling factors may be derived based on user inputs. In some such embodiments, the method may include receiving user input for adjusting an amount of background and foreground information in the composite medical image and adjusting at least one scaling factor utilized in the combining of the foreground and background images based on the user input.
[0057] In some embodiments, the steps of a method according to the present disclosure may be performed by an ultrasound scanner and the method may further include acquiring the 3D dataset of medical imaging data using the ultrasound scanner. Consequently, generating the foreground and background images and combining the images to produce a composite medical image may be performed by one or more processors of the ultrasound scanner. In some embodiments, the 3D dataset may include ultrasound imaging data and generating the foreground image may include generating an MPR image at a slice plane through the 3D ultrasound data. In some embodiments, generating the foreground image may include averaging imaging data from a plurality of adjacent imaging planes including the slice plane, e.g., to produce a thick-slice 2D image.
[0058] In various embodiments where components, systems and/or methods are implemented using a programmable device, such as a computer-based system or programmable logic, it should be appreciated that the above-described systems and methods can be implemented using any of various known or later developed programming languages, such as "C", "C++", "FORTRAN", "Pascal", "VHDL" and the like. Accordingly, various storage media, such as magnetic computer disks, optical disks, electronic memories and the like, can be prepared that can contain information that can direct a device, such as a computer, to implement the above-described systems and/or methods. Once an appropriate device has access to the information and programs contained on the storage media, the storage media can provide the information and programs to the device, thus enabling the device to perform functions of the systems and/or methods described herein. For example, if a computer disk containing appropriate materials, such as a source file, an object file, an executable file or the like, were provided to a computer, the computer could receive the information, appropriately configure itself and perform the functions of the various systems and methods outlined in the diagrams and flowcharts above to implement the various functions. That is, the computer could receive various portions of information from the disk relating to different elements of the above-described systems and/or methods, implement the individual systems and/or methods and coordinate the functions of the individual systems and/or methods described above.
[0059] In view of this disclosure it is noted that the various methods and devices described herein can be implemented in hardware, software and firmware. Further, the various methods and parameters are included by way of example only and not in any limiting sense. In view of this disclosure, those of ordinary skill in the art can implement the present teachings in determining their own techniques and needed equipment to effect these techniques, while remaining within the scope of the invention.
[0060] Although the present system may have been described with particular reference to an ultrasound imaging system, it is also envisioned that the present system can be extended to other medical imaging systems where one or more images are obtained in a systematic manner. Accordingly, the present system may be used to obtain and/or record image information related to, but not limited to renal, testicular, breast, ovarian, uterine, thyroid, hepatic, lung, musculoskeletal, splenic, cardiac, arterial and vascular systems, as well as other imaging applications related to ultrasound-guided interventions. Further, the present system may also include one or more programs which may be used with conventional imaging systems so that they may provide features and advantages of the present system. Certain additional advantages and features of this disclosure may be apparent to those skilled in the art upon studying the disclosure, or may be experienced by persons employing the novel system and method of the present disclosure. Another advantage of the present systems and method may be that conventional medical image systems can be easily upgraded to incorporate the features and advantages of the present systems, devices, and methods.
[0061] Of course, it is to be appreciated that any one of the examples, embodiments or processes described herein may be combined with one or more other examples, embodiments and/or processes or be separated and/or performed amongst separate devices or device portions in accordance with the present systems, devices and methods.
[0062] Finally, the above-discussion is intended to be merely illustrative of the present system and should not be construed as limiting the appended claims to any particular embodiment or group of embodiments. Thus, while the present system has been described in particular detail with reference to exemplary embodiments, it should also be appreciated that numerous modifications and alternative embodiments may be devised by those having ordinary skill in the art without departing from the broader and intended spirit and scope of the present system as set forth in the claims that follow. Accordingly, the specification and drawings are to be regarded in an illustrative manner and are not intended to limit the scope of the appended claims.