High volume rate 3D ultrasonic diagnostic imaging
11596383 · 2023-03-07
Assignee
Inventors
- David Prater (Andover, MA, US)
- Stephen P. Watkins (Windham, NH, US)
- William Robert Martin (Westford, MA, US)
CPC classification
A61B8/463
HUMAN NECESSITIES
G01S7/52085
PHYSICS
A61B8/543
HUMAN NECESSITIES
A61B8/5215
HUMAN NECESSITIES
A61B8/4483
HUMAN NECESSITIES
G01S15/8925
PHYSICS
A61B8/483
HUMAN NECESSITIES
G01S7/5208
PHYSICS
International classification
A61B8/00
HUMAN NECESSITIES
Abstract
A 3D ultrasonic diagnostic imaging system produces 3D display images at a 3D display frame rate equal to the acquisition rate of a 3D image dataset. The volumetric region being imaged is sparsely sub-sampled by separated scanning beams. Spatial locations between the beams are filled in with interpolated values or interleaved with acquired data values from other 3D scanning intervals, depending upon the existence of motion in the image field. A plurality of different beam scanning patterns is used, each having different spatial locations at which beams are transmitted and omitted. In a preferred embodiment the determination of motion, and the consequent decision to use interpolated or interleaved data for display, is made on a pixel-by-pixel basis.
Claims
1. A computer-readable, non-transitory medium storing software code representing instructions that, when executed by a computing system, cause the computing system to perform a method of displaying three-dimensional (3D) image data at a high frame rate, the method comprising: receiving at least two datasets of a volumetric region comprising moving tissue, wherein each dataset is obtained using spatially different scanline patterns at different acquisition times during tissue movement and comprises image data corresponding to sampled locales within the volume; attributing first data values to image data of a first dataset and second data values to image data of a second dataset, wherein the first and second data values correspond to respective sampled locales of the first and second datasets; generating interpolated data representing the volumetric region between the sampled locales of the first dataset; attributing values to the interpolated data, thereby generating interpolated data values; comparing the second data values to the interpolated data values; and generating an image dataset, wherein the image dataset comprises image data of the first dataset and either: interpolated data when a first interpolated data value is substantially different from a second data value at the same sampled locale; or a portion of data from the second dataset when a first interpolated data value is substantially the same as a second data value at the same sampled locale.
2. The computer-readable, non-transitory medium of claim 1, wherein the image data is ultrasound image data.
3. The computer-readable, non-transitory medium of claim 1, wherein the method performed further comprises displaying the image dataset.
4. The computer-readable, non-transitory medium of claim 1, wherein the portion of data from the second dataset corresponds to the volumetric region between the sampled locales of the first data set.
5. A method for generating a three-dimensional (3D) image, the method comprising: receiving at least two datasets of a volumetric region comprising moving tissue, wherein each dataset is obtained using spatially different scanline patterns at different acquisition times during tissue movement and comprises image data corresponding to sampled locales within the volume; attributing first data values to image data of a first dataset and second data values to image data of a second dataset, wherein the first and second data values correspond to respective sampled locales of the first and second datasets; generating interpolated data representing the volumetric region between the sampled locales of the first dataset; attributing values to the interpolated data, thereby generating interpolated data values; comparing the second data values to the interpolated data values; and generating an image dataset, wherein the image dataset comprises image data of the first dataset and either: interpolated data when a first interpolated data value is substantially different from a second data value at the same sampled locale; or a portion of data from the second dataset when a first interpolated data value is substantially the same as a second data value at the same sampled locale.
6. The method of claim 5, wherein the image data is ultrasound image data.
7. The method of claim 5, wherein the method performed further comprises displaying the image dataset.
8. The method of claim 5, wherein the portion of data from the second dataset corresponds to the volumetric region between the sampled locales of the first data set.
Description
(1) In the drawings: (brief descriptions of the figures, paragraphs (2)-(8), not reproduced)
(9) Referring first to
(10) The receive beams formed by the beamformer 18 are coupled to a signal processor which performs functions such as filtering and quadrature demodulation. The echo signals of the processed receive beams are coupled to a Doppler processor 30 and/or a B mode processor 24. The Doppler processor 30 processes the echo information into Doppler power or velocity information. For B mode imaging the receive beam echoes are envelope detected and the signals logarithmically compressed to a suitable dynamic range by the B mode processor 24. The echo signals from the volumetric region are processed to form a 3D image dataset by a 3D image processor as described more fully below. The 3D image data may be processed for display in several ways. One way is to produce multiple 2D planes of the volume, as described in U.S. Pat. No. 6,443,896 (Detmer). Such planar images of a volumetric region are produced by a multi-planar reformatter 34. The three-dimensional image data may also be rendered to form a perspective or kinetic parallax 3D display by a volume renderer 36. The resulting images, which may be B mode, Doppler or both as described in U.S. Pat. No. 5,720,291 (Schwartz), are coupled to a display processor 38, from which they are displayed on an image display 40. User control of the beamformer controller 22 and other functions of the ultrasound system is provided through a user interface or control panel 20.
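The B mode path described in paragraph (10) can be sketched in a few lines: the magnitude of the quadrature-demodulated (I/Q) echo data gives the envelope, which is then logarithmically compressed to a display dynamic range. This is a minimal illustrative sketch, not the patent's implementation; the function name, the 60 dB default, and the synthetic echo line are assumptions.

```python
import numpy as np

def b_mode_process(iq, dynamic_range_db=60.0):
    """Illustrative sketch: envelope detection of quadrature-demodulated
    (I/Q) echo samples followed by logarithmic compression."""
    # Envelope detection: magnitude of the complex I/Q samples.
    envelope = np.abs(iq)
    # Logarithmic compression, normalized so the peak maps to 0 dB,
    # then clipped to the chosen display dynamic range.
    eps = np.finfo(float).tiny
    db = 20.0 * np.log10(envelope / (envelope.max() + eps) + eps)
    return np.clip(db, -dynamic_range_db, 0.0)

# Example: a synthetic, exponentially decaying echo line.
line = np.exp(-np.linspace(0, 5, 100)) * np.exp(1j * np.linspace(0, 40, 100))
compressed = b_mode_process(line)
```

The normalization to the peak value means the brightest sample always maps to 0 dB, with weaker echoes spread over the chosen dynamic range below it.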
(11) In order to provide 3D images which are highly resolved and free of sampling artifacts, the volumetric region being imaged must be spatially sampled with a beam density that satisfies the Nyquist criterion, as explained in U.S. patent application publication no. 2007/0123110 (Schwartz). Some implementations of the present invention will spatially sample a volume near or below the threshold of this spatial sampling criterion. The 3D data from such low density scanning of a volumetric region is referred to herein as 3D sub-sampled volume data. The 3D sub-sampled volume data may be B mode data, Doppler data, or a combination of the two. Such 3D sub-sampled volume datasets are coupled from the B mode processor 24 and/or the Doppler processor 30 to a memory device 50.
(12) In accordance with a further aspect of the present invention, the ultrasonic imaging system determines whether to display an interpolated 3D dataset SSV.sub.I, or a 3D dataset which is an interleaved combination of two or more 3D datasets. The ultrasound system does this by determining which dataset will produce the highest quality image. If the region being imaged is moving, such as a rapidly beating heart, or if the probe is being moved as the datasets are acquired, the motion will affect the image quality. The time interval between the acquisitions of two spatially different image data points in the same sub-sampled volume will be less than the time interval between two spatially adjacent image points acquired in different sub-sampled volume acquisitions. This means that interpolated display values between samples in the same sub-sampled volume will be less affected by motion than will spatially adjacent samples from two different sub-sampled volumes, because the data values used for the interpolation will be acquired more closely in time. The samples from different, even consecutive, sub-sampled volumes will be more widely separated in time and hence more susceptible to motion distortion. Comparator 54 makes this determination by comparing an interpolated data value with an actually acquired data value at the same spatial location, producing a Select signal which indicates whether significant motion is present.
(13) The Select signal from the comparator 54 is coupled to a processor which selects either the interpolated sub-volume SSV.sub.I when motion is present, or interleaves the earlier acquired data points (SSV.sub.D-1) with the recently acquired data points (SSV.sub.D). The selected 3D image dataset is forwarded on for subsequent processing (e.g., volume rendering, display processing) and display.
(16) In the first sampling pattern P1, the first and third locations in successive rows are sampled. In the other sampling patterns P2-P4, different spatial locations are sub-sampled. In P2 the fourth and second locations in successive rows are sampled. In P3 the third and first locations in successive rows are sampled. And in pattern P4 the second and fourth locations in successive rows are sampled. After the volumetric region has been scanned with these four patterns, each producing its own sub-sampled volume (SSV), it is seen that all spatial locations have been sampled once. The sequence of scanning patterns then repeats with subsequent scans of the volumetric region. It can also be seen that if the samples from the four patterns are interleaved or merged together, a fully sampled volume is produced. Interleaving the samples from all four patterns will produce one pattern in which all sixteen spatial locations comprise sampled (acquired) values, albeit acquired over four volume scan intervals. If there were no motion in the volume, the interleaving of the samples from the four patterns will produce a well resolved and undistorted volume image. When the sequence repeats, the next pattern which is scanned, a repeat of pattern P1, produces samples which are used to replace the samples from the earlier scan with pattern P1. In this way a portion (one-quarter in this example) of the volumetric data is updated with each new scan with a different pattern. After the four scans with the four patterns have been repeated, all of the sample values of the interleaved volume have been updated.
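The four sampling patterns above can be sketched as boolean masks over a 4x4 grid of spatial locations, confirming that interleaving all four covers every location exactly once. The grid size, 0-based indexing, and helper name are illustrative assumptions; the column choices follow the pattern descriptions in the text.

```python
import numpy as np

def pattern_mask(first_row_col, second_row_col, size=4):
    """Boolean sampling mask: even-numbered rows sample column
    first_row_col, odd-numbered rows sample column second_row_col
    (0-based indices)."""
    mask = np.zeros((size, size), dtype=bool)
    mask[0::2, first_row_col] = True
    mask[1::2, second_row_col] = True
    return mask

# P1: 1st and 3rd locations in successive rows; P2: 4th and 2nd;
# P3: 3rd and 1st; P4: 2nd and 4th (as described in the text).
P1 = pattern_mask(0, 2)
P2 = pattern_mask(3, 1)
P3 = pattern_mask(2, 0)
P4 = pattern_mask(1, 3)

# Each pattern samples one-quarter of the 16 locations; interleaving the
# samples from all four patterns covers every location exactly once.
coverage = P1.astype(int) + P2 + P3 + P4
```

Since `coverage` is 1 everywhere, a repeat of any one pattern updates exactly its own quarter of the interleaved volume, as the text describes.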
(17) But if there is motion in the volumetric region during the time required to scan with the four patterns, an interleave of the four patterns will produce a poorly resolved or distorted image dataset. This problem is prevented by interleaving fewer than all four scans and filling in unsampled spatial locations with interpolated values. At a minimum, only a single pattern dataset is used, with missing samples filled in by interpolation. For the P1 pattern, for example, the missing value O.sub.1 is interpolated from the acquired values X.sub.1 and X.sub.3 by
O.sub.1=AVG{X.sub.1,X.sub.3}
The interpolated O.sub.1 value is then used with the value of X.sub.2 to compute a value for O.sub.2 by
O.sub.2=AVG{O.sub.1,X.sub.2}
Similarly, X.sub.2 and X.sub.4 are used to compute a value for O.sub.4 by
O.sub.4=AVG{X.sub.2,X.sub.4}
and X.sub.3 and O.sub.4 are used to compute a value for O.sub.3 by
O.sub.3=AVG{X.sub.3,O.sub.4}
The other missing values in the matrix of values are similarly filled in by interpolation and/or extrapolation.
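The successive-averaging steps of paragraph (17) can be written out directly. This is a minimal sketch of the four equations above for scalar sample values; the function names are illustrative, and a full implementation would apply the same averaging across the whole matrix of values.

```python
def avg(a, b):
    """Two-point average, as in the AVG{...} operator above."""
    return (a + b) / 2.0

def interpolate_p1(X1, X2, X3, X4):
    """Fill in the missing O values of the P1 pattern from the
    acquired X values by successive averaging."""
    O1 = avg(X1, X3)   # O1 = AVG{X1, X3}
    O2 = avg(O1, X2)   # O2 = AVG{O1, X2}, using the interpolated O1
    O4 = avg(X2, X4)   # O4 = AVG{X2, X4}
    O3 = avg(X3, O4)   # O3 = AVG{X3, O4}, using the interpolated O4
    return O1, O2, O3, O4

# Uniform acquired values yield uniform interpolated values.
print(interpolate_p1(1.0, 1.0, 1.0, 1.0))  # -> (1.0, 1.0, 1.0, 1.0)
```

Note the ordering constraint: O.sub.2 and O.sub.3 each depend on a previously interpolated value (O.sub.1 and O.sub.4 respectively), so those must be computed first.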
(18) The missing values in the P2 pattern on the right side of the drawing are likewise filled in by interpolation. X.sub.1 and X.sub.3 are used to compute a value for O.sub.1; O.sub.1 and X.sub.2 are used to compute a value for O.sub.2; X.sub.2 and X.sub.4 are used to compute a value for O.sub.3; and O.sub.3 and X.sub.3 are used to compute a value for O.sub.4.
(19) When deciding whether to use an interpolated sub-sampled volume or an interleaved sub-sampled volume, a comparison is made of an actually acquired value and an interpolated value at the same spatial location. For instance, the interpolated O.sub.2 value of the 3D dataset produced from pattern P1 is compared with the acquired value X.sub.2 of the 3D dataset produced from pattern P2. If the values are substantially the same, this indicates that there has been no significant motion between the acquisition times of the two 3D datasets. Thus, the actually acquired samples X.sub.1-X.sub.4 of the pattern P2 dataset can be interleaved with the data values of the 3D dataset of pattern P1. In this example this is done by using the X.sub.1 value of P2 for the value of O.sub.0 in P1; using the X.sub.2 value of P2 for the value of O.sub.2 in P1; using the X.sub.3 value of P2 for the value of O.sub.5 in P1; and using the X.sub.4 value of P2 for the value of O.sub.6 in P1. Other acquired values from other 3D datasets acquired with the other patterns can be similarly interleaved if there has been no motion between the source and destination 3D datasets.
(20) On the other hand, if the comparison of X.sub.2 of the P2 dataset with the interpolated O.sub.2 value of the P1 dataset shows a significant difference, then there has been motion between the times of acquisition of the two 3D datasets. In that case the P1 dataset with all “O” values being interpolated and/or extrapolated values would be used for display to minimize distortion and blurring in the 3D image.
(21) In a constructed embodiment of the present invention, the decision of whether to use interpolated or interleaved data for the 3D display is not done on a global basis for the entire image, but on a pixel-by-pixel basis. A given ultrasound image may be expected to have motion in only a portion or certain regions of the image, and not over the entire image. For instance, if 3D imaging is being done of a fetus and the fetus is stationary during the time of imaging, most of the regions of the fetus in the display are not moving from one 3D frame to the next. Accordingly, the display points from these stationary regions, when compared, would indicate that display points can be interleaved from multiple 3D scans to produce a highly resolved image of those areas in the display. The fetal heart, however, is constantly beating and a comparison of display points from temporally discrete scans would indicate motion of the display points of the fetal heart. Thus, interpolation would be used to display the fetal heart region in an image, as the acquired data being used would all be from the same 3D scan and not from multiple, temporally discrete scans. The fetal heart would thus appear at its best quality, undistorted by motional effects, while the rest of the 3D image would be interleaved acquired data points from multiple successive scans. Each region of the 3D display is thereby optimized for the best image quality of display by determining on a display point-by-display point basis whether to use interpolated or interleaved display data at each point in the displayed volume.
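The pixel-by-pixel decision of paragraphs (19)-(21) can be sketched as a vectorized comparison: wherever the acquired value from another scan substantially agrees with the interpolated value, the acquired (interleaved) value is used; wherever they differ significantly, motion is inferred and the interpolated value is kept. The threshold value, array names, and absolute-difference motion test are illustrative assumptions; the patent specifies only that values be "substantially the same" or "substantially different."

```python
import numpy as np

def select_display(interp, acquired, threshold=0.1):
    """Per-pixel selection between interpolated and interleaved data.
    interp: interpolated values at the unsampled locations of the
    current sub-sampled volume; acquired: values actually acquired at
    those same locations in another sub-sampled volume."""
    motion = np.abs(interp - acquired) > threshold  # per-pixel motion test
    # Motion present -> keep interpolated value (same-scan data);
    # no motion -> interleave the acquired value (sharper detail).
    return np.where(motion, interp, acquired)

interp   = np.array([1.00, 2.00, 3.00])
acquired = np.array([1.02, 2.50, 2.95])  # middle pixel moved between scans
display  = select_display(interp, acquired)
```

In the fetal-imaging example above, pixels of the beating heart would fail the agreement test and display interpolated same-scan data, while the stationary surrounding anatomy would pass and display fully interleaved data.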
(22) When the 3D display technique of the present invention is used to image an organ with repetitive motion, such as the beating heart, the scan patterns can be either synchronous or asynchronous with respect to the motional cycle (in the case of the heart, the heartbeat). Asynchronous and synchronous scan patterns are illustrated in the drawings.
(24) With each of the acquisition sequences of