Imaging tissue motion estimation
09964634 · 2018-05-08
Assignee
Inventors
Cpc classification
G01S7/52042
PHYSICS
G01S15/8986
PHYSICS
A61B8/485
HUMAN NECESSITIES
A61B8/5207
HUMAN NECESSITIES
International classification
Abstract
A motion processor (118) includes a motion estimator (306) that iteratively estimates a motion between a pair of consecutive frames of pre-processed echoes, wherein the motion estimator (306) generates the estimated motion based on at least one iteration. A method includes iteratively estimating tissue motion between a pair of consecutive frames of pre-processed echoes over at least one iteration.
Claims
1. An ultrasound imaging console, comprising: a beamformer that beamforms ultrasound echoes from an ultrasound transducer array of an ultrasound imaging system to produce at least a pair of consecutive frames of sets of scan lines; a pre-processor that pre-processes each of the frames to produce a pair of pre-processed frames, including a first frame and a second subsequent frame; a processor that: compresses an envelope of each of the first and second frames; estimates a mean frequency of the compressed first frame; calculates a cross correlation between the pair of compressed frames at a set of lags; determines a phase of the cross correlation; removes glitches in the phase; unwraps the de-glitched phase; converts the unwrapped phase to a motion estimate using the mean frequency; fits a surface to the motion estimate; quantizes the fitted motion estimate to determine a subsequent set of lags; generates an elastography image with the motion estimate in response to a difference between the set of lags and the subsequent set of lags satisfying predetermined difference criteria; and visually presents the elastography image.
2. The console of claim 1, wherein the processor initializes the set of lags to zero prior to calculating the cross correlation for an initial motion estimate.
3. The console of claim 2, wherein the processor generates a subsequent motion estimate in response to the difference between the set of lags and the subsequent set of lags failing to satisfy the predetermined difference criteria.
4. The console of claim 1, wherein the processor: multiplies, element-by-element, elements in the pair of pre-processed frames, producing a first product matrix; and convolves the first product matrix with a predetermined transform to calculate the cross-correlation.
5. The console of claim 4, wherein the pre-processing includes the processor applying a filter to the set of scan lines, applying the transform to the filtered sets of scan lines to create complex signals, frequency shifting the complex signals towards baseband, and decimating the frequency shifted signals to produce the pair of pre-processed frames.
6. The console of claim 5, wherein the transform is one of a Hilbert transform or a 2D rectangular function.
7. The console of claim 1, wherein the processor fits the surface with a 2D polynomial.
8. The console of claim 1, wherein the fitting of the surface at least one of smooths the motion estimate or removes false variations and noise.
9. The console of claim 1, wherein the subsequent set of lags includes non-zero cross-correlation lags.
10. The console of claim 9, wherein the processor generates a subsequent motion estimate using the non-zero cross-correlation lags.
11. The console of claim 1, further comprising: receive circuitry that receives the ultrasound echoes from the ultrasound transducer array.
12. The console of claim 11, further comprising: transmit circuitry that actuates transducer elements of the transducer array to transmit ultrasound signals, wherein the ultrasound echoes are produced in response to the ultrasound signals interacting with structure.
13. A method, comprising: pre-processing each frame of a pair of consecutive frames of beamformed ultrasound echoes from an ultrasound transducer array to produce a pair of pre-processed frames, including a first frame and a second subsequent frame; iteratively estimating, via a processor, tissue motion between the first and second frames over at least one iteration by: compressing an envelope of each of the first and second frames; estimating a mean frequency of the compressed first frame; calculating a cross correlation between the pair of compressed frames at a set of lags; determining a phase of the cross correlation; removing glitches in the phase; unwrapping the de-glitched phase; converting the unwrapped phase to a motion estimate using the mean frequency; fitting a surface to the motion estimate; quantizing the fitted motion estimate to determine a subsequent set of lags; generating an elastography image with the motion estimate in response to a difference between the set of lags and the subsequent set of lags satisfying predetermined difference criteria; and displaying the elastography image.
14. The method of claim 13, wherein the set of lags equals zero.
15. The method of claim 13, wherein the subsequent set of lags are non-zero.
16. The method of claim 13, further comprising: multiplying, element-by-element, elements in the pair of pre-processed frames to produce a first product matrix; and convolving the first product matrix with a predetermined transform to calculate the cross-correlation.
17. The method of claim 13, wherein removing glitches includes: detecting and removing transitions with predetermined lags.
18. The method of claim 13, wherein removing glitches from and unwrapping phase are performed concurrently by: differentiating the phase and the set of lags; comparing the differentiated phase to a predetermined threshold, producing a series of pulses indicating positions; integrating the series of pulses to generate a phase compensation function; and adding the phase compensation function to the phase.
19. The method of claim 13, wherein the de-glitched phase is unwrapped by: differentiating the phase; comparing the differentiated phase to a predetermined threshold απ to determine aliasing, where α is a constant less than or equal to one; adding 2π to the differentiated phase every time aliasing occurs; integrating the differentiated phase; and adding a result of the integrating to the phase.
20. A non-transitory computer readable storage medium encoded with computer executable instructions, which, when executed by a processor, cause the processor to: compress an envelope of each of first and second frames, which are frames of a pair of pre-processed consecutive frames of beamformed ultrasound echoes from an ultrasound transducer array; estimate a mean frequency of the compressed first frame; calculate a cross correlation between the pair of compressed frames at a set of lags; determine a phase of the cross correlation; remove glitches in the phase; unwrap the de-glitched phase; convert the unwrapped phase to a motion estimate using the mean frequency; fit a surface to the motion estimate; quantize the fitted motion estimate to determine a subsequent set of lags; generate an elastography image with the motion estimate in response to a difference between the set of lags and the subsequent set of lags satisfying predetermined difference criteria; and display the elastography image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) The application is illustrated by way of example and not limitation in the figures of the accompanying drawings, in which like references indicate similar elements and in which:
(2)
(3)
(4)
(5)
(6)
(7)
(8)
(9)
DETAILED DESCRIPTION
(10)
(11) The console 106 includes transmit circuitry 108 that actuates the transducer elements to transmit the ultrasound signals. In one instance, this includes actuating the transducer elements to apply a different mechanical vibration to the tissue in the region of view for at least two frames for elastography applications. Receive circuitry 110 receives a set of echoes generated in response to the transmitted ultrasound signals. The echoes, generally, are a result of the interaction between the emitted ultrasound signals and the tissue (e.g., organ, tumor, etc.) in the scan field of view. For elastography, the echoes include tissue motion or displacement information between the frames of data. A beamformer 112 processes the received echoes, e.g., by applying time delays and weights to the echoes and summing the resulting echoes, producing an RF signal, or scan lines. A frame consists of a set of scan lines that together form a 2D image of the field of view.
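The delay-and-sum operation attributed to the beamformer 112 can be sketched as follows. This is a minimal toy illustration, not the patent's beamformer: the function name, integer-sample delays, and data shapes are all illustrative assumptions.

```python
import numpy as np

def delay_and_sum(echoes, delays, weights):
    """Toy delay-and-sum beamformer (illustrative sketch only).

    echoes  : (elements, samples) per-channel RF data
    delays  : integer sample delay per element (assumed precomputed)
    weights : apodization weight per element
    """
    line = np.zeros(echoes.shape[1])
    for e, d, w in zip(echoes, delays, weights):
        line += w * np.roll(e, -d)   # advance each channel by its delay, then sum
    return line

# Three channels see the same pulse at offsets 5, 6, and 7 samples;
# after delay alignment the pulse sums coherently at index 5.
echoes = np.zeros((3, 16))
for i, pos in enumerate([5, 6, 7]):
    echoes[i, pos] = 1.0
line = delay_and_sum(echoes, [0, 1, 2], [1.0, 1.0, 1.0])
```

A real beamformer would use fractional, per-sample dynamic delays; integer delays are used here only to keep the sketch short.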
(12) The console 106 also includes a pre-processor 114 that pre-processes the RF signal. A non-limiting example of the pre-processor 114 is shown in
(13) Returning to
(14) A rendering engine 120 is configured to at least generate elastography or other images based on the processed frames. The elastography images can be visually presented via a display 122 and/or other display, stored, conveyed to another device, and/or otherwise utilized. A user interface (UI) 124 includes one or more input devices (e.g., a mouse, keyboard, etc.), which allows for user interaction with the system 102. A controller 126 controls the various components of the imaging system 102. For example, such control may include actuating or exciting individual or groups of transducer elements of the transducer array 104 for an A-mode, B-mode, C-plane, and/or other data acquisition mode, steering and/or focusing the transmitted and/or received signal, etc.
(15) The console 106 may include one or more processors (e.g., a central processing unit (CPU), graphics processing unit (GPU), etc.) that execute one or more computer readable instructions encoded or embedded on computer readable storage medium such as physical memory and other non-transitory medium. Additionally or alternatively, the instructions can be carried by a signal, carrier wave, or other transitory medium. In one instance, executing the instructions, in connection with the one or more processors, implements one or more of the beamformer 112, the pre-processor 114, the motion estimator 306, the rendering engine 120, and/or other components of the imaging system 102.
(16) As briefly discussed above, the motion processor 118 processes pairs of adjacent frames of the complex base-band signals and estimates tissue motion between the frames using an iterative approach.
(17) An envelope compressor 302 receives, as input, two adjacent frames of the complex base-band signals x.sub.i(m, n) and x.sub.j(m, n), where i and j are frame indexes of adjacent frames (e.g., frames 1 and 2, frames 2 and 3, etc.), m is a sample index along a scan line, and n is a scan line index. x.sub.i(m, n) and x.sub.j(m, n) can be processed as matrices, where m is the matrix row index and n is the matrix column index, as vectors and/or as individual elements. The envelope compressor 302 compresses the envelope, while preserving the phase, producing x.sub.ic(m, n) and x.sub.jc(m, n), where c indicates compressed, which may facilitate reducing zebra artifacts.
(18) By way of non-limiting example, the input signal is formed of complex samples: x=x.sub.r+jx.sub.i, where x.sub.r is the real (in-phase) component and x.sub.i is the imaginary (quadrature-phase) component. The compressed signals are found as: x.sub.c=x.sub.cr+jx.sub.ci,
(19)
where |x| is the envelope (magnitude) of the signal, and can be found as
(20)
In one instance, a constant one (1) is added to prevent division by zero. The built-in instructions of the CPU and/or GPU can be used to compute the square root.
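The envelope compression above can be sketched as follows. The compression law itself is in the equation not reproduced here, so the form `x / sqrt(1 + |x|)` is an assumption chosen to match the surrounding text (a constant one prevents division by zero, and a square root is computed); only the phase-preservation property is what the description guarantees.

```python
import numpy as np

def compress_envelope(x):
    """Compress the envelope of a complex baseband frame while preserving phase.

    Assumed compression law (the patent's equation is not shown):
    x_c = x / sqrt(1 + |x|). The constant 1 prevents division by zero,
    and dividing by a real positive factor leaves each sample's phase intact.
    """
    env = np.abs(x)                   # |x| = sqrt(x_r^2 + x_i^2)
    return x / np.sqrt(1.0 + env)     # phase is unchanged, envelope is compressed

# Small synthetic frame; phases of non-zero samples must be preserved
x = np.array([[3 + 4j, 1 - 1j], [0j, -2 + 0j]])
xc = compress_envelope(x)
```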
(21) A frequency estimator 304 receives, as input, the compressed frame x.sub.ic(m, n) and estimates a mean frequency f(m), e.g., for every depth, based thereon. The attenuation of ultrasound is frequency-dependent; hence, the mean (center) frequency changes as a function of depth. The resulting center frequency is a function of the center frequency of the transmitted pulse, the impulse response of the transducer, the frequency-dependent attenuation, and the properties of the sliding filter. Scanners use different transmit pulses and sliding filters depending on the application. Furthermore, the frequency-dependent attenuation is a function of the tissue being scanned.
(22) Generally, every time a new frame is input, a mean frequency f(m) is estimated to match the scanned tissue and system setup. The input is the discrete complex signal x.sub.ic(m, n), which represents a matrix, where the index m is in the axial direction (depth) and the index n is in the lateral direction. The estimation of the frequency is done from the phase of the lag-1 complex autocorrelation function
(23)
where R.sub.ii(m, n; 1, 0) is the complex autocorrelation at pixel/position (m, n), and lag 1 in axial direction (along m) and lag 0 in lateral direction (along n).
(24) An averaging window consists of V rows (samples) and W columns (lines), and the superscript * denotes complex conjugation. The autocorrelation function is complex: R.sub.ii(m, n; 1,0)=R.sub.iir(m, n; 1,0)+jR.sub.iii(m, n; 1,0). The instantaneous frequency at pixel (m, n) is estimated as
(25)
The phase is unwrapped using a standard or other unwrapping procedure (an example of which is described below in connection with
(26)
The estimates are noisy and vary depending on the strength of the signal (speckle). To get a smooth estimate of f, the estimates
(27)
are first averaged, and then f(m) is found as an nth-order (e.g., 2.sup.nd, 3.sup.rd, etc.) polynomial obtained via least-squared-error fitting.
(28)
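The lag-1 autocorrelation frequency estimator of paragraphs (22)-(27) can be sketched in a few lines. The window size, the choice to average laterally over all lines, and the function names are illustrative assumptions; the estimator form (phase of the lag-1 axial autocorrelation, scaled by fs/2π) is the standard Kasai-style estimator the text describes.

```python
import numpy as np

def mean_frequency(x, fs, V=8):
    """Mean frequency per depth from the lag-1 axial autocorrelation.

    x  : complex baseband frame, shape (M, N); axis 0 is depth m, axis 1 is line n
    fs : axial sampling frequency
    V  : rows in the averaging window (lateral averaging here spans all lines)
    """
    # Lag-1 product along depth: x*(m, n) x(m+1, n)
    p = np.conj(x[:-1, :]) * x[1:, :]
    M = p.shape[0]
    # Average over V rows and all columns, then take the phase
    R = np.array([p[m:m + V, :].sum() for m in range(M - V + 1)])
    return np.angle(R) * fs / (2.0 * np.pi)

# A pure tone at f0 should be recovered exactly
fs, f0 = 40e6, 5e6
m = np.arange(64)
x = np.exp(2j * np.pi * f0 * m / fs)[:, None] * np.ones((1, 4))
f_est = mean_frequency(x, fs)
```

Per paragraph (27), the noisy per-depth estimates would then be smoothed by fitting an nth-order polynomial over depth, e.g. with `np.polyfit`.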
(29) Returning to
(30) A surface fitter 308 fits a surface to the motion estimate u.sub.l(m, n), which reduces variations and removes outliers, and thus noisy motion estimates, or smooths the motion estimate. In one instance, the surface is fitted using least-mean-squares 2D polynomial fitting, where the polynomial coefficients are: P=Gu.sub.l, where G is a matrix derived from solving a least-squares fit for a 3.sup.rd order surface. Other orders (e.g., nth order) of the polynomial may be used, depending on the tradeoff between precision and computational requirements. Here, u.sub.l is the matrix with the estimates of the motion. The surface can be found as u.sub.fit(m, n)=P(0)+P(1)n+P(2)m+P(3)n.sup.2+P(4)mn+P(5)m.sup.2+P(6)n.sup.3+P(7)n.sup.2m+P(8)nm.sup.2+P(9)m.sup.3. The surface fitter 308 may also identify a smooth edge between the different lags, which facilitates the phase-unwrapping discussed below.
(31) A quantizer 310 quantizes the fitted motion estimate and finds lags k(m, n) at which the cross-correlation between the two frames will give a more precise estimate than u.sub.l(m, n). By way of example, the fitted surface can be quantized to give the lags at which to calculate the cross correlation at the next iteration:
(32)
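The quantization rule itself is in the equation not reproduced above; one plausible reading, assuming the fitted motion u.sub.fit is expressed in axial samples, is rounding each value to the nearest integer lag:

```python
import numpy as np

# Hypothetical sketch: quantize the fitted motion surface (assumed to be
# expressed in axial samples) to integer cross-correlation lags for the
# next iteration. Variable names are illustrative.
u_fit = np.array([[0.2, 0.6], [1.4, -0.7]])   # fitted motion estimate, in samples
k_next = np.rint(u_fit).astype(int)            # nearest-integer lags k(m, n)
```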
A decision component 212 receives, as an input, the current lags and the new lags and determines whether the motion estimate will be refined through an iteration or a final motion estimate u.sub.l(m, n) will be output, based on various stopping criteria. Such criteria may include, but are not limited to, a predetermined number of iterations (e.g., l=1, or one iteration and two motion estimates), a predetermined difference between the current and new lags, a predetermined time interval, on demand based on a user input, and/or other stopping criteria.
(33) As briefly discussed above, the motion estimator 306 estimates the motion u.sub.l(m, n).
(34) With
(35)
A multiplier 502 multiplies the two input matrices x.sub.ic(m, n) and x.sub.jc(m, n), element by element, producing P.sub.ij(m, n). A convolver 504 convolves P.sub.ij(m, n) with an averaging window H(v, w). If the averaging window H(v, w) is a 2D rectangular function, then the convolution reduces to four (4) additions and two (2) subtractions per sample using a recursive implementation of a running average. A phase shift determiner 506 determines a phase shift at pixel (m, n) as
(36)
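The multiplier/convolver stage can be sketched as follows. The summed-area-table form below is one way to realize the "few additions and subtractions per sample" running average the text mentions; which frame is conjugated in the element-wise product is an assumption, since the patent's formula is not reproduced here.

```python
import numpy as np

def local_cross_correlation(xi, xj, V=4, W=4):
    """Zero-lag local cross-correlation of two compressed frames.

    Element-by-element product followed by convolution with a V x W
    rectangular window H(v, w), computed with a summed-area table so each
    output needs only four additions and two subtractions.
    """
    p = np.conj(xi) * xj                                      # P_ij(m, n), conjugation assumed
    # Summed-area table: S[m, n] = sum of p[:m, :n]
    S = np.zeros((p.shape[0] + 1, p.shape[1] + 1), dtype=complex)
    S[1:, 1:] = np.cumsum(np.cumsum(p, axis=0), axis=1)
    # Each V x W window sum from four table lookups
    return S[V:, W:] - S[:-V, W:] - S[V:, :-W] + S[:-V, :-W]

# Constant phase offset between two frames is recovered as the phase of R
xi = np.exp(1j * 0.3) * np.ones((8, 8))
xj = np.ones((8, 8), dtype=complex)
R = local_cross_correlation(xi, xj)
phase = np.angle(R)
```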
An unwrapper 508 unwraps the phase using the above approach, wherein φ=unwrap(φ), and/or another approach. A phase converter 510 converts the phase to the initial motion estimate as
(37)
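The unwrapping step can be sketched in 1-D as follows, following the procedure recited for claim 19: differentiate the phase, flag jumps exceeding a threshold απ (α ≤ 1), build a ±2π correction pulse for each jump, integrate the pulses, and add the result back. The default α and the variable names are assumptions.

```python
import numpy as np

def unwrap_phase(phi, alpha=1.0):
    """Unwrap a 1-D phase signal along depth.

    Jumps in the differentiated phase larger than alpha*pi are taken as
    aliasing; each contributes a +/- 2*pi pulse, which is integrated and
    added to the original phase.
    """
    d = np.diff(phi)                                           # differentiated phase
    pulses = np.zeros_like(phi)
    pulses[1:] = -2.0 * np.pi * np.sign(d) * (np.abs(d) > alpha * np.pi)
    compensation = np.cumsum(pulses)                           # integrate the pulse train
    return phi + compensation

# A smooth ramp wrapped into (-pi, pi] is recovered exactly
true_phase = np.linspace(0, 4 * np.pi, 50)
wrapped = np.angle(np.exp(1j * true_phase))
unwrapped = unwrap_phase(wrapped)
```

For pulse-echo imaging the unwrapped phase is commonly converted to displacement as u = cφ/(4πf̄), with c the speed of sound and f̄ the mean frequency; the patent's exact conversion is in the equation not reproduced above.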
(38) Turning to
(39)
In
(40) Once the cross correlation is found, the phase shift determiner 506 determines a phase of each cross correlation as:
(41)
A glitch remover 604 removes the glitches by detecting and removing transitions in the matrix with lags K, whose elements are k(m, n): φ.sub.tmp2=unwrap_with_lags(φ.sub.tmp1, K). Note that at borders, where transitions occur in k(m, n), there are steps in the phase estimate which are not due to aliasing and are not caught by the unwrapping algorithm. The phase unwrapper 508 unwraps the phase using the unwrapping approach described herein and/or another approach. When the transition in lags is at the wrong pixel, a phase wrapping occurs although the lags are the same. As such, the unwrapping algorithm discussed herein can be applied: φ.sub.1=unwrap(φ.sub.tmp2). The phase to distance converter 510 determines the motion as
(42)
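The lag-aware de-glitching can be sketched in 1-D as follows. This is one plausible reading, under the assumption that a unit transition in the integer lag k introduces a phase step of about -2π·f̄/fs even though the underlying delay is continuous; the patent only states that transitions at lag borders are detected and removed, so the compensation model here is hypothetical.

```python
import numpy as np

def unwrap_with_lags(phi, k, f_mean, fs):
    """Remove phase steps ("glitches") at borders where the lag map changes.

    Assumed model: a lag transition dk produces an expected residual-phase
    step of -2*pi*dk*f_mean/fs, which is not aliasing; the compensation
    cancels exactly those steps.
    """
    dk = np.diff(np.asarray(k, dtype=float))
    step = 2.0 * np.pi * (f_mean / fs) * dk            # expected step at each border
    compensation = np.concatenate(([0.0], np.cumsum(step)))
    return phi + compensation

# A continuous delay split into integer lags + residual phase is made smooth again
fs, f = 10.0, 2.5
tau = np.linspace(0.0, 1.0, 21)          # continuous delay (toy units)
k = np.rint(tau * fs)                    # integer lag map
phi = 2.0 * np.pi * f * (tau - k / fs)   # residual phase with steps at lag borders
out = unwrap_with_lags(phi, k, f, fs)
```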
(43)
(44)
(45) Note that the ordering of the following acts is for explanatory purposes and is not limiting. As such, one or more of the acts can be performed in a different order, including, but not limited to, concurrently. Furthermore, one or more of the acts may be omitted and/or one or more other acts may be added.
(46) Generally, the initial motion estimate (l=0) is a particular case of the subsequent motion estimates (l>0) with all lags k(m, n) set to zero (0). The data for the initial and subsequent motion estimates can be processed by a same or different module.
(47) At 800, k(m, n), R.sub.ij(m, n), φ(m, n), u.sub.l(m, n), and l are initialized to zero (0).
(48) At 802, f(m) is determined as described herein.
(49) At 804, R.sub.ij(m, n; k.sub.l(m, n)) is determined as described herein.
(50) At 806, φ(m, n) is determined as described herein.
(51) At 808, φ(m, n) is de-glitched as described herein.
(52) At 810, φ(m, n) is unwrapped as described herein.
(53) At 812, u.sub.l(m, n) is determined as described herein.
(54) At 814, the surface u.sub.fit(m, n) is fit to u.sub.l(m, n).
(55) At 816, new lags k.sub.l+1(m, n) are determined as described herein.
(56) At 818, the stopping criteria are checked. In this example, the stopping criterion is based on an absolute difference between the lags k.sub.l(m, n) and the new lags k.sub.l+1(m, n), as described herein.
(57) At 820, if the stopping criteria are not satisfied, k(m, n) is set to the new lags k.sub.l+1(m, n), and acts 804-816 are repeated.
(58) At 822, if the stopping criteria is satisfied, u.sub.l(m, n) is output as the final motion estimate. The final motion estimate can be used to generate elastography and/or other images.
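The acts 800-822 can be sketched as a driver loop. The function below is a hypothetical skeleton: `estimate_at_lags` stands in for acts 804-814 (cross-correlation, phase, de-glitch, unwrap, convert, surface fit), and the stopping criterion used here is lag convergence with a maximum-iteration cap; all names are illustrative.

```python
import numpy as np

def iterative_motion_estimate(estimate_at_lags, shape, max_iters=10):
    """Driver loop for the iterative motion estimation (hypothetical sketch).

    estimate_at_lags(k) returns the motion estimate u_l for the current
    integer-lag map k. Iteration stops when the quantized lags no longer
    change, or after max_iters iterations.
    """
    k = np.zeros(shape)                    # act 800: lags initialized to zero
    for _ in range(max_iters):
        u = estimate_at_lags(k)            # acts 804-814
        k_next = np.rint(u)                # act 816: new lags from the estimate
        if np.array_equal(k_next, k):      # act 818: stopping criterion
            return u                       # act 822: final motion estimate
        k = k_next                         # act 820: iterate with new lags
    return u

# Toy stand-in: each call refines the estimate toward a known 2.3-sample shift
true_u = np.full((4,), 2.3)
u_final = iterative_motion_estimate(lambda k: k + np.clip(true_u - k, -1, 1), (4,))
```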
(59) The above may be implemented by way of computer readable instructions, encoded or embedded on computer readable storage medium, which, when executed by a computer processor(s), cause the processor(s) to carry out the described acts. Additionally or alternatively, at least one of the computer readable instructions is carried by a signal, carrier wave or other transitory medium.
(60) The application has been described with reference to various embodiments. Modifications and alterations will occur to others upon reading the application. It is intended that the invention be construed as including all such modifications and alterations, including insofar as they come within the scope of the appended claims and the equivalents thereof.