Rib blockage delineation in anatomically intelligent echocardiography
11622743 · 2023-04-11
Inventors
- Sheng-Wen Huang (Ossining, NY, US)
- Emil George Radulescu (Ossining, NY)
- Ramon Quido Erkamp (Yorktown Heights, NY, US)
- Shougang Wang (Ossining, NY, US)
- Karl Erhard Thiele (Andover, MA, US)
- David Prater (Andover, MA, US)
CPC classification
- A61B8/463 (HUMAN NECESSITIES)
- G01S7/52085 (PHYSICS)
- A61B6/5205 (HUMAN NECESSITIES)
- G01S15/8927 (PHYSICS)
- A61B8/085 (HUMAN NECESSITIES)
- G01S15/8977 (PHYSICS)
- A61B8/483 (HUMAN NECESSITIES)
International classification
- A61B6/00 (HUMAN NECESSITIES)
- A61B8/00 (HUMAN NECESSITIES)
Abstract
A method for using an interactive visual guidance tool for an imaging acquisition and display configured for user navigation with respect to a blockage of a field of view detects, and spatially defines, the blockage. It also integrates, with the image for joint visualization, an indicium that visually represents the definition. The indicium is moved dynamically according to movement, relative to the blockage, of the field of view. The indicium can be shaped like a line segment, or two indicia can be joined in a “V” shape to frame a region of non-blockage. The defining may be based on determining whether ultrasound beams in respective directions are blocked.
Claims
1. A method for visualizing a blockage region on an ultrasound image, comprising the steps of: obtaining from a transducer array of an ultrasound probe ultrasound data representative of an anatomy within a field of view; receiving the ultrasound data at a processor via a plurality of data channels in communication with the ultrasound probe; applying delays to the received ultrasound data; generating an ultrasound image of the field of view including the anatomy within the field of view based on the ultrasound data; detecting and spatially defining a blockage region of the ultrasound image comprising a portion of the field of view and associated with an anatomical structure within the anatomy based on the ultrasound data and by computing a metric of similarity for the ultrasound data to which the delays have been applied; generating a graphical representation based on the detection and spatial definition of the blockage region; integrating the graphical representation with the ultrasound image to form a joint visualization comprising the graphical representation overlaid on the anatomy in the ultrasound image to identify the portion of the field of view corresponding to the blockage region in the ultrasound image; automatically outputting the joint visualization to a display in communication with the processor; and moving the joint visualization dynamically in response to a movement of the field of view relative to the blockage region.
2. The method of claim 1, wherein the obtaining step further comprises activating a plurality of different apertures, dividing the apertures among the plurality of data channels; and beamsumming the ultrasound data obtained from the different apertures, wherein the metric of similarity is representative of a correlation between the beamsummed data of the different apertures.
3. The method of claim 2, wherein at least a portion of the apertures are interleaving complementary apertures.
4. The method of claim 1, wherein the detecting and spatially defining a blockage region step further comprises determining whether an ultrasound beam having a beam direction is blocked.
5. The method of claim 4, wherein the detecting and spatially defining a blockage region step further comprises determining whether the ultrasound beam is blocked in the beam direction by computing a metric of coherence of a portion of the ultrasound data associated with the beam direction.
6. The method of claim 1, wherein the moving the joint visualization step further comprises the steps of: generating a plurality of ultrasound images of the anatomy during the movement of the field of view, based on the ultrasound data; generating a plurality of graphical representations to respectively indicate the blockage region for the plurality of ultrasound images; integrating the plurality of graphical representations with the plurality of ultrasound images to form a plurality of joint visualizations; and outputting the plurality of joint visualizations in response to the movement of the field of view.
7. A method for generating interleaving complementary imaging apertures, comprising the steps of: generating with an ultrasound probe a plurality of interleaving complementary imaging apertures; forming a correlation map based on data received by channels of the plurality of interleaving complementary imaging apertures of the generating step; setting a blockage boundary line using a processor, based on values of the correlation map of the forming step; acquiring an ultrasound image in a field of view for display with the blockage boundary line shown as an overlay; updating the ultrasound image in response to movement of the field of view; determining whether to generate additional interleaving complementary imaging apertures; based on the determining step, generating additional interleaving complementary imaging apertures; and repeating the generating, forming, setting, and acquiring steps.
8. The method of claim 7, wherein the forming step further comprises delaying and beamsumming imaging data of the data received by the interleaving complementary imaging apertures, and correlating the beamsummed data of one of the interleaving complementary imaging apertures to another of the interleaving complementary imaging apertures.
9. The method of claim 7, wherein the updating step further comprises updating an indicator, correlated to the blockage boundary line, of a fraction of sampled locations within the field of view that have valid imaging data in the ultrasound image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF EMBODIMENTS
(11) Coherence of channel data is used to detect blockage. Each channel delivers its respective radiofrequency data magnitude associated with its respective fixed transducer element or patch of elements. As ultrasound echoes return, their incident pressures on the elements are sampled quickly and periodically. The samples are delayed with respect to each other according to the line-of-sight travel time geometry of the field point being evaluated. Here, “coherence” means similarity among data recorded by different channels of an array after applying the above-mentioned receive focusing delays.
(12) One gauge of coherence is a beamsummed-data-based coherence estimation method, such as the one described in U.S. Patent Publication No. 2009/0141957 to Yen et al., the entire disclosure of which is incorporated herein by reference.
(13) The estimation method can be tailored to detecting rib and lung blockage, and is demonstrated below using the two beamformers 154, 156. Let s.sub.j(r, θ) denote the (real-valued) channel data at depth r along the receive beam in the direction θ, as received by the j-th channel after applying the focusing delay, and let C.sub.1 and C.sub.2 denote the sets of channels used in the first and the second beamformer 154, 156, respectively. The output of the k-th (k=1, 2) beamformer is the beamsum b.sub.k(r, θ) of s.sub.j(r, θ) over the channels j in C.sub.k.
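Under the definitions above, the beamformer outputs and their windowed correlation can be sketched as follows. This is a minimal numpy sketch, not the patented implementation: the array shapes, the boxcar window, and the toy data are illustrative assumptions.

```python
import numpy as np

def beamsum(channel_data, channels):
    """Beamsum: sum focused (delayed) channel data over a channel set.

    channel_data: shape (num_channels, depth_samples), holding
    s_j(r, theta) for one receive beam after the focusing delays.
    """
    return channel_data[list(channels)].sum(axis=0)

def correlation_coefficient(b1, b2, window):
    """Windowed zero-lag normalized correlation rho-hat(r) between the
    two beamsums, using a boxcar weighting like the correlator's w."""
    num = np.convolve(b1 * b2, window, mode="same")
    den = np.sqrt(np.convolve(b1**2, window, mode="same") *
                  np.convolve(b2**2, window, mode="same"))
    return num / np.maximum(den, 1e-12)

# Toy data: an echo that is fully coherent across 8 channels, so the
# complementary-aperture beamsums correlate perfectly (rho-hat = 1).
rng = np.random.default_rng(0)
signal = rng.standard_normal(2000)
data = np.tile(signal, (8, 1))
b1 = beamsum(data, range(0, 4))   # first beamformer's channel set C_1
b2 = beamsum(data, range(4, 8))   # second beamformer's channel set C_2
rho = correlation_coefficient(b1, b2, np.ones(51))
```

Channel data behind a rib would instead be decorrelated across the two apertures, driving the estimated correlation toward zero.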
(14) A flow diagram for the algorithm is shown in the drawings.
(15) In a specific example, the data is acquired at a 32 MHz sampling rate in a pulse-inversion mode using a probe having 80 transducer elements. Each frame has 44 beams and the beam density is 0.4944 beam/degree. The center frequency is 1.3 MHz on transmit and 2.6 MHz on receive. C.sub.1={20-22, 26-28, 32-34, 38-40, 44-46, 50-52, 56-58} and C.sub.2={23-25, 29-31, 35-37, 41-43, 47-49, 53-55, 59-61}. The weighting function w used in the correlator is a 51-sample (axially, in the r direction) by 1-beam (laterally, in the θ direction) boxcar, and the smoothing filter is a 501 by 3 boxcar.
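The two aperture sets above follow a regular interleave of alternating groups of three adjacent channels. A small sketch that reproduces the listed sets (the function name and parameters are illustrative):

```python
def interleaved_apertures(start=20, group=3, num_groups=7):
    """Build two interleaving complementary apertures as alternating
    groups of `group` adjacent channels, reproducing the example
    C_1 = {20-22, 26-28, ...} and C_2 = {23-25, 29-31, ...}."""
    c1, c2 = [], []
    for k in range(num_groups):
        base = start + 2 * group * k
        c1.extend(range(base, base + group))              # group for C_1
        c2.extend(range(base + group, base + 2 * group))  # group for C_2
    return c1, c2

c1, c2 = interleaved_apertures()
```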
(16) Due to the periodic structure of the apertures, sensitivity of the correlation coefficient ρ to off-axis signals varies periodically with the direction of off-axis signals.
(17) This periodicity can be alleviated by randomizing sub-aperture sizes while still keeping both apertures complementary. In other words, the channels are randomly divided among the apertures.
(18) An example of random complementary apertures is C.sub.1={21-22, 26-28, 30-31, 35, 37, 39, 41-45, 49, 51, 53, 57, 60-61} and C.sub.2={20, 23-25, 29, 32-34, 36, 38, 40, 46-48, 50, 52, 54-56, 58-59}.
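The random split can be sketched as follows; the seeded shuffle and equal halves are illustrative choices, since any random partition of the channels into two complementary sets serves the purpose:

```python
import numpy as np

def random_complementary_apertures(first, last, seed=None):
    """Randomly partition channels first..last (inclusive) into two
    equal-sized complementary apertures, breaking the periodic
    sub-aperture structure that causes the directional sensitivity."""
    rng = np.random.default_rng(seed)
    channels = np.arange(first, last + 1)
    rng.shuffle(channels)
    half = len(channels) // 2
    return (sorted(channels[:half].tolist()),
            sorted(channels[half:].tolist()))

c1, c2 = random_complementary_apertures(20, 61, seed=0)
```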
(19) To verify whether a beam, and thus its direction, is blocked, a count is made of the number of points with a correlation coefficient ρ̂ higher than 0.55 between 72 and 180 mm in depth. If at least 400 points (at the 32 MHz sampling rate) in a beam have high coherence, the beam is considered to penetrate into tissue; otherwise, it is considered blocked by a rib.
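The blocked/penetrating decision can be sketched as follows. This is a minimal sketch: the speed of sound of 1.54 mm/µs used to convert depth to a sample index is an assumption not stated in the text.

```python
import numpy as np

FS_MHZ = 32.0        # sampling rate from the example
C_MM_PER_US = 1.54   # assumed speed of sound (not stated in the text)

def depth_to_sample(depth_mm):
    """Round-trip (pulse-echo) sample index for a given depth."""
    return int(round(2 * depth_mm / C_MM_PER_US * FS_MHZ))

def beam_is_blocked(rho_beam, rho_thresh=0.55, min_points=400,
                    depth_range_mm=(72.0, 180.0)):
    """Count samples between 72 and 180 mm whose correlation
    coefficient exceeds 0.55; fewer than 400 such samples means the
    beam is considered blocked by a rib."""
    lo, hi = (depth_to_sample(d) for d in depth_range_mm)
    return np.count_nonzero(rho_beam[lo:hi] > rho_thresh) < min_points

# Toy beams: one coherent along its whole depth, one decorrelated.
n = depth_to_sample(200.0)
coherent_beam = np.full(n, 0.9)
blocked_beam = np.full(n, 0.1)
```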
(20) The upper bound of the depth range is not critical. The lower bound of 72 mm, much deeper than human ribs in general, is chosen because high coherence or correlation coefficient values might be present in regions right below a rib due to multiple reflections (reverberation), and such reflections tend to fade away with depth.
(21) The apertures described do not include channels at both ends of the full aperture. Though the apertures can be extended to include those channels, the number of blocked beams might be underestimated if large apertures are used, because the correlation coefficient of complementary aperture outputs could still be high if part of the large complementary apertures is not blocked.
(26) As an alternative to line-by-line processing of the correlation map ρ̂(r, θ), an image can be derived from the map by image or morphological processing, such as dilation/erosion, that rejects outliers and/or suppresses error. After processing, the indicia 244, 248 are the lines that frame the output region. They inherently indicate onscreen, to the user, the beam directions that lie outside the indicia, i.e., in which beams are blocked, as presently evidenced by lack of imaging data coherence along the beam.
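A minimal sketch of such morphological cleanup: a binary opening (erosion followed by dilation) applied to the thresholded correlation map. The 3x3 structuring element and the reuse of the 0.55 threshold are illustrative assumptions.

```python
import numpy as np

def binary_erode(mask, size=3):
    """Minimal binary erosion with a size x size square structuring
    element; pixels outside the map are treated as background."""
    pad = size // 2
    padded = np.pad(mask, pad, constant_values=False)
    out = np.ones_like(mask)
    for di in range(size):
        for dj in range(size):
            out &= padded[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

def binary_dilate(mask, size=3):
    """Minimal binary dilation: OR of all size x size shifts."""
    pad = size // 2
    padded = np.pad(mask, pad, constant_values=False)
    out = np.zeros_like(mask)
    for di in range(size):
        for dj in range(size):
            out |= padded[di:di + mask.shape[0], dj:dj + mask.shape[1]]
    return out

def clean_coherence_mask(rho_map, thresh=0.55):
    """Threshold the correlation map, then apply an opening (erosion
    followed by dilation) to reject isolated outliers."""
    return binary_dilate(binary_erode(rho_map > thresh))

# Toy map: one genuine high-coherence region and one speckle outlier.
rho = np.zeros((20, 20))
rho[5:15, 5:15] = 0.9   # genuine region, preserved by the opening
rho[0, 0] = 0.9         # isolated outlier, removed by the opening
mask = clean_coherence_mask(rho)
```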
(28) An alternative to the above-described coherence estimation is use of a coherence factor calculated by processing complex-valued channel data. The coherence factor is defined as
(29)
CF(r, θ)=|Σ.sub.j=1.sup.N S.sub.j(r, θ)|.sup.2/(N Σ.sub.j=1.sup.N|S.sub.j(r, θ)|.sup.2),
where r is the depth along the beam direction, θ is the beam direction, S.sub.j(r, θ) is the complex-valued channel data at depth r received by the j-th channel after applying the focusing delay, and N is the number of channels. The term
(30)
Σ.sub.j=1.sup.N S.sub.j(r, θ)
in the numerator represents an image as a function of r and θ after coherent beamforming but before scan conversion and logarithmic compression. No more than a single beamformer is required. CF(r, θ) substitutes for the correlation coefficient ρ̂(r, θ) in the above-discussed mapping and indicium determination.
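Under the definitions above, the coherence factor can be sketched as follows; the (channels, depth, beams) array layout is an illustrative assumption:

```python
import numpy as np

def coherence_factor(S):
    """Coherence factor CF(r, theta) from complex delayed channel
    data S with shape (N, depth_samples, num_beams):

        CF = |sum_j S_j|^2 / (N * sum_j |S_j|^2)

    CF approaches 1 for perfectly coherent channel data and falls
    toward 1/N for incoherent data, so it can replace the
    correlation coefficient in the blockage mapping."""
    N = S.shape[0]
    num = np.abs(S.sum(axis=0)) ** 2
    den = N * (np.abs(S) ** 2).sum(axis=0)
    return num / np.maximum(den, 1e-12)

# Toy data: the same complex signal on every channel gives CF = 1.
rng = np.random.default_rng(1)
s = rng.standard_normal((100, 4)) + 1j * rng.standard_normal((100, 4))
cf = coherence_factor(np.broadcast_to(s, (8, 100, 4)))
```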
(31) In the above-described embodiments, it is assumed that continuous ultrasound acquisition is accompanied with continuous update of the onscreen display. Alternatively, the onscreen display can be updated only when the field of view 216 changes. Thus, the updating can be responsive to probe movement detectable by an integrated electromagnetic (EM) sensor, as well as to image plane movement such as rotation. An example of such an EM sensor is seen in commonly-owned U.S. Pat. No. 7,933,007 to Stanton et al., the entire disclosure of which is incorporated herein by reference.
(32) An interactive visual guidance tool for an imaging acquisition and display system, configured for user navigation with respect to a blockage of a field of view, detects, and spatially defines, the blockage. It also integrates, with the image for joint visualization, an indicium that visually represents the definition. The indicium is moved dynamically according to movement, relative to the blockage, of the field of view. The indicium can be shaped like a line segment, or two indicia can be joined in a “V” shape to frame a region of non-blockage. The defining may be based on determining whether ultrasound beams in respective directions are blocked. In some embodiments, imaging channels for receiving image data are included for deriving the image, and a metric of coherence, i.e., similarity among channel data, is computed for that data. The determination for a direction is based on the metric for locations in that direction. One application is navigating an ultrasound probe between blocking ribs to achieve a standard cardiac view.
(33) In addition to making diagnostic cardiac examination performable by nurses or other clinicians who may be untrained specifically in sonography, the interactive visual guidance tool 108 can guide novice sonographers. The tool 108 can feature, for this purpose or this mode, a regular (grayscale) sonogram, along with the visual feedback described herein above. Alternatively, the novel visual feedback of the tool 108 can speed up the work flow of trained or experienced sonographers. The ultrasound technician interactive guidance apparatus 100, which includes the tool 108, may encompass a more comprehensive interactive visual guidance system such as that disclosed in commonly-assigned patent application entitled “Anatomically Intelligent Echocardiography for Point-of-Care” to Radulescu et al.
(34) While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive; the invention is not limited to the disclosed embodiments.
(35) For example, instead of hash marks on the “V”, outwardly pointing arrows may be employed.
(36) Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims. In the claims, the word “comprising” does not exclude other elements or steps, and the indefinite article “a” or “an” does not exclude a plurality. Any reference signs in the claims should not be construed as limiting the scope.
(37) A computer program can be stored momentarily, temporarily or for a longer period of time on a suitable computer-readable medium, such as an optical storage medium or a solid-state medium. Such a medium is non-transitory only in the sense of not being a transitory, propagating signal, but includes other forms of computer-readable media such as register memory, processor cache, RAM and other volatile memory.
(38) A single processor or other unit may fulfill the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage.