Real-time image analysis for vessel detection and blood flow differentiation
20230225702 · 2023-07-20
Inventors
CPC classification
A61B8/5246
HUMAN NECESSITIES
A61B8/085
HUMAN NECESSITIES
International classification
Abstract
This invention discloses an image analysis system and method that detects blood vessels in ultrasound structural B-mode images using deep learning and identifies the blood vessel type (vein or artery) based on automatic analysis of Doppler spectrogram features. Such an automatic solution is important for successful catheter insertion under ultrasound guidance and for other procedures that require differentiation between arteries and veins or quantitative characterization of blood flow. The system contains an ultrasound scanner with implemented B-mode and PW mode, equipped with a probe, and algorithms implemented as software modules in the ultrasound scanner: 1) an algorithm for automatic vessel tracking in real time based on deep learning, and 2) algorithms for Doppler spectrogram quality assessment and parameterization using quantitative spectrogram features. The system detects and classifies scanned vessels according to blood flow into: 1) arteries, or 2) veins.
Claims
1. A method for real-time image analysis of a series of brightness mode and Doppler ultrasound images of a tissue sample, comprising: detecting vessels from the series of brightness mode images using a deep learning algorithm that is trained for vessel detection and returning location and size of a bounding box of a detected vessel; and further comprising for each detected vessel from the series of brightness mode images: parameterizing for pulse wave Doppler gate placement using location and size of the bounding box of the detected vessel; scanning the tissue sample using the Doppler gate parameterization and using the scanning data to produce a time-frequency domain Doppler spectrogram; assessing the quality of the Doppler spectrogram using a first trained machine learning classifier algorithm and repeating the parameterization and scanning if the spectrogram quality is classified as insufficient; passing the Doppler spectrogram of sufficient quality to a classification module; classifying the vessel as an artery or a vein using a second trained machine learning classifier of the classification module; and outputting the brightness mode image masked with an indication of vessel location and classification of the vessel.
2. The method of claim 1, wherein the deep learning algorithm that is trained for vessel detection is configured to process at least 50 brightness mode image frames per second.
3. The method of claim 2, wherein assessing the quality of the time-frequency domain Doppler spectrogram using a first trained machine learning classifier comprises: classifying each pixel as either blood flow related data or noise based on pixel intensity; minimizing an intra-class variance by evaluating a weighted sum of variances of the two classes; extracting pixel intensities; evaluating a first and second parameters for the proportion of blood flow related pixels in comparison to background, wherein the first parameter is a ratio between the blood flow related pixels and the total number of pixels in the spectrogram, and the second parameter is a ratio between a sum of the blood flow related pixel intensities and a sum of all pixel intensities of the spectrogram; combining the first and second parameters into a feature vector; and classifying the spectrogram as either of sufficient quality or of insufficient quality by evaluating the feature vector in a trained machine learning algorithm.
4. The method of claim 3, wherein classifying the vessel comprises: parameterizing the Doppler spectrogram, wherein parameterizing comprises evaluation of statistical quantities: mean velocity from the time-frequency domain Doppler spectrogram, skewness of the mean velocity versus time curve, maximum peak of a windowed half of an autocorrelation function, skewness of the half of the autocorrelation function, and the Hjorth parameter of signal complexity; and combination of the statistical quantities into a feature function; and classifying the vessel as an artery or a vein by evaluating the feature function in the second trained machine learning classifier.
5. The method of claim 3, wherein classifying the vessel comprises evaluating the time-frequency domain Doppler spectrogram in the second trained machine learning classifier, wherein the second trained machine learning classifier is a trained convolutional neural network.
6. The method of claim 3, further comprising displaying the Doppler spectrogram.
7. A system for real-time image analysis of a series of brightness mode and Doppler ultrasound images of a tissue sample comprising an ultrasound probe and an ultrasound scanner, wherein the ultrasound scanner comprises a display monitor and one or more computer processors configured to execute one or more computer program products, the computer program products being tangibly embodied on a non-transitory computer-readable medium and comprising executable code for: receiving a series of ultrasound signals at the one or more computer processors; producing a series of brightness mode images from the series of ultrasound signals; detecting vessels from the series of brightness mode images using a deep learning algorithm that is trained for vessel detection and returning location and size of a bounding box of a detected vessel; and further comprising for each detected vessel from the series of brightness mode images: parameterizing for pulse wave Doppler gate placement using location and size of the bounding box of the detected vessel; scanning the tissue sample using the Doppler gate parameterization and using the scanning data to produce a time-frequency domain Doppler spectrogram; assessing the quality of the Doppler spectrogram using a first trained machine learning classifier algorithm and repeating the parameterization and scanning if the spectrogram quality is classified as insufficient; passing the Doppler spectrogram of sufficient quality to a classification module; classifying the vessel as an artery or a vein using a second trained machine learning classifier of the classification module; and outputting the brightness mode image masked with an indication of vessel location and classification of the vessel.
8. The system of claim 7, wherein the one or more computer processors are embedded in the ultrasound scanner and/or in a personal computer that is connected to the ultrasound scanner.
9. The system of claim 7, wherein the deep learning algorithm that is trained for vessel detection is configured to process at least 50 brightness mode image frames per second.
10. The system of claim 7, wherein assessing the quality of the time-frequency domain Doppler spectrogram using a first trained machine learning classifier comprises: classifying each pixel as either blood flow related data or noise based on pixel intensity; minimizing an intra-class variance by evaluating a weighted sum of variances of the two classes; extracting pixel intensities; evaluating two parameters for the proportion of blood flow related pixels in comparison to background, wherein a first parameter is a ratio between the blood flow related pixels and the total number of pixels in the spectrogram, and a second parameter is a ratio between a sum of the blood flow related pixel intensities and a sum of all pixel intensities of the spectrogram; combining the first and second parameters into a feature vector; and classifying the spectrogram as either of sufficient quality or of insufficient quality by evaluating the feature vector in a trained machine learning algorithm.
11. The system of claim 10, wherein classifying the vessel comprises: parameterizing the Doppler spectrogram, wherein parameterizing comprises evaluation of statistical quantities: mean velocity from the time-frequency domain Doppler spectrogram, skewness of the mean velocity versus time curve, maximum peak of a windowed half of an autocorrelation function, skewness of the half of autocorrelation function, and the Hjorth parameter of signal complexity; and combination of the statistical quantities into a feature function; and classifying the vessel as an artery or a vein by evaluating the feature function in the second trained machine learning classifier.
12. The system of claim 10, wherein classifying the vessel comprises evaluating the time-frequency domain Doppler spectrogram in the second trained machine learning classifier, wherein the second trained machine learning classifier is a trained convolutional neural network.
13. The system of claim 7, further comprising displaying the Doppler spectrogram.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0010] The invention can be best understood by referring to the drawings, which depict preferred embodiments of the present invention.
[0017] The presented figures are for illustration, and the scale, proportions, and other aspects do not necessarily correspond to the actual technical solution.
DETAILED DESCRIPTION OF THE INVENTION
[0018] The present invention is best described by its preferred embodiments, which are exemplified by the figures. According to the schematic block diagram of
[0019] In a preferred embodiment, the image analysis system 100, as described above, is configured to execute the following procedure: an operator slowly moves the ultrasound probe 102 coupled to the tissue surface 108 and obtains one or more B-mode images, which represent a vessel or a few vessels that could be an artery 106 or a vein 104. The scanning plane could be a transverse view (as illustrated in
[0020] The vessel classification module 122 could be implemented in two ways: by calculating a set of statistical quantities and inputting said statistical quantities into a trained ML algorithm, or by passing the obtained spectrogram as an image into a trained convolutional neural network. Statistical quantities such as the periodicity parameter of the envelope of the spectrogram are used in the first embodiment. The module formulates a classification of artery or vein, the result of which is overlaid on the B-mode image and displayed on the monitor.
[0022] The trained classifier 216 could be implemented in two ways: by calculating statistical quantities and inputting the statistical quantities into a trained machine learning algorithm, or by inputting the spectrogram image into a trained deep learning algorithm. The details of the preferred embodiments for classification are described in more detail below. If the output of the trained classifier 216 exceeds a predefined threshold 218, which was obtained through the training procedure, the vessel is classified as an artery; otherwise it is classified as a vein. The steps of classification 208-218 are repeated for each detected vessel. The procedure is concluded 220 when all the detected vessels are analysed and classified, and the procedure can be repeated for a new set of B-mode images.
[0024] The vessel detection DL module 114 is dedicated for use with structural B-mode images. The module utilizes deep learning principles, and the preferred embodiments use trained convolutional neural networks. In a preferred embodiment, deep learning networks with fast architectures are used, such as YOLO, Fast R-CNN, Faster R-CNN, or other comparatively fast architectures, so that the computation time for vessel detection is commensurate with the real-time brightness mode image processing frame rate of at least 50 frames/second.
[0025] A sketch of the preferred convolutional network architecture with base components is shown in
Z = ω^T · X + b,
where Z is the output of the convolutional layer, X is the input image matrix of size I×J×K, I is the number of image columns, J is the number of image rows, K is the number of channels, ω is the matrix of weighting coefficients, which could also be called an L×L convolutional filter kernel, L is the size of the filter, and b is the bias coefficient vector, which is also obtained in the training phase. Multi-dimensional convolution is followed by the ReLU function. The ReLU function changes negative values of the convolutional layer output to zeros and could be expressed as follows:
ReLU(Z) = max(0, Z).
[0026] A max pooling layer 306 is dedicated for down-sampling of the detection features in the feature maps. It is realized by a maximum detector and selector within a window of predefined size in the feature maps. The convolution and ReLU function layer 308 is then repeated, followed by a max pooling layer, and so on until a prediction is made 312.
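As an illustration of these building blocks, the convolution (Z = ω^T · X + b), ReLU, and max pooling stages can be sketched in Python with NumPy. This is a minimal single-channel sketch with a placeholder 3×3 kernel, not the trained network's actual filters or architecture:

```python
import numpy as np

def conv2d(X, w, b):
    """Valid 2-D convolution of image X with an L x L kernel w plus bias b."""
    L = w.shape[0]
    I, J = X.shape
    Z = np.empty((I - L + 1, J - L + 1))
    for i in range(Z.shape[0]):
        for j in range(Z.shape[1]):
            Z[i, j] = np.sum(w * X[i:i + L, j:j + L]) + b
    return Z

def relu(Z):
    """Set negative convolution outputs to zero: ReLU(Z) = max(0, Z)."""
    return np.maximum(0, Z)

def max_pool(Z, size=2):
    """Down-sample feature maps by taking the maximum in each window."""
    I, J = Z.shape[0] // size, Z.shape[1] // size
    return Z[:I * size, :J * size].reshape(I, size, J, size).max(axis=(1, 3))

X = np.random.rand(8, 8)       # stand-in for a B-mode image patch
w = np.ones((3, 3)) / 9.0      # placeholder filter kernel (learned in training)
feat = max_pool(relu(conv2d(X, w, b=0.0)))
print(feat.shape)              # (3, 3)
```

In the actual network these stages are stacked many times and the kernels are obtained in the training phase.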
[0027] Training of the vessel detection deep learning network can be completed either offline or online using training data in a server-based database. For the offline training scheme, a representative database of B-mode image sequences and annotations must be collected. The annotations are the bounding boxes of the vessels detected and outlined manually in B-mode images by an expert who visually evaluates the images. Optionally, spectral Doppler could be used to verify that a detected structure is a vessel. The collection of images and annotations is passed to the training procedure. Optimal weighting coefficients are obtained by using the stochastic gradient descent (SGD) method or other techniques such as the Adam optimization algorithm. In the case of SGD, the neural network weighting coefficients are updated by the following formula:
ω(i+1) = ω(i) − η · (1/n) · Σ_{j=1}^{n} ∂L_j/∂ω(i),
where η is the learning rate, i is the training iteration number, L is the loss function, and n is the number of observations.
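The SGD update can be sketched on a toy problem. The sketch below fits a linear model with a mean squared error loss as a stand-in; the detection network's actual loss and gradients are far more complex:

```python
import numpy as np

def sgd_step(w, X, y, eta=0.1):
    """One SGD update of the weighting coefficients w, averaging the
    loss gradient over n observations (mean squared error as a stand-in loss)."""
    n = len(y)
    grad = (2.0 / n) * X.T @ (X @ w - y)   # dL/dw averaged over the batch
    return w - eta * grad

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 3))               # 32 observations, 3 features
true_w = np.array([1.0, -2.0, 0.5])
y = X @ true_w
w = np.zeros(3)
for i in range(200):                        # training iterations
    w = sgd_step(w, X, y)
print(np.round(w, 3))
```

After the iterations the recovered coefficients approach the true ones, illustrating how repeated gradient updates drive the weights toward a loss minimum.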
[0028] The online training option requires a server-based database, which is connected to the PC controlling the ultrasound machine (hereafter referred to as a workplace) to obtain new images and annotations performed by an expert; a picture archiving and communication system can serve this purpose. The weighting coefficients of the neural network in such cases are updated with each new example received from the workplace. Continual training of the neural network produces a more reliable outcome.
[0029] The Doppler beam control 116 is dedicated for automatic adjustment of sample volume (Doppler gates) position and size, which are calculated based on the predictions of the vessel detection DL module 114.
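A minimal sketch of how gate parameterization might be derived from a detected bounding box follows. The function name and the gate_fraction heuristic are illustrative assumptions, not the patented implementation:

```python
def doppler_gate_from_bbox(x, y, w, h, gate_fraction=0.5):
    """Place the pulse-wave Doppler sample volume (gate) at the centre
    of the detected vessel's bounding box; gate_fraction is an assumed
    heuristic that keeps the gate inside the vessel lumen."""
    cx, cy = x + w / 2.0, y + h / 2.0      # gate centre = bounding-box centre
    gate_size = gate_fraction * min(w, h)  # scale by the smaller box side
    return cx, cy, gate_size

cx, cy, size = doppler_gate_from_bbox(40, 60, 20, 12)
print(cx, cy, size)   # 50.0 66.0 6.0
```

The point is only that both the gate position and its size follow directly from the location and size returned by the detection module.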
[0030] The spectrogram quality evaluation module 120 assesses whether the spectrogram is of sufficient quality for quantitative analysis. Venous flow is sometimes very weak and cannot be detected by spectral Doppler ultrasound, especially if the imaging and Doppler scanning are performed in the transverse plane. The obtained spectrogram could be classified into two classes according to pixel intensities: blood flow information and background noise. For this purpose, the spectrogram 502 is binarized 504 to obtain a mask for extraction of blood flow related information. The procedure of binarization is illustrated by 500 in
The optimal threshold is found by minimizing the intra-class variance, evaluated as a weighted sum of the variances of the two classes:
σ_w²(t) = w₀(t)·σ₀²(t) + w₁(t)·σ₁²(t),
where w₀ and w₁ are the probabilities of the two classes separated by a threshold t, and σ₀² and σ₁² are the variances of the classes.
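The threshold search can be sketched as an exhaustive minimization of the weighted within-class variance, in the style of Otsu's method, on a toy spectrogram; the module's actual implementation details are not specified here:

```python
import numpy as np

def binarize_spectrogram(S):
    """Search for the threshold t minimizing the weighted sum of
    within-class variances w0(t)*var0(t) + w1(t)*var1(t), then
    return the blood-flow (foreground) mask."""
    levels = np.unique(S)
    best_t, best_var = levels[0], np.inf
    for t in levels[:-1]:
        c0, c1 = S[S <= t], S[S > t]
        w0, w1 = c0.size / S.size, c1.size / S.size
        var_w = w0 * c0.var() + w1 * c1.var()
        if var_w < best_var:
            best_t, best_var = t, var_w
    return S > best_t

# toy spectrogram: dim background-noise rows plus a bright flow band
S = np.vstack([np.full((2, 8), 10.0), np.full((2, 8), 200.0)])
mask = binarize_spectrogram(S)
print(int(mask.sum()))   # 16 foreground pixels
```

The resulting binary mask separates blood flow related pixels from background noise for the parameter extraction described next.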
[0031] In the next stage, the parameters of blood flow related pixel intensities are extracted. Two parameters of the proportion of blood flow related pixels in comparison to the background are used: the ratio between the detected foreground pixels and the total number of pixels in the spectrogram, and the ratio between the sum of the intensities in the foreground and the sum of all intensities of the spectrogram. Finally, the parameters are combined into a vector and used for spectrogram classification into two classes: 1) sufficient quality and 2) insufficient quality. The optimal weights for the parameters are obtained through a training procedure. The parameters could be combined by using a linear technique:
Y = w · X,
where Y is the output of the linear classifier and w is a vector of weighting coefficients for the parameter vector X, or by using non-linear classifiers such as support vector machines. The output of the classifier is compared to threshold values obtained through a training procedure. If the spectrogram is classified as of sufficient quality, it passes to the vessel classification module 122. A spectrogram of insufficient quality cannot be used because the blood flow of veins is relatively weak and could be misidentified in a spectrogram of insufficient quality.
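The two proportion parameters and their linear combination Y = w · X can be sketched as follows; the weights and threshold below are illustrative placeholders, not values obtained from the actual training procedure:

```python
import numpy as np

def quality_features(S, mask):
    """The two proportion parameters:
    P1 = foreground pixels / total pixels,
    P2 = sum of foreground intensities / sum of all intensities."""
    p1 = mask.sum() / mask.size
    p2 = S[mask].sum() / S.sum()
    return np.array([p1, p2])

def is_sufficient_quality(S, mask, w=(0.5, 0.5), threshold=0.3):
    """Linear combination Y = w . X compared with a threshold;
    weights and threshold are placeholder assumptions."""
    return float(np.dot(w, quality_features(S, mask))) > threshold

S = np.array([[10.0, 200.0], [10.0, 200.0]])   # toy spectrogram
mask = S > 100                                  # assumed foreground mask
print(is_sufficient_quality(S, mask))           # True
```

A non-linear classifier such as a support vector machine would simply replace the dot product with a trained decision function over the same feature vector.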
[0032] The vessel classification module 122 can be implemented by the following two embodiments. In the first embodiment, the mean velocity is first calculated from the spectrogram, and the spectrogram is then parametrized. Next, four statistical quantities are calculated for blood flow characterization and classification: [0033] 1. Skewness of the mean velocity vs. time dependence curve. In the case of arteries, the mean velocities are more positively skewed due to the presence of higher mean velocities in the distribution, while in the case of veins the mean velocities are distributed more symmetrically. [0034] 2. Presence of periodicity. The presence of periodicity is evaluated by calculating half of the windowed autocorrelation function of the extracted mean velocity curve. The function is calculated as follows:
R(γ) = (1/N) · Σ_{n=0}^{N−1−γ} x(n)·x(n+γ),
where γ is the delay, 0 ≤ γ ≤ N, N is the number of samples of the mean velocity curve, and x is the mean velocity value at a certain time instance. The obtained function is multiplied by a triangular window function in order to suppress the peak at zero delay and to enhance peaks arising due to heartbeat related pulsatility:
w(n) = 1 − |2n − N|/N,
where 0 ≤ n ≤ N, N is the number of samples in the autocorrelation function. Finally, the presence of periodicity is evaluated by finding the maximum peak of the windowed autocorrelation function. The value of the peak serves as a statistical quantity for spectrogram characterization. A higher peak value indicates that there is a periodic pattern in the mean velocity curve, which is characteristic of arteries. [0035] 3. Skewness of the windowed half of the autocorrelation function. In the case of arteries, the distribution of the function is positively skewed due to the presence of peaks, which represent the periodicity; meanwhile in the case of veins, the autocorrelation function is more symmetric. [0036] 4. Hjorth parameter of signal complexity:
C = M(x′)/M(x),
where M is the mobility parameter and x is the mean velocity vs. time curve (x′ denotes its first derivative). Mobility is then calculated as follows:
M(x) = √(var(x′)/var(x)),
where var is the statistical variance. For the mean velocity in veins, the signal complexity is lower, while in the case of arteries the mean velocity curve shape closely resembles a sine wave.
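The statistical quantities above can be sketched in Python on a toy pulsatile mean-velocity curve. The exact triangular window shape used in the described method is an assumption here, and the sine-wave signal is only a stand-in for an arterial mean velocity curve:

```python
import numpy as np

def skewness(x):
    """Sample skewness of a curve's value distribution."""
    d = x - x.mean()
    return (d**3).mean() / (d**2).mean()**1.5

def windowed_autocorr(x):
    """Half of the autocorrelation of the mean-velocity curve,
    multiplied by a triangular window that suppresses the zero-delay
    peak (the exact window shape is an assumption)."""
    N = len(x)
    r = np.array([np.sum(x[:N - g] * x[g:]) / N for g in range(N)])
    w = 1.0 - np.abs((2.0 * np.arange(N) - N) / N)  # w(0) = 0 kills zero-delay peak
    return r * w

def hjorth_complexity(x):
    """Hjorth complexity C = M(x') / M(x), with mobility
    M(x) = sqrt(var(x') / var(x)) and x' the first difference."""
    def mobility(s):
        return np.sqrt(np.var(np.diff(s)) / np.var(s))
    return mobility(np.diff(x)) / mobility(x)

# toy "arterial" mean-velocity curve: pulsatile, sine-like
t = np.linspace(0, 4 * np.pi, 200)
arterial = np.sin(t)
features = [skewness(arterial),
            windowed_autocorr(arterial).max(),
            hjorth_complexity(arterial)]
print(features)
```

For this periodic curve the windowed autocorrelation exhibits a clear positive peak and the complexity is close to that of a pure sine wave, consistent with the arterial behaviour described above.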
[0037] The statistical quantities are combined into a feature function and passed to the machine learning-based classifier, which determines whether the scanned vessel belongs to the artery class or to the vein class. The trained classification algorithm could be a machine learning technique such as linear regression, a non-linear classifier, a support vector machine classifier, or others.
[0038] In the second embodiment, the vessel classification module 122 is implemented by using deep learning principles, which evaluate the spectrogram directly using the principles of image recognition in a trained deep learning algorithm, rather than evaluating the statistical quantities separately and combining each statistical quantity into a simplified feature function. In such a case, the spectrogram is passed into a trained convolutional neural network as an image, and the network classifies the detected vessel as an artery or a vein. The architecture of the convolutional neural network for vessel classification must be fast, and the number of layers should not exceed 30. The feature extraction layers, including convolutional+ReLU and max pooling, are of a similar structure as shown in
[0039] Finally, the results of the method are represented on the display monitor 124, typically on a PC or laptop monitor, via a graphical user interface 600 (