Volume acquisition method for object in ultrasonic image and related ultrasonic system
11690599 · 2023-07-04
Assignee
Inventors
CPC classification
A61B8/463
HUMAN NECESSITIES
A61B8/5223
HUMAN NECESSITIES
A61B5/055
HUMAN NECESSITIES
A61B6/5205
HUMAN NECESSITIES
A61B6/5217
HUMAN NECESSITIES
A61B8/483
HUMAN NECESSITIES
A61B8/5207
HUMAN NECESSITIES
International classification
A61B8/00
HUMAN NECESSITIES
Abstract
An object volume acquisition method for an ultrasonic image, applied to a probe of an ultrasonic system, is disclosed. The volume acquisition method of the object in the ultrasonic image includes collecting, by the probe, a plurality of two-dimensional ultrasonic images; obtaining the plurality of two-dimensional ultrasonic images and an offset angle, a rotation axis and a frequency of the probe corresponding to the plurality of two-dimensional ultrasonic images; segmenting a first image including an ultrasonic image object from each two-dimensional ultrasonic image of the plurality of two-dimensional ultrasonic images based on a deep learning structure; determining a contour of the ultrasonic image object; reconstructing a three-dimensional model corresponding to the ultrasonic image object according to the contour of the ultrasonic image object corresponding to the each two-dimensional ultrasonic image; and calculating a volume of the ultrasonic image object according to the three-dimensional model corresponding to the ultrasonic image object.
Claims
1. An object volume acquisition method of an ultrasonic image, for a probe of an ultrasonic system, wherein the volume acquisition method of the object in the ultrasonic image comprising: collecting, by the probe, a plurality of two-dimensional ultrasonic images; obtaining the plurality of two-dimensional ultrasonic images, an offset angle, a rotation axis and a frequency of the probe corresponding to the plurality of two-dimensional ultrasonic images; segmenting a first image including an ultrasonic image object from each two-dimensional ultrasonic image of the plurality of two-dimensional ultrasonic images based on a deep learning structure; determining a contour of the ultrasonic image object according to the first image corresponding to the each two-dimensional ultrasonic image; reconstructing a three-dimensional model corresponding to the ultrasonic image object according to the contour of the ultrasonic image object corresponding to the each two-dimensional ultrasonic image; and calculating a volume of the ultrasonic image object according to the three-dimensional model corresponding to the ultrasonic image object; wherein the deep learning structure is a U-Net network structure for determining a preliminary contour and a location of the ultrasonic image object of the first image according to the offset angle, the rotation axis and the frequency of the probe corresponding to the plurality of two-dimensional ultrasonic images; wherein the step of determining the contour of the ultrasonic image object according to the first image corresponding to the each two-dimensional ultrasonic image includes: setting an edge threshold; detecting an edge of the first image; and generating a circle inside the preliminary contour of the ultrasonic image object based on the preliminary contour and the location of the ultrasonic image object of the first image, and expanding the circle outwardly till the circle reaches the edge threshold of the first image.
2. The object volume acquisition method in the ultrasonic image of claim 1, wherein the first image is performed with histogram equalization before setting the edge threshold of the first image.
3. The object volume acquisition method in the ultrasonic image of claim 1, wherein the step of determining the contour of the ultrasonic image object according to the first image corresponding to the each two-dimensional ultrasonic image includes: performing histogram equalization on the first image; determining a binary threshold of the first image; and determining the contour of the ultrasonic image object according to the binary threshold, the preliminary contour and the location of the ultrasonic image object of the first image.
4. The object volume acquisition method in the ultrasonic image of claim 1, wherein the step of reconstructing the three-dimensional model corresponding to the ultrasonic image object according to the contour of the ultrasonic image object corresponding to the each two-dimensional ultrasonic image includes: combining the plurality of ultrasonic image objects as a three-dimensional image via a scanning method according to the plurality of two-dimensional ultrasonic images and the offset angle, the rotation axis and the frequency corresponding to the plurality of two-dimensional ultrasonic images; establishing a three-dimensional slice model based on the three-dimensional image; and establishing the three-dimensional model of the ultrasonic image object via a three-dimensional internal interpolation method based on the three-dimensional slice model.
5. The object volume acquisition method in the ultrasonic image of claim 4, wherein the step of establishing the three-dimensional model of the ultrasonic image object via the three-dimensional internal interpolation method based on the three-dimensional slice model includes: determining a maximal three-dimensional slice corresponding to the ultrasonic image object from the three-dimensional slice model; and finishing the three-dimensional model by expanding outwardly of the ultrasonic image object based on the three-dimensional slice model.
6. An ultrasonic system, for calculating a volume of ultrasonic image object, comprising: a probe, configured to collect a plurality of two-dimensional ultrasonic images; and a processor, configured to obtain the plurality of two-dimensional ultrasonic images, an offset angle, a rotation axis and a frequency of the probe corresponding to the plurality of two-dimensional ultrasonic images, segment a first image including an ultrasonic image object from each two-dimensional ultrasonic image of the plurality of two-dimensional ultrasonic images based on a deep learning structure, determine a contour of the ultrasonic image object according to the first image corresponding to the each two-dimensional ultrasonic image, reconstruct a three-dimensional model corresponding to the ultrasonic image object according to the contour of the ultrasonic image object corresponding to the each two-dimensional ultrasonic image, and calculate a volume of the ultrasonic image object according to the three-dimensional model corresponding to the ultrasonic image object; wherein the deep learning structure is a U-Net network structure for determining a preliminary contour and a location of the ultrasonic image object of the first image according to the offset angle, the rotation axis and the frequency of the probe corresponding to the plurality of two-dimensional ultrasonic images; wherein the processor is configured to set an edge threshold, detect an edge of the first image, generate a circle inside the preliminary contour of the ultrasonic image object based on the preliminary contour and the location of the ultrasonic image object of the first image, and expand the circle outwardly till the circle reaches the edge threshold of the first image.
7. The ultrasonic system of claim 6, wherein before the edge threshold of the first image is set, the processor is configured to perform histogram equalization on the first image.
8. The ultrasonic system of claim 6, wherein the processor is configured to perform histogram equalization on the first image, determine a binary threshold of the first image and determine the contour of the ultrasonic image object according to the binary threshold, the preliminary contour and the location of the ultrasonic image object of the first image.
9. The ultrasonic system of claim 6, wherein the processor is configured to combine the plurality of ultrasonic image objects as a three-dimensional image via a scanning method according to the plurality of two-dimensional ultrasonic images and the offset angle, the rotation axis and the frequency corresponding to the plurality of two-dimensional ultrasonic images, establish a three-dimensional slice model based on the three-dimensional image, and establish the three-dimensional model of the ultrasonic image object via a three-dimensional interpolation method based on the three-dimensional slice model.
10. The ultrasonic system of claim 9, wherein the processor is configured to determine a maximal three-dimensional slice corresponding to the ultrasonic image object from the three-dimensional slice model, and finish the three-dimensional model by expanding outwardly of the ultrasonic image object based on the three-dimensional slice model.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(8) Refer to
(9) In detail, please refer to
(10) Step 202: Start.
(11) Step 204: Obtain the two-dimensional ultrasonic images and the offset angle, the rotation axis and the frequency of the probe corresponding to the two-dimensional ultrasonic images.
(12) Step 206: Segment the first image including the ultrasonic image object from each two-dimensional ultrasonic image of the two-dimensional ultrasonic images based on the deep learning structure.
(13) Step 208: Determine a contour of the ultrasonic image object according to the first image corresponding to each of the two-dimensional ultrasonic images.
(14) Step 210: Reconstruct the three-dimensional model corresponding to the ultrasonic image object, according to the contour of the ultrasonic image object corresponding to each of the two-dimensional ultrasonic images.
(15) Step 212: Calculate the volume of the ultrasonic image object according to the three-dimensional model corresponding to the ultrasonic image object.
(16) Step 214: End.
(17) First, in step 204, the ultrasonic system 10 may utilize the probe 102 to collect the two-dimensional ultrasonic images and the offset angle, the rotation axis and the frequency of the probe 102 corresponding to the two-dimensional ultrasonic images to increase characteristics of the ultrasonic image object in the two-dimensional ultrasonic images. More specifically, the offset angle, the rotation axis and the frequency of the probe 102 may be utilized for establishing the three-dimensional model of the ultrasonic image object.
(18) Then, in step 206, the ultrasonic system 10 segments the first image including the ultrasonic image object from each two-dimensional ultrasonic image of the two-dimensional ultrasonic images based on the deep learning structure. In an embodiment, the ultrasonic system 10 according to an embodiment of the present invention may segment the first image including the ultrasonic image object from each of the two-dimensional ultrasonic images based on a deep learning structure for semantic segmentation in a U-Net network structure, wherein the deep learning structure for semantic segmentation classifies each pixel in a given image to obtain a target image.
(19) In detail, the ultrasonic system 10 according to an embodiment of the present invention adopts the deep learning structure and a self-learning method to detect a preliminary contour and a location of the ultrasonic image object according to the two-dimensional ultrasonic images and the probe parameters corresponding to the two-dimensional ultrasonic images, so as to position the ultrasonic image object and segment the first image including the ultrasonic image object from the two-dimensional ultrasonic images. Notably, the deep learning structure of the ultrasonic system 10 according to an embodiment of the present invention is not limited to the U-Net network structure; other structures which may detect the ultrasonic image object in the two-dimensional ultrasonic images are applicable to the present invention.
(20) Since the computational cost of the U-Net network structure for segmenting the ultrasonic image object in the two-dimensional ultrasonic images is high, the ultrasonic system 10 according to an embodiment of the present invention proportionally shrinks the deep learning structure and embeds it into a machine of the ultrasonic system 10 to perform the positioning step of the ultrasonic image object.
(21) In step 208, the ultrasonic system 10 is configured to determine the contour and the location of the ultrasonic image object based on the first image of each of the two-dimensional ultrasonic images obtained in step 206. In an embodiment, the ultrasonic system 10 is configured to determine the preliminary contour and the location of the ultrasonic image object based on a contour determination method 30. In detail, refer to
(22) Step 302: Start.
(23) Step 304: Obtain the first image of each of the two-dimensional ultrasonic images.
(24) Step 306: Perform histogram equalization on the first image of each of the two-dimensional ultrasonic images.
(25) Step 308: Determine the location and a range of the object via the deep learning structure for semantic segmentation, based on the first image of each of the two-dimensional ultrasonic images after the histogram equalization.
(26) Step 310: Determine the contour of the ultrasonic image object based on an activation function and/or a binary threshold for the first image of each of the two-dimensional ultrasonic images after the histogram equalization and the semantic segmentation.
(27) Step 312: End.
(28) In order to precisely determine the contour and the location of the ultrasonic image object in the two-dimensional ultrasonic images, the ultrasonic system 10 according to an embodiment of the present invention may determine the contour of the ultrasonic image object based on two different methods of the contour determination method 30. In step 304, the first image is obtained based on the deep learning structure, wherein the preliminary contour of the object is included in the first image. In step 306, histogram equalization is performed on the first image of each of the two-dimensional ultrasonic images to increase the contrast of the first image. In step 308, the deep learning structure for semantic segmentation is utilized to determine the location and the range of the object in the first image of each of the two-dimensional ultrasonic images after the histogram equalization. Then, in step 310, the contour of the ultrasonic image object is determined with the activation function and/or the binary threshold for the first image of each of the two-dimensional ultrasonic images after the semantic segmentation: the activation function determines the contour and the location of the ultrasonic image object by expanding the contour of the ultrasonic image object, obtained in step 210, outwardly; the binary threshold determines the contour and the location of the ultrasonic image object according to the contour of the ultrasonic image object obtained in step 210.
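The histogram equalization applied in step 306 can be sketched as follows. This is a minimal pure-Python illustration of the classic CDF-remapping formula for an 8-bit image; a deployed system would use an optimized image-processing library, and the function name and sample values here are ours.

```python
# Minimal sketch of histogram equalization on an 8-bit grayscale image,
# as applied in step 306 to raise the contrast of the first image.
def equalize_histogram(pixels, levels=256):
    """Remap grayscale values so their cumulative distribution is flat."""
    n = len(pixels)
    # Histogram of grayscale values.
    hist = [0] * levels
    for p in pixels:
        hist[p] += 1
    # Cumulative distribution function (CDF).
    cdf = []
    total = 0
    for count in hist:
        total += count
        cdf.append(total)
    cdf_min = next(c for c in cdf if c > 0)
    # Classic equalization: rescale the CDF over the full grayscale range.
    table = [round((cdf[v] - cdf_min) / (n - cdf_min) * (levels - 1))
             for v in range(levels)]
    return [table[p] for p in pixels]

# A low-contrast image clustered around mid-gray spreads across the range.
low_contrast = [100, 100, 101, 102, 102, 103]
print(equalize_histogram(low_contrast))  # → [0, 0, 64, 191, 191, 255]
```

The remapped values span the full 0 to 255 range, which is exactly the contrast increase the method relies on before thresholding.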
(29) Notably, the above contour determination method 30 may simultaneously adopt the activation function and the binary threshold to determine the contour and the location of the ultrasonic image object. Alternatively, in another embodiment, the ultrasonic system 10 according to an embodiment of the present invention may determine the contour and the location of the ultrasonic image object according to the activation function or the binary threshold, which is within the scope of the present invention.
(30) Regarding the method of determining the contour and the location of the ultrasonic image object based on the activation function, please refer to
(31) Step 402: Start.
(32) Step 404: Set an edge threshold of the first image.
(33) Step 406: Perform inverse Gaussian gradient on the first image.
(34) Step 408: Detect an edge of the first image.
(35) Step 410: Generate a circle inside the preliminary contour of the ultrasonic image object based on the preliminary contour and the location of the ultrasonic image object of the first image, and expand the circle outwardly till the circle reaches the edge threshold of the first image.
(36) Step 412: End.
(37) In step 404, the edge threshold of the first image is set as a stopping point for when the preliminary contour expands outwardly. In step 406, the inverse Gaussian gradient is performed on the first image to blur the first image. In step 408, the edge of the first image is detected. In step 410, according to the preliminary contour and the location of the ultrasonic image object in the first image, a circle inside the preliminary contour of the ultrasonic image object is generated and expanded outwardly till reaching the edge threshold of the first image, i.e., the stopping point. Therefore, the ultrasonic system 10 according to an embodiment of the present invention may determine the contour and the location of the ultrasonic image object based on the preliminary contour of the ultrasonic image object and the contour determination method 40.
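The expanding-circle idea of steps 404 to 410 can be illustrated with a toy region-growing sketch: seed a circle at the object's location and grow it until the intensity along its rim indicates the object boundary has been crossed. The synthetic image, function name, and stopping rule below are our assumptions, not the patent's implementation.

```python
# Illustrative sketch of steps 404-410: grow a circle from a seed point and
# stop when the rim leaves the bright object region (the edge threshold).
import math

def grow_circle(image, cx, cy, edge_threshold, max_radius=40):
    """Expand a circle from (cx, cy); stop when rim intensity drops below threshold."""
    h, w = len(image), len(image[0])
    for radius in range(1, max_radius):
        # Sample intensities along the rim of the candidate circle.
        rim = []
        for k in range(64):
            a = 2 * math.pi * k / 64
            x = int(round(cx + radius * math.cos(a)))
            y = int(round(cy + radius * math.sin(a)))
            if 0 <= x < w and 0 <= y < h:
                rim.append(image[y][x])
        # Stop once the mean rim intensity falls below the edge threshold,
        # i.e. the rim has crossed the object's boundary.
        if sum(rim) / len(rim) < edge_threshold:
            return radius
    return max_radius

# Synthetic 2-D frame: a bright disk of radius 10 centered at (20, 20).
frame = [[255 if (x - 20) ** 2 + (y - 20) ** 2 <= 100 else 0
          for x in range(40)] for y in range(40)]
print(grow_circle(frame, 20, 20, edge_threshold=128))  # stops near the disk radius of 10
```

A production version would grow a full deformable contour against the inverse-Gaussian-gradient image rather than a rigid circle, but the stopping criterion is analogous.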
(38) On the other hand, regarding the method of determining the contour and the location of the ultrasonic image object based on the binary threshold, please refer to
(39) Step 502: Start.
(40) Step 504: Determine the binary threshold of the first image.
(41) Step 506: Determine the contour of the ultrasonic image object according to the binary threshold, the preliminary contour and the location of the ultrasonic image object of the first image.
(42) Step 508: End.
(43) Based on the contour determination method 50, the ultrasonic system 10 determines the binary threshold of the first image in step 504; e.g., an 8-bit image has a maximal grayscale value of 255. In step 506, the first image is divided into two colors (e.g., black and white) based on the binary threshold, and the contour of the ultrasonic image object is determined according to the preliminary contour and the location of the ultrasonic image object of the first image after the histogram equalization. In an embodiment, when the ultrasonic image object is a bladder and the grayscale value of the binary threshold of the 8-bit image is 128, pixels in the first image with a grayscale value over 128 are classified as the bladder and pixels with a grayscale value lower than 128 are classified as not the bladder, such that the contour determination method 50 may distinguish the bladder (i.e. the ultrasonic image object) in the first image and compare it with the preliminary contour of the ultrasonic image object. Therefore, the ultrasonic system 10 according to an embodiment of the present invention may determine the contour and the location of the ultrasonic image object based on the preliminary contour of the ultrasonic image object and the contour determination method 50.
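The binary-threshold classification of steps 504 to 506 reduces to a per-pixel comparison. A minimal sketch, with our own function name and sample values:

```python
# Toy illustration of the binary threshold in steps 504-506: with a
# threshold of 128 on an 8-bit image, pixels above 128 are classified as
# the object (e.g. a bladder) and the rest as background.
def binarize(pixels, threshold=128):
    """Split pixels into object (1) and background (0) by grayscale value."""
    return [1 if p > threshold else 0 for p in pixels]

row = [30, 200, 210, 90, 250, 15]
mask = binarize(row)
print(mask)       # → [0, 1, 1, 0, 1, 0]
print(sum(mask))  # → 3 pixels classified as object
```

The resulting mask is then compared against the preliminary contour from the deep learning structure to settle the final object contour.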
(44) Notably, domain parameters of the above contour determination methods 40, 50 are determined via an exhaustive search, so that an optimal domain parameter of the ultrasonic image object is obtained.
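The exhaustive search over domain parameters can be sketched as a simple grid search. The scoring function (pixel-wise agreement with a reference mask) and the sample data are illustrative assumptions; the patent does not specify the objective.

```python
# Sketch of an exhaustive (grid) search over a domain parameter: try every
# candidate binary threshold and keep the one scoring best against a
# reference mask. Scoring function and data are illustrative.
def best_threshold(pixels, reference_mask):
    """Exhaustively test thresholds 0..255; maximize pixel-wise agreement."""
    def score(t):
        predicted = [1 if p > t else 0 for p in pixels]
        return sum(a == b for a, b in zip(predicted, reference_mask))
    return max(range(256), key=score)

pixels = [10, 40, 200, 220, 90, 240]
reference = [0, 0, 1, 1, 0, 1]
print(best_threshold(pixels, reference))  # → 90, the smallest perfect separator
```

Exhaustive search is tractable here because the parameter space (256 grayscale levels) is tiny; richer parameter sets would call for the same loop over a Cartesian product of candidates.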
(45) After the contour of the ultrasonic image object corresponding to each of the two-dimensional ultrasonic images is determined in step 208, in step 210 the three-dimensional model corresponding to the ultrasonic image object is reconstructed based on the determined contour of the ultrasonic image object. Regarding steps of reconstructing the three-dimensional model corresponding to the ultrasonic image object, please refer to
(46) Step 602: Start.
(47) Step 604: Combine the ultrasonic image objects as a three-dimensional image via a scanning method according to the two-dimensional ultrasonic images and the offset angle, the rotation axis and the frequency corresponding to the two-dimensional ultrasonic images.
(48) Step 606: Establish a three-dimensional slice model based on the three-dimensional image.
(49) Step 608: Establish the three-dimensional model of the ultrasonic image object via a three-dimensional internal interpolation method based on the three-dimensional slice model.
(50) Step 610: Determine a maximal three-dimensional slice corresponding to the ultrasonic image object from the three-dimensional slice model.
(51) Step 612: Finish the three-dimensional model by expanding outwardly of the ultrasonic image object based on the three-dimensional slice model.
(52) Step 614: End.
(53) In step 604, the ultrasonic system 10 combines multiple two-dimensional ultrasonic images of a sequence into the three-dimensional image via the scanning method, according to each first image including the ultrasonic image object obtained in step 206 and the rotation axis and the frequency corresponding to the probe 102. In an embodiment, the scanning method may be a sector scan or a sagittal scan, which combines consecutive ultrasonic image objects into the three-dimensional ultrasonic image. In other words, the ultrasonic system 10 may establish the three-dimensional ultrasonic image based on multiple two-dimensional ultrasonic images including the ultrasonic image objects of one sequence (e.g. 50 images), the offset angle, the rotation axis and the frequency corresponding to the probe 102, and a formula (1), wherein the formula (1) projects the Y-axis of the ultrasonic image object onto the Z-axis. The formula (1) is:
(54)
(55) In formula (1), i denotes the i-th ultrasonic image object of the sequence, 640 denotes the horizontal pixel value of the resolution, Degree_i denotes the offset angle when the probe 102 performs scanning, and objectDown_i denotes a lower section of a bottom plane of the ultrasonic image object. Notably, the horizontal resolution of the two-dimensional image of the ultrasonic system 10 is not limited to 640.
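Since the body of formula (1) is not reproduced above, the following is only a generic illustration of how a sector scan maps a frame's in-plane Y coordinate into the volume by rotating it about the probe's rotation axis through that frame's offset angle Degree_i. This is a standard fan-to-volume projection sketched under our own assumptions, not necessarily the patent's exact formula.

```python
# Generic sector-scan projection: a point measured in a tilted 2-D frame
# is rotated about the X (rotation) axis by the frame's offset angle, so
# the frame's Y axis is split between the volume's Y and Z axes.
import math

def project_point(x, y, degree_i):
    """Rotate an in-plane point (x, y) about the X axis by the offset angle."""
    theta = math.radians(degree_i)
    return (x, y * math.cos(theta), y * math.sin(theta))

# A point 100 px below the rotation axis, on a frame tilted by 30 degrees.
x, y, z = project_point(320, 100, 30)
print(round(y, 1), round(z, 1))  # → 86.6 50.0
```

At a 0-degree offset the point stays in the Y plane; as the offset grows, more of the in-plane Y coordinate projects onto Z, which is how consecutive tilted frames sweep out a volume.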
(56) In step 606, the three-dimensional slice model is established based on the three-dimensional ultrasonic image, i.e., the three-dimensional ultrasonic image is sliced into multiple three-dimensional slices. Then, in step 608, the three-dimensional model of the ultrasonic image object is established with the three-dimensional internal interpolation method based on the three-dimensional slice model, which fills in missing parts of the ultrasonic image object, such that a complete ultrasonic image object model is obtained.
(57) In an embodiment, the ultrasonic system 10 may calculate a maximal distance between two three-dimensional slices and perform the three-dimensional internal interpolation method on the two three-dimensional slices.
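A minimal sketch of the internal interpolation between two neighboring slices: the object's contour on each slice is represented here as a radius per sampled angle, and missing in-between slices are synthesized by linear blending. This per-angle radius representation is our simplification; the patent does not specify the interpolation's internals.

```python
# Sketch of the three-dimensional internal interpolation of steps 606-608:
# given the object's contour on two measured slices (a radius per angle),
# synthesize the missing in-between slices by linear interpolation.
def interpolate_slices(radii_a, radii_b, n_missing):
    """Linearly blend two per-angle radius profiles into n_missing slices."""
    slices = []
    for k in range(1, n_missing + 1):
        t = k / (n_missing + 1)  # fractional position between the two slices
        slices.append([(1 - t) * a + t * b for a, b in zip(radii_a, radii_b)])
    return slices

# Contours sampled at 4 angles on two measured slices.
near = [10.0, 12.0, 11.0, 10.0]
far = [14.0, 16.0, 15.0, 14.0]
print(interpolate_slices(near, far, 1))  # → [[12.0, 14.0, 13.0, 12.0]]
```

Interpolating between the pair of slices with the maximal gap, as the embodiment above suggests, limits how far any synthesized slice sits from measured data.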
(58) However, since the accuracy of the three-dimensional internal interpolation method may be affected by a scanning speed and a shape of the ultrasonic image object (e.g., a bladder's shape is oval under examination), in step 610, the ultrasonic system 10 determines the maximal three-dimensional slice from the three-dimensional slices, and in step 612, the three-dimensional model is finished based on the maximal three-dimensional slice by expanding the ultrasonic image object outwardly, as shown in
(59) Therefore, the three-dimensional model of the ultrasonic image object may be accurately established based on the three-dimensional modeling method 60, such that the ultrasonic system 10 may determine the volume of the ultrasonic image object based on the three-dimensional model corresponding to the ultrasonic image object determined in step 212.
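Once the three-dimensional model is reconstructed, the volume computation of step 212 can be as simple as counting occupied voxels and scaling by the physical size of one voxel. The voxel spacing below is an illustrative assumption, not a value from the patent.

```python
# Step 212 as a voxel count: sum occupied voxels in the reconstructed model
# and convert to milliliters (1 mL = 1000 mm^3). Spacing is illustrative.
def object_volume(voxels, spacing_mm=(0.5, 0.5, 0.5)):
    """Sum occupied voxels and convert to milliliters."""
    occupied = sum(v for plane in voxels for row in plane for v in row)
    voxel_mm3 = spacing_mm[0] * spacing_mm[1] * spacing_mm[2]
    return occupied * voxel_mm3 / 1000.0

# A toy 4x4x4 model in which half the voxels belong to the object.
model = [[[1 if x < 2 else 0 for x in range(4)] for _ in range(4)]
         for _ in range(4)]
print(object_volume(model))  # → 0.004 (32 voxels * 0.125 mm^3 each)
```

The voxel spacing would in practice be derived from the probe's offset angle, rotation axis, and frequency recorded in step 204, which is why those parameters are collected alongside the images.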
(60) The above embodiments illustrate that the volume acquisition method for an object in an ultrasonic image and the related ultrasonic system of the present invention may detect the ultrasonic image object via the deep learning structure and accurately and efficiently calculate the volume of the ultrasonic image object by establishing the three-dimensional model. In addition, according to different requirements, the volume acquisition method for an object in an ultrasonic image and the related ultrasonic system of the present invention may be utilized for calculating an image volume of a computed tomography (CT) system or a magnetic resonance imaging (MRI) system.
(61) In summary, embodiments of the present invention provide a volume acquisition method for an object in an ultrasonic image and a related ultrasonic system, which combine the deep learning structure with three-dimensional model establishment to accurately and efficiently calculate the volume of the ultrasonic image object and improve the accuracy of detection.
(62) Those skilled in the art will readily observe that numerous modifications and alterations of the device and method may be made while retaining the teachings of the invention. Accordingly, the above disclosure should be construed as limited only by the metes and bounds of the appended claims.