METHOD FOR AUTOMATIC SEGMENTATION OF FUZZY BOUNDARY IMAGE BASED ON ACTIVE CONTOUR AND DEEP LEARNING

20220414891 · 2022-12-29

Abstract

The present invention discloses a method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning. In the method, firstly, a fuzzy boundary image is segmented using a deep convolutional neural network model to obtain an initial segmentation result; then, a contour of a region inside the image segmented using the deep convolutional neural network model is used as an initialized contour and a contour constraint of an active contour model; and the active contour model drives, through image characteristics of a surrounding region of each contour point, the contour to move towards a target edge to derive an accurate segmentation line between a target region and other background regions. The present invention introduces an active contour model on the basis of a deep convolutional neural network model to further refine a segmentation result of a fuzzy boundary image, which has the capability of segmenting a fuzzy boundary in the image, thus further improving the accuracy of segmentation of the fuzzy boundary image.

Claims

1. A method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning, comprising the following steps: S1, segmenting a fuzzy boundary image using a deep learning model to obtain an initialized target segmentation result; and S2, fine-tuning the segmentation result of the model using an active contour model to obtain a more accurate normal boundary and fuzzy boundary segmentation result, the step specifically comprising: S2.1, initializing the active contour model using a region boundary in the initialized target segmentation result obtained in S1 to construct an initial level set; S2.2, using the level set to represent an energy function, and obtaining a partial differential equation for curve evolution through the energy function; S2.3, performing a judgment of a region in which a contour point is located; and S2.4, after determining a region in which each contour point is located, calculating a value of the partial differential equation and evolving a contour through iterations until a maximum number of iterations is reached or the contour changes slightly or does not change, and then completing the segmentation.

2. The method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning of claim 1, wherein in step S2.1, the initial level set ϕ.sub.I(x,y) of the active contour model is constructed from the segmentation result of the deep learning model, and the initial level set is defined as follows:

$$\phi_I(x,y)=\begin{cases}D(x,y), & R(x,y)=0\\[2pt] -D(x,y), & R(x,y)=1\end{cases}$$

where R(x,y)∈{0,1} is the segmentation result of the deep learning model, R(x,y)=0 indicates that a point (x,y) belongs to the target region, and R(x,y)=1 indicates that the point (x,y) belongs to a non-target region; the points at the demarcation between the target region and the non-target region form a target boundary B, and D(x,y) is the shortest distance from each point (x,y) on the image to the target boundary B.

3. The method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning of claim 1, wherein in step S2.2, the energy function comprises three parts: 1) the perimeter and area of the contour; 2) a contour local region energy; and 3) a contour constraint energy; and the whole energy function is defined as follows:

$$\begin{aligned}F={}&\mu\,\mathrm{Length}(C)+\nu\,\mathrm{Area}(\mathrm{inside}(C))\\ &+\lambda_1\Big(\sum_{p\in N(C)}\int_{ia(p)}|\mu_0(x,y)-c_{ip}|^2\,dx\,dy+\sum_{p\in B(C)}\int_{ia(p)}|\mu_0(x,y)-c_{ip}+c_1|\,dx\,dy\Big)\\ &+\lambda_2\Big(\sum_{p\in N(C)}\int_{oa(p)}|\mu_0(x,y)-c_{op}|^2\,dx\,dy+\sum_{p\in F(C)}\int_{oa(p)}|\mu_0(x,y)-c_{op}+c_2|\,dx\,dy\Big)\\ &+\frac{\lambda_3}{2}\,(C-C_0)^2\end{aligned}$$

where C denotes the current segmentation contour, C.sub.0 denotes the initialized segmentation contour, Length(C) denotes the perimeter of the contour C, Area(inside(C)) denotes the area of the region inside the contour C, μ.sub.0(x,y) is the pixel intensity of the source image I at (x,y), c.sub.1 is the pixel intensity average inside the contour C, c.sub.2 is the pixel intensity average outside the contour C, p is a point on the contour C, p∈N(C) indicates that the contour point p is in a target edge region, p∈F(C) indicates that the contour point p is in a foreground (target) region, p∈B(C) indicates that the contour point p is in a background region, ia(p) is a point that is in the surrounding of the contour point p and inside the contour C, oa(p) is a point that is in the surrounding of the contour point p and outside the contour C, c.sub.ip is the pixel intensity average of points that satisfy ia(p), c.sub.op is the pixel intensity average of points that satisfy oa(p), and the surrounding of the contour point p refers to the range of a circle with p as the center and R as the radius; the first and second terms of the energy function denote the perimeter and area of the contour, serve to keep the contour continuous and smooth, and are related only to the size and shape of the contour itself; the third and fourth terms denote the contour local region energy, serve to drive the contour towards the target boundary, and are related to the image data; the fifth term denotes the contour constraint energy and serves to limit the evolution of the current contour towards regions that greatly deviate from the initialized contour; and μ, ν, λ.sub.1, λ.sub.2, λ.sub.3 are the coefficients of the corresponding energy terms.

4. The method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning of claim 3, wherein in the energy function F, a level set method is used to denote the contour C as well as the inside and outside of the contour; in the level set method, the contour C in an image domain Ω is denoted as the zero level set, i.e., ϕ=0, which is defined as follows:

$$\begin{cases}C=\{(x,y)\in\Omega:\phi(x,y)=0\}\\ \mathrm{inside}(C)=\{(x,y)\in\Omega:\phi(x,y)>0\}\\ \mathrm{outside}(C)=\{(x,y)\in\Omega:\phi(x,y)<0\}\end{cases}$$

a Heaviside function H and a Dirac function δ.sub.0 are defined as follows:

$$H(z)=\begin{cases}1, & \text{if } z\ge 0\\ 0, & \text{if } z<0\end{cases},\qquad \delta_0(z)=\frac{d}{dz}H(z);$$

H is used to denote the inside and outside of the contour C:

$$\begin{cases}\mathrm{inside}(C):\;H(\phi(x,y))=1\\ \mathrm{outside}(C):\;H(\phi(x,y))=0\end{cases}$$

the level set ϕ, the function H, and the function δ.sub.0 are used to denote the perimeter and area of the contour:

$$\mathrm{Length}\{\phi=0\}=\int_\Omega|\nabla H(\phi(x,y))|\,dx\,dy=\int_\Omega\delta_0(\phi(x,y))\,|\nabla\phi(x,y)|\,dx\,dy;\qquad \mathrm{Area}\{\phi>0\}=\int_\Omega H(\phi(x,y))\,dx\,dy;$$

the contour constraint energy is the difference between the current contour C and the initialized contour C.sub.0; denoted using the level set ϕ, the function H, and ϕ.sub.I, it is the difference between the current level set ϕ and the initialized level set ϕ.sub.I:

$$(C-C_0)^2=\int_\Omega\big(H(\phi(x,y))-H(\phi_I(x,y))\big)^2\,dx\,dy;$$

the contour local region energy is the sum of the energies inside and outside the surroundings of all contour points; the energy of the surrounding regions of the contour is calculated by computing, separately for each contour point, the energies inside and outside the contour in a local region of the contour point and then superimposing these energies to obtain an overall energy; and after being denoted using the level set ϕ and the function H, the terms of the energy of the surrounding regions of the contour are defined as follows:

$$\sum_{p\in N(C)}\int_{ia(p)}|\mu_0(x,y)-c_{ip}|^2\,dx\,dy=\sum_{p\in N(C)}\int_{a(p)}|\mu_0(x,y)-c_{ip}|^2\,H(\phi(x,y))\,dx\,dy;$$

$$\sum_{p\in B(C)}\int_{ia(p)}|\mu_0(x,y)-c_{ip}+c_1|\,dx\,dy=\sum_{p\in B(C)}\int_{a(p)}|\mu_0(x,y)-c_{ip}+c_1|\,H(\phi(x,y))\,dx\,dy;$$

$$\sum_{p\in N(C)}\int_{oa(p)}|\mu_0(x,y)-c_{op}|^2\,dx\,dy=\sum_{p\in N(C)}\int_{a(p)}|\mu_0(x,y)-c_{op}|^2\,(1-H(\phi(x,y)))\,dx\,dy;$$

$$\sum_{p\in F(C)}\int_{oa(p)}|\mu_0(x,y)-c_{op}+c_2|\,dx\,dy=\sum_{p\in F(C)}\int_{a(p)}|\mu_0(x,y)-c_{op}+c_2|\,(1-H(\phi(x,y)))\,dx\,dy$$

where, for a point p(x.sub.p, y.sub.p) on the contour C, ϕ(p)=0; a(p) denotes a point that is in the surrounding of the contour point p, the surrounding of the contour point p referring to the range of a circle with p as the center and R as the radius; ia(p) denotes a point that is in the surrounding of the contour point p and inside the contour C, and a point a(x.sub.a, y.sub.a) satisfying ia(p) satisfies ϕ(x.sub.a, y.sub.a)>0 and √((x.sub.a−x.sub.p)²+(y.sub.a−y.sub.p)²)≤R; and oa(p) denotes a point that is in the surrounding of the contour point p and outside the contour C, and a point a(x.sub.a, y.sub.a) satisfying oa(p) satisfies ϕ(x.sub.a, y.sub.a)<0 and √((x.sub.a−x.sub.p)²+(y.sub.a−y.sub.p)²)≤R.

5. The method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning of claim 3, wherein after denoting the energy terms using the level set method, the energy function F is defined as:

$$\begin{aligned}F={}&\mu\int_\Omega\delta_0(\phi(x,y))\,|\nabla\phi(x,y)|\,dx\,dy+\nu\int_\Omega H(\phi(x,y))\,dx\,dy\\ &+\lambda_1\Big(\sum_{p\in N(C)}\int_{a(p)}|\mu_0(x,y)-c_{ip}|^2\,H(\phi(x,y))\,dx\,dy+\sum_{p\in B(C)}\int_{a(p)}|\mu_0(x,y)-c_{ip}+c_1|\,H(\phi(x,y))\,dx\,dy\Big)\\ &+\lambda_2\Big(\sum_{p\in N(C)}\int_{a(p)}|\mu_0(x,y)-c_{op}|^2\,(1-H(\phi(x,y)))\,dx\,dy+\sum_{p\in F(C)}\int_{a(p)}|\mu_0(x,y)-c_{op}+c_2|\,(1-H(\phi(x,y)))\,dx\,dy\Big)\\ &+\frac{\lambda_3}{2}\int_\Omega\big(H(\phi(x,y))-H(\phi_I(x,y))\big)^2\,dx\,dy\end{aligned}$$

where c.sub.1 is the pixel intensity average inside the contour C and c.sub.2 is the pixel intensity average outside the contour C, which satisfy, respectively, c.sub.1(ϕ)=average(μ.sub.0) in {ϕ≥0} and c.sub.2(ϕ)=average(μ.sub.0) in {ϕ<0}, and c.sub.1 and c.sub.2 are defined through the level set as follows:

$$c_1=\frac{\int_\Omega \mu_0(x,y)\,H(\phi(x,y))\,dx\,dy}{\int_\Omega H(\phi(x,y))\,dx\,dy};\qquad c_2=\frac{\int_\Omega \mu_0(x,y)\,(1-H(\phi(x,y)))\,dx\,dy}{\int_\Omega(1-H(\phi(x,y)))\,dx\,dy}$$

where c.sub.ip is the pixel intensity average of points satisfying ia(p) and c.sub.op is the pixel intensity average of points satisfying oa(p); c.sub.ip(ϕ)=average(μ.sub.0) in {a(p) and ϕ≥0} and c.sub.op(ϕ)=average(μ.sub.0) in {a(p) and ϕ<0} are defined as follows:

$$c_{ip}=\frac{\int_{a(p)}\mu_0(x,y)\,H(\phi(x,y))\,dx\,dy}{\int_{a(p)}H(\phi(x,y))\,dx\,dy};\qquad c_{op}=\frac{\int_{a(p)}\mu_0(x,y)\,(1-H(\phi(x,y)))\,dx\,dy}{\int_{a(p)}(1-H(\phi(x,y)))\,dx\,dy};$$

and a partial differential equation for curve evolution is obtained from the energy function F by means of the Euler-Lagrange variational method and a gradient descent flow as follows:

$$\frac{\partial\phi}{\partial t}=\delta_\varepsilon\Big[\mu\,\mathrm{div}\Big(\frac{\nabla\phi}{|\nabla\phi|}\Big)-\nu-\lambda_1\Big(\sum_{p\in N(C),\,(x,y)\in a(p)}(\mu_0(x,y)-c_{ip})^2-\sum_{p\in B(C),\,(x,y)\in a(p)}(\mu_0(x,y)-c_{ip}+c_1)^2\Big)+\lambda_2\Big(\sum_{p\in N(C),\,(x,y)\in a(p)}(\mu_0(x,y)-c_{op})^2-\sum_{p\in F(C),\,(x,y)\in a(p)}(\mu_0(x,y)-c_{op}+c_2)^2\Big)+\lambda_3\big(H_\varepsilon(\phi)-H_\varepsilon(\phi_I)\big)\Big]$$

where

$$H_\varepsilon(z)=\frac{1}{2}\Big(1+\frac{2}{\pi}\arctan\frac{z}{\varepsilon}\Big),\qquad \delta_\varepsilon(z)=\frac{\varepsilon}{\pi(\varepsilon^2+z^2)},$$

and (x,y)∈a(p) indicates that the point (x,y) is in the surrounding of the contour point p, the surrounding of the contour point p referring to the range of a circle with p as the center and R as the radius; in the process of curve evolution, the level set of the nth iteration is ϕ.sup.n, the level set of the (n+1)th iteration is

$$\phi^{n+1}=\phi^n+\Delta t\,\frac{\partial\phi}{\partial t},$$

and the partial derivatives in the horizontal and vertical directions of the two-dimensional image are calculated using a finite difference approach.

6. The method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning of claim 1, wherein in step S2.3, it is judged whether the contour point p is in a target edge region or a non-target edge region by the difference between the pixel intensities inside and outside the contour, and the specific method is as follows: in the fuzzy boundary image, the difference between the pixel intensity averages inside and outside the surrounding of the contour is larger in the target edge region, while this difference is smaller in the non-target edge region; when the contour point p is in the non-target edge region, the values of c.sub.ip and c.sub.op are close to each other, i.e., c.sub.ip≈c.sub.op and |c.sub.ip−c.sub.op|≤c.sub.d, c.sub.d being a threshold for judging whether c.sub.ip is close to c.sub.op; and the judgment is performed through the following steps: S2.3.1, calculating the difference d.sub.p between c.sub.ip and c.sub.op for each contour point on the contour in counterclockwise order, and constructing a closed-loop queue D in the order in which d.sub.p is obtained; S2.3.2, smoothing the closed-loop queue D using a Gaussian filter of width R; S2.3.3, searching the closed-loop queue D for a fragment ΔC that is longer than 2R and satisfies d.sub.p≤c.sub.d; and S2.3.4, if there exists a fragment satisfying step S2.3.3, all contour points in the fragment are in the non-target edge region and the other contour points are in the target edge region; and the sum of the energies inside the contour in the local regions of the contour points in the target edge region is:

$$\sum_{p\in N(C)}\int_{a(p)}|\mu_0(x,y)-c_{ip}|^2\,H(\phi(x,y))\,dx\,dy;$$

and the sum of the energies outside the contour in the local regions of the contour points in the target edge region is:

$$\sum_{p\in N(C)}\int_{a(p)}|\mu_0(x,y)-c_{op}|^2\,(1-H(\phi(x,y)))\,dx\,dy.$$

7. The method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning of claim 6, wherein in response to the contour point p being in the non-target edge region, it is further determined whether the contour point p is in a foreground region or a background region; since the characteristics of the surrounding region of a contour point are similar to those of the region in which the contour point is located, the fuzzy boundary image is divided into several subregions according to image characteristics, and it is then determined, for these subregions, whether the contour point p is in the foreground region or the background region, the specific method being as follows: S2.3.5, firstly, dividing the fuzzy boundary image into several subregions according to image characteristics, and determining the subregion O in which the contour fragment ΔC is located; S2.3.6, establishing a two-dimensional coordinate system in the image subregion O, with the coordinate position of the contour point in the middle of the contour fragment ΔC as the center point center(x.sub.0, y.sub.0) of a two-dimensional Gaussian function ƒ(x,y), with ⅙ of the maximum distance of x.sub.0 from a boundary of the subregion as the standard deviation σ.sub.x of the X-axis portion of the Gaussian function, and with ⅙ of the maximum distance of y.sub.0 from the boundary of the subregion as the standard deviation σ.sub.y of the Y-axis portion of the Gaussian function, and assigning a weight w.sub.ij to each point in the subregion using the two-dimensional Gaussian function and normalizing the weights w.sub.ij for the inside and outside of the contour, respectively, to obtain normalized weights w.sub.ij_in for the inside of the contour and normalized weights w.sub.ij_out for the outside of the contour; S2.3.7, calculating averages c.sub.o1 and c.sub.o2 for the inside and outside of the contour in the subregion O using the normalized weights w.sub.ij_in and w.sub.ij_out and the pixel intensity μ.sub.0(i,j), wherein when the point (i,j) is inside the contour in the subregion O,

$$c_{o1}=\frac{\sum w_{ij\_in}\,\mu_0(i,j)}{N},$$

N being the number of points inside the contour in the subregion O, and when the point (i,j) is outside the contour in the subregion O,

$$c_{o2}=\frac{\sum w_{ij\_out}\,\mu_0(i,j)}{M},$$

M being the number of points outside the contour in the subregion O; and S2.3.8, calculating the pixel intensity average m.sub.ΔC of the surrounding regions of all contour points in the contour fragment ΔC, and comparing the differences between m.sub.ΔC and c.sub.o1 and c.sub.o2, wherein if |m.sub.ΔC−c.sub.o1|≤|m.sub.ΔC−c.sub.o2|, the contour points in the contour fragment ΔC are in the foreground region, and otherwise they are in the background region.
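Steps S2.3.6 and S2.3.7 might look as follows in NumPy (illustrative only: the helper name, the rectangular subregion bounds, and the interpretation of the weight normalization as a standard weighted average are assumptions not fixed by the claim):

```python
import numpy as np

def subregion_means(u0, phi, center, bounds):
    """Gaussian-weighted intensity averages inside (c_o1) and outside
    (c_o2) the contour within a rectangular subregion O.

    center = (y0, x0) is the middle contour point of fragment Delta-C,
    bounds = (r0, r1, c0, c1) the subregion's row/column extents.
    sigma_x, sigma_y are 1/6 of the largest distance from the center to
    the subregion boundary, as prescribed in the patent."""
    r0, r1, c0, c1 = bounds
    rows, cols = np.mgrid[r0:r1, c0:c1]
    y0, x0 = center
    sigma_y = max(y0 - r0, r1 - 1 - y0) / 6.0
    sigma_x = max(x0 - c0, c1 - 1 - x0) / 6.0
    # Two-dimensional Gaussian weight w_ij centered on the fragment.
    w = np.exp(-0.5 * (((rows - y0) / sigma_y) ** 2
                       + ((cols - x0) / sigma_x) ** 2))
    inside = phi[r0:r1, c0:c1] > 0
    u = u0[r0:r1, c0:c1]
    # Normalize weights separately inside and outside the contour and
    # take the corresponding weighted intensity averages.
    c_o1 = np.sum(w[inside] * u[inside]) / np.sum(w[inside])
    c_o2 = np.sum(w[~inside] * u[~inside]) / np.sum(w[~inside])
    return c_o1, c_o2
```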

8. The method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning of claim 7, wherein if the contour point p is in the foreground region, the direction of evolution of the contour point p is towards the outside of the contour, and in the energy function, the correction of the direction of evolution is embodied in increasing the energy of the outside of the contour in the local region of the foreground contour point, with the increased energy being defined as:

$$\sum_{p\in F(C)}\int_{a(p)}|\mu_0(x,y)-c_{op}+c_2|\,(1-H(\phi(x,y)))\,dx\,dy;$$

and if the contour point p is in the background region, the direction of evolution of the contour point p is towards the inside of the contour, and in the energy function, the correction of the direction of evolution is embodied in increasing the energy of the inside of the contour in the local region of the background contour point, with the increased energy being defined as:

$$\sum_{p\in B(C)}\int_{a(p)}|\mu_0(x,y)-c_{ip}+c_1|\,H(\phi(x,y))\,dx\,dy.$$

9. The method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning of claim 1, wherein in step S2.4, the contour is evolved through iterations of

$$\phi^{n+1}=\phi^n+\Delta t\,\frac{\partial\phi}{\partial t}$$

until a maximum number of iterations iter is reached or the contour changes slightly or does not change, where 200≤iter≤10000; the contour change is

$$\Delta h=\sum_{i,j}\big|H(\phi^{n+1}_{i,j})-H(\phi^{n}_{i,j})\big|,$$

which indicates how much the contour has changed, and the iteration stops in response to a plurality of successive slight changes of the contour.
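The stopping test of this claim can be sketched as follows (hypothetical helpers; with the exact Heaviside, Δh reduces to a count of pixels that switched sides of the contour, and `tol`/`patience` stand in for the unspecified "slight change" and "plurality" thresholds):

```python
import numpy as np

def contour_change(phi_new, phi_old):
    """Delta_h = sum |H(phi^{n+1}) - H(phi^n)|: with the exact Heaviside
    this counts the pixels whose level-set sign changed between
    successive iterations."""
    return np.sum(np.abs((phi_new >= 0).astype(float)
                         - (phi_old >= 0).astype(float)))

def converged(history, tol=5, patience=10):
    """Stop when the contour change stayed at or below tol for `patience`
    successive iterations (the claim's plurality of slight changes)."""
    return len(history) >= patience and all(h <= tol for h in history[-patience:])
```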

Description

BRIEF DESCRIPTION OF DRAWINGS

[0044] FIG. 1 illustrates a thyroid ultrasonic image, which is an original fuzzy boundary image, in an embodiment of the present invention.

[0045] FIG. 2 illustrates a middle boundary label image of the embodiment of the present invention, where the white line indicates the schematic diagram of a thyroid region.

[0046] FIG. 3 illustrates a schematic diagram of the result of segmenting the thyroid region based on a U-Net deep convolutional neural network in the embodiment of the present invention.

[0047] FIG. 4 illustrates a schematic diagram of the result of segmenting the thyroid region based on a deep model U-Net and an active contour model in the embodiment of the present invention.

[0048] FIG. 5 illustrates a schematic diagram of a local region of a contour point p in the embodiment of the present invention.

[0049] FIG. 6 illustrates a schematic diagram of a thyroid ultrasonic transverse scanned image and the division of subregions in the embodiment of the present invention.

[0050] FIG. 7 illustrates a schematic diagram of a thyroid ultrasonic longitudinal scanned image and the division of subregions in the embodiment of the present invention.

[0051] FIG. 8 illustrates a flowchart of steps of the embodiment of the present invention.

DETAILED DESCRIPTION

[0052] The specific implementation of the present invention will be further described with reference to the drawings and embodiments below, which, however, should not be construed as a limitation on the implementation and scope of protection of the present invention. It should be noted that details which are not set forth below can be implemented by those skilled in the art with reference to the prior art.

Embodiment

[0053] A method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning, as shown in FIG. 8, includes the following steps:

[0054] In S1, for a fuzzy boundary image, such as a thyroid ultrasonic image shown in FIG. 1, a thyroid region is segmented using a trained U-Net convolutional neural network model to obtain a U-Net segmentation result image.

[0055] In S2, the segmentation result of the model is fine-tuned using an active contour model to obtain a more accurate normal boundary and fuzzy boundary segmentation result, as shown in FIG. 8, the step specifically including the following steps:

[0056] In S2.1, the active contour model is initialized using a boundary of the thyroid region in FIG. 3 to construct an initial level set ϕ.sub.I(x,y); the parameters of the active contour model are set as follows: μ=1, v=0, λ.sub.1=1, λ.sub.2=1, λ.sub.3=1, Δt=0.1, R=8, c.sub.d=8, ε=1; and the initial level set is defined as follows:

[00022] $$\phi_I(x,y)=\begin{cases}D(x,y), & R(x,y)=0\\[2pt] -D(x,y), & R(x,y)=1\end{cases}$$

[0057] where R(x,y)={0,1} is the segmentation result of the deep learning model, R(x,y)=0 indicates that a point (x,y) belongs to a target region, and R(x,y)=1 indicates that the point (x,y) belongs to a non-target region; and points at a demarcation between the target region and the non-target region form a target boundary B, and D(x,y) is the shortest distance between each point (x,y) on the image and the target boundary B.
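With the U-Net thyroid mask as R(x,y), the initial level set can be built with a Euclidean distance transform. A minimal sketch, assuming NumPy and SciPy are available (the function name is illustrative, and the pixel-grid distance is an approximation of the exact distance to the boundary B):

```python
import numpy as np
from scipy.ndimage import distance_transform_edt

def initial_level_set(R):
    """Build the initial level set phi_I from a binary segmentation result R.

    R follows the patent's convention: R == 0 marks the target region and
    R == 1 the non-target region.  D(x, y) approximates the shortest
    distance from each pixel to the target boundary B, so phi_I is
    positive inside the target and negative outside."""
    R = np.asarray(R)
    d_inside = distance_transform_edt(R == 0)   # distance to nearest non-target pixel
    d_outside = distance_transform_edt(R == 1)  # distance to nearest target pixel
    D = np.where(R == 0, d_inside, d_outside)   # D(x, y): distance to boundary B
    # phi_I = +D in the target region (R == 0), -D otherwise (R == 1).
    return np.where(R == 0, D, -D)
```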

[0058] In S2.2, the level set is used to represent an energy function, and a partial differential equation for curve evolution is obtained through the energy function, where

[0059] a total of three parts are included in the energy function: 1) the perimeter and area of the contour; 2) a contour local region energy; and 3) a contour constraint energy; and

[0060] the whole energy function is defined as follows:

[00023] $$\begin{aligned}F={}&\mu\,\mathrm{Length}(C)+\nu\,\mathrm{Area}(\mathrm{inside}(C))\\ &+\lambda_1\Big(\sum_{p\in N(C)}\int_{ia(p)}|\mu_0(x,y)-c_{ip}|^2\,dx\,dy+\sum_{p\in B(C)}\int_{ia(p)}|\mu_0(x,y)-c_{ip}+c_1|\,dx\,dy\Big)\\ &+\lambda_2\Big(\sum_{p\in N(C)}\int_{oa(p)}|\mu_0(x,y)-c_{op}|^2\,dx\,dy+\sum_{p\in F(C)}\int_{oa(p)}|\mu_0(x,y)-c_{op}+c_2|\,dx\,dy\Big)\\ &+\frac{\lambda_3}{2}\,(C-C_0)^2\end{aligned}$$

[0061] where C denotes the current segmentation contour, C.sub.0 denotes the initialized segmentation contour, Length(C) denotes the perimeter of the contour C, Area(inside(C)) denotes the area of the region inside the contour C, μ.sub.0(x,y) is the pixel intensity of the source image I at (x,y), c.sub.1 is the pixel intensity average inside the contour C, c.sub.2 is the pixel intensity average outside the contour C, p is a point on the contour C, p∈N(C) indicates that the contour point p is in a target edge region, p∈F(C) indicates that the contour point p is in a foreground (target) region, p∈B(C) indicates that the contour point p is in a background region, ia(p) is a point that is in the surrounding of the contour point p and inside the contour C, oa(p) is a point that is in the surrounding of the contour point p and outside the contour C, c.sub.ip is the pixel intensity average of points that satisfy ia(p), c.sub.op is the pixel intensity average of points that satisfy oa(p), and the surrounding of the contour point p refers to the range of a circle with p as the center and R as the radius; the first and second terms of the energy function denote the perimeter and area of the contour, serve to keep the contour continuous and smooth, and are related only to the size and shape of the contour itself; the third and fourth terms denote the contour local region energy, serve to drive the contour towards the target boundary, and are related to the image data; the fifth term denotes the contour constraint energy and serves to limit the evolution of the current contour towards regions that greatly deviate from the initialized contour; and μ, ν, λ.sub.1, λ.sub.2, λ.sub.3 are the coefficients of the corresponding energy terms.

[0062] Further, in the energy function F, a level set method is used to denote the contour C as well as the inside and outside of the contour; and in the level set method, the contour C in an image domain Ω is denoted as the zero level set, i.e., ϕ=0, which is defined as follows:

[00024] $$\begin{cases}C=\{(x,y)\in\Omega:\phi(x,y)=0\}\\ \mathrm{inside}(C)=\{(x,y)\in\Omega:\phi(x,y)>0\}\\ \mathrm{outside}(C)=\{(x,y)\in\Omega:\phi(x,y)<0\}\end{cases}$$

[0063] the zero level set, i.e., ϕ=0, is used to denote the contour C;

[0064] a Heaviside function H and a Dirac function δ.sub.0 are defined as follows:

[00025] $$H(z)=\begin{cases}1, & \text{if } z\ge 0\\ 0, & \text{if } z<0\end{cases},\qquad \delta_0(z)=\frac{d}{dz}H(z);$$

[0065] H is used to denote the inside and outside of the contour C:

[00026] $$\begin{cases}\mathrm{inside}(C):\;H(\phi(x,y))=1\\ \mathrm{outside}(C):\;H(\phi(x,y))=0\end{cases}$$

the level set ϕ, the function H and the function δ.sub.0 are used to denote the perimeter and area of the contour:

[00027] $$\mathrm{Length}\{\phi=0\}=\int_\Omega|\nabla H(\phi(x,y))|\,dx\,dy=\int_\Omega\delta_0(\phi(x,y))\,|\nabla\phi(x,y)|\,dx\,dy;\qquad \mathrm{Area}\{\phi>0\}=\int_\Omega H(\phi(x,y))\,dx\,dy;$$

[0066] the contour constraint energy is the difference between the current contour C and the initialized contour C.sub.0; denoted using the level set ϕ, the function H, and ϕ.sub.I, it is the difference between the current level set ϕ and the initialized level set ϕ.sub.I:


$$(C-C_0)^2=\int_\Omega\big(H(\phi(x,y))-H(\phi_I(x,y))\big)^2\,dx\,dy;$$

[0067] the contour local region energy is the sum of energies inside and outside the surrounding of all contour points; an energy of surrounding regions of the contour is calculated by calculating, separately for each contour point, energies inside and outside the contour in a local region of the contour point using a local calculation method and then superimposing the energies to obtain an overall energy; and after being denoted using the level set ϕ and the function H, terms in the energy of the surrounding regions of the contour are defined as follows:

[00028] $$\sum_{p\in N(C)}\int_{ia(p)}|\mu_0(x,y)-c_{ip}|^2\,dx\,dy=\sum_{p\in N(C)}\int_{a(p)}|\mu_0(x,y)-c_{ip}|^2\,H(\phi(x,y))\,dx\,dy;$$

$$\sum_{p\in B(C)}\int_{ia(p)}|\mu_0(x,y)-c_{ip}+c_1|\,dx\,dy=\sum_{p\in B(C)}\int_{a(p)}|\mu_0(x,y)-c_{ip}+c_1|\,H(\phi(x,y))\,dx\,dy;$$

$$\sum_{p\in N(C)}\int_{oa(p)}|\mu_0(x,y)-c_{op}|^2\,dx\,dy=\sum_{p\in N(C)}\int_{a(p)}|\mu_0(x,y)-c_{op}|^2\,(1-H(\phi(x,y)))\,dx\,dy;$$

$$\sum_{p\in F(C)}\int_{oa(p)}|\mu_0(x,y)-c_{op}+c_2|\,dx\,dy=\sum_{p\in F(C)}\int_{a(p)}|\mu_0(x,y)-c_{op}+c_2|\,(1-H(\phi(x,y)))\,dx\,dy$$

[0068] where, for a point p(x.sub.p, y.sub.p) on the contour C, ϕ(p)=0; a(p) denotes a point that is in the surrounding of the contour point p, the surrounding of the contour point p referring to the range of a circle with p as the center and R as the radius; ia(p) denotes a point that is in the surrounding of the contour point p and inside the contour C, and a point a(x.sub.a, y.sub.a) satisfying ia(p) satisfies ϕ(x.sub.a, y.sub.a)>0 and √((x.sub.a−x.sub.p).sup.2+(y.sub.a−y.sub.p).sup.2)≤R; and oa(p) denotes a point that is in the surrounding of the contour point p and outside the contour C, and a point a(x.sub.a, y.sub.a) satisfying oa(p) satisfies ϕ(x.sub.a, y.sub.a)<0 and √((x.sub.a−x.sub.p).sup.2+(y.sub.a−y.sub.p).sup.2)≤R.

[0069] Further, after denoting the energy terms using the level set method, the energy function F is defined as:

[00029] $$\begin{aligned}F={}&\mu\int_\Omega\delta_0(\phi(x,y))\,|\nabla\phi(x,y)|\,dx\,dy+\nu\int_\Omega H(\phi(x,y))\,dx\,dy\\ &+\lambda_1\Big(\sum_{p\in N(C)}\int_{a(p)}|\mu_0(x,y)-c_{ip}|^2\,H(\phi(x,y))\,dx\,dy+\sum_{p\in B(C)}\int_{a(p)}|\mu_0(x,y)-c_{ip}+c_1|\,H(\phi(x,y))\,dx\,dy\Big)\\ &+\lambda_2\Big(\sum_{p\in N(C)}\int_{a(p)}|\mu_0(x,y)-c_{op}|^2\,(1-H(\phi(x,y)))\,dx\,dy+\sum_{p\in F(C)}\int_{a(p)}|\mu_0(x,y)-c_{op}+c_2|\,(1-H(\phi(x,y)))\,dx\,dy\Big)\\ &+\frac{\lambda_3}{2}\int_\Omega\big(H(\phi(x,y))-H(\phi_I(x,y))\big)^2\,dx\,dy\end{aligned}$$

[0070] where c.sub.1 is the pixel intensity average inside the contour C, and c.sub.2 is the pixel intensity average outside the contour C, which satisfy, respectively: c.sub.1(ϕ)=average(μ.sub.0) in {ϕ≥0}, c.sub.2(ϕ)=average(μ.sub.0) in {ϕ<0}; and c.sub.1 and c.sub.2 are defined through the level set ϕ as follows:

[00030] $$c_1=\frac{\int_\Omega \mu_0(x,y)\,H(\phi(x,y))\,dx\,dy}{\int_\Omega H(\phi(x,y))\,dx\,dy};\qquad c_2=\frac{\int_\Omega \mu_0(x,y)\,(1-H(\phi(x,y)))\,dx\,dy}{\int_\Omega(1-H(\phi(x,y)))\,dx\,dy}$$

where c.sub.ip is a pixel intensity average of points satisfying ia(p) and c.sub.op is a pixel intensity average of points satisfying oa(p); and c.sub.ip(ϕ)=average(μ.sub.0) in {a(p) and ϕ≥0} and c.sub.op(ϕ)=average(μ.sub.0) in {a(p) and ϕ<0} are defined as follows:

[00031] $$c_{ip}=\frac{\int_{a(p)}\mu_0(x,y)\,H(\phi(x,y))\,dx\,dy}{\int_{a(p)}H(\phi(x,y))\,dx\,dy};\qquad c_{op}=\frac{\int_{a(p)}\mu_0(x,y)\,(1-H(\phi(x,y)))\,dx\,dy}{\int_{a(p)}(1-H(\phi(x,y)))\,dx\,dy};$$
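The local averages c.sub.ip and c.sub.op can be computed per contour point by restricting the regularized-Heaviside integrals to the disk a(p). A NumPy sketch (illustrative; building the disk mask over the full grid is an assumption made for clarity, a real implementation would crop to the disk's bounding box):

```python
import numpy as np

def local_means(u0, phi, p, R=8, eps=1.0):
    """Mean intensity inside (c_ip) and outside (c_op) the contour C,
    restricted to the disk a(p) of radius R around contour point p.

    u0 is the source image, phi the level set (phi > 0 inside C),
    p = (row, col) a point with phi ~ 0.  The regularized Heaviside
    H_eps plays the role of H in the level-set integrals."""
    rows, cols = np.mgrid[0:u0.shape[0], 0:u0.shape[1]]
    disk = (rows - p[0]) ** 2 + (cols - p[1]) ** 2 <= R ** 2   # a(p)
    h = 0.5 * (1.0 + (2.0 / np.pi) * np.arctan(phi / eps))     # H_eps(phi)
    w_in, w_out = h[disk], 1.0 - h[disk]
    c_ip = np.sum(w_in * u0[disk]) / np.sum(w_in)    # inside C, within a(p)
    c_op = np.sum(w_out * u0[disk]) / np.sum(w_out)  # outside C, within a(p)
    return c_ip, c_op
```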

[0071] a partial differential equation for curve evolution is obtained from the energy function F by means of a Euler-Lagrange variational method and a gradient descent flow as follows:

[00032] $$\frac{\partial\phi}{\partial t}=\delta_\varepsilon\Big[\mu\,\mathrm{div}\Big(\frac{\nabla\phi}{|\nabla\phi|}\Big)-\nu-\lambda_1\Big(\sum_{p\in N(C),\,(x,y)\in a(p)}(\mu_0(x,y)-c_{ip})^2-\sum_{p\in B(C),\,(x,y)\in a(p)}(\mu_0(x,y)-c_{ip}+c_1)^2\Big)+\lambda_2\Big(\sum_{p\in N(C),\,(x,y)\in a(p)}(\mu_0(x,y)-c_{op})^2-\sum_{p\in F(C),\,(x,y)\in a(p)}(\mu_0(x,y)-c_{op}+c_2)^2\Big)+\lambda_3\big(H_\varepsilon(\phi)-H_\varepsilon(\phi_I)\big)\Big]$$

[0072] where

[00033]
$$
H_\varepsilon(z) = \frac{1}{2}\Big( 1 + \frac{2}{\pi}\arctan\frac{z}{\varepsilon} \Big),
\qquad
\delta_\varepsilon(z) = \frac{\varepsilon}{\pi(\varepsilon^2 + z^2)},
$$

and (x,y) ∈ a(p) indicates that the point (x,y) is in the surrounding of the contour point p, the surrounding of the contour point p referring to the range of a circle with p as the center and R as the radius; and in the process of curve evolution, the level set of the nth iteration is ϕⁿ and the level set of the (n+1)th iteration is

[00034]
$$
\phi^{n+1} = \phi^n + \Delta t\,\frac{\partial \phi}{\partial t},
$$

and partial derivatives in the horizontal direction and the vertical direction of a two-dimensional image are calculated using a finite difference approach.
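The curvature term div(∇ϕ/|∇ϕ|) and the explicit update above can be discretized with central finite differences, e.g. via `numpy.gradient`. A minimal sketch follows; the small `eps` guard against division by zero is an implementation choice, not from the patent:

```python
import numpy as np

def curvature(phi, eps=1e-8):
    """div(grad(phi)/|grad(phi)|) approximated with central finite differences."""
    gy, gx = np.gradient(phi)                 # partial derivatives in y and x
    norm = np.sqrt(gx ** 2 + gy ** 2) + eps   # |grad(phi)|, guarded against 0
    div_y, _ = np.gradient(gy / norm)
    _, div_x = np.gradient(gx / norm)
    return div_x + div_y

def step(phi, dphi_dt, dt=0.1):
    """One explicit evolution step: phi^{n+1} = phi^n + dt * dphi/dt."""
    return phi + dt * dphi_dt
```

For a planar level set such as ϕ(x,y) = x, the normal field is constant and the discrete curvature is zero, which is a quick sanity check for the implementation.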

[0073] In S2.3, a judgment of the region in which a contour point is located is performed. As shown in FIG. 5, the black frame denotes the image region, the closed black curve is the contour C, the region inside the contour C is denoted Inside(C), the region outside the contour C is denoted Outside(C), the point p is a point on the contour C, ia(p) is the region in the surrounding of the contour point p and inside the contour C, oa(p) is the region in the surrounding of the contour point p and outside the contour C, and the surrounding of the contour point p refers to the range of a circle with p as the center and R as the radius, such as the circle drawn with the black dashed line in the figure.

[0074] Whether the contour point p is in a target edge region or a non-target edge region is judged by the difference between the pixel intensities inside and outside the contour. The specific method is as follows: in a fuzzy boundary image, the difference between the pixel intensity averages inside and outside the surrounding of the contour is large in a target edge region, while it is small in a non-target edge region; when the contour point p is in a non-target edge region, the values of c_ip and c_op are close to each other, i.e., c_ip ≈ c_op, |c_ip − c_op| ≤ c_d, where c_d is a threshold for judging whether c_ip is close to c_op. As shown in FIG. 8, the judgment is performed through the following steps:

[0075] in S2.3.1, the difference d_p between c_ip and c_op is calculated for each contour point on the contour in counterclockwise order, and a closed-loop queue D is constructed from the d_p values in that order;

[0076] in S2.3.2, the closed-loop queue D is smoothed using a Gaussian filter of width R;

[0077] in S2.3.3, a fragment ΔC that is longer than 2R and satisfies d_p ≤ c_d is searched for in the closed-loop queue D; and

[0078] in S2.3.4, if there exists a fragment satisfying step S2.3.3, all contour points in the fragment are in the non-target edge region and the other contour points are in the target edge region; and the sum of energies inside the contour in local regions of contour points in the target edge region is as follows:

[00035]
$$
\sum_{p \in N(C)} \int_{a(p)} |u_0(x,y) - c_{ip}|^2 \, H(\phi(x,y))\,dx\,dy;
$$

[0079] the sum of energies outside the contour in local regions of contour points in the target edge region is as follows:

[00036]
$$
\sum_{p \in N(C)} \int_{a(p)} |u_0(x,y) - c_{op}|^2 \,\big(1 - H(\phi(x,y))\big)\,dx\,dy.
$$
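Steps S2.3.1 to S2.3.4 can be sketched as follows. This Python fragment is illustrative: taking sigma = R is one reading of "a Gaussian filter of width R", and the run search ignores wrap-around at the queue seam for brevity:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def non_target_edge_mask(d_queue, R, c_d):
    """d_queue: closed-loop queue D of d_p = |c_ip - c_op| per contour point,
    in counterclockwise order. Returns True for contour points lying on a
    fragment longer than 2R that satisfies d_p <= c_d (non-target edge)."""
    # S2.3.2: smooth the closed-loop queue (mode='wrap' respects the loop)
    d = gaussian_filter1d(np.asarray(d_queue, dtype=float), sigma=R, mode="wrap")
    below = d <= c_d
    mask = np.zeros_like(below)
    # S2.3.3 / S2.3.4: keep only runs of consecutive below-threshold points
    # that are longer than 2R (wrap-around at the seam ignored in this sketch)
    i, n = 0, len(below)
    while i < n:
        if below[i]:
            j = i
            while j < n and below[j]:
                j += 1
            if j - i > 2 * R:
                mask[i:j] = True
            i = j
        else:
            i += 1
    return mask
```

Points marked True are treated as lying in the non-target edge region; all other contour points are in the target edge region.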

[0080] Further, in response to the contour point p being in the non-target edge region, it is further determined whether the contour point p is in a foreground region or a background region. Since the characteristics of the surrounding region of a contour point are similar to those of the region in which the contour point is located, the fuzzy boundary image is divided into several subregions according to image characteristics, and it is then determined, for these subregions, whether the contour point p is in the foreground region or the background region. In this embodiment, thyroid ultrasonic images are used as test images. The thyroid ultrasonic images are classified into a transverse scanned image and a longitudinal scanned image, as shown in FIGS. 6 and 7. The left and right segmentation lines in FIG. 6 separate the bronchial and carotid regions, and the upper and lower segmentation lines reduce the effect of acoustic attenuation, where the pixel intensity in part of the thyroid ultrasonic image decreases with depth and the upper part is generally brighter than the lower part, while dividing off the muscle region. The upper and lower segmentation lines in FIG. 7 similarly reduce the effect of acoustic attenuation while dividing off the muscle region. For the subregions A, B, C, and D, it is determined whether the contour point p is in the foreground region or the background region, and the specific steps are as follows:

[0081] in S2.3.5, firstly, the fuzzy boundary image is divided into several subregions according to image characteristics, and the subregion O ∈ {A, B, C, D} in which the contour fragment ΔC is located is determined;

[0082] in S2.3.6, a two-dimensional coordinate system is established in the image subregion O with the coordinate position of the contour point in the middle of the contour fragment ΔC as the center point center(x_0, y_0), thus obtaining a two-dimensional Gaussian function

[00037]
$$
f(x,y) = \exp\!\left( -\left( \frac{(x - x_0)^2}{2\sigma_x^2} + \frac{(y - y_0)^2}{2\sigma_y^2} \right) \right),
$$

with ⅙ of the maximum distance of x_0 from a boundary of the subregion as the standard deviation σ_x of the X-axis portion of the Gaussian function, and with ⅙ of the maximum distance of y_0 from the boundary of the subregion as the standard deviation σ_y of the Y-axis portion of the Gaussian function; a weight W_ij is assigned to each point in the subregion using the two-dimensional Gaussian function, and the weights W_ij for the inside and outside of the contour are normalized, respectively, to obtain normalized weights w_ij_in for the inside of the contour and normalized weights w_ij_out for the outside of the contour;

[0083] in S2.3.7, averages c_o1 and c_o2 for the inside and outside of the contour in the subregion O are calculated using the normalized weights w_ij_in and w_ij_out and the pixel intensity u_0(i,j), where when the point (i,j) is inside the contour in the subregion O,

[00038]
$$
c_{o1} = \frac{\sum w_{ij\_in}\, u_0(i,j)}{N},
$$

N being the number of points inside the contour in the subregion O; and when the point (i,j) is outside the contour in the subregion O,

[00039]
$$
c_{o2} = \frac{\sum w_{ij\_out}\, u_0(i,j)}{M},
$$

M being the number of points outside the contour in the subregion O; and

[0084] in S2.3.8, the pixel intensity average m_ΔC of the surrounding regions of all contour points in the contour fragment ΔC is calculated, and the differences between m_ΔC and each of c_o1 and c_o2 are compared: if |m_ΔC − c_o1| ≤ |m_ΔC − c_o2|, the contour points in the contour fragment ΔC are in the foreground region; otherwise the contour points are in the background region; and

[0085] if the contour point p is in the foreground region, the direction of evolution of the contour point p is towards the outside of the contour, wherein in the energy function, the correction of the direction of evolution is embodied in increasing the energy of the outside of the contour in a local region of the foreground contour point, with the increased energy being defined as:

[00040]
$$
\sum_{p \in F(C)} \int_{a(p)} |u_0(x,y) - c_{op} + c_2|^2 \,\big(1 - H(\phi(x,y))\big)\,dx\,dy;
$$

[0086] if the contour point p is in the background region, the direction of evolution of the contour point p is towards the inside of the contour, wherein in the energy function, the correction of the direction of evolution is embodied in increasing the energy of the inside of the contour in a local region of the background contour point, with the increased energy being defined as:

[00041]
$$
\sum_{p \in B(C)} \int_{a(p)} |u_0(x,y) - c_{ip} + c_1|^2 \, H(\phi(x,y))\,dx\,dy.
$$
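Steps S2.3.6 to S2.3.8 can be sketched as follows. In this illustrative Python fragment the normalization in S2.3.6/S2.3.7 is read as making the inside and outside weights each sum to one, so the weighted sums directly give the Gaussian-weighted averages c_o1 and c_o2; all names and the synthetic inputs are assumptions, not from the patent:

```python
import numpy as np

def classify_fragment(u0, inside, center, m_dc):
    """Decide whether a contour fragment whose surrounding-region intensity
    average is m_dc lies in the foreground or background of subregion O.
    u0: pixel intensities of subregion O; inside: boolean mask, True inside
    the contour C; center: (y0, x0) of the middle point of the fragment."""
    h, w = u0.shape
    y0, x0 = center
    # 1/6 of the maximum distance to the subregion boundary as the std dev
    sx = max(x0, w - 1 - x0) / 6.0
    sy = max(y0, h - 1 - y0) / 6.0
    yy, xx = np.mgrid[:h, :w]
    wgt = np.exp(-(((xx - x0) ** 2) / (2 * sx ** 2)
                   + ((yy - y0) ** 2) / (2 * sy ** 2)))
    # normalize weights separately inside and outside the contour, then
    # form Gaussian-weighted intensity averages c_o1 / c_o2
    w_in = wgt * inside
    w_out = wgt * (~inside)
    c_o1 = (w_in * u0).sum() / w_in.sum()
    c_o2 = (w_out * u0).sum() / w_out.sum()
    return "foreground" if abs(m_dc - c_o1) <= abs(m_dc - c_o2) else "background"
```

A fragment whose surrounding average is close to the bright (inside) statistics is thus classified as foreground and evolved outward; one close to the background statistics is evolved inward.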

[0087] In S2.4, after determining a region in which each contour point is located, a value of the partial differential equation is calculated, and a contour is evolved through iterations of

[00042]
$$
\phi^{n+1} = \phi^n + \Delta t\,\frac{\partial \phi}{\partial t}
$$

until a maximum number of iterations (iter = 1000) is reached or the contour changes only slightly or does not change, and then the segmentation is completed, where the contour change is

[00043]
$$
\Delta h = \sum_{i,j} \big| H(\phi^{n+1}_{i,j}) - H(\phi^{n}_{i,j}) \big|,
$$

which measures how much the contour has changed between iterations; the iteration stops in response to a plurality of successive slight changes of the contour.
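The iteration and stopping rule in S2.4 can be sketched as follows; `tol` and `patience` are illustrative thresholds for "slight change" and "a plurality of successive slight changes", not values from the patent:

```python
import numpy as np

def evolve(phi, dphi_dt_fn, dt=0.1, max_iter=1000, tol=2.0, patience=5):
    """Iterate phi^{n+1} = phi^n + dt * dphi/dt until max_iter is reached
    or Delta_h = sum |H(phi^{n+1}) - H(phi^n)| stays below tol for
    `patience` successive iterations."""
    H = lambda z: (z >= 0).astype(float)   # sharp Heaviside for Delta_h
    calm = 0
    for _ in range(max_iter):
        phi_new = phi + dt * dphi_dt_fn(phi)
        delta_h = np.abs(H(phi_new) - H(phi)).sum()
        phi = phi_new
        calm = calm + 1 if delta_h < tol else 0
        if calm >= patience:
            break
    return phi
```

Here `dphi_dt_fn` would evaluate the right-hand side of the curve-evolution equation [00032] for the current level set.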

[0088] In this embodiment, FIG. 2 illustrates a standard segmented image, labeled by an experienced doctor. The segmentation result of U-Net in FIG. 3 presents problems of segmentation errors and under-segmentation. However, after using the active contour model, as shown in FIG. 4, regions with segmentation errors are removed from the result image and the contour is enabled to expand outward in the fuzzy region to cover part of the under-segmented region.

[0089] The purpose of the method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning is to enable the segmentation model to segment fuzzy boundary regions while fine-tuning the segmentation contour so that the segmentation contour is as close to the target boundary as possible. The present invention adopts a combination of a deep convolutional network model and an active contour model to enable the model to achieve accurate segmentation results. The experimental data of the present invention are thyroid ultrasonic images, and the data set contains 309 images, of which 150 are used as the training set and the remaining 159 as the test set. The U-Net model is trained using the 150 training images, the trained model segments the 159 test images, and the U-Net segmentation results are then further fine-tuned using the active contour model. The quantitative indexes of the segmentation results are as follows:

[00044]
$$
\mathrm{Accuracy} = \frac{TP + TN}{A_P + A_N}, \qquad
\mathrm{PPV} = \frac{TP}{TP + FP}, \qquad
\mathrm{IoU} = \frac{TP}{FN + TP + FP}
$$

where TP, TN, FP, FN, A_P, and A_N denote True Positive, True Negative, False Positive, False Negative, All Positive, and All Negative, respectively. The average quantitative indexes obtained after segmentation of the 159 images are shown in Table 1.

TABLE-US-00001 TABLE 1

  Quantitative index       Accuracy    PPV       IoU
  U-Net                    0.9922      0.9090    0.8872
  The present invention    0.9933      0.9278    0.9026
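The three indexes above can be computed directly from per-pixel confusion counts; a minimal sketch (noting that A_P = TP + FN and A_N = TN + FP, so A_P + A_N is the total pixel count):

```python
def seg_metrics(tp, tn, fp, fn):
    """Accuracy, PPV and IoU as defined above, from per-pixel counts."""
    total = tp + tn + fp + fn          # A_P + A_N
    accuracy = (tp + tn) / total
    ppv = tp / (tp + fp)
    iou = tp / (fn + tp + fp)
    return accuracy, ppv, iou
```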

[0090] As can be seen from the above table, the present invention combines the U-Net model and the active contour model and achieves a higher accuracy of pixel classification in fine-grained segmentation than the approach in which only the U-Net model is used: Accuracy reaches 0.9933; 92.78% of the region segmented as the thyroid is the correct thyroid region, an improvement of 1.88% in PPV; and the intersection over union of the region segmented as the thyroid versus the true thyroid region is 0.9026, an improvement of 1.54% compared to the approach in which only the U-Net model is used. The improvements of the present invention in the quantitative indexes Accuracy, PPV, and IoU indicate that the present invention can further improve the accuracy of target segmentation in fuzzy images and can yield fine and accurate segmentation results for fuzzy boundaries. In the present invention, the use of an active contour model on the basis of U-Net leads to better fuzzy boundary image segmentation results. The method for automatic segmentation of a fuzzy boundary image based on active contour and deep learning has the capability of segmenting a fuzzy boundary in a fuzzy boundary image while fine-tuning the segmentation contour so that the contour moves close to the target boundary.