METHOD FOR IDENTIFYING THE ANISOTROPY OF THE TEXTURE OF A DIGITAL IMAGE

20170294025 · 2017-10-12


    Abstract

    A method for determining an extent of anisotropy of the texture of an image in a manner that avoids inaccuracies arising from interpolation.

    Claims

    1-8. (canceled)

    9. A method comprising identifying anisotropy of texture of a digital image, wherein identifying said anisotropy comprises acquiring a digital image formed from pixels, wherein each pixel is associated with a luminous intensity and with a position in a space ℝ.sup.d, wherein d is a natural integer greater than or equal to two; automatically transforming said acquired image to obtain a transformed image I.sub.j,k, wherein automatically transforming includes applying a modification T.sub.j,k to said acquired image, wherein said modification causes each pixel of said acquired image to rotate by an angle α.sub.j from one position to another about a point or an axis, wherein said modification enlarges or reduces said acquired image by a factor γ.sub.k, wherein u.sub.jk is a vector of ℝ.sup.2\{(0,0)} that completely characterizes said modification T.sub.j,k, wherein said indices j and k respectively and uniquely identify said angle α.sub.j and said factor γ.sub.k, wherein α.sub.j=arg(u.sub.jk) and γ.sub.k=|u.sub.jk|.sup.2, wherein, for d=2, said modification T.sub.j,k takes the form

    T_{j,k} = \gamma_k \begin{pmatrix} \cos(\alpha_j) & -\sin(\alpha_j) \\ \sin(\alpha_j) & \cos(\alpha_j) \end{pmatrix}

    and wherein, for d=3, said modification takes one of the following matrix forms or a composition of said matrix forms:

    T_{j,k} = \begin{pmatrix} \cos(\alpha_j) & -\sin(\alpha_j) & 0 \\ \sin(\alpha_j) & \cos(\alpha_j) & 0 \\ 0 & 0 & \gamma_k \end{pmatrix}, or \begin{pmatrix} \cos(\alpha_j) & 0 & -\sin(\alpha_j) \\ 0 & \gamma_k & 0 \\ \sin(\alpha_j) & 0 & \cos(\alpha_j) \end{pmatrix}, or \begin{pmatrix} \gamma_k & 0 & 0 \\ 0 & \cos(\alpha_j) & -\sin(\alpha_j) \\ 0 & \sin(\alpha_j) & \cos(\alpha_j) \end{pmatrix};

    wherein said transformed image is obtained by multiplying the position coordinates of each pixel by said modification T.sub.j,k; and, for each image to which a modification T.sub.j,k has been applied, calculating a K-increment V.sub.j,k[m] for each pixel at a position m of said transformed image, said K-increment being calculated by application of a convolution kernel v by means of the following formula:

    V_{j,k}[m] = \sum_{p \in [0,L]^d} v[p] \, Z[m - T_{j,k} \cdot p]

    wherein T.sub.j,k·p corresponds to application of said modification T.sub.j,k to a pixel that was initially at the position p in the image I; wherein said convolution kernel v performs linear filtering having a characteristic polynomial Q.sub.v(z) and a finite support [0,L].sup.d, wherein v[p] is the value of the convolution kernel v for the position p, wherein said characteristic polynomial Q.sub.v(z) is defined by:

    \forall z \in ℝ^d, \quad Q_v(z) = \sum_{p \in [0,L]^d} v[p] \, z^p

    wherein said characteristic polynomial satisfies the following condition:

    \forall a \in [0,K]^d \text{ such that } |a| \le K, \quad \frac{\partial^{|a|} Q_v}{\partial z_1^{a_1} \cdots \partial z_d^{a_d}}(1, \ldots, 1) = 0

    wherein L is a vector acquired from [0,N].sup.d that sets the parameters of the kernel v; wherein N is a vector belonging to ℕ.sup.d that codes the size of the image and the components of which are strictly positive natural integers; wherein K is an acquired non-zero natural integer; wherein z is a vector with components z.sub.1, z.sub.2, . . . , z.sub.d; wherein z.sup.p designates the monomial z.sub.1.sup.p1·z.sub.2.sup.p2· . . . ·z.sub.d.sup.pd; wherein ∂.sup.|a|Q.sub.v/∂z.sub.1.sup.a1 . . . ∂z.sub.d.sup.ad is the partial derivative of the polynomial Q.sub.v(z) with respect to the components of the vector z, wherein ∂z.sub.i.sup.ai indicates differentiation of Q.sub.v(z) of order a.sub.i with respect to the variable z.sub.i, wherein z.sub.i designates an i.sup.th component of said vector z and a.sub.i designates an i.sup.th component of said vector a, wherein i is an integer index greater than or equal to 1 and less than or equal to d; wherein calculation of said K-increments is based on only the pixels of the image that occupy a position m belonging to a set E, wherein said set E includes only positions m that exist already in said image I, that, regardless of said modification T.sub.j,k, following application of said modification T.sub.j,k, occupy a position that also exists already in said image I, and for which said position "m−T.sub.j,k·p" occupies a position that also exists already in said image I; wherein said transformed image I.sub.j,k is obtained following the calculation of said K-increment including only pixels whose positions are in the set E; wherein automatically transforming said acquired image to obtain a transformed image I.sub.j,k is executed with at least two different angles α.sub.j and, for each angle α.sub.j, with at least two different factors γ.sub.k, so as to obtain at least four different transformed images I.sub.j,k; calculating, for each different transformed image I.sub.j,k, a p-variation W.sub.j,k of said transformed image from said calculated K-increments; estimating the terms β.sub.j of the statistical regression log(|W.sub.j,k|)=log(|u.sub.jk|.sup.2)·H+β.sub.j+ε.sub.j,k, where H is the Hurst exponent of the acquired image and ε.sub.j,k is an error term of the regression, the statistical properties of which are predetermined; testing equality of said terms β.sub.j relative to one another to within a predetermined error margin; and identifying texture of said image as isotropic if equality is found following said test and identifying said texture as anisotropic otherwise.

    10. The method of claim 9, wherein automatically transforming said acquired image to obtain a transformed image I.sub.j,k comprises using a kernel v that is equal to the convolution product of any convolution kernel with the kernel defined as follows:

    v[p] = (-1)^{|p|} \times C_{L_1}^{p_1} \times \cdots \times C_{L_d}^{p_d} = (-1)^{|p|} \times \prod_{i=1}^{d} \frac{L_i!}{p_i! \times (L_i - p_i)!}

    if the vector p belongs to [0,L].sup.d and v[p]=0 otherwise, where the terms C.sup.p.sub.L designate binomial coefficients, the constant K then being equal to K=|L|−1.

    11. The method of claim 9, wherein calculating, for each different transformed image I.sub.j,k, a p-variation W.sub.j,k of said transformed image from said calculated K-increments comprises calculating p-variations that are quadratic variations in accordance with the following formula:

    W_{j,k} = \frac{1}{n_E} \sum_{m \in E} (V_{j,k}[m])^q

    wherein q=2 and wherein n.sub.E is the number of positions that belong to the set E.

    12. The method of claim 9, wherein testing equality of said terms β.sub.j relative to one another comprises testing using a Fisher statistical test.

    13. The method as claimed in claim 12, wherein estimating the terms β.sub.j of the statistical regression includes automatically calculating the p-value of the statistical test for quantifying the degree of anisotropy of the image.

    14. The method of claim 13, further comprising automatically classifying digital images as a function of the anisotropy of their texture, said method comprising acquiring a plurality of images, each formed of a plurality of pixels, for each of the acquired images, automatically calculating their respective degrees of anisotropy using the method from claim 13, and classifying said acquired digital images by means of an automatic classifier as a function of the degree of anisotropy calculated for each of said images.

    15. A tangible and non-transitory computer-readable information storage medium having encoded thereon instructions for the execution of a method recited in claim 9 on an electronic computer.

    16. An apparatus comprising an electronic computer configured to execute the method recited in claim 9.

    Description

    BRIEF DESCRIPTION OF THE DRAWINGS

    [0013] The invention will be better understood on reading the following description given by way of non-limiting example only and with reference to the drawings, in which:

    [0014] FIGS. 1A to 1D are diagrams showing digital images having different properties of isotropy or anisotropy;

    [0015] FIG. 2 is a diagram showing a computing device for characterizing the anisotropy of a digital image from FIGS. 1A to 1D;

    [0016] FIG. 3 is a flowchart of a method for characterizing the anisotropy of the texture of a digital image;

    [0017] FIG. 4 is a flowchart of a method for automatic classification of images as a function of the anisotropy of their texture.

    [0018] In the above figures, the same references are used to designate the same elements.

    DETAILED DESCRIPTION

    [0019] In this description, the following mathematical notation conventions are adopted, unless otherwise indicated:

    [0020] the range [X,Y] designates the range of all whole numbers greater than or equal to X and less than or equal to Y, with X and Y being integers;

    [0021] a vector A in a space with d dimensions (such as ℝ.sup.d) has coordinates A.sub.1, A.sub.2, . . . , A.sub.d;

    [0022] [0,X].sup.d designates the product [0,X.sub.1]×[0,X.sub.2]× . . . ×[0,X.sub.d], where X is a vector of ℝ.sup.d with coordinates X.sub.1, X.sub.2, . . . , X.sub.d, such that the i.sup.th coordinate U.sub.i of a vector U of [0,X].sup.d belongs to the range [0,X.sub.i], where i is an index greater than or equal to 1 and less than or equal to d; and

    [0023] |X| is the sum of the components of the vector X, such that |X|=X.sub.1+X.sub.2+ . . . +X.sub.d.

    [0024] FIG. 1A shows a digital image 2 that has an anisotropic texture.

    [0025] In this description, the anisotropy is to be understood as meaning that the properties of the image's texture are not the same in all directions but instead differ depending on the direction in which they are observed.

    [0026] The texture of a digital image is generally defined as relating to the spatial distribution of intensity variations and/or tonal variations of the pixels forming the digital image. The texture is a manifestation of the Holderian regularity of the image. This concept of “texture” is defined in “Handbook of Texture Analysis,” M. Mirmehdi et al., eds., World Scientific, October 2008 in the chapter “Introduction to Texture Analysis” by E. R. Davies, as well as in the section “I. Introduction” of the paper by Robert M. Haralick et al entitled “Textural features for image classification,” IEEE Transactions on Systems, Man and Cybernetics; vol. SMC-3, n° 6, p. 610-621, November 1973.

    [0027] Anisotropy of an image can arise from two factors: “texture” and “tendency.” “Texture” typically corresponds to the intensity variations of the pixels at short range, i.e., at high frequency; “tendency” relates to intensity variations of the pixels at longer range, i.e., at low frequency.

    [0028] The texture, and in particular the texture's anisotropy, is of particular interest for characterizing images that represent biological tissue, such as the mammogram image 2. In such an image, the anisotropic character of the texture indicates a presence of, or a risk of developing, cancerous cells within that tissue.

    [0029] FIGS. 1B to 1D show other examples of images of the type that are common when inspecting a mammogram. FIG. 1B shows an image having isotropic texture. FIGS. 1C and 1D respectively show images the texture of which is isotropic and anisotropic and each of which includes an anisotropy caused by a second order polynomial tendency. In these images, that tendency is oriented in the horizontal direction.

    [0030] The image 2 is formed from a plurality of pixels. Each pixel is associated with a pixel intensity value and a position in the space ℝ.sup.d, where d is a natural integer greater than or equal to 2 that represents the dimension of the image 2. In the particular example described herein, d=2.

    [0031] The pixels of the image 2 are thus disposed in space in the manner of a matrix ("lattice") in the space ℝ.sup.d. The resolution of the image is preferably the same along all d axes of the image. Hereinafter, the set of the possible positions of the pixels of the image 2 is denoted [0,N].sup.d, where N is a vector that codes the size of the image and whose components are strictly positive natural integers belonging to ℕ.sup.d. This notation signifies that the coordinates n.sub.1, n.sub.2, . . . , n.sub.d of the position n of a pixel of the image respectively belong to the sets [0,N.sub.1], [0,N.sub.2], . . . , [0,N.sub.d], where N.sub.1, N.sub.2, . . . , N.sub.d are the coordinates of N. In the particular example, the image 2 is a square with dimensions (N.sub.1+1)×(N.sub.2+1), where N.sub.1+1=N.sub.2+1 and N.sub.1+1 is the length of a side of that square expressed as a number of pixels. In this example, the image 2 is an area of interest that has been extracted from an image of larger size. In general, the sides of the image 2 have a length greater than or equal to 50 pixels, 100 pixels, or 500 pixels.

    [0032] In this example, the luminous intensity of the pixels is gray-scale encoded on 8 bits. The values of pixel intensity are integers in the range [0,255].

    [0033] FIG. 2 shows a device 12 for identifying and characterizing the anisotropy of an image's texture. The device 12 indicates whether an image 2 is isotropic or not. In some practices, the device 12 also quantifies, or characterizes, the extent of the anisotropy.

    [0034] The device 12 includes: a programmable electronic computer 14, an information-storage medium 16, and a digital-image acquisition interface 18 that enables the acquisition of the image 2. The digital image 2 typically comes from an electronic imaging device, which in the illustrated embodiment is a radiography device. The computer 14 executes the instructions stored on the medium 16 for executing the method described below in connection with FIGS. 3 and 4. This method identifies and characterizes image anisotropy using certain operations.

    [0035] In the example described herein, the image 2 is modeled as the statistical realization of an intrinsic random Gaussian field. This means that the intensity value associated with each pixel of the image 2 corresponds to the realization of a Gaussian random variable Z. The intrinsic random Gaussian field is explained in more detail in: J. P. Chilès et al., "Geostatistics: Modeling Spatial Uncertainty," J. Wiley, 2.sup.nd edition, 2012.

    [0036] Z[p] denotes an intensity value associated with a pixel in the image 2 whose position is given by the vector p. An orthonormal frame of reference is defined in ℝ.sup.d as having, as its origin, the null vector (0).sup.d.

    [0037] An embodiment of the method for automatic characterization of the anisotropy of the texture is described next with reference to the flowchart in FIG. 3 and with the aid of FIGS. 1 and 2.

    [0038] In an acquisition step 20, the image 2 is automatically acquired via the interface 18 and stored, for example, on the medium 16. Hereinafter this image 2 is designated by the notation “I”.

    [0039] In this example, with dimension d=2, a square matrix Z with dimensions (N.sub.1+1)×(N.sub.2+1) models the normalized image 2. The coefficients of this matrix Z are the Z[p] corresponding to the intensity of the pixels at position p. The components of the vector p give the position of that coefficient in the matrix Z. For example, Z[p] is the coefficient of the p.sub.1.sup.th row and the p.sub.2.sup.th column of Z, where p.sub.1 and p.sub.2 are the coordinates of the position p in [0,N].sup.2.

    [0040] In a transformation step 22 that follows, geometrical transformations of the image 2 are applied to obtain a series of transformed images I.sub.j,k. These transformations include modifications T.sub.j,k of the image 2, each of which includes a rotation by an angle α.sub.j and a change of scale by a scale factor γ.sub.k. T.sub.j,k(I) denotes the image obtained after the application of the modification T.sub.j,k to the acquired image I.

    [0041] Each modification T.sub.j,k is uniquely characterized by a vector u.sub.jk of the space ℝ.sup.2\{(0,0)}, such that α.sub.j=arg(u.sub.jk) and γ.sub.k=|u.sub.jk|.sup.2. The space ℝ.sup.2\{(0,0)} is the space ℝ.sup.2 without the point with coordinates (0,0).

    [0042] The indices “j” and “k” are integers that respectively and uniquely identify the angle α.sub.j and the factor γ.sub.k. To simplify the notation, hereinafter “rotation j” and “change of scale k” respectively refer to a rotation by angle α.sub.j and to a change of scale by the factor γ.sub.k.

    [0043] The rotation j rotates each of the pixels of the image 2 by the angle α.sub.j from a starting position to an arrival position about the same point or the same predetermined axis. That point or that rotation axis typically passes through the geometrical center of the image. In the illustrated embodiment, rotation is relative to the geometrical center of the image, which is the barycenter of the positions of all of the pixels of the image, each weighted by a coefficient of the same value.

    [0044] The change of scale k enlarges or reduces the image by a homothetic factor γ.sub.k. In the examples given below, the homothetic center is the geometrical center of the image.

    [0045] These modifications T.sub.j,k are applied for at least two and preferably at least three or four different angles α.sub.j. The different values of the angles α.sub.j are advantageously distributed as uniformly as possible between 0° and 180° while complying with the constraint that the vector u.sub.jk must belong to the space ℝ.sup.2\{(0,0)}. To limit the number of calculations to be performed, the number of different values for the angle α.sub.j is generally chosen so as not to be too large. In typical embodiments, this number is made less than twenty. In others, it is less than ten. A good compromise is to choose four different values for the angle α.sub.j. For each angle α.sub.j, modifications T.sub.j,k are applied for at least two and preferably at least three, four, or five different changes of scale γ.sub.k.

    [0046] The values of the factor γ.sub.k are, for example, greater than or equal to 1 and less than or equal to 10.sup.2 or 8.sup.2 or 4.sup.2. The different values of the factor γ.sub.k are preferably distributed as uniformly as possible across the chosen range of values.

    [0047] Some practices of the method include choosing the rotation angles α.sub.j as a function of the horizontal and vertical directions of the image 2. For example, to perform two rotations j1 and j2, one chooses values α.sub.j1=90° and α.sub.j2=180°, where j1 and j2 are particular values of the index j. By convention, the angles are expressed relative to the horizontal axis of the image 2.

    [0048] In the two-dimensional examples described herein, the modifications T.sub.j,k are given by:

    [00001] T_{j,k} = \gamma_k \begin{pmatrix} \cos(\alpha_j) & -\sin(\alpha_j) \\ \sin(\alpha_j) & \cos(\alpha_j) \end{pmatrix}
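By way of illustration only (not part of the claimed method), the two-dimensional modification can be sketched in Python with NumPy; the helper name `modification_matrix` is hypothetical:

```python
import numpy as np

def modification_matrix(u):
    """Build the 2-D modification T_{j,k} from a vector u in R^2 \\ {(0,0)}.

    Per the d=2 form above: alpha = arg(u), gamma = |u|^2, and
    T = gamma * R(alpha), where R is a plane rotation matrix.
    """
    u = np.asarray(u, dtype=float)
    if not np.any(u):
        raise ValueError("u must differ from (0, 0)")
    alpha = np.arctan2(u[1], u[0])   # alpha_j = arg(u_jk)
    gamma = float(u @ u)             # gamma_k = |u_jk|^2
    c, s = np.cos(alpha), np.sin(alpha)
    return gamma * np.array([[c, -s],
                             [s,  c]])
```

For example, u = (0, 2) encodes a quarter-turn rotation with scale factor γ = 4.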

    [0049] The transformation step 22 includes calculating K-increments for each of the transformed images T.sub.j,k(I). This calculation includes filtering that is intended to limit the tendencies of polynomial form of order strictly less than K. In particular, for each image T.sub.j,k(I), a filter is applied to make it possible to calculate the K-increment V.sub.j,k of that image T.sub.j,k(I). It is the K-increment of that image T.sub.j,k(I) that constitutes the transformed image I.sub.j,k. The K-increment V.sub.j,k of that image is not calculated for all the points of the image T.sub.j,k(I), but only for some of those points, as explained later.

    [0050] The K-increment is defined in more detail for example in: J. P. Chilès et al. “Geostatistics: Modeling Spatial Uncertainty,” J. Wiley, 2.sup.nd edition, 2012, the contents of which are herein incorporated by reference.

    [0051] The filtering is performed with a convolution kernel denoted “v” to apply linear filtering. Hereinafter “filter v” designates this convolution kernel.

    [0052] The filter v is defined over the set [0,L].sup.d. A characteristic polynomial Q.sub.v(z) characterizes filter v as follows:

    [00002] \forall z \in ℝ^d, \quad Q_v(z) = \sum_{p \in [0,L]^d} v[p] \, z^p

    [0053] In this example, the filter v designates a matrix, and v[p] designates a particular scalar value of this filter for the vector p, where p is a vector of [0,L].sup.d. This value v[p] is zero if the position p does not belong to [0,L].sup.d. In an equivalent manner, the filter v is also said to have limited support on [0,L].sup.d. This filter v is distinct from the null function, which has a null value v[p] for any value of the vector p. Here the notation z.sup.p designates the monomial z.sub.1.sup.p1·z.sub.2.sup.p2· . . . ·z.sub.d.sup.pd.

    [0054] The parameters of the filter v are therefore set by the vector L, acquired from [0,N].sup.d. As a general rule, the vector L is chosen so as to be contained in the image I. Values of L are therefore preferably adopted that satisfy, for all i from 1 to d, the relation L.sub.i<<N.sub.i. For example, L.sub.i is 10 or even 100 times smaller than N.sub.i.

    [0055] Moreover, the filter v is such that its characteristic polynomial Q.sub.v(z) satisfies the following condition:

    [00003] \forall a \in [0,K]^d \text{ such that } |a| \le K, \quad \frac{\partial^{|a|} Q_v}{\partial z_1^{a_1} \cdots \partial z_d^{a_d}}(1, \ldots, 1) = 0

    where the constant K is a non-zero natural integer and ∂.sup.|a|Q.sub.v/∂z.sub.1.sup.a1 . . . ∂z.sub.d.sup.ad is the partial derivative of the polynomial Q.sub.v(z) with respect to the components of the vector z, the symbol ∂z.sub.i.sup.ai indicating a differentiation of the polynomial Q.sub.v(z) of order a.sub.i with respect to the variable z.sub.i, where z.sub.i designates the i.sup.th component of the vector z and a.sub.i designates the i.sup.th component of the vector a, i being an integer index greater than or equal to 1 and less than or equal to d.

    [0056] The filtering of the image T.sub.j,k(I) by the filter v makes it possible to eliminate the effect of the tendency on the subsequent calculations of the method when that tendency takes the form of a polynomial of order M, provided that the value of the constant K is chosen so that K≥M+1 if d is less than or equal to 4, and K≥M/2+d/4 if d>4.

    [0057] The K-increments of the image T.sub.j,k(I), denoted V.sub.j,k, are then calculated as follows using the filter v:

    [00004] V_{j,k}[m] = \sum_{p \in [0,L]^d} v[p] \, Z[m - T_{j,k} \cdot p]

    [0058] where:

    [0059] V.sub.j,k[m] is a K-increment calculated on the image T.sub.j,k(I) for the pixel at position m, with m being a vector belonging to a set E defined hereinafter;

    [0060] the product T.sub.j,k·p corresponds to the application of the modification T.sub.j,k to the pixel p of the image I and expresses the coordinates in ℝ.sup.d, after application of the modification T.sub.j,k, of the pixel initially at position p in the image I; and

    [0061] v[p] is the value of the filter v for the value of p.

    [0062] For each image T.sub.j,k(I), the K-increment is calculated only on those pixels of the image T.sub.j,k(I) whose positions belong to a set E. The set E contains only positions that belong to the image I, and, regardless of the modification T.sub.j,k applied, occupy a position that already exists in the image I after application of this modification T.sub.j,k. The number of positions that belong to the set E is denoted n.sub.E.

    [0063] Moreover, for any position m belonging to the set E, the pixels at position “m−T.sub.j,k.Math.p” occupy a position contained in the image I.

    [0064] Accordingly, the quadratic variations are calculated only on the points of the transformed image for which no interpolation is necessary. It is therefore possible to have recourse to rotations j by any angle, in contrast to the situation with projections. In fact, if a projection is applied in a diagonal direction of the image, for example, projected points will have a position that does not belong to the set [0,N].sup.d. In other words, they are no longer part of the matrix. It is therefore necessary to have recourse to an interpolation to determine the intensity value associated with points that belong to that matrix. This introduces an approximation and therefore the possibility of an error. The reliability of the method is increased by the modification T.sub.j,k and thereafter the selection of the points of the set E.
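The interpolation-free selection can be sketched as follows (hypothetical helper names; the sketch assumes, as with quarter-turn rotations and integer scale factors, that T·p lands on integer pixel positions, so rounding introduces no interpolation; the set E of the method is the intersection of the retained positions over all the modifications, whereas this sketch computes the retained positions for a single T):

```python
import numpy as np

def k_increments(Z, T, v_support, v_values):
    """Compute V[m] = sum_p v[p] * Z[m - T @ p] only where it needs no interpolation.

    A position m is retained only if every shifted position m - T @ p falls on
    an existing pixel of Z (integer coordinates inside the image).
    """
    N1, N2 = Z.shape
    shifts = [np.rint(T @ p).astype(int) for p in v_support]
    out = {}
    for m1 in range(N1):
        for m2 in range(N2):
            m = np.array([m1, m2])
            coords = [m - s for s in shifts]
            if all(0 <= c[0] < N1 and 0 <= c[1] < N2 for c in coords):
                out[(m1, m2)] = sum(w * Z[c[0], c[1]]
                                    for w, c in zip(v_values, coords))
    return out  # keys: retained positions for this T; values: increments
```

On a linear ramp, a second-difference kernel applied this way returns increments that are all zero, as a tendency-eliminating filter should.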

    [0065] In the illustrated example, the filtering is effected within the same formula as the application of the modification T.sub.j,k.

    [0066] With this choice of the constant K, the filtering produces the increments V.sub.j,k[m] of order K. This filtering makes it possible to avoid taking into account an anisotropy of the image caused by the tendency, and to consider only the anisotropy of the texture of the underlying image, which in the present example, is an image of the mammary tissue. This results in improved reliability of the characterization method.

    [0067] The constant K must therefore be chosen as described above as a function of the nature of the tendency present in the image 2. Typically, in the case of a mammogram, the polynomial degree M of the tendency is less than or equal to 2. Step 22 thus includes acquiring a value of the vector L and a value of the constant K.

    [0068] In this example, the filter v is chosen as follows:

    [00005] v[p] = (-1)^{|p|} \times C_{L_1}^{p_1} \times \cdots \times C_{L_d}^{p_d} = (-1)^{|p|} \times \prod_{i=1}^{d} \frac{L_i!}{p_i! \times (L_i - p_i)!}

    if the vector p belongs to [0,L].sup.d and v[p]=0 otherwise, where the terms C.sup.p.sub.L designate binomial coefficients.

    [0069] With this particular filter, the condition previously expressed on the characteristic polynomial Q.sub.v(z) is satisfied if K=|L|−1. Also, the value of K is deduced from the value of the parameter L that has been acquired.

    [0070] Then, in this particular instance, the filtering of the image T.sub.j,k(I) by the filter v makes it possible to eliminate the effect of the “tendency” if the latter has a polynomial form of order M provided that the parameter L is chosen such that |L|=M+2 if d is less than or equal to 4, and |L|=M/2+d/4+1 if d>4.

    [0071] In this two-dimensional example, with d=2, the vector L has two components, L.sub.1 and L.sub.2. To eliminate a tendency of order M=2, L.sub.1 and L.sub.2 must be chosen such that |L| is equal to four. The preferable choice is L.sub.1=4 and L.sub.2=0. In fact, by choosing values for the coordinates of the vector “L” sufficiently far apart, the filter can be made to have greater directional sensitivity. This greater sensitivity allows it to react in a more marked manner and therefore to filter more effectively those variations that are oriented in a particular direction. In contrast, a filter for which L.sub.1=2 and L.sub.2=2 would be less sensitive to a directional signal and would therefore be a less effective filter.
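To make this concrete, a small sketch (hypothetical helper name) builds the d=1 binomial factor of the kernel and checks that, with |L|=4, it annihilates polynomial tendencies of degree up to K=|L|−1=3 but not of degree 4:

```python
import numpy as np
from math import comb

def binomial_kernel_1d(L):
    """v[p] = (-1)^p * C(L, p) for p in [0, L]; zero elsewhere.

    This is the one-dimensional factor of the binomial kernel; a 2-D kernel
    with L = (L1, L2) is the outer product of two such factors.
    """
    return np.array([(-1) ** p * comb(L, p) for p in range(L + 1)], dtype=float)

# With L1 = 4 and L2 = 0 the 2-D kernel reduces to this single row.
v = binomial_kernel_1d(4)  # [1, -4, 6, -4, 1]

# The kernel annihilates polynomial trends of degree <= K = 3 ...
for degree in range(4):
    assert abs(np.dot(v, np.arange(5.0) ** degree)) < 1e-9
# ... but reacts to degree 4.
assert abs(np.dot(v, np.arange(5.0) ** 4)) > 1.0
```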

    [0072] During step 22, for each different value of j and k, the computer 14 successively applies the modification T.sub.j,k to the image 2 to obtain the image T.sub.j,k(I) and then applies the filter v to the image T.sub.j,k(I) to obtain the transformed image I.sub.j,k.

    [0073] Then, during a step 24, for each image I.sub.j,k, the p-variation W.sub.j,k associated with that image I.sub.j,k is calculated. The p-variation concept is well known to the person skilled in the art in the field of probability and statistics. It is given by the following formula:

    [00006] W_{j,k} = \frac{1}{n_E} \sum_{m \in E} (V_{j,k}[m])^q

    [0076] In the above equation, the symbol “q” has been used instead of the symbol “p” that is conventionally used in this equation. This is to prevent any confusion with the symbol “p” used herein to designate the position of a pixel. In this example, a particular form of the p-variations is used, namely the “quadratic variations” or “2-variations” for which q=2. The quadratic variation W.sub.j,k of the image I.sub.j,k is therefore calculated as follows from the K-increments calculated after filtering during step 22:

    [00007] W_{j,k} = \frac{1}{n_E} \sum_{m \in E} (V_{j,k}[m])^2

    These quadratic variations W.sub.j,k contain information that is important for the identification of the anisotropy. This information is extracted beginning with step 26.
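This average can be sketched in one line (hypothetical helper; V may be the dict of increments restricted to the set E):

```python
import numpy as np

def quadratic_variation(V):
    """Quadratic variation W_{j,k}: the mean of squared K-increments over E."""
    vals = (np.fromiter(V.values(), dtype=float)
            if isinstance(V, dict) else np.asarray(V, dtype=float).ravel())
    return float(np.mean(vals ** 2))  # n_E is implicit in the mean
```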

    [0077] The next step is an analysis step 26 during which a covariance analysis, including a statistical regression, is effected on all the variations W.sub.j,k calculated for each of the images I.sub.j,k. This is carried out in order to estimate the value of the Hurst exponent H of the image I and the terms β.sub.j. The statistical regression is defined by the following equation:


    log(|W.sub.j,k|) = log(|u.sub.jk|.sup.2)·H + β.sub.j + ε.sub.j,k,

    where H is the Hurst exponent of the image I, which is a physical magnitude independent of the rotations of the image; β.sub.j is a quantity that does not depend on the change of scale k, this being analogous to an intercept parameter of the regression, except that it depends on the rotations j; and ε.sub.j,k is an error term of the regression, the statistical properties of which are predetermined and fixed by the user. In some practices, the user chooses the error terms ε.sub.j,k to be Gaussian random variables that are correlated with one another.

    [0078] A number n.sub.j of terms β.sub.j is therefore obtained, n.sub.j being the number of different rotations applied to the image I.

    [0079] For example, if the two rotations j1 and j2 described above suffice, the regression is effected on the basis of all the quadratic variations calculated for j1 and j2. This results in two terms β.sub.j1 and β.sub.j2.

    [0080] The analysis includes estimating the covariance of the quadratic variations, for example by a standard method applicable to stationary fields. The parameters are estimated using a generalized least-squares criterion. A method of this kind is described for example in: P. Hall et al., “Properties of nonparametric estimators of autocovariance for stationary random fields,” Probability Theory and Related Fields, vol. 99:3, p. 399-424, 1994.
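The parameter estimation can be sketched with ordinary least squares (the generalized least-squares refinement with estimated covariance is omitted for brevity; the function name is hypothetical):

```python
import numpy as np

def estimate_regression(W, u):
    """OLS fit of log|W_{j,k}| = log|u_{jk}|^2 * H + beta_j.

    W[j][k] are the quadratic variations and u[j][k] the characterizing
    vectors. Returns (H, betas), with one intercept beta_j per rotation j.
    """
    n_j = len(W)
    rows, y = [], []
    for j in range(n_j):
        for Wjk, ujk in zip(W[j], u[j]):
            row = np.zeros(1 + n_j)
            row[0] = np.log(np.dot(ujk, ujk))  # log|u_jk|^2
            row[1 + j] = 1.0                   # per-rotation intercept beta_j
            rows.append(row)
            y.append(np.log(abs(Wjk)))
    coef, *_ = np.linalg.lstsq(np.array(rows), np.array(y), rcond=None)
    return coef[0], coef[1:]
```

On noise-free synthetic data generated from the model, the fit recovers H and the β.sub.j exactly.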

    [0081] At this stage, the anisotropy is detected by testing the equality of the terms β.sub.j with one another to within a predetermined error margin.

    [0082] In fact, each of the terms β.sub.j encapsulates information relating to the texture in the direction inherent to the rotation j. If all the terms β.sub.j are equal to one another, this means that the texture does not depend on the orientation. This, of course, is what “isotropic” means. Otherwise, if the terms β.sub.j are not equal to one another, then the texture is anisotropic. During a testing step 28, the equality of the terms β.sub.j with one another to within a predefined value is tested. The image 2 can then be identified as being isotropic if an equality of this kind is found. Otherwise, it is identified as being anisotropic.

    [0083] A suitable test for use in the testing step 28 is a Fisher statistical test to test the validity of the following null hypothesis: "∀j∈[1,n.sub.j], β.sub.j=β," where β is a term that does not depend either on the rotations j or on the changes of scale k. This null hypothesis is tested against the alternative hypothesis "∃j.sub.1, j.sub.2 such that β.sub.j1≠β.sub.j2." In a typical embodiment, the predetermined error margin is fixed by a confidence range of 95% or 99%.
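One concrete, non-authoritative way to run such a test is a nested-model F-test with SciPy, under the simplifying assumption of i.i.d. Gaussian errors rather than the correlated errors the method allows (function name hypothetical):

```python
import numpy as np
from scipy import stats

def anisotropy_f_test(W, u):
    """F-test of H0: beta_1 = ... = beta_{n_j} in the log-log regression.

    Compares the full model (one intercept per rotation j) against the
    reduced model (a single common intercept) and returns (F, p_value).
    Small p-values indicate anisotropy.
    """
    n_j = len(W)
    y, x, groups = [], [], []
    for j in range(n_j):
        for Wjk, ujk in zip(W[j], u[j]):
            y.append(np.log(abs(Wjk)))
            x.append(np.log(np.dot(ujk, ujk)))
            groups.append(j)
    y, x, groups = np.array(y), np.array(x), np.array(groups)

    def rss(design):
        coef, *_ = np.linalg.lstsq(design, y, rcond=None)
        r = y - design @ coef
        return float(r @ r)

    full = np.column_stack([x] + [(groups == j).astype(float) for j in range(n_j)])
    reduced = np.column_stack([x, np.ones_like(x)])
    rss_f, rss_r = rss(full), rss(reduced)
    df1, df2 = n_j - 1, len(y) - (n_j + 1)
    F = ((rss_r - rss_f) / df1) / (rss_f / df2)
    return F, float(stats.f.sf(F, df1, df2))
```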

    [0084] The p-value of the Fisher test amounts to a digital signature indicating the degree of anisotropy of the image. This makes it possible to assess whether the image is of low or high anisotropy. This, in turn, makes it possible to classify the image 2 relative to a reference anisotropy value or relative to other images. Typical practices thus include automatically collecting the test's p-value.

    [0085] The flowchart of FIG. 4 describes an application of the method for automatically classifying images 2 relative to one another as a function of their anisotropy.

    [0086] An initial acquisition step 30 includes automatically acquiring a plurality of images 2. A subsequent calculation step 31 includes automatically calculating, for each acquired image, a digital signature that is characteristic of the extent of that image's anisotropy. This is carried out using the methods discussed above in connection with steps 20 to 28. In the illustrated embodiment, this signature is the p-value returned by the Fisher test at the end of the testing step 28. The p-value of the test is typically between zero and one, with the value “one” representing the lowest order of anisotropy and the value “zero” representing the highest order of anisotropy.

    [0087] A classification step 32 that follows includes automatically classifying the acquired images with respect to one another as a function of their anisotropy using the signatures calculated during the calculating step 31. The images are classified by increasing order of anisotropy, for example. A suitable classifier is a classifying algorithm based on neural networks or a support vector machine. However, other classifiers are possible.
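The ordering described above can be sketched as follows (an illustrative helper; a lower Fisher p-value indicates a more anisotropic texture, so increasing anisotropy corresponds to decreasing p-value):

```python
def rank_by_anisotropy(signatures):
    """Order image identifiers by increasing anisotropy.
    `signatures` maps an image identifier to its Fisher-test p-value;
    a higher p-value means a more isotropic texture, so the images
    are sorted by decreasing p-value. Hypothetical helper for
    illustrating the ordering only."""
    return sorted(signatures, key=signatures.get, reverse=True)

# Illustrative signatures: img_b is the most isotropic, img_a the least.
ranking = rank_by_anisotropy({"img_a": 0.03, "img_b": 0.85, "img_c": 0.40})
```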

    [0088] In some embodiments, the pixels of the image 2 have other intensity values. The intensity value of each pixel may be a real value. Or it may be greater than 256. In some cases, the image 2 is encoded in color. In such cases, the color image is separated into a plurality of monochrome images, each of which corresponds to a colorimetric channel that composes the color image. The method is then applied separately for each of these monochrome images.
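For color images, the separation into monochrome channel images can be sketched as follows (a numpy sketch; the channel layout, an H x W x C array, is an illustrative assumption):

```python
import numpy as np

def split_channels(color_image):
    """Split an H x W x C color image into C monochrome images,
    one per colorimetric channel; the anisotropy method is then
    applied separately to each channel image."""
    return [color_image[..., c] for c in range(color_image.shape[-1])]

# Illustrative 4 x 4 RGB image that is pure green.
rgb = np.zeros((4, 4, 3))
rgb[..., 1] = 1.0
channels = split_channels(rgb)
```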

    [0089] Although the embodiments described have shown a square image 2, the image need not be square. For example, in the two-dimension case, where d=2, the image can be rectangular or even trapezoidal. When the image does not have a regular shape, the “horizontal” and “vertical” directions are replaced by reference directions that are better suited to the image's geometry. For example, in the case of a triangular image, the base and the height of the triangle are useful reference directions. For higher-dimensional cases, the image 2 can be a hypercube of dimension d.

    [0090] Although only mammograms have been shown, the image 2 can be something other than a mammogram. For example, it may be a bone-tissue image. The anisotropy of the texture of the image would then provide information on the presence of bone pathologies such as osteoporosis.

    [0091] Other, wider fields of application may be envisaged, such as other types of biological tissues, aerial or satellite images, geological images, or images of materials. As a general rule, the method applies to any type of irregular and textured image such as an image obtained from any electronic imaging device.

    [0092] Other modifications may be used. For example, with dimension d=3 the modifications T.sub.j,k perform a rotation j about a given rotation axis and a change of scale k in a given direction of the image. The following modifications may be used, for example:

    [00008] $$T_{j,k} = \begin{pmatrix} \cos(\alpha_j) & -\sin(\alpha_j) & 0 \\ \sin(\alpha_j) & \cos(\alpha_j) & 0 \\ 0 & 0 & \gamma_k \end{pmatrix}, \quad \text{or} \quad \begin{pmatrix} \cos(\alpha_j) & 0 & -\sin(\alpha_j) \\ 0 & \gamma_k & 0 \\ \sin(\alpha_j) & 0 & \cos(\alpha_j) \end{pmatrix}, \quad \text{or} \quad \begin{pmatrix} \gamma_k & 0 & 0 \\ 0 & \cos(\alpha_j) & -\sin(\alpha_j) \\ 0 & \sin(\alpha_j) & \cos(\alpha_j) \end{pmatrix}.$$

    [0093] The above modifications T.sub.j,k perform a rotation about an axis and a change of scale in a direction that is not parallel to that axis.
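For illustration, the three matrix forms above can be generated by a single helper that rotates about one coordinate axis and scales along that same axis (a Python/numpy sketch; the helper's name and interface are hypothetical, not the patent's notation):

```python
import numpy as np

def modification_3d(alpha, gamma, scale_axis=2):
    """Build a 3-D modification T_{j,k}: a rotation by `alpha` about
    the coordinate axis `scale_axis`, combined with a scaling by
    `gamma` along that axis. scale_axis = 2, 1, 0 reproduces the
    first, second, and third matrix forms given in the text."""
    c, s = np.cos(alpha), np.sin(alpha)
    T = np.eye(3)
    axes = [a for a in range(3) if a != scale_axis]
    # 2-D rotation block acting on the two remaining axes.
    T[np.ix_(axes, axes)] = [[c, -s], [s, c]]
    T[scale_axis, scale_axis] = gamma
    return T

# Example: rotate by 90 degrees about the z-axis, scale z by 2.
T_example = modification_3d(np.pi / 2, 2.0)
```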

    [0094] The values of the angle α.sub.j may be different. Values of the angle α.sub.j are preferably chosen that do not necessitate interpolations.

    [0095] The values of the factor γ.sub.k may be different.

    [0096] Alternatively, the rotation and the change of scale are not applied at the same time.

    [0097] Other filters v can be used to calculate the K-increments. For example, it is quite possible to calculate the K-increments using any filter whose convolution kernel is the convolution product of an arbitrary convolution kernel v1 with a convolution kernel v2 equal to the kernel v described above.

    [0098] In the particular case where the kernel v1 is an identity matrix, the filter v described above is used again. Conversely, choosing a kernel v1 different from the identity matrix makes it possible to construct a large number of filters that are different from the filters v described above but that are nevertheless eminently suitable for calculating the K-increments.
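A small sketch of this construction, using scipy's 2-D convolution. The kernels below are illustrative choices, not the patent's exact kernel v: v2 is a second-order difference kernel of the kind used for K-increments, and the two choices of v1 show the identity case and a non-trivial case:

```python
import numpy as np
from scipy.signal import convolve2d

# v2: an illustrative second-order difference kernel.
v2 = np.array([[1.0, -2.0, 1.0]])
# v1 equal to the identity kernel leaves v2 unchanged...
identity = np.array([[1.0]])
# ...while any other v1 yields a new, equally valid kernel v = v1 * v2.
smooth = np.array([[0.25, 0.5, 0.25]])

v_same = convolve2d(identity, v2)   # reproduces the original filter
v_new = convolve2d(smooth, v2)      # a different K-increment filter
```

Note that the composed kernel inherits the moment-annihilation property of v2 (its coefficients still sum to zero), which is what makes it suitable for calculating K-increments.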

    [0099] The filtering may be done differently during step 22. In particular, the transformation and the filtering are not necessarily applied simultaneously, but rather in separate formulas that are applied in separate steps.

    [0100] In alternative practices, all the transformations T.sub.j,k are first applied to the image I, after which the filters are applied to each of the images T.sub.j,k(I).

    [0101] The value of K may be different. In particular, with the filter v chosen in the example, if the image I has no trend, in which case M=0, then |L|=2 will preferably be adopted or, if d>4, |L|=1+d/4.

    [0102] Alternatively, a plurality of filters v.sub.i are applied to each image T.sub.j,k(I) during step 22. The number of different filters v.sub.i applied to a given image T.sub.j,k(I) is denoted n.sub.i. In this case, the transformed image obtained by applying the filter v to the image T.sub.j,k(I) is denoted I.sub.i,j,k and V.sub.i,j,k[m] denotes the K-increment of that image at the position m in that image, where “i” is an index that uniquely identifies the filter v.sub.i applied. In such cases, the index “i” is different from the index “i” used before as a dummy variable, notably with reference to the partial derivative of the polynomial Q.sub.v(z). In fact, it is then possible to calculate a plurality of quadratic variations for each image T.sub.j,k(I), one for each filter v.sub.i applied to that image T.sub.j,k(I). W.sub.i,j,k therefore denotes the quadratic variation calculated for the image I.sub.i,j,k.

    [0103] To achieve this, in some practices, step 22 includes an operation of selecting the filters v.sub.i, for example from a predefined library of filters.

    [0104] The quadratic variations W.sub.i,j,k are then calculated during step 24 using the following relation:

    [00009] $$W_{i,j,k} = \frac{1}{n_E} \sum_{m \in E} \left( V_{i,j,k}[m] \right)^2$$
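The quadratic variation is thus the empirical mean of the squared K-increments over the set E of retained positions; a minimal sketch (Python/numpy, with illustrative names):

```python
import numpy as np

def quadratic_variation(V, E=None):
    """Quadratic variation W of a field of K-increments V[m]:
    the mean of V[m]**2 over the set E of retained positions
    (by default, all positions of the field)."""
    V = np.asarray(V, dtype=float)
    if E is not None:
        V = V[E]                  # E: boolean mask or index array
    return np.mean(V ** 2)

# Toy 2 x 2 field of K-increments: mean of squares is (1+1+4+0)/4.
W = quadratic_variation([[1.0, -1.0], [2.0, 0.0]])
```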

    [0105] During step 26, the regression is then effected as follows, taking account of the n.sub.i filters applied: log(|W.sub.i,j,k|)=log(|u.sub.jk|.sup.2)·H+β.sub.i,j+ε.sub.i,j,k, where: β.sub.i,j is the term β.sub.j associated with the filter v.sub.i; and ε.sub.i,j,k is the error term ε.sub.j,k associated with the filter v.sub.i.

    [0106] This results in obtaining a number n.sub.b of terms β.sub.i,j, where n.sub.b=n.sub.j·n.sub.i and n.sub.j is the number of different rotations applied to the image I. During step 28, the null hypothesis of the Fisher test is formulated as follows: “∀j∈[1,n.sub.j], β.sub.i,j=β.sub.i,” where β.sub.i is a term that does not depend on the rotations j or on the changes of scale k but does depend on the filters v.sub.i used.

    [0107] If a plurality of filters v.sub.i are used, the number of filters v.sub.i applied can vary from one image T.sub.j,k(I) to another, provided that each filter i is associated with at least two rotations j and, for each of those rotations j, at least two changes of scale k.

    [0108] It is also possible to carry out step 28 differently, for example by implementing the Fisher test differently. This might be implemented by choosing a different estimator or with other confidence ranges.

    [0109] Alternatively, it is possible to use other ways to test the equality of the terms during step 28. For example, one can test whether all the terms β.sub.j are equal to one another to within 2% or to within 5%. If so, then the texture is termed isotropic. Otherwise the texture is termed anisotropic. This test is then not a true statistical test. Instead, it is a test of strict equality.
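Such a strict equality test can be sketched as follows. The sketch assumes the tolerance is taken relative to the mean of the terms; this choice of reference value is an assumption, since the text leaves it open:

```python
import numpy as np

def is_isotropic(betas, rel_tol=0.02):
    """Strict equality test: the texture is termed isotropic when all
    terms beta_j agree to within `rel_tol` (e.g. 2% or 5%); otherwise
    it is termed anisotropic. Assumption: deviations are measured
    relative to the mean of the terms."""
    betas = np.asarray(betas, dtype=float)
    ref = betas.mean()
    return bool(np.all(np.abs(betas - ref) <= rel_tol * np.abs(ref)))
```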

    [0110] According to another example, the equality test is performed by using a Student statistical test to test inequality between the terms β.sub.j. For example, it is possible to compare all the terms β.sub.j pairwise to determine whether there exist j1, j2 such that β.sub.j1>β.sub.j2.

    [0111] It is also possible to perform the classification during step 32 differently. For example, it is possible to choose the order of classification of the images differently.

    [0112] Having described the invention, and a preferred embodiment thereof, what is claimed as new, and secured by Letters Patent is: