Method for visualizing at least a zone of an object in at least one interface
20230222748 · 2023-07-13
Inventors
CPC classification
G06T19/20 (PHYSICS)
G06T2207/20101 (PHYSICS)
G06T2219/2012 (PHYSICS)
G06T2219/028 (PHYSICS)
International classification
G06T19/20 (PHYSICS)
Abstract
The invention concerns a method implemented by computer means for visualizing at least a zone of an object in at least one interface, said method comprising the following steps: obtaining at least one image of said zone, said image comprising at least one channel, said image being a 2-dimensional or 3-dimensional image comprising pixels or voxels, a value being associated to each channel of each pixel or voxel of said image, a representation of said image being displayed in the interface, obtaining at least one annotation from a user, said annotation defining a group of selected pixels or voxels of said image, calculating a transfer function based on said selected pixels or voxels and applying said transfer function to the values of each channel of the image, updating said representation of the image in the interface, in which the colour and the transparency of the pixels or voxels of said representation are dependent on the transfer function.
Claims
1. Method implemented by computer means for visualizing at least a zone of an object in at least one interface, said method comprising the following steps: obtaining at least one image of said zone, said image comprising at least one channel, said image being a 2-dimensional or 3-dimensional image comprising pixels or voxels, a value being associated to each channel of each pixel or voxel of said image, a representation of said image being displayed in the interface, obtaining at least one annotation from a user, said annotation defining a group of selected pixels or voxels of said image, calculating a transfer function based on said selected pixels or voxels and applying said transfer function to the values of each channel of the image, updating said representation of the image in the interface, in which the colour and the transparency of the pixels or voxels of said representation are dependent on the transfer function, wherein said method comprises the following steps: obtaining at least one 2-dimensional or 2D image of said zone, said 2D image comprising pixels and at least one channel, a value being associated to each channel of each pixel of said 2-dimensional image, a representation of said 2D image being displayed in a first interface, obtaining at least one 3-dimensional or 3D image of said zone, said 3D image comprising voxels and at least one channel, a value being associated to each channel of each voxel of said 3D image, at least some of the voxels of the 3D image corresponding to some pixels of the 2D image, a representation of said 3D image being displayed in a second interface, obtaining at least one annotation from a user, said annotation defining a group of selected pixels of said 2D image or a group of selected voxels of said 3D image, propagating the selection of said group of pixels or voxels selected in the 2D or 3D image to the 3D or 2D image, respectively, by selecting the voxels or the pixels of said 3D or 2D image that correspond to the selected pixels or voxels of said 2D or 3D image, respectively, calculating a first transfer function based on said selected pixels of said 2D image and applying said first transfer function to the values of each channel of the 2D image, updating the representation of the 2D image in the first interface, in which the colour and the transparency of the pixels of said representation are dependent on the first transfer function, calculating a second transfer function based on said selected voxels of said 3D image and applying said second transfer function to the values of each channel of the 3D image, updating the representation of the 3D image in the second interface, in which the colour and the transparency of the voxels of said representation are dependent on the second transfer function.
2. Method according to claim 1, wherein the group of selected pixels or voxels of the corresponding image is updated by obtaining at least one additional annotation from a user through the at least one interface.
3. Method according to claim 1, wherein at least one of the transfer functions is calculated according to the following steps: selecting a first and a second domain of interest A and B, each domain comprising a group of pixels or voxels based on said selected pixels, creating a first feature tensor and a second feature tensor on the basis of pixels or voxels of the first and second domains of interest A and B, respectively, defining a statistical test that would differentiate the first domain A from the second domain B through the optimal Maximum Mean Discrepancy (MMD) of the statistics of features associated to the first domain A and the second domain B, defining, for each pixel or voxel of the image, the colour C of said pixel or voxel with the following equation:
4. Method according to claim 1, wherein at least one of the transfer functions is calculated according to the following steps: selecting a first and a second domain of interest A and B, each domain comprising a group of pixels or voxels based on said selected pixels, creating a first feature tensor and a second feature tensor on the basis of the first and second domains of interest A and B, respectively, sampling pixels or voxels in the first and second domains of interest A and B to have n_A* = n_B* and 2n_A* ≤ n_max, where n_A* is the number of sampled pixels or voxels in the first domain A and n_B* is the number of sampled pixels or voxels in the second domain B and where n_max is a predetermined value, defining, for each pixel or voxel of the image, the colour C of said pixel or voxel
5. Method according to claim 3, wherein each feature tensor defines, for each pixel of the corresponding domain of interest, at least one feature value selected from the following list of features: v, the value of the pixel or voxel; ∇_l v, the regularised gradient (over scale l) of the pixel or voxel values, where regularisation is performed by Gaussian convolution ∇_l v = ∇(G_l * I), with G_l the Gaussian of zero mean and standard deviation l and I the image; S_l(v), the entropy of a patch of size l around pixel or voxel v; d(v) = ((G_{l_1} − G_{l_2}) * I)(v), the difference of the images convolved by the Gaussians at the two scales (l_1, l_2); ∇ log(G_l * I), the logarithmic derivative of the convolved image at pixel or voxel v; d_p-UMAP,l,m(v), the low-dimensional Euclidean distance in the latent space generated by a parametric UMAP for a patch of size l and the surrounding patch of size l+m centred on pixel or voxel v, this feature being used in specialised solutions where a database of medical images has been assembled, a parametric UMAP having been trained on a predefined set of patch sizes on 2D image slices in the three main axes (Sagittal, Coronal and Axial), the training having been done to reduce the dimension of the patch to a 2D latent space, and, when evaluating the distance, l and l+m being approximated as the closest values used for training; (r, θ)_p-UMAP(v), the polar coordinates of the pixel or voxel v in the latent space generated by a parametric UMAP centred on pixel or voxel v; S_p-UMAP,l(v), the convex hull surface of the domain of size l around pixel v.
6. Method according to claim 3, wherein said kernel k is selected from the following list of kernels:
7. Method according to claim 1, wherein the action generated on one of the first and second devices is transmitted to the other device through at least one manager storing data into a memory, said data comprising at least one parameter representative of said action, the first representation and/or second representation being updated on the basis of the stored data and the set of original images.
8. Method according to claim 7, wherein each manager implements at least one or each of the following generic functions: export data from a manager data storage to a format readable outside of an application, import data from outside of the application to the manager data storage, receive data from at least one interface and store said data into the manager data storage and update all other interfaces with a visual readout of this data inclusion, remove data from at least one interface and remove said data from the manager data storage and update all other interfaces with a visual readout of this data removal, update the visual readout of a given interface with the addition of a new visual element, remove the visual readout of a given interface with the removal of an existing visual element.
9. Method according to claim 8, wherein each data element stored has a unique identifier associated to it which is used to ensure synchronization between each interface.
10. Method according to claim 1, wherein the representation of the 3D image is obtained through volume ray casting methods.
11. Method according to claim 1, wherein the first interface is displayed on a computer screen and the second interface is displayed on a computer screen and/or on a display of a virtual reality device.
12. A computer program, comprising instructions to implement at least a part of the method according to claim 1 when said program is executed by a processor.
13. Computer device comprising: input means for receiving at least one image of a zone of an object, a memory for storing at least instructions of a computer program according to the preceding claim, a processor accessing the memory to read the aforesaid instructions and then executing the method according to claim 1, interface means for displaying the representation of the image obtained by executing said method.
14. A computer-readable non-transient recording medium on which a computer software is registered to implement the method according to claim 1, when the computer software is executed by a processor.
15. A method of generating a 3D model of a patient's anatomical structure wherein the method comprises: implementing by computer means the method according to claim 1 on a medical 3D-image of an object wherein the object is a patient's anatomical structure(s) comprising a zone of medical interest and the medical 3D-image is a magnetic resonance imaging (MRI) image, a Computed Tomography (CT) scan image, a Positron Emission Tomography (PET) scan image or a numerically processed ultrasound recording image, and displaying a 3D model of the patient's anatomical structure including the zone of medical interest.
16. The method according to claim 15, wherein a user provides at least one annotation in the medical 3D-image, wherein the at least one annotation selects pixels or voxels in the zone of medical interest to improve visualization of said zone and/or visualization of the boundaries of the zone of interest, and/or the at least one annotation selects pixels or voxels outside the zone of medical interest to enable image transformation by cropping or deletion of interfering structures, such as surrounding tissues in the zone of interest, bones, muscles or blood vessels.
17. The method according to claim 15, which is performed either on raw 3D imaging data or on segmented 3D image data.
18. A method of analyzing a 3D model obtained from a medical 3D-image previously acquired from a patient in need of determination of his/her condition, disease or health status, wherein the method comprises: implementing by computer means the method according to claim 1 on a medical 3D-image of an object wherein the object is a patient's anatomical structure(s) comprising a zone of medical interest and the medical 3D-image is a magnetic resonance imaging (MRI) image, a Computed Tomography (CT) scan image, a Positron Emission Tomography (PET) scan image or a numerically processed ultrasound recording image, and displaying a 3D model of the patient's anatomical structure including the zone of medical interest; analyzing the displayed 3D model, in particular visualizing, tagging, manipulating and/or measuring metrics based on imaging data of the 3D model, thereby characterizing the patient's anatomical structure in the zone of medical interest.
19. A method of diagnosing or monitoring a patient's condition, disease or health status wherein the method comprises: implementing the method of claim 18, collecting the data relative to the metrics measured based on imaging data wherein the metrics enable mapping morphological, geometrical or position features of medical interest for the patient's condition, disease or health status.
20. The method according to claim 19 for the detection or monitoring of a tumor in a patient, wherein the visualized or measured metrics are selected from the group of: localization of the tumor in a specific organ or part thereof; determination of the number of lesions; position of the tumor relative to the contours of the affected anatomical structure or body part, in particular the affected organ; determination of the volume of the tumor or of the ratio of the volumes respectively of the tumor and the residency volume of the affected organ.
Description
[0182] Other features, details and advantages will be shown in the following detailed description and in the figures.
[0188] As mentioned above, said 2D and 3D images can be obtained from the same volumetric image data, which may be multi-channel. The 2D image may be a slice-based image of said volumetric image.
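By way of illustration only, the following is a minimal sketch of how such a slice-based 2D image could be extracted from a multi-channel volumetric image, assuming a NumPy array of shape (Z, Y, X, channels); the function and variable names are illustrative and not part of the disclosure.

```python
import numpy as np

def extract_slice(volume: np.ndarray, axis: int, index: int) -> np.ndarray:
    """Return a 2D multi-channel slice of a volumetric image.

    Assumes `volume` has shape (Z, Y, X, channels); `axis` selects the
    anatomical plane (0 = axial, 1 = coronal, 2 = sagittal).
    """
    return np.take(volume, index, axis=axis)

# Example: a synthetic 2-channel volume and its middle axial slice.
volume = np.random.rand(64, 128, 128, 2).astype(np.float32)
axial_slice = extract_slice(volume, axis=0, index=32)  # shape (128, 128, 2)
```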
[0189] In the first and/or the second interface, a first user (e.g. a radiologist) and a second user (e.g. a surgeon) are able to select a group of pixels or voxels on said 2D and 3D representations, through annotations. Such a selection can be created and modified by both users, as the annotations are updated, visible and modifiable in both interfaces and representations.
[0190] A transfer function can be calculated for each interface or for only one interface (for example for the second interface), on the basis of said selection.
[0191] The transfer function is defined to optimize an objective function. A transfer function is defined as a mapping Φ: F → G, with F the space of pixel or voxel features and G a set of function spaces (whose nature depends on the application) in which two functions are defined: T(v), the transparency function, and C(v), the colour mapping function, with v the features associated to a pixel or a voxel.
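The mapping Φ above can be pictured with the following minimal sketch, in which a transfer function is modelled as a pair of callables (T, C) applied to per-pixel or per-voxel features to produce an RGBA representation; the names and the RGBA convention are assumptions for illustration, not the patented implementation.

```python
import numpy as np
from typing import Callable, Tuple

# A transfer function is modelled as a pair (T, C): T maps a voxel's feature
# vector to a transparency in [0, 1]; C maps it to an RGB colour.
TransferFunction = Tuple[Callable[[np.ndarray], np.ndarray],
                         Callable[[np.ndarray], np.ndarray]]

def apply_transfer(features: np.ndarray, tf: TransferFunction) -> np.ndarray:
    """Build an RGBA volume/image from per-pixel or per-voxel features (..., m)."""
    transparency_fn, colour_fn = tf
    rgb = colour_fn(features)                      # shape (..., 3)
    alpha = transparency_fn(features)[..., None]   # shape (..., 1)
    return np.concatenate([rgb, alpha], axis=-1)   # shape (..., 4), RGBA

# Example: greyscale colour ramp, intensity-proportional transparency.
tf = (lambda v: v[..., 0],                          # T(v): first feature as alpha
      lambda v: np.repeat(v[..., :1], 3, axis=-1))  # C(v): grey level
rgba = apply_transfer(np.random.rand(64, 64, 64, 1), tf)
```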
[0192] In the embodiments or applications described below, we will define the objective function to optimize and the procedure to obtain (T, C).
[0193] In a first embodiment or application, a statistical test is used to define the transfer function. More particularly, said test relies on the Maximum Mean Discrepancy (MMD).
[0194] The following list of kernels is defined:
[0195] where:
[0196] x and x′ are features of the corresponding pixels or voxels,
[0197] {σ, l, p, σ_b, σ_v, α, γ} are hyper-parameters of the kernels, these parameters being predefined, or set automatically or by the user.
[0198] The above mentioned hyperparameters may be modified or set by the user through at least one interface.
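The kernel formulas themselves are not reproduced in this extract. As a hedged illustration only, the sketch below shows standard kernels consistent with the hyper-parameters named above (σ, l, p), including a Gaussian kernel written k_G as referred to later in the text; the exact list defined by the patent may differ.

```python
import numpy as np

# Illustrative kernels only; the patent's own kernel list is not reproduced
# in this extract. k_G denotes the Gaussian (RBF) kernel mentioned later.

def k_G(x, x2, sigma=1.0):
    """Gaussian (RBF) kernel between feature vectors x and x2."""
    d2 = np.sum((np.asarray(x) - np.asarray(x2)) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def k_laplace(x, x2, l=1.0):
    """Laplacian (exponential) kernel with length scale l."""
    d = np.linalg.norm(np.asarray(x) - np.asarray(x2), axis=-1)
    return np.exp(-d / l)

def k_poly(x, x2, p=2, c=1.0):
    """Polynomial kernel of degree p with offset c."""
    return (np.sum(np.asarray(x) * np.asarray(x2), axis=-1) + c) ** p
```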
[0199] Maximum Mean Discrepancy relies on the notion of embedding probability measures in a Reproducing Kernel Hilbert Space (RKHS). Embedding probability measures is a generalization of kernel methods, which deal with embedding points of an input space as elements of an RKHS.
[0200] Given a probability measure P and a continuous positive-definite real-valued kernel k defined on a separable topological space ξ (with H the corresponding RKHS), P is embedded into H as μ_P = ∫ k(·, x) dP(x), called the kernel mean.
[0201] Based on the above embedding of P, we define a distance, the Maximum Mean Discrepancy (MMD), on the space of probability measures as the distance between the corresponding mean elements, i.e.:
MMD_k(P, Q) = ‖μ_P − μ_Q‖_H
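For illustration, a minimal sketch of the empirical (biased) estimate of MMD²_k between the feature samples of two domains follows, assuming a Gaussian kernel; this is the textbook estimator, not necessarily the patent's exact computation.

```python
import numpy as np

def gaussian_kernel_matrix(X, Y, sigma=1.0):
    """Pairwise Gaussian kernel values between rows of X (n, m) and Y (p, m)."""
    d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
    return np.exp(-d2 / (2.0 * sigma**2))

def mmd2(X, Y, sigma=1.0):
    """Biased estimate of MMD^2_k(P, Q) from samples X ~ P and Y ~ Q."""
    kxx = gaussian_kernel_matrix(X, X, sigma).mean()
    kyy = gaussian_kernel_matrix(Y, Y, sigma).mean()
    kxy = gaussian_kernel_matrix(X, Y, sigma).mean()
    return kxx + kyy - 2.0 * kxy

# Example: features of two annotated domains A and B.
rng = np.random.default_rng(0)
A = rng.normal(0.0, 1.0, size=(200, 4))  # n_A samples, m_A features
B = rng.normal(0.5, 1.0, size=(150, 4))  # n_B samples, m_B features
print(mmd2(A, B))
```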
[0202] We define the regularised density h(v) = (K * ρ)(v), the convolution of the histogram of features ρ with the kernel K(x, x′) = k_G(x − x′).
[0203] While features can be adapted to specific applications, common features associated to voxels for a stack I are:
[0204] v, the value of the pixel or voxel,
[0205] ∇_l v, the regularised gradient (over scale l) of the pixel or voxel values; regularisation is performed by Gaussian convolution ∇_l v = ∇(G_l * I), with G_l the Gaussian of zero mean and standard deviation l, and I the image stack,
[0206] S_l(v), the entropy of a patch of size l around pixel or voxel v,
[0207] d(v) = ((G_{l_1} − G_{l_2}) * I)(v), the difference of the images convolved by the Gaussians at two scales,
[0208] where (l_1, l_2) are the two scales associated to the Gaussians,
[0209] and I is the image stack,
[0210] σ_l(v), the standard deviation of the patch of size l centred on pixel or voxel v,
[0211] KL_l,m(v), the Kullback-Leibler distance between the patch of size l and the surrounding patch of size l+m centred on pixel or voxel v,
[0212] {tilde over (μ)}_l(v), the median value of the voxels in the patch of size l centred on pixel or voxel v,
[0213] ∇ log(G_l * I), the logarithmic derivative of the convolved image at pixel or voxel v,
[0214] d_p-UMAP,l,m(v), the low-dimensional Euclidean distance in the latent space generated by a parametric UMAP for a patch of size l and the surrounding patch of size l+m centred on pixel or voxel v. This feature is used in specialised solutions where a database of medical images has been assembled. A parametric UMAP has been trained on a predefined set of patch sizes on 2D image slices in the three main axes (Sagittal, Coronal and Axial). The training has been done to reduce the dimension of the patch to a 2D latent space. When evaluating the distance, l and l+m are approximated as the closest values used for training.
[0215] (r, θ)_p-UMAP(v), the polar coordinates of the pixel or voxel v in the latent space generated by a parametric UMAP centred on pixel or voxel v,
[0216] S_p-UMAP,l(v), the convex hull surface of the domain of size l around pixel v.
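For illustration, the sketch below computes three of the listed features with SciPy: the regularised gradient ∇_l v, the patch standard deviation σ_l(v), and the patch entropy S_l(v). Boundary handling and histogram binning are assumptions, not prescribed by the patent.

```python
import numpy as np
from scipy import ndimage

def regularised_gradient(image: np.ndarray, l: float) -> np.ndarray:
    """grad_l v = grad(G_l * I): gradient of the Gaussian-smoothed image."""
    smoothed = ndimage.gaussian_filter(image, sigma=l)
    return np.stack(np.gradient(smoothed), axis=-1)

def patch_std(image: np.ndarray, l: int) -> np.ndarray:
    """sigma_l(v): standard deviation of the patch of size l around each voxel."""
    mean = ndimage.uniform_filter(image, size=l)
    mean_sq = ndimage.uniform_filter(image**2, size=l)
    return np.sqrt(np.maximum(mean_sq - mean**2, 0.0))

def patch_entropy(image: np.ndarray, l: int, bins: int = 32) -> np.ndarray:
    """S_l(v): histogram entropy of the patch of size l (slow, for illustration)."""
    lo, hi = float(image.min()), float(image.max())

    def entropy(patch: np.ndarray) -> float:
        hist, _ = np.histogram(patch, bins=bins, range=(lo, hi))
        p = hist / hist.sum()
        p = p[p > 0]
        return float(-np.sum(p * np.log(p)))

    return ndimage.generic_filter(image, entropy, size=l)
```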
[0217] Depending on the expected runtime, the features are evaluated either on the 2D image or slice of interest (the slice where the radiologist decided to annotate the data) or on the full volumetric 3D image. This does not change the following algorithm.
[0218] At least one of the transfer functions is calculated according to the following steps:
[0219] selecting a first and a second domain of interest A and B, each domain comprising a group of pixels or voxels based on said selected pixels,
[0220] creating a first feature tensor v_A and a second feature tensor v_B on the basis of pixels or voxels of the first and second domains of interest A and B, respectively. Each feature tensor v_A or v_B defines, for each pixel of the corresponding domain of interest A or B, at least one feature value selected from the above-mentioned list of features. If a 16-bit graphics card is used, the total number of features of both feature tensors v_A and v_B may be up to 4 features. If a 32-bit graphics card is used, the total number of features of both feature tensors v_A and v_B may be up to 8 features.
[0221] v_A and v_B have respective sizes (n_A, m_A) and (n_B, m_B), where:
[0222] n_A is the number of pixels or voxels of the domain A,
[0223] m_A is the number of features of each pixel or voxel of the domain A,
[0224] n_B is the number of pixels or voxels of the domain B,
[0225] m_B is the number of features of each pixel or voxel of the domain B.
[0226] n_A may be different from n_B and m_A may be different from m_B.
[0227] depending on machine limitations, defining n_max, the maximal number of pixels or voxels (A and B) to be considered,
[0228] choosing one kernel from the above predefined list of kernels and computing the optimal MMD test to differentiate A and B using the following definition and equation:
[0229] We have the witness function f* and the empirical feature mean for P:
μ̂_P = (1/m) Σ_{i=1}^{m} ϕ(x_i)
[0230] ϕ(x) = [ . . . ϕ_i(x) . . . ] is the feature map, with ϕ(x) ∈ H, k(x, x′) = ⟨ϕ(x), ϕ(x′)⟩, and F the class of functions.
[0231] defining, for each pixel or voxel of the image, the colour C of said pixel or voxel with the following equation:
[0232] where f*(v) is the witness function evaluated at the features of the pixel or voxel v, defined by
f*(v) = (1/m) Σ_{i=1}^{m} k(x_i, v) − (1/n) Σ_{j=1}^{n} k(y_j, v)
where k is a kernel defining a value representative of the distance between the features associated to a pixel or voxel x_i belonging to domain A (respectively y_j belonging to domain B) and the features associated to the pixel or voxel v, with m the number of pixels or voxels in domain A and n the number of pixels or voxels in domain B,
[0233] computing the colour C and the transparency T defined by the following equations:
[0234] where:
[0235] h_A(v) = (k * ρ_A)(v) is the smoothed density (the convolution of the kernel with the density of features ρ_A) of the features associated to voxel v of the first domain A, with the kernel k,
[0236] h_B(v) = (k * ρ_B)(v) is the smoothed density (the convolution of the kernel with the density of features ρ_B) of the features associated to voxel v of the second domain B, with the kernel k, and
[0237] Z_{A,B} is a normalising constant ensuring that max(Z_{A,B}) = c, with c ≤ 1 a predetermined constant defining the maximal transparency factor.
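Since the exact C and T equations are not reproduced in this extract, the following sketch only illustrates the witness-function step under stated assumptions: f*(v) is evaluated for every voxel with a Gaussian kernel and then mapped to an illustrative diverging colour and transparency. The final mapping through h_A, h_B and Z_{A,B} is not reproduced here.

```python
import numpy as np

def witness(features, feats_A, feats_B, sigma=1.0):
    """f*(v) = mean_i k(x_i, v) - mean_j k(y_j, v), for every voxel feature v.

    features: (N, m) features of all voxels; feats_A: (n_A*, m) sampled from
    domain A; feats_B: (n_B*, m) sampled from domain B.
    """
    def k(X, Y):
        d2 = np.sum(X**2, 1)[:, None] + np.sum(Y**2, 1)[None, :] - 2.0 * X @ Y.T
        return np.exp(-d2 / (2.0 * sigma**2))
    return k(features, feats_A).mean(axis=1) - k(features, feats_B).mean(axis=1)

def colour_and_transparency(features, feats_A, feats_B, c_max=0.9, sigma=1.0):
    """Illustrative mapping of the witness value to colour and transparency.

    The patent defines C and T from the smoothed densities h_A, h_B and the
    normalising constant Z_{A,B}; this diverging red/blue ramp is a stand-in.
    """
    f = witness(features, feats_A, feats_B, sigma)
    f_norm = f / (np.abs(f).max() + 1e-12)               # in [-1, 1]
    pos, neg = np.clip(f_norm, 0, 1), np.clip(-f_norm, 0, 1)
    rgb = np.stack([pos, np.zeros_like(f_norm), neg], axis=-1)  # A red, B blue
    alpha = c_max * np.abs(f_norm)                       # opaque where discriminative
    return rgb, alpha
```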
[0238] In a second embodiment or application, a probability distribution over features is used to define the transfer function.
[0239] The algorithm defines the colour of a pixel or voxel as the product of a feature of interest and the shifted probability that the voxel belongs to one of the structures (A, B), obtained by one-shot learning.
[0240] Here, the learning is performed only on the data the radiologist and surgeons are working on; learning is not performed from a dataset. Some features used for the one-shot learning can be derived from inferences trained on specific datasets.
[0241] The following list of kernels is defined:
[0242] We define the binary classifier using a Gaussian process. We associate to the domain A the "label" y = +1 and to the domain B the "label" y = −1. We define the latent variable f(v) and the logistic function π(v) ≡ p(y = +1 | v) = σ(f(v)),
[0243] with σ the logistic function. We define V as the set of tagged pixels or voxels (corresponding to the domains (A, B) in the following).
[0244] We have, for a pixel or voxel v_*:
π(v_*) = ∫ σ(f_*) p(f_* | V, y, v_*) df_*
[0245] where p(f_* | V, y, v_*) is the posterior distribution of the latent variable at v_*, given the tagged pixels or voxels V and their labels y.
[0246] At least one of the transfer functions is calculated according to the following steps:
[0247] selecting a first and a second domain of interest A and B, each domain comprising a group of pixels or voxels based on said selected pixels,
[0248] defining, on the computing platform, the maximal number of pixels or voxels n_max that can be used in the procedure,
[0249] defining the list of features of interest { . . . g_i(v) . . . },
[0250] creating a first feature tensor v_A and a second feature tensor v_B on the basis of pixels or voxels of the first and second domains of interest A and B, respectively. Each feature tensor v_A or v_B defines, for each pixel of the corresponding domain of interest A or B, at least one feature value selected from the list of features mentioned above for the first embodiment or application.
[0251] v_A and v_B have respective sizes (n_A, m_A) and (n_B, m_B), where:
[0252] n_A is the number of pixels or voxels of the domain A,
[0253] m_A is the number of features of each pixel or voxel of the domain A,
[0254] n_B is the number of pixels or voxels of the domain B,
[0255] m_B is the number of features of each pixel or voxel of the domain B.
[0256] n_A may be different from n_B and m_A may be different from m_B.
[0257] sampling voxels or pixels for (A, B) to have n_A* = n_B* and 2n_A* = n ≤ n_max, where:
[0258] n_A* is the number of pixels or voxels sampled in the domain A,
[0259] n_B* is the number of pixels or voxels sampled in the domain B,
[0260] defining a classifier as a binary Gaussian process classifier (P, π),
[0261] computing the Laplace approximation,
[0262] computing the colour C and the transparency T defined by the following equations:
[0263] as the normalised product of the shifted probability that said pixel or voxel belongs to domain A and the value of one feature g(v) of said pixel or voxel v, where β is a predefined constant.
[0264] where:
[0265] h_A(v) = (k * ρ_A)(v) is the smoothed density (the convolution of the kernel with the density of features ρ_A) of the features associated to voxel v of the first domain A, with the kernel k,
[0266] h_B(v) = (k * ρ_B)(v) is the smoothed density (the convolution of the kernel with the density of features ρ_B) of the features associated to voxel v of the second domain B, with the kernel k, and
[0267] Z_{A,B} is a normalising constant ensuring that max(Z_{A,B}) = c, with c ≤ 1 a predetermined constant defining the maximal transparency factor.
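As an illustrative sketch of the classification step of this second embodiment, scikit-learn's GaussianProcessClassifier (a binary Gaussian process classifier with a logistic link, fitted via the Laplace approximation) can stand in for the classifier (P, π); the sampling rule follows [0257] and the β-shifted product with the feature g(v) follows [0263]. All names and constants are illustrative, not the patented implementation.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

def gp_transfer(feats_A, feats_B, features, g, beta=0.5, c_max=0.9, n_max=512):
    """Colour value and transparency from a one-shot GP classifier.

    feats_A, feats_B: features of the tagged voxels (domains A and B);
    features: (N, m) features of all voxels; g: (N,) feature of interest g(v);
    beta: the predefined shift constant of the description.
    """
    # Sample so that n_A* = n_B* and 2 n_A* <= n_max (cf. [0257]).
    n = min(len(feats_A), len(feats_B), n_max // 2)
    rng = np.random.default_rng(0)
    X = np.vstack([feats_A[rng.choice(len(feats_A), n, replace=False)],
                   feats_B[rng.choice(len(feats_B), n, replace=False)]])
    y = np.concatenate([np.ones(n), -np.ones(n)])  # A -> +1, B -> -1

    # Binary GP classifier; scikit-learn fits it with a Laplace approximation.
    gpc = GaussianProcessClassifier(kernel=RBF(length_scale=1.0)).fit(X, y)
    pi = gpc.predict_proba(features)[:, list(gpc.classes_).index(1.0)]  # p(y=+1|v)

    value = (pi - beta) * g                              # shifted probability x g(v)
    alpha = c_max * np.abs(value) / (np.abs(value).max() + 1e-12)
    return value, alpha
```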
[0268] In the software implementation, the above-mentioned procedure is run for the set of kernels defined above. Within at least one interface, the corresponding user can change the kernel. By default, the kernel k_G may be displayed.
[0269] Such a Bayesian approach allows better generalisation of the result if the tagged pixels or voxels are very few in number. If there are numerous tagged pixels or voxels, the MMD (first embodiment or application) will allow faster calculations and higher efficiency than the Bayesian approach (second embodiment or application).