GEOMETRIC CALIBRATION METHOD AND APPARATUS OF COMPUTER TOMOGRAPHY
20220405970 · 2022-12-22
Inventors
Cpc classification
A61B6/584
HUMAN NECESSITIES
G06N7/01
PHYSICS
G06T11/005
PHYSICS
G06T7/80
PHYSICS
International classification
Abstract
A geometric calibration apparatus detects points from projection regions onto which markers disposed on a phantom are projected, and calculates an output vector representing a probability distribution that gives a probability with which each point is a projection of each marker, by inputting data corresponding to each point to a learning model. The geometric calibration apparatus extracts a predetermined number of samples based on the probability distribution, obtains a candidate projection matrix by transforming correspondences between markers determined based on the samples among the markers and points determined based on the samples among the points, calculates points into which the markers are transformed by the candidate projection matrix, calculates a difference between a set of the transformed points and a set of the detected points, and designates the candidate projection matrix as a projection matrix when the difference is less than or equal to a threshold.
Claims
1. A geometric calibration apparatus comprising: a memory configured to store one or more instructions; and a processor configured to, by executing the one or more instructions: detect a plurality of first points from a plurality of projection regions, respectively, a plurality of first markers disposed on a phantom being projected onto the projection regions in two dimensions by a computer tomography device; calculate an output vector representing a probability distribution that gives a probability with which each of the first points is a projection of each of the first markers, by inputting data corresponding to each of the first points to a learning model; extract a predetermined number of first samples based on the probability distribution; obtain a candidate projection matrix by transforming correspondences between second markers determined based on the first samples among the first markers and second points determined based on the first samples among the first points; calculate third points into which the first markers are transformed by the candidate projection matrix; calculate a difference between a first set of the third points and a second set of the first points; and designate the candidate projection matrix as a projection matrix in response to the difference being less than or equal to a threshold.
2. The geometric calibration apparatus of claim 1, wherein the processor is configured to transform each of the first points into a histogram, and wherein the data corresponding to each of the first points may be data obtained by being transformed into the histogram.
3. The geometric calibration apparatus of claim 1, wherein the processor is configured to extract the first samples by applying weighted random sampling to the probability distribution.
4. The geometric calibration apparatus of claim 1, wherein the processor is configured to: extract a plurality of second samples for the first points, respectively, from a third set including indices of the first markers; and extract the first samples from a fourth set including indices of the first points such that a probability with which each element of the fourth set is extracted is a normalized value of a probability with which a first point corresponding to said each element among the first points is a projection of a marker at a second sample for the first point among the second samples.
5. The geometric calibration apparatus of claim 4, wherein the processor is configured to extract the first samples and the second samples using a weighted random sampling technique.
6. The geometric calibration apparatus of claim 4, wherein the second markers are markers that have, as indices, second samples corresponding to the first samples among the second samples, among the first markers, and wherein the second points are points having the first samples as indices among the first points.
7. The geometric calibration apparatus of claim 1, wherein the processor is configured to repeat a process of extracting the first samples, obtaining the candidate projection matrix, calculating the second points, and calculating the difference, in response to the difference being greater than the threshold.
8. The geometric calibration apparatus of claim 1, wherein the processor is configured to: calculate a minimum-weight perfect matching for a bipartite graph in which the first set and the second set are set as vertex sets and an edge between a vertex of the first set and a vertex of the second set has a weight; and calculate a sum of weights obtained by the minimum-weight perfect matching as the difference.
9. The geometric calibration apparatus of claim 1, wherein the learning model is a model that has been trained by using training data that have, as input data, data corresponding to fourth points into which the first markers are respectively transformed by a randomly-generated projection matrix and have, as correct answer labels, values indicating classification of the first markers.
10. The geometric calibration apparatus of claim 9, wherein the data corresponding to the fourth points are data obtained by transforming the fourth points into a histogram.
11. The geometric calibration apparatus of claim 1, wherein the processor is configured to apply direct linear transformation to the correspondences to obtain the candidate projection matrix.
12. A geometric calibration method of a computer tomography device, the method comprising: detecting a plurality of points q′.sub.j(j=1, . . . , N′) from a plurality of projection regions, respectively, a plurality of first markers disposed on a phantom being projected onto the projection regions in two dimensions by the computer tomography device; calculating an output vector y.sub.j=(P.sub.1j, . . . , P.sub.Mj) having, as elements, a probability P.sub.ij(i=1, . . . , M) with which the q′.sub.j is a projection of each of the first markers, by inputting data corresponding to the q′.sub.j to a learning model; extracting samples s′.sub.k(k=1, . . . , 6) from a set Ω′={1, . . . , N′} based on the y.sub.j; obtaining a candidate projection matrix by transforming correspondences between second markers determined based on the samples s′.sub.k among the first markers and points determined based on the samples s′.sub.k among the q′.sub.j; calculating points {circumflex over (q)}.sub.i into which the first markers are transformed by the candidate projection matrix; calculating a difference d({circumflex over (Q)}, Q′) between a set {circumflex over (Q)}={{circumflex over (q)}.sub.1, . . . , {circumflex over (q)}.sub.M} of the {circumflex over (q)}.sub.i and a set Q′={q′.sub.1, . . . , q′.sub.N′} of the q′.sub.j; and designating the candidate projection matrix as a projection matrix in response to the d({circumflex over (Q)}, Q′) being less than or equal to a threshold.
13. The method of claim 12, further comprising transforming the q′.sub.j into a histogram, wherein the data corresponding to the q′.sub.j are data obtained by being transformed into the histogram.
14. The method of claim 12, wherein extracting the s′.sub.k comprises: extracting samples s.sub.j∈Ω from a set Ω={1, . . . , M} such that a probability with which each element i of the set Ω is extracted for a given j is the P.sub.ij; normalizing a probability P.sub.s.sub.j.sub.j with which the q′.sub.j is a projection of a marker p.sub.s.sub.j by multiplying the probability P.sub.s.sub.j.sub.j by a normalization constant α; and extracting the s′.sub.k from the set Ω′ through sampling without replacement such that a probability with which each element j of the set Ω′ is extracted is αP.sub.s.sub.j.sub.j.
15. The method of claim 14, wherein the α is a reciprocal of a sum of the P.sub.s.sub.j.sub.j over j(=1, . . . , N′).
16. The method of claim 12, wherein calculating the d({circumflex over (Q)}, Q′) comprises: calculating a minimum-weight perfect matching for a bipartite graph in which the {circumflex over (Q)} and the Q′ are set as vertex sets and an edge between a vertex {circumflex over (q)}.sub.i of {circumflex over (Q)} and a vertex of q′.sub.j of Q′ has a weight w.sub.ij=||{circumflex over (q)}.sub.i−q′.sub.j||.sup.2; and calculating a sum of weights obtained by the minimum-weight perfect matching as the d({circumflex over (Q)}, Q′).
17. The method of claim 12, further comprising, in response to the d({circumflex over (Q)}, Q′) being greater than the threshold, repeating extracting the s′.sub.k, obtaining the candidate projection matrix, calculating the {circumflex over (q)}.sub.i, and calculating the d({circumflex over (Q)}, Q′).
18. A computer program executed by a computing device and stored in a non-transitory recording medium, the computer program causing the computing device to execute: detecting a plurality of first points from a plurality of projection regions, respectively, a plurality of first markers disposed on a phantom being projected onto the projection regions in two dimensions by a computer tomography device; calculating an output vector representing a probability distribution that gives a probability with which each of the first points is a projection of each of the first markers, by inputting data corresponding to each of the first points to a learning model; extracting a predetermined number of first samples based on the probability distribution; obtaining a candidate projection matrix by transforming correspondences between second markers determined based on the first samples among the first markers and second points determined based on the first samples among the plurality of first points; calculating third points into which the first markers are transformed by the candidate projection matrix; calculating a difference between a first set of the third points and a second set of the first points; and designating the candidate projection matrix as a projection matrix in response to the difference being less than or equal to a threshold.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE EMBODIMENTS
[0034] In the following detailed description, only certain embodiments of the present invention have been shown and described, simply by way of illustration. As those skilled in the art would realize, the described embodiments may be modified in various different ways, all without departing from the spirit or scope of the present invention. Accordingly, the drawings and description are to be regarded as illustrative in nature and not restrictive. Like reference numerals designate like elements throughout the specification.
[0035] As used herein, a singular form may be intended to include a plural form as well, unless an explicit expression such as “one” or “single” is used.
[0036] In flowcharts described with reference to the drawings, the order of operations or steps may be changed, several operations or steps may be merged, a certain operation or step may be divided, and a specific operation or step may not be performed.
[0037] First, a principle of geometric calibration to be used in a geometric calibration method according to an embodiment of the present invention is described with reference to
[0039] Referring to
[0040] The X-ray imaging direction information is mathematically expressed as a projection matrix. The projection matrix is given as a 3×4 matrix of real components, and can also be called a camera matrix. Geometrically, the projection matrix represents a mapping or transformation from a set of three-dimensional points to a set of two-dimensional points. That is, when a three-dimensional point p=(p.sub.x, p.sub.y, p.sub.z)∈ℝ.sup.3 is given, this point is transformed as in Equation 1 by the projection matrix A.
{tilde over (q)}=A{tilde over (p)} Equation 1
[0041] In Equation 1, {tilde over (p)} and {tilde over (q)} are homogeneous coordinates of points p and q, respectively. Equation 1 is explicitly written as Equation 2.
[0042] By Equation 2, the two-dimensional point q∈ℝ.sup.2 is given as in Equation 3.
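The mapping of Equation 1 and the subsequent dehomogenization that yields the two-dimensional point q can be sketched as follows; the matrix values are illustrative only, and the marker coordinate is taken from Table 1.

```python
import numpy as np

# Hypothetical 3x4 projection matrix A (values are illustrative only).
A = np.array([
    [1.0, 0.0, 0.0, 5.0],
    [0.0, 1.0, 0.0, 3.0],
    [0.0, 0.0, 0.0, 2.0],   # third row yields the homogeneous scale q~_w
])

def project(A, p):
    """Apply Equation 1, q~ = A p~, then dehomogenize to get q in R^2."""
    p_h = np.append(p, 1.0)          # homogeneous coordinates p~
    q_h = A @ p_h                    # q~ = A p~
    return q_h[:2] / q_h[2]          # q = (q~_u / q~_w, q~_v / q~_w)

p = np.array([11.131, 29.021, 62.179])   # first marker from Table 1
q = project(A, p)                        # -> [8.0655, 16.0105]
```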
[0043] Accordingly, if the phantom marker p=(p.sub.x, p.sub.y, p.sub.z) is projected onto q=(q.sub.u, q.sub.v) by the X-ray imaging, Equation 1 is expressed as in Equation 4 because the projection matrix A representing the X-ray imaging direction satisfies Equation 1.
A.sub.11p.sub.x+A.sub.12p.sub.y+A.sub.13p.sub.z+A.sub.14=q.sub.u{tilde over (q)}.sub.w
A.sub.21p.sub.x+A.sub.22p.sub.y+A.sub.23p.sub.z+A.sub.24=q.sub.v{tilde over (q)}.sub.w
A.sub.31p.sub.x+A.sub.32p.sub.y+A.sub.33p.sub.z+A.sub.34={tilde over (q)}.sub.w Equation 4
[0044] In Equation 4, A.sub.ij∈ℝ is an element in the i.sup.th row and j.sup.th column of the 3×4 matrix A. Eliminating {tilde over (q)}.sub.w from Equation 4 gives Equation 5. That is, two equations for the projection matrix A can be obtained from the correspondence p↔q.
p.sub.xA.sub.11+p.sub.yA.sub.12+p.sub.zA.sub.13+A.sub.14−q.sub.u(p.sub.xA.sub.31+p.sub.yA.sub.32+p.sub.zA.sub.33+A.sub.34)=0
p.sub.xA.sub.21+p.sub.yA.sub.22+p.sub.zA.sub.23+A.sub.24−q.sub.v(p.sub.xA.sub.31+p.sub.yA.sub.32+p.sub.zA.sub.33+A.sub.34)=0 Equation 5
[0045] If K correspondences p.sup.k↔q.sup.k(k=1, . . . , K) are given, 2K equations for the projection matrix A can be obtained from the K correspondences and given as in Equation 6 when expressed as a matrix equation.
Ba=0 Equation 6
[0046] In Equation 6, a is a column vector of size 12, expressed as in Equation 7, and B is a matrix of size 2K×12 whose 2×12 partial matrix B.sub.k corresponding to the (2k−1).sup.th and 2k.sup.th rows is given as in Equation 8.
[0047] From the above equations, it can be observed that K must be equal to or greater than 6 (K≥6) in order to determine the solution a of Equation 6. The process of constructing the equation Ba=0 from the correspondences p.sup.k↔q.sup.k for the projection matrix A is called direct linear transformation (DLT).
[0048] One method for solving Ba=0 obtained by the DLT with respect to the unknown a may use, for example, singular value decomposition (SVD). That is, if the matrix B is decomposed as in Equation 9 by applying the SVD to the matrix B, the column of V corresponding to the smallest singular value appearing in the diagonal matrix D becomes the solution a. Rearranging a gives the projection matrix A.
B=UDV.sup.T Equation 9
[0049] In Equation 9, U is an orthogonal matrix obtained by eigenvalue decomposition of BB.sup.T, V is an orthogonal matrix obtained by eigenvalue decomposition of B.sup.TB, D is a diagonal matrix having the singular values as diagonal elements, and B.sup.T is the transpose matrix of B.
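The DLT-plus-SVD procedure described above can be sketched as below; the example projection matrix and the cube-corner marker layout are made up solely for the round-trip check.

```python
import numpy as np

def dlt(ps, qs):
    """Direct linear transformation: stack the two rows of Equation 5 for
    each correspondence p^k <-> q^k into a 2K x 12 coefficient matrix
    (here called B) and solve the homogeneous system by SVD (Equation 9)."""
    rows = []
    for (px, py, pz), (qu, qv) in zip(ps, qs):
        rows.append([px, py, pz, 1, 0, 0, 0, 0, -qu*px, -qu*py, -qu*pz, -qu])
        rows.append([0, 0, 0, 0, px, py, pz, 1, -qv*px, -qv*py, -qv*pz, -qv])
    B = np.asarray(rows, dtype=float)
    # The right-singular vector for the smallest singular value spans the
    # null space of B; rearranging it gives the 3x4 projection matrix.
    _, _, vt = np.linalg.svd(B)
    return vt[-1].reshape(3, 4)

# Round-trip check with a made-up projection matrix and cube-corner markers.
A_true = np.array([[1.0, 2.0, 3.0, 4.0],
                   [5.0, 6.0, 7.0, 8.0],
                   [0.01, 0.02, 0.03, 1.0]])
ps = [(x, y, z) for x in (0.0, 10.0) for y in (0.0, 10.0) for z in (0.0, 10.0)]
qs = []
for p in ps:
    qh = A_true @ np.append(p, 1.0)
    qs.append(qh[:2] / qh[2])
A_est = dlt(ps, qs)
# A is recovered only up to scale and sign, as expected for a homogeneous system.
```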
[0050] A geometric calibration phantom may be designed for a geometric calibration method according to various embodiments of the present invention. The geometric calibration phantom design may mean defining spatial positions of a plurality of markers constituting the phantom. As described above, in the geometric calibration phantom design, M (M≥6) markers (e.g., 110 in
TABLE 1. Marker positions
 x       y       z
 11.131  29.021  62.179
 9.851   11.078  84.876
 28.609  58.157  11.798
 26.290  89.173  59.756
 36.147  64.311  86.668
 76.712  20.840   2.826
 56.729   1.279  35.583
 87.399  54.607   2.605
 38.432  86.550  12.123
 10.308  51.591  61.399
 49.733  37.451  98.660
 18.009  67.792  47.432
 15.280  25.564   1.724
 44.632  95.417  11.597
 12.424  73.945  57.123
 74.076  89.189  78.380
[0051] Next, a learning apparatus and a learning model according to various embodiments of the present invention are described with reference to
[0053] Referring to
[0054] In some embodiments, the learning model trained (generated) in the learning apparatus may be provided to a geometric calibration apparatus. The geometric calibration apparatus may perform geometric calibration based on the learning model. In some embodiments, the geometric calibration apparatus may be a computing device other than the learning apparatus. In some embodiments, the learning apparatus may perform functions of a geometric calibration apparatus.
[0055] In some embodiments, the computing device may be a desktop computer, a laptop computer, a server, or the like, but is not limited thereto and may be any kind of device having a computing function.
[0056] Although
[0057] The learning apparatus 200 may train a neural network (e.g., a convolutional neural network, CNN) by using a training data set 210 including a plurality of training data 211. Each training data 211 may include data in which a correct answer 211a is labeled. In some embodiments, the learning apparatus 200 may train the neural network by inputting the training data 211 to the neural network to predict a value for the target task, and backpropagating a loss between the predicted value and the correct answer 211a labeled in the training data through the neural network.
[0058] In some embodiments, the learning apparatus 200 may use a neural network learning model using a set of two-dimensional coordinates Q={q.sub.1, . . . , q.sub.N} as the training data (i.e., inputs). In this case, a correct answer label of the training data may be a value indicating classification of a phantom marker in a geometric calibration phantom, for example, an index of the phantom marker.
[0059] Referring to
[0060] The learning apparatus randomly generates twelve real numbers and creates a 3×4 projection matrix A having the twelve real numbers as elements at S320. The learning apparatus determines whether the projection matrix A satisfies a predetermined constraint condition at S330. In some embodiments, the predetermined constraint condition may be configurable by a user. In some embodiments, the predetermined constraint condition may include, for example, a condition that the projection matrix satisfies a specification of a designed pattern. That is, since there is no need to train the learning model with a projection matrix out of the specification, the constraint condition may be set in order to train the learning model only with projection matrices that satisfy the specification. When the projection matrix A does not satisfy the predetermined constraint condition, the learning apparatus performs the process of S320 again.
[0061] When the projection matrix A satisfies the predetermined constraint condition, the learning apparatus transforms each phantom marker p.sub.i∈ℝ.sup.3 into q.sub.i∈ℝ.sup.2 through the projection matrix A at S340. In some embodiments, the learning apparatus may calculate a point q.sub.i to satisfy {tilde over (q)}.sub.i=A{tilde over (p)}.sub.i as described with reference to Equation 1 at S340. Next, the learning apparatus designates as in x←(q.sub.1, . . . , q.sub.M), y←(1, . . . , M) at S340. In some embodiments, the tuple x may be a set of two-dimensional coordinates of the points onto which the phantom markers are projected and correspond to data used as an input. The tuple y may correspond to data used as correct answer labels and be values indicating classification (e.g., indices) of the phantom markers.
[0062] In some embodiments, the learning apparatus may perform data augmentation on the training data at S350. The learning apparatus may generate augmented training data by changing x and y based on any one or a combination of two or more among various data augmentation techniques. In some embodiments, the learning apparatus may select any one or the combination of two or more among the various data augmentation techniques with a probability. In some embodiments, the various data augmentation techniques may include, for example, a technique of randomly selecting j to remove the j-th elements in x and y, a technique of randomly generating q∈ℝ.sup.2 to add it as the last element of x and adding (N+1) as the last element of y (N is the number of elements in each of x and y before the change), and a technique of randomly generating a permutation σ and changing the ordering of the elements in x and y according to the permutation σ. According to the third technique, it may be designated as in x←(q.sub.σ(1), . . . , q.sub.σ(N)), y←(σ(1), . . . , σ(N)).
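A minimal sketch of the three augmentation techniques described above; the coordinate range used for the spurious point is an assumption made for illustration.

```python
import random

def augment(x, y, rng):
    """Sketch of the three data augmentation techniques: element removal,
    spurious-point addition, and random permutation."""
    x, y = list(x), list(y)
    technique = rng.choice(["remove", "add", "permute"])
    if technique == "remove":
        # Randomly select j and remove the j-th elements of x and y.
        j = rng.randrange(len(x))
        del x[j], y[j]
    elif technique == "add":
        # Append a random point q and the label N+1 ("no marker"); the
        # coordinate range 0..100 is an assumption for illustration.
        n = len(x)
        x.append((rng.uniform(0, 100), rng.uniform(0, 100)))
        y.append(n + 1)
    else:
        # Reorder x and y together by a random permutation sigma.
        sigma = list(range(len(x)))
        rng.shuffle(sigma)
        x = [x[i] for i in sigma]
        y = [y[i] for i in sigma]
    return x, y
```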
[0063] In some embodiments, the learning apparatus may transform each element of x obtained in the process of S340 or S350 into a histogram at S360. The x and y obtained in the process of S340 or S350 may be expressed as x=(x.sub.1, . . . , x.sub.N), y=(y.sub.1, . . . , y.sub.N). The learning apparatus may transform each element x.sub.j∈ℝ.sup.2 of x into the histogram H.sub.j=H(x.sub.j; x) to facilitate training of a neural network model. The histogram transformation H(x.sub.j; x) may be defined in several ways to quantify spatial features of x.sub.j in relation to the other points of x. In some embodiments, the histogram transformation may be computed as in Equation 10 using shape context. Various histogram transformation techniques, including, for example, the technique presented in “Shape context: A new descriptor for shape matching and object recognition” (Belongie, S. et al., Advances in Neural Information Processing Systems, 13, 831-837), may be used. The learning apparatus may transform x into the histogram by binding each element of Equation 10 and designating it as H←(H.sub.1, . . . , H.sub.N).
H.sub.j(k)=#{j′≠j:x.sub.j′−x.sub.j∈bin(k)} Equation 10
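Equation 10 can be sketched with a simple binning rule; the angular sectors below stand in for the log-polar bins of full shape context, an assumption made to keep the example short.

```python
import numpy as np

def histogram(j, x, n_bins=8):
    """Equation 10 sketch: count the other points x_j' by the bin that the
    offset x_j' - x_j falls into. Here bin(k) is a plain angular sector;
    shape context would use log-polar (angle x radius) bins instead."""
    xj = np.asarray(x[j], dtype=float)
    h = np.zeros(n_bins, dtype=int)
    for jp, xjp in enumerate(x):
        if jp == j:
            continue                      # Equation 10 excludes j' = j
        dx, dy = np.asarray(xjp, dtype=float) - xj
        angle = np.arctan2(dy, dx) % (2 * np.pi)
        k = int(angle / (2 * np.pi) * n_bins) % n_bins
        h[k] += 1
    return h
```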
[0064] The learning apparatus adds N training data items consisting of (H.sub.j, y.sub.j) generated in the process of S360 to the training data set at S370. That is, the learning apparatus may update the training data set D as in Equation 11.
D←D∪{(H.sub.1,y.sub.1), . . . , (H.sub.N,y.sub.N)} Equation 11
[0065] If sufficient training data items are added to the training data set at S380, the learning apparatus ends the generation of training data. If the sufficient training data items are not added, the learning apparatus repeats the process from S320. In some embodiments, the learning apparatus may determine that the sufficient training data items have been added when the number of elements (e.g., training data) of the training data set reaches a threshold.
[0066] Through these processes, the learning apparatus may generate the training data set D and store the generated training data set D in a storage device.
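The generation loop of S320 through S380 can be condensed into the following sketch; the constraint test, the marker layout, and the dataset size are placeholders, and the histogram transform of S360 is omitted so that the inputs are raw projected coordinates.

```python
import numpy as np

rng = np.random.default_rng(0)
markers = rng.uniform(0, 100, size=(16, 3))       # phantom markers p_i (cf. Table 1)

def satisfies_constraint(A):
    # Placeholder for the user-set constraint of S330 (assumption: require
    # that every marker lands at a finite point, i.e. q~_w stays away from 0).
    w = markers @ A[2, :3] + A[2, 3]
    return np.all(np.abs(w) > 1e-3)

dataset = []
while len(dataset) < 100:                          # S380: until enough data
    A = rng.standard_normal((3, 4))                # S320: random 3x4 matrix
    if not satisfies_constraint(A):                # S330: reject and retry
        continue
    ph = np.hstack([markers, np.ones((16, 1))])    # homogeneous p~_i
    qh = ph @ A.T                                  # S340: q~_i = A p~_i
    q = qh[:, :2] / qh[:, 2:3]
    # S370: one (input, label) item per projected marker; the histogram
    # transform of S360 is omitted here for brevity.
    dataset.extend((q[i], i + 1) for i in range(16))

print(len(dataset))   # prints 112 (the first multiple of 16 at or above 100)
```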
[0067] Next, referring to
[0068] Two-dimensional input data (e.g., data transformed into a histogram) 430 of the training data is input to the feature extraction layer 410. The feature extraction layer 410 extracts features from the input data 430. Such a feature may be expressed in the form of a feature map.
[0069] In some embodiments, the feature extraction layer 410 may include a plurality of layers. A first layer among the plurality of layers may receive the input data and extract the feature map from the input data. The other layers may extract the feature map again from the feature map transferred by the previous layer and transfer it to the next layer. In some embodiments, each layer may extract the feature map by applying an operation of the corresponding layer to the input data or the input feature map. In some embodiments, the feature extraction layer 410 may include a convolution layer. The convolution layer may extract the feature map by applying a convolution filter to the input data or the input feature map.
[0070] In some embodiments, the feature extraction layer 410 may further include other layers such as a pooling layer and an activation layer. The pooling layer can extract features by performing a pooling operation on input features.
[0071] The output layer 420 produces a predicted value 440 according to the target task based on the feature map extracted by the feature extraction layer 410. In some embodiments, the predicted value may be given as an M-dimensional vector y whose elements indicate classification (e.g., indices) of the phantom markers predicted for the target task.
[0072] In some embodiments, the output layer 420 may include a fully-connected layer that performs classification from the feature map output through the feature extraction layer 410. In some embodiments, a plurality of fully-connected layers may be provided.
[0073] A loss 450 between the predicted value 440 of the output layer 420 and a correct answer label of the input data may be calculated and be backpropagated to the neural network 400 so that the neural network 400 can be trained.
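As an illustration of the predict, loss, and backpropagate cycle described above, here is a deliberately simplified stand-in that replaces the CNN feature extractor with a single softmax output layer trained by gradient descent; all sizes and the toy data are assumptions.

```python
import numpy as np

# Simplified stand-in for the neural network 400: one softmax output layer
# trained by backpropagating the cross-entropy loss. A real implementation
# would place a CNN feature extractor (410) in front of this layer.
rng = np.random.default_rng(0)
M, D = 16, 32                       # M marker classes, D histogram bins (assumed)
W = np.zeros((D, M))

def predict(h):
    z = h @ W
    e = np.exp(z - z.max())
    return e / e.sum()              # predicted value 440: class probabilities

# Toy training data: each class i is tied to a distinctive random histogram.
protos = rng.uniform(0, 1, size=(M, D))
for _ in range(300):                # repeated training over the data set
    i = int(rng.integers(M))
    h = protos[i] + rng.normal(0, 0.01, D)
    p = predict(h)
    grad = p.copy()
    grad[i] -= 1.0                  # d(loss)/dz for softmax + cross-entropy
    W -= 0.5 * np.outer(h, grad)    # backpropagate the loss (450)

acc = np.mean([predict(protos[i]).argmax() == i for i in range(M)])
print(acc)
```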
[0074] After repeatedly training the learning model using the training data of the training data set, the learning apparatus may store the learning model in the storage device. Next, the geometric calibration apparatus may perform geometric calibration by generating a projection matrix based on the learning model learned by the learning apparatus.
[0075] Next, a geometric calibration method according to various embodiments of the present invention is described with reference to
[0077] A geometric calibration apparatus detects projection regions (e.g., 150) onto which phantom markers disposed on a phantom are projected in two dimensions by a computer tomography device, and detects points q′.sub.j(j=1, . . . , N′) from the projection regions.
[0078] The geometric calibration apparatus transforms each detected point q′.sub.j and inputs data corresponding to the detected point q′.sub.j to the learning model at S520 and S530. In some embodiments, the geometric calibration apparatus may transform the detected point q′.sub.j into a histogram H′.sub.j at S520. In some embodiments, the geometric calibration apparatus may designate as in x←(q′.sub.1, . . . , q′.sub.N′) and calculate the histogram H′.sub.j=H(q′.sub.j; x), for example, based on Equation 10. The geometric calibration apparatus may calculate an output vector y.sub.j=(P.sub.1j, . . . , P.sub.Mj) by inputting the calculated histogram H′.sub.j to the learning model at S530. The geometric calibration apparatus may perform the operations of transforming the detected point q′.sub.j into the histogram H′.sub.j and calculating the output vector y.sub.j=(P.sub.1j, . . . , P.sub.Mj) for each j(=1, . . . , N′) at S520 and S530.
[0079] When there is a set of two-dimensional points {q′.sub.1, . . . , q′.sub.N′}, the output vector y.sub.j may be given as an M-dimensional vector (P.sub.1j, . . . , P.sub.Mj) if the point q′.sub.j is transformed into the histogram and input to the learning model. Here, P.sub.ij(i=1, . . . , M) means a probability with which the point q′.sub.j is a projection of the phantom marker p.sub.i. The output vector y.sub.j may be expressed as, for example, a probability distribution as shown in
[0080] In some embodiments, the geometric calibration apparatus may extract samples s.sub.j∈Ω (referred to as “second samples”) from a set Ω={1, . . . , M} such that a probability with which each element i of the set Ω is extracted for a given j is the probability P.sub.ij. In some embodiments, the geometric calibration apparatus may extract the sample s.sub.j using a weighted random sampling technique in order to achieve sampling according to the probability distribution. In this case, the geometric calibration apparatus may perform the extraction of the sample s.sub.j for each j(=1, . . . , N′). For example, when M=9 and N′=9, the samples s.sub.j may be extracted as shown in Table 2. Because the sampling is random, samples s.sub.j of a different pattern may be extracted the next time the sampling is performed.
TABLE 2
i\j  1          2          3          4          5          6          7          8          9
1    P.sub.11   P.sub.12   P.sub.13   P.sub.14   P.sub.15   P.sub.16   P.sub.17   P.sub.18   P.sub.19
2    P.sub.21   P.sub.22   P.sub.23   P.sub.24   P.sub.25   P.sub.26   P.sub.27   P.sub.28   P.sub.29
3    P.sub.31   P.sub.32   P.sub.33   P.sub.34   P.sub.35   P.sub.36   P.sub.37   P.sub.38   P.sub.39
4    P.sub.41   P.sub.42   P.sub.43   P.sub.44   P.sub.45   P.sub.46   P.sub.47   P.sub.48   P.sub.49
5    P.sub.51   P.sub.52   P.sub.53   P.sub.54   P.sub.55   P.sub.56   P.sub.57   P.sub.58   P.sub.59
6    P.sub.61   P.sub.62   P.sub.63   P.sub.64   P.sub.65   P.sub.66   P.sub.67   P.sub.68   P.sub.69
7    P.sub.71   P.sub.72   P.sub.73   P.sub.74   P.sub.75   P.sub.76   P.sub.77   P.sub.78   P.sub.79
8    P.sub.81   P.sub.82   P.sub.83   P.sub.84   P.sub.85   P.sub.86   P.sub.87   P.sub.88   P.sub.89
9    P.sub.91   P.sub.92   P.sub.93   P.sub.94   P.sub.95   P.sub.96   P.sub.97   P.sub.98   P.sub.99
     s.sub.1=6  s.sub.2=3  s.sub.3=8  s.sub.4=1  s.sub.5=4  s.sub.6=9  s.sub.7=7  s.sub.8=2  s.sub.9=5
[0081] Next, the geometric calibration apparatus may extract samples {s′.sub.1, . . . , s′.sub.6}⊂Ω′ through sampling without replacement such that a probability with which each element j of a set Ω′={1, . . . , N′} is extracted from the set Ω′ is αP.sub.s.sub.j.sub.j, where α is a normalization constant (e.g., a reciprocal of a sum of the P.sub.s.sub.j.sub.j over j). For example, the samples s′.sub.k may be extracted as shown in Table 3.
TABLE 3
j    s.sub.j    P
1    6          αP.sub.61
2    3          αP.sub.32
3    8          αP.sub.83
4    1          αP.sub.14
5    4          αP.sub.45
6    9          αP.sub.96
7    7          αP.sub.77
8    2          αP.sub.28
9    5          αP.sub.59
s′.sub.1=1, s′.sub.2=2, s′.sub.3=3, s′.sub.4=5, s′.sub.5=7, s′.sub.6=9
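The two-stage sampling of paragraphs [0080] and [0081] can be sketched as follows; indices are 0-based here, unlike the 1-based indices in the text, and the normalization by α happens implicitly because `random.choices` normalizes its weights.

```python
import random

rng = random.Random(0)

def draw_samples(P, n_pick=6):
    """Two-stage sampling sketch: for each detected point j, draw a marker
    index s_j with weights P[i][j] (the "second samples"); then draw n_pick
    point indices s'_k without replacement, with weights proportional to
    P[s_j][j] (alpha-normalized implicitly by random.choices)."""
    M, N = len(P), len(P[0])
    s = [rng.choices(range(M), weights=[P[i][j] for i in range(M)])[0]
         for j in range(N)]                       # weighted random sampling
    weights = [P[s[j]][j] for j in range(N)]
    picked, pool = [], list(range(N))
    for _ in range(n_pick):                       # sampling without replacement
        j = rng.choices(pool, weights=[weights[j] for j in pool])[0]
        pool.remove(j)
        picked.append(j)
    return s, sorted(picked)
```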
[0082] Next, the geometric calibration apparatus creates correspondences between phantom markers determined based on the extracted samples among the plurality of phantom markers and points determined based on the extracted samples among the detected points, and transforms the correspondences to obtain a candidate projection matrix  at S560. In some embodiments, the geometric calibration apparatus may obtain the candidate projection matrix  by designating i(k)=s.sub.s′.sub.k and j(k)=s′.sub.k and applying direct linear transformation to the correspondences p.sub.i(k)↔q′.sub.j(k)(k=1, . . . , 6), for example, as shown in Table 4.
TABLE 4
k    p.sub.i(k)    q′.sub.j(k)
1    p.sub.6       q′.sub.1
2    p.sub.3       q′.sub.2
3    p.sub.8       q′.sub.3
4    p.sub.4       q′.sub.5
5    p.sub.7       q′.sub.7
6    p.sub.5       q′.sub.9
[0083] The geometric calibration apparatus calculates points {circumflex over (q)}.sub.i into which the markers p.sub.i are transformed by the candidate projection matrix Â, and calculates a difference d({circumflex over (Q)}, Q′) between a set of transformed points {circumflex over (Q)}={{circumflex over (q)}.sub.1, . . . , {circumflex over (q)}.sub.M} and a set of the detected points Q′={q′.sub.1, . . . , q′.sub.N′} at S570. The geometric calibration apparatus may calculate the difference between the two sets by various techniques. In some embodiments, the geometric calibration apparatus may calculate a minimum-weight perfect matching for a bipartite graph in which {circumflex over (Q)} and Q′ are set as vertex sets and an edge between a vertex {circumflex over (q)}.sub.i and a vertex q′.sub.j has a weight w.sub.ij=||{circumflex over (q)}.sub.i−q′.sub.j||.sup.2, and may calculate a sum of weights obtained by the minimum-weight perfect matching as the difference d({circumflex over (Q)}, Q′).
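The set difference d(Q_hat, Q') via minimum-weight perfect matching can be sketched with a brute-force matcher; this sketch assumes the two sets have equal size, and scipy.optimize.linear_sum_assignment would replace the permutation search for larger marker counts.

```python
import itertools

def set_difference(Q_hat, Q_prime):
    """d(Q_hat, Q'): minimum-weight perfect matching on the bipartite graph
    with weights w_ij = ||q_hat_i - q'_j||^2, computed by brute force over
    permutations (fine for a handful of markers)."""
    n = len(Q_hat)

    def w(a, b):
        return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2

    return min(sum(w(Q_hat[i], Q_prime[perm[i]]) for i in range(n))
               for perm in itertools.permutations(range(n)))
```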
[0084] If the calculated difference d({circumflex over (Q)},Q′) is less than or equal to a threshold (d≤d.sub.0) at S580, the geometric calibration apparatus designates the candidate projection matrix  as the projection matrix A at S590. If the calculated difference d({circumflex over (Q)},Q′) is greater than the threshold (d>d.sub.0) at S580, the geometric calibration apparatus may discard the candidate projection matrix  and repeat the process from S540.
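The control flow of S540 through S590 reduces to a RANSAC-style accept/reject loop; `sample_candidate` and `distance` below are hypothetical stand-ins for the sampling/DLT step and the set-difference computation.

```python
def calibrate(sample_candidate, distance, d0, max_iter=1000):
    """Accept/reject loop sketch: repeatedly draw a candidate projection
    matrix and keep the first one whose set difference falls under the
    threshold d0; otherwise discard it and resample."""
    for _ in range(max_iter):
        A_hat = sample_candidate()        # S540-S560: sample and run DLT
        if distance(A_hat) <= d0:         # S570-S580: compare d with d0
            return A_hat                  # S590: designate as projection matrix
    return None                           # no candidate met the threshold
```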
[0085] As described above, according to some embodiments, the geometric calibration phantom can be designed with almost no constraints, and no additional information other than the phantom design is required during geometric calibration. Accordingly, the user can arbitrarily dispose the phantom markers in space as needed. According to some embodiments, the geometric calibration phantom can be designed with only one type of phantom marker, without needing, for example, to adopt different sizes to allow the phantom markers to be distinguished from each other. According to some embodiments, the geometric calibration can be achieved by using only the phantom marker position information, without needing to know approximate information about the imaging directions in advance.
[0086] In some embodiments, the phantom design and fabrication, and training of the learning model can be performed only once for each phantom, as a pre-work for the geometric calibration. Based on the phantom and the learning model, the geometric calibration can be performed for each X-ray image acquired by a computer tomography device.
[0087] Hereinafter, an example computing device for implementing a learning apparatus or a geometric calibration apparatus according to various embodiments of the present invention is described with reference to
[0089] Referring to the drawing, a computing device includes a processor 910, a memory 920, a storage device 930, a communication interface 940, and a bus 950.
[0090] The processor 910 controls an overall operation of each component of the computing device. The processor 910 may be implemented with at least one of various processing units such as a central processing unit (CPU), an application processor (AP), a microprocessor unit (MPU), a micro controller unit (MCU), and a graphic processing unit (GPU), or may be implemented with parallel processing units. In addition, the processor 910 may perform operations on a program for executing the above-described training data generation method, learning method, or geometric calibration method.
[0091] The memory 920 stores various data, instructions, and/or information. The memory 920 may load a computer program from the storage device 930 to execute the above-described training data generation method, learning method, or geometric calibration method. The storage device 930 may non-temporarily store a program. The storage device 930 may be implemented as a non-volatile memory.
[0092] The communication interface 940 supports wired or wireless Internet communication of the computing device. In addition, the communication interface 940 may support various communication methods other than Internet communication.
[0093] The bus 950 provides a communication function between the components of the computing device. The bus 950 may be implemented as various types of buses, such as an address bus, a data bus, and a control bus.
[0094] The computer program may include instructions for causing the processor 910 to execute the training data generation method, the learning method, or the geometric calibration method when loaded into the memory 920. That is, the processor 910 may perform the training data generation method, the learning method, or the geometric calibration method by executing the instructions.
[0095] In some embodiments, the computer program may include one or more instructions of detecting a plurality of first points from a plurality of projection regions, respectively, a plurality of first markers disposed on a phantom being projected onto the projection regions in two dimensions by a computer tomography device, calculating an output vector representing a probability distribution that gives a probability with which each of the first points is a projection of each of the first markers, by inputting data corresponding to each of the first points to a learning model, extracting a predetermined number of first samples based on the probability distribution, obtaining a candidate projection matrix by transforming correspondences between second markers determined based on the first samples among the first markers and second points determined based on the first samples among the plurality of first points, calculating third points into which the first markers are transformed by the candidate projection matrix, calculating a difference between a first set of the third points and a second set of the first points, and designating the candidate projection matrix as a projection matrix in response to the difference being less than or equal to a threshold.
[0096] In some embodiments, the computer program may include one or more instructions of generating training data that have, as input data, data corresponding to fourth points into which the first markers are respectively transformed by a randomly-generated projection matrix and have, as correct answer labels, values indicating classification of the first markers.
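The training-data generation described in [0096] can be sketched as follows: project the markers with a randomly generated projection matrix to obtain the input points, and use each marker's index as its classification label. This is a minimal sketch; the i.i.d.-normal model for the random matrix is an assumption, since the patent only says "randomly-generated."

```python
import numpy as np

def make_training_sample(markers, rng):
    """One training pair: input data are the 2-D points into which the
    markers are projected by a random 3x4 matrix (the 'fourth points'),
    and the correct-answer labels are the marker indices."""
    A = rng.normal(size=(3, 4))                         # random projection matrix (assumed model)
    P = np.hstack([markers, np.ones((len(markers), 1))])
    q = (A @ P.T).T
    points = q[:, :2] / q[:, 2:3]                       # projected 2-D points
    labels = np.arange(len(markers))                    # marker i -> class i
    return points, labels
```

Repeating this for many random matrices yields a dataset on which the learning model of [0097] can be trained to classify detected points by marker.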
[0097] In some embodiments, a computer program may include one or more instructions for training the learning model using the training data.
[0098] In some embodiments, the computer program may include one or more instructions of detecting a plurality of points q′.sub.j (j=1, . . . , N′) from a plurality of projection regions, respectively, a plurality of first markers disposed on a phantom being projected onto the projection regions in two dimensions by the computer tomography device, calculating an output vector y.sub.j=(P.sub.1j, . . . , P.sub.Mj) having, as elements, a probability P.sub.ij (i=1, . . . , M) with which the q′.sub.j is a projection of each of the first markers, by inputting data corresponding to the q′.sub.j to a learning model, extracting samples s′.sub.k (k=1, . . . , 6) from a set Ω′={1, . . . , N′} based on the y.sub.j, obtaining a candidate projection matrix by transforming correspondences between second markers determined based on the samples s′.sub.k among the first markers and points determined based on the samples s′.sub.k among the q′.sub.j, calculating points {circumflex over (q)}.sub.i into which the first markers are transformed by the candidate projection matrix, calculating a difference d({circumflex over (Q)}, Q′) between a set {circumflex over (Q)}={{circumflex over (q)}.sub.1, . . . , {circumflex over (q)}.sub.M} of the {circumflex over (q)}.sub.i and a set Q′={q′.sub.1, . . . , q′.sub.N′} of the q′.sub.j, and designating the candidate projection matrix as a projection matrix in response to the d({circumflex over (Q)}, Q′) being less than or equal to a threshold.
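The step of "transforming correspondences" in [0098], recovering a 3×4 candidate matrix from the six sampled marker/point pairs, can be realized with the direct linear transform (DLT). The patent does not name the algorithm, so this is one standard choice, shown here as a sketch:

```python
import numpy as np

def dlt_projection(markers, points):
    """Direct linear transform: recover a 3x4 projection matrix (up to
    scale) from >= 6 marker/point correspondences p_i <-> q'_i.  Each
    correspondence contributes two linear equations in the 12 matrix
    entries; the solution is the right null vector of the stacked system."""
    rows = []
    for (X, Y, Z), (u, v) in zip(markers, points):
        P = [X, Y, Z, 1.0]
        rows.append([*P, 0, 0, 0, 0, *(-u * np.array(P))])
        rows.append([0, 0, 0, 0, *P, *(-v * np.array(P))])
    # Smallest-singular-value right singular vector solves A m = 0.
    _, _, Vt = np.linalg.svd(np.asarray(rows))
    return Vt[-1].reshape(3, 4)
```

With exactly six non-degenerate correspondences, the stacked system has twelve equations in twelve unknowns and a one-dimensional null space, which determines the candidate matrix up to an irrelevant scale factor.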
[0099] The training data generation method, the learning method or the geometric calibration method according to various embodiments may be implemented as a computer-readable program on a computer-readable medium. In one embodiment, the computer-readable medium may include a removable recording medium or a fixed recording medium. In another embodiment, the computer-readable program recorded on the computer-readable medium may be transmitted to another computing device via a network such as the Internet and installed in another computing device, so that the computer program can be executed by another computing device.
[0100] While this invention has been described in connection with what is presently considered to be practical embodiments, it is to be understood that the invention is not limited to the disclosed embodiments. On the contrary, it is intended to cover various modifications and equivalent arrangements included within the spirit and scope of the appended claims.