Calibration method for a spectral computerized tomography system
11263792 · 2022-03-01
Assignee
Inventors
- Edward Steven Jimenez, Jr. (Albuquerque, NM, US)
- Srivathsan Prabu Koundinyan (Highland Park, NJ, US)
- Isabel Gallegos (Albuquerque, NM, US)
- Adriana Stohn (Chandler, AZ, US)
- Gabriella Dalton (Albuquerque, NM, US)
CPC classification
G06T11/005
PHYSICS
A61B6/586
HUMAN NECESSITIES
A61B6/5205
HUMAN NECESSITIES
G06T11/006
PHYSICS
International classification
A61B6/00
HUMAN NECESSITIES
Abstract
A calibration method for an x-ray computerized tomography system and a method of tomographic reconstruction are provided. The calibration method includes steps of measuring at least one point spread function (PSF) at each of a plurality of points, compressing each PSF, and in one or more storing operations, storing the compressed PSFs in a computer-accessible storage medium. The PSF measurements are made in a grid of calibration points in a field of view (FOV) of the system. In the measuring step, an absorber is positioned at each of the calibration points, and an x-ray projection is taken at least once at each of those absorber positions. In the method of tomographic image reconstruction, projection data from an x-ray tomographic projection system are input to an iterative image reconstruction algorithm. The algorithm retrieves and utilizes a priori system information (APSI). The APSI comprises point spread functions (PSFs) of all voxels in a voxelization of the field of view, compressed in the form of vectors of parameters. For utilization, each retrieved vector of parameters is decompressed so as to generate a discretized PSF.
Claims
1. A calibration method for an x-ray computerized tomography system, comprising: measuring at least one point spread function (PSF) at each of a plurality of points; compressing each PSF; and in one or more storing operations, storing the compressed PSFs in a computer-accessible storage medium, wherein: the PSF measurements are made in a grid of calibration points in a field of view (FOV) of the system; the measuring step comprises positioning an absorber at each of the calibration points and taking an x-ray projection at least once at each said absorber position; and at each calibration point of the grid, respective PSFs are measured in a plurality of x-ray energy bins; wherein compressing each PSF comprises fitting each PSF to an ellipse spline.
2. The method of claim 1, wherein the measuring step comprises performing a scan at each calibration point, each scan comprising multiple x-ray projections taken over a range of projection angles.
3. The method of claim 1, wherein the measuring of a PSF comprises projecting two or more absorbers having different radii so as to generate a graduated set of two or more respective projections, and wherein the compressing of a PSF comprises extrapolating parameter values from its graduated set of projections.
4. The method of claim 1, further comprising calculating imputed PSFs for a plurality of points in the FOV that are additional to the calibration points by interpolation of measured PSFs, and wherein the one or more storing operations comprise storing the imputed PSFs in compressed form.
5. The method of claim 1, wherein: the one or more storing operations comprise storing respective scans for multiple points in the FOV; each scan comprises multiple projections of an absorber taken at a fixed absorber position but at multiple projection angles; and storing a scan comprises storing a compressed PSF for each respective projection angle.
6. The method of claim 5, further comprising computing imputed PSFs for at least some points in the FOV by interpolation of measured PSFs, and wherein at least some of the stored scans include imputed PSFs.
7. The method of claim 1, wherein the compressed PSFs are stored in a matrix of entries arranged in rows and columns, and wherein each entry corresponds to a vector of parameter values representing a respective compressed PSF.
8. The method of claim 1, wherein: the system has a spatial resolution; the one or more storing operations comprise storing at least one compressed PSF for every point in a voxelization grid within the FOV; and the voxelization grid has a pitch of at least the system spatial resolution.
9. A calibration method for an x-ray computerized tomography system, comprising: measuring at least one point spread function (PSF) at each of a plurality of points; compressing each PSF; and in one or more storing operations, storing the compressed PSFs in a computer-accessible storage medium, wherein: the PSF measurements are made in a grid of calibration points in a field of view (FOV) of the system; the measuring step comprises positioning an absorber at each of the calibration points and taking an x-ray projection at least once at each said absorber position; and at each calibration point of the grid, respective PSFs are measured in a plurality of x-ray energy bins; wherein compressing each PSF comprises representing each PSF by a vector of parameters including a scale parameter, a translational parameter, and at least one shape parameter.
10. The method of claim 9, wherein the measuring of a PSF comprises projecting two or more absorbers having different radii so as to generate a graduated set of two or more respective projections, and wherein the compressing of a PSF comprises extrapolating parameter values from its graduated set of projections.
11. A calibration method for an x-ray computerized tomography system, comprising: measuring at least one point spread function (PSF) at each of a plurality of points; compressing each PSF; and in one or more storing operations, storing the compressed PSFs in a computer-accessible storage medium, wherein: the PSF measurements are made in a grid of calibration points in a field of view (FOV) of the system; the measuring step comprises positioning an absorber at each of the calibration points and taking an x-ray projection at least once at each said absorber position; and each compressed PSF consists of at most four parameters of a spline fit.
12. The method of claim 11, wherein compressing each PSF comprises fitting each PSF to an ellipse spline.
13. The method of claim 11, wherein compressing each PSF comprises representing each PSF by a vector of parameters including a scale parameter, a translational parameter, and at least one shape parameter.
14. The method of claim 11, wherein the measuring step comprises performing a scan at each calibration point, each scan comprising multiple x-ray projections taken over a range of projection angles.
15. The method of claim 11, wherein the measuring of a PSF comprises projecting two or more absorbers having different radii so as to generate a graduated set of two or more respective projections, and wherein the compressing of a PSF comprises extrapolating parameter values from its graduated set of projections.
16. The method of claim 11, further comprising calculating imputed PSFs for a plurality of points in the FOV that are additional to the calibration points by interpolation of measured PSFs, and wherein the one or more storing operations comprise storing the imputed PSFs in compressed form.
17. The method of claim 11, wherein: the one or more storing operations comprise storing respective scans for multiple points in the FOV; each scan comprises multiple projections of an absorber taken at a fixed absorber position but at multiple projection angles; and storing a scan comprises storing a compressed PSF for each respective projection angle.
18. The method of claim 17, further comprising computing imputed PSFs for at least some points in the FOV by interpolation of measured PSFs, and wherein at least some of the stored scans include imputed PSFs.
19. The method of claim 11, wherein the compressed PSFs are stored in a matrix of entries arranged in rows and columns, and wherein each entry corresponds to a vector of parameter values representing a respective compressed PSF.
20. The method of claim 11, wherein: the system has a spatial resolution; the one or more storing operations comprise storing at least one compressed PSF for every point in a voxelization grid within the FOV; and the voxelization grid has a pitch of at least the system spatial resolution.
21. A calibration method for an x-ray computerized tomography system, comprising: measuring at least one point spread function (PSF) at each of a plurality of points; compressing each PSF; and in one or more storing operations, storing the compressed PSFs in a computer-accessible storage medium, wherein: the PSF measurements are made in a grid of calibration points in a field of view (FOV) of the system; the measuring step comprises positioning an absorber at each of the calibration points and taking an x-ray projection at least once at each said absorber position; the measuring of a PSF comprises projecting two or more absorbers having different radii so as to generate a graduated set of two or more respective projections; and the compressing of a PSF comprises extrapolating parameter values from its graduated set of projections.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(17) A typical projection system is schematically illustrated in plan view in
(19) The pixel values measured by the detector array (which in this example is assumed to be a linear array of detector elements) are plotted, as shown in view 205, as response versus position in the detector array. As shown in view 210, the resulting profile is also graphed, as intensity versus position, in a corresponding row of the sinogram. Many such projections are taken at successive increments of the projection angle. The relative rotations between the x-ray source and the FOV as the projection angle is incremented are indicated symbolically in
(20) The projection at each respective projection angle adds a corresponding row to the sinogram. Thus, the complete sinogram is a record of a full scan of the object in the FOV. In a multispectral (H-CT) system, energy discrimination is possible, and accordingly, the detector responses are recorded separately for different energy bins. Consequently, an H-CT system can create a set of multiple sinograms for respective energy bins.
(21) We based our simulation on an experimental H-CT system that is in use in our laboratory. It is the high-energy spectral x-ray computed tomography system described in Edward S. Jimenez et al., “Leveraging multi-channel x-ray detector technology to improve quality metrics for industrial and security applications,” Radiation Detectors in Medicine, Industry, and National Security XVIII, volume 10393, International Society for Optics and Photonics (2017), page 103930G, the entirety of which is hereby incorporated herein by reference.
(22) Our laboratory system includes a Comet x-ray source and a detector composed of five customized Multix ME100 modules. The source is a bremsstrahlung source that emits a distribution of energies with a maximum beam energy of 450 keV. This relatively high energy is desirable because it increases penetration and because it reduces artifacts, relative to more conventional lower-energy bremsstrahlung sources. The source has a high flux capability, which enhances the signal-to-noise ratio.
(23) The detector is a linear array of 640 pixels calibrated for 300 keV across 128 energy channels. The detector produces energy-resolved data, so that when an x-ray photon strikes a pixel of the detector, the photon is binned into one of the 128 channels according to its energy. The attenuation data is accordingly spatially resolved in each of the 128 energy channels. The source-to-detector distance is approximately 2.05 meters, and the object-to-detector distance is variable.
(24) The system has a 500-mm field of view (FOV) and 640×640 voxel slices with a 0.8-mm pixel pitch.
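The per-photon energy binning described above can be sketched as follows. The equal-width channel edges spanning the detector's 0-300 keV calibrated range are an illustrative assumption; the actual channel calibration of the Multix modules is not specified here.

```python
import numpy as np

# Assumed channel boundaries: 128 equal-width energy bins over the
# detector's 0-300 keV calibrated range (illustrative, not the
# actual detector calibration).
edges = np.linspace(0.0, 300.0, 129)

def channel_of(energy_keV):
    # np.digitize returns the 1-based bin index; shift to 0..127.
    return int(np.digitize(energy_keV, edges)) - 1

# A detected photon is binned into one channel according to its energy.
ch = channel_of(151.0)
```

Each detected photon increments the count of exactly one of the 128 channels, which is what makes the attenuation data spectrally resolved.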
(25) In traditional systems, the individual photon energies cannot be determined or resolved. Hence, each voxel in a reconstructed scan has associated with it only a single photon count or other radiographic exposure value that is cumulative across the energy spectrum. Another drawback in traditional systems is that artifacts are more prevalent because of beam hardening by high-density materials, among other factors. Using a multichannel CT system helps to overcome these problems and, in addition, it provides spectral data, not otherwise obtainable, that can be used for material identification.
(26) The simulation tool that we used was the Particle and Heavy Ion Transport code System (PHITS), a Monte Carlo radiation and electron particle simulator developed by the Japan Atomic Energy Agency (JAEA).
(27) The numerical simulation was able to emulate non-idealities such as noise effects and bremsstrahlung-induced streaking artifacts. This gave us greater confidence that its predictions would apply accurately to real-life implementations.
(28) The imaging system. A digital imaging system can be modeled as a mapping from an object to an image, in which the object is conceptualized as a function of continuous variables and the image is a set of discrete measurements. In these terms, the true H-CT system (in other words, the real-life system that is approximated through modeling) can be represented by the continuous-to-discrete operator H in the following expression:
{right arrow over (g)}=Hf+{right arrow over (n)}.
(29) In the above expression, f is the continuous object, {right arrow over (g)} is the discretized image containing pixel information, and {right arrow over (n)} is the system noise.
(30) If the image is distributed across a linear array of pixels and the object is discretized into an array of n voxels that is flattened into a linear sequence, we can approximate the linear H-CT system in terms of an m×n discrete-to-discrete linear imaging system operator H according to the following expression:
{right arrow over (g)}=H{right arrow over (f)}+{right arrow over (n)}.
(31) In the above expression, {right arrow over (f)} is the discretized n×1 approximation of the object, and {right arrow over (g)} and {right arrow over (n)} have dimensions m×1. The dimension m is the number of pixels, multiplied by the total number of projections.
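The discrete-to-discrete model above can be exercised with a small numerical sketch. All dimensions and values here are toy placeholders, not those of the laboratory system.

```python
import numpy as np

# Toy dimensions: a 4-pixel detector, 3 projection angles, and a FOV
# flattened into n = 6 voxels (all illustrative).
n_pixels, n_proj, n_vox = 4, 3, 6
m = n_pixels * n_proj              # rows of H: pixels x projections

rng = np.random.default_rng(0)
H = rng.random((m, n_vox))         # stand-in m x n system operator
f = rng.random(n_vox)              # discretized n x 1 object vector
noise = 0.01 * rng.standard_normal(m)

# The discrete imaging model: g = Hf + n.
g = H @ f + noise
```

The product `H @ f` stacks all projections of the object into one m-element vector, which is the flattened sinogram plus noise.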
(32) The system operator. In order for the approximated system operator H to map from a discrete object space to a discrete image space, the continuous FOV of the instrument must be discretized into a set of n points. To quantify the point response function for each point in the object space, we represented each point by one of n identical cylindrical absorbers, referred to below simply as “cylinders”. The cylinder radius was approximately equal to the system's pixel pitch. An example of how cylinders 300 can be distributed in the FOV is provided in
(33) As those skilled in the art will understand, modeling an imaging system with a two-dimensional detector array would call for three-dimensional absorbers such as x-ray absorbing spheres. However, the detector array in the model described here is a linear array. For that purpose, the dimensionality of the absorber can be reduced to that of a cylinder. Similarly, the projection beam of x-rays that we envisage in example implementations with a two-dimensional detector array is a cone beam. But for purposes of the modeling discussed here, the dimensionality is reduced to that of a fan beam.
(34) As noted above, the encoding operator H is an m×n matrix, in which m is the total number of pixels in the detector multiplied by the number of projections. The dimension n is the number of cylinders in the FOV. Within the FOV, the cylinders are physically distributed in a two-dimensional array. To create the encoding operator, however, the cylinders are serially numbered from 1 to n. Each column of the system matrix corresponds to a flattened sinogram of a respective one of the cylinders in the FOV.
(35) For practical applications, a massive volume of data would have to be encoded in the system operator, making data storage a significant challenge. By way of illustration, a current state-of-the-art single-channel CT system with a large detector array could generate as much as 6 TB of data, or even more, per scan. For such a system, the storage requirement for a full three-dimensional calibration at high resolution could reach several yottabytes per energy channel.
(36) To solve the problem of prohibitive storage requirements, we developed a compression method that reduces the size of the model to the scale of hundreds of GB.
(37) To ensure the accuracy of our calibration measurements, we took projection data based on cylinders of practical size (generally, radii of one millimeter or more), and applied an extrapolation technique to estimate projection data for (imputed) concentrically located cylinders with submillimeter radii.
(38) Because of the time required for each scan and the large number of cylinders, it would be very time-consuming to explicitly measure projection information from the complete set of sources in the FOV. To reduce the time required while still maintaining a sufficiently fine sampling of the FOV, we projected a subsampling of the cylinders and applied an interpolation method to calculate projection data for imputed cylinders at additional spatial positions. (By “imputed cylinder” we mean a cylinder that is mathematically represented in our computational model irrespective of whether it has a counterpart in a physical implementation of the modeled system.)
(39) We found that our compression, extrapolation, and interpolation methods were helpful in creating a system operator that could be used in tractable computations. In particular, we found that we could manage H-CT data accurately and efficiently by compressing the system operator for storage, and then using analytic methods to decompress it for application to an object vector {right arrow over (f)}.
(40) It should be understood that in the work described here, the system operator H was generated from simulated data, and not from experimental measurements. Thus, the projection data were not taken on a physical implementation of the H-CT system. Rather, as mentioned above, the projection data were produced by PHITS, a Monte Carlo radiation and electron particle simulator. Below, we refer to the data produced by the Monte Carlo code as “synthetic data”.
(41) The Monte Carlo simulation had some realistic aspects; for example, it included the effects of Poisson noise and downscatter noise. Poisson noise is statistical noise due to the discrete nature of detected photons. Downscatter noise occurs when inelastic scattering (due to Compton scattering and the photoelectric effect) causes x-ray photons to lose energy before they are counted.
(42) However, the simulation did not include the effects of other types of detector noise, such as electronic readout noise, pulse pileup (in which photon coincidences at the detector are counted as a single, higher-energy photon), and corrupted photon counts due to detector thermal fluctuations. Defective pixels in the detector can also contribute noise, but this was also omitted from the simulation.
(43) The simulation also assumed ideal geometry, i.e., perfect alignment of the projection system. However, it should be noted that misalignments in a physical projection system are measurable and can be included in a simulation, if desired.
(44) We envisage practical applications in which the point-by-point scans of sources in the FOV are used to construct the system operator of a physically implemented H-CT system. Knowledge of the system operator is then used for purposes of calibration and image reconstruction. The point source to be scanned is a millimeter-scale spherical bead of a suitable x-ray absorber material such as copper, tungsten, or lead. The absorber material is chosen to be neither completely transparent nor completely opaque in the x-ray energy range of interest, given the absorber thickness.
(45) A precision positioner can be used for placement of the point source at designated grid positions within the FOV for each scan. If necessary, the point source can be supported using materials and thicknesses that are effectively x-ray transparent at the energies of interest, such as thin polymeric materials. It is also worthy of note that data compression and decompression may mitigate artifacts added to the scan by positioning hardware. For example, such artifacts could be modeled with one or more extra data-compression parameters that are then excluded in the subsequent data decompression.
(46) In practice, it may be desirable to recalibrate a real-life H-CT system by repeating the point-by-point scan at suitable intervals such as six-month intervals.
(47) Compression of the system operator. Each projection is represented by an n×1 vector, where n is the number of pixels in the detector. To compress a projection, it is fitted to a basis function so that it can be represented using only the small set of parameters that characterize that function.
(48) Known optimization methods can be used to find the best fit of the projection data. For example, we used the Nelder-Mead method, which is a well-known numerical method that seeks the maximum or minimum of an objective function in a multidimensional space by direct search.
(49) The Nelder-Mead method was chosen because it can find a best fit even when the derivative does not exist at every point. Also, because the Nelder-Mead method works well with noisy data, it was especially useful for treating the simulated data produced by PHITS and would likewise be useful for treating real-world H-CT data.
(50) Although other basis functions may be suitable, our currently preferred basis function is an ellipse spline given by the following expressions:
(51) s(x)=b√(1−((x−x.sub.0)/a.sub.L).sup.2) for x.sub.0−a.sub.L≤x<x.sub.0; s(x)=b√(1−((x−x.sub.0)/a.sub.R).sup.2) for x.sub.0≤x≤x.sub.0+a.sub.R; s(x)=0 otherwise.
(52) As those skilled in the art will understand from the above expressions, our basis function is defined piecewise by two ellipses (i.e., a “left-hand” ellipse L and a “right-hand” ellipse R) and the four parameters a.sub.L, a.sub.R, b, and x.sub.0.
(53) The parameters a.sub.L and a.sub.R may more generally be viewed as special cases of shape parameters. Likewise, the parameter b may be more generally viewed as a special case of a scale parameter, and the parameter x.sub.0 may be more generally viewed as a special case of a translational parameter. Variations of our approach in which other choices for parameters of these types are used will be apparent to those skilled in the art and should be understood as falling within the scope of the present invention.
(54) More generally, the parameters for the ellipse spline are indexed as a.sub.L.sup.μκ, a.sub.R.sup.μκ, b.sup.μκ, and x.sub.0.sup.μκ, where μ and κ are the respective indices for the projection (μ=1, . . . , m) and for the cylinder in object space (κ=1, . . . , k). The parameters are stored in an m×n×k matrix, where n is the total number of parameters required for each spline (n=4 in the current example).
(55) In performing the Nelder-Mead optimization, the parameter values are subject to the following constraints: (1) the a.sub.L.sup.μκ and a.sub.R.sup.μκ must be positive and they must be contained in the pixel domain as defined by the pixels on the detector; and (2) the parameters x.sub.0.sup.μκ must be contained in the pixel domain.
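A minimal sketch of this fitting step follows, using SciPy's Nelder-Mead implementation. The piecewise half-ellipse form of the spline and the penalty-based constraint handling are illustrative assumptions; only the parameter set (a.sub.L, a.sub.R, b, x.sub.0) and the constraints follow the text above.

```python
import numpy as np
from scipy.optimize import minimize

def ellipse_spline(x, aL, aR, b, x0):
    # Assumed piecewise form: upper halves of a left and a right
    # ellipse sharing height b and joined at x0.
    y = np.zeros_like(x, dtype=float)
    left = (x >= x0 - aL) & (x < x0)
    right = (x >= x0) & (x <= x0 + aR)
    y[left] = b * np.sqrt(1.0 - ((x[left] - x0) / aL) ** 2)
    y[right] = b * np.sqrt(1.0 - ((x[right] - x0) / aR) ** 2)
    return y

def fit_projection(pixels, profile, n_pixels):
    def objective(p):
        aL, aR, b, x0 = p
        # Constraints: aL, aR positive and contained in the pixel
        # domain; x0 contained in the pixel domain.  Violations are
        # penalized (a simple way to constrain Nelder-Mead).
        if aL <= 0 or aR <= 0 or aL > n_pixels or aR > n_pixels:
            return 1e12
        if not (0.0 <= x0 <= n_pixels):
            return 1e12
        return np.sum((ellipse_spline(pixels, aL, aR, b, x0) - profile) ** 2)

    # Initialize near the profile's peak; direct search needs no derivatives.
    init = [5.0, 5.0, profile.max(), pixels[np.argmax(profile)]]
    res = minimize(objective, init, method="Nelder-Mead")
    return res.x

# Fit a synthetic noisy projection with known ground truth.
pix = np.arange(64, dtype=float)
truth = ellipse_spline(pix, 6.0, 9.0, 2.0, 30.0)
rng = np.random.default_rng(1)
params = fit_projection(pix, truth + 0.01 * rng.standard_normal(64), 64)
```

Because Nelder-Mead is a direct-search method, it tolerates both the nondifferentiable spline junctions and measurement noise, which matches the reasons given above for choosing it.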
(57) To fully calibrate the H-CT system, the point-response function must be measured at the system's resolution at every location in the FOV. As a practical matter, we approximate the point-response functions over a continuous FOV by sampling, in each of a plurality of discrete voxels, a cylinder whose radius is approximately equal to the system's resolution. This problem presents two challenges: (1) measuring the point-response function at submillimeter resolution, and (2) sampling every point of interest in the FOV. A parameter-extrapolation method, described below, addresses the first of these problems. A system-operator interpolation method, also described below, addresses the second.
(58) Extrapolation of parameters. CT scans to measure the point response functions of objects at or below the system resolution (generally, submillimeter objects) are subject to inaccuracy due to insufficient attenuation. Instead of attempting these measurements directly, we estimated the projection data of submillimeter cylinders using an extrapolation technique. That is, by extrapolating projections of cylinders of larger radii centered on the same point in the FOV, we believe that we can accurately approximate the projection of a cylinder of smaller radius. In carrying out this technique, we found it unnecessary to extrapolate each full projection. Instead, we extrapolated the parameters a.sub.L, a.sub.R, b, and x.sub.0.
(59) The extrapolation method is illustrated in
(60) Procedure A:
i. Place a single cylinder of radius r.sub.k in the FOV centered at p.sub.ij.
ii. Take a single projection of the cylinder (view 505).
iii. Use the Nelder-Mead method to fit an ellipse spline to the projection, and save the parameters a.sub.Lk, a.sub.Rk, b.sub.k, x.sub.0k that describe the projection (view 510).
(61) Procedure B (This procedure is predicated on linear variation of the a.sub.L, a.sub.R, and b parameters with radius, as seen in
(62) Interpolation of the system operator. Because of the great volume of data, even a single projection is time-consuming to measure. For example, it took up to thirty minutes per projection to acquire H-CT data in our trial runs. As a consequence, it is advantageous to subsample rather than to measure the point response at every voxel in the FOV, and to use an interpolation method to construct the system operator.
(63) We found it unnecessary to interpolate full projections. Instead, it was sufficient to interpolate the parameters (e.g., the four-tuples discussed above) that characterize the fitted projections. Accordingly, the initial data for the interpolation method consisted of the extrapolated parameters discussed above.
(64) The interpolation method is illustrated in
(65) The interpolation method was performed with the following steps:
1. Define (view 800) an m×m grid of point locations in the FOV, where each location represents the center of a cylinder C.sub.ij, i,j=1, 2, . . . , m, m≤n.
2. For each cylinder C.sub.ij, perform the following:
(a) Determine the number numProj of projections needed for full Nyquist sampling. A full scan of cylinder C.sub.ij would consist of numProj projections, each taken at a respective projection angle. As will be seen below, the projections in this set are imputed by interpolating known projection parameters of certain clusters of other cylinders in the m×m grid.
(b) Compute (view 805) a set of locations p.sub.k, k=1, . . . , numProj, in the FOV. Each of these locations corresponds to a respective projection angle θ.sub.k. An imputed cylinder at each location p.sub.k is the image of cylinder C.sub.ij under rotation of the FOV by angle θ.sub.k. The imputed projections (as detailed below) of all of these imputed cylinders (plus the original projection of cylinder C.sub.ij) constitute the full scan, i.e., the full set of projections, of cylinder C.sub.ij.
(c) Perform (view 810) the following for each projection location p.sub.k:
i. Identify the four nearest-neighboring points to projection location p.sub.k in the FOV.
ii. From the respective parameter values in P.sub.n×n for the four neighboring points, perform a bilinear interpolation to estimate the four parameters a.sub.L, a.sub.R, b, and x.sub.0 for the k-th projection of the ij-th cylinder.
3. Store the interpolated parameter values in a numProj×4×m.sup.2 matrix {circumflex over (P)}.sub.m×m. It will be understood that m.sup.2 corresponds to the number of projected cylinders.
(66) It is noteworthy that the interpolation method described above rests on an assumption that the behavior of the projection system can be treated as linear. This assumption is most valid, hence the interpolations are most accurate, for a multispectral system in which the projection data are collected for individual, spectrally limited, energy bins.
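The bilinear step 2(c) above amounts to a standard four-point blend of the neighboring parameter vectors. The corner 4-tuples and cell coordinates below are hypothetical.

```python
import numpy as np

def bilinear(p00, p10, p01, p11, tx, ty):
    # Standard bilinear blend of the four nearest-neighbor parameter
    # vectors; (tx, ty) in [0, 1] x [0, 1] locate p_k within its cell.
    top = (1 - tx) * np.asarray(p00) + tx * np.asarray(p10)
    bot = (1 - tx) * np.asarray(p01) + tx * np.asarray(p11)
    return (1 - ty) * top + ty * bot

# Hypothetical 4-tuples (a_L, a_R, b, x_0) at the cell's four corners.
p00 = [6.0, 6.2, 2.0, 100.0]
p10 = [6.4, 6.6, 2.1, 140.0]
p01 = [6.2, 6.4, 2.0, 104.0]
p11 = [6.6, 6.8, 2.2, 144.0]

# Imputed parameters for a projection location at the cell center.
p_k = bilinear(p00, p10, p01, p11, 0.5, 0.5)
```

Interpolating the 4-tuples instead of full n-pixel projections keeps the cost per imputed projection at four multiply-adds per parameter, which is what makes subsampling the FOV practical.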
(67) Decompression of the System Operator. In our trials, which are discussed below, we applied the system matrix H to a mathematical representation (as a vector {right arrow over (f)}) of an object in the FOV in order to generate an image vector {right arrow over (g)}, representing a sinogram of the object. We then used the FDK algorithm to reconstruct the object from the sinogram. FDK is a well-known filtered back-projection algorithm that was developed for image reconstruction in cone-beam projection systems. Importantly, image reconstruction by FDK does not require a priori knowledge of the system operator. A faithful reconstruction obtained without a priori knowledge would confirm the validity of our approach to system characterization.
(68) On the other hand, a priori knowledge of the system operator can be of great advantage when it is applied for image reconstruction using iterative reconstruction techniques. In one important application of our approach, a system matrix is generated from calibration measurements, parametrized, and stored in compressed form. For image reconstruction, the system matrix is decompressed and its data are used in an iterative procedure for reconstructing an image from a sinogram.
(69) As explained above, the extrapolated and interpolated parameters describing the point-spread functions are stored in the matrix {circumflex over (P)}.sub.m×m. These data need to be decompressed in order to recover the system matrix H for use, e.g., in producing the image vector {right arrow over (g)} for evaluation.
(70) As will be understood from the discussion above, each column of H corresponds to the stacked projections of one respective cylinder.
(71) The depth (in rows) of one stacked projection is the number n of pixels in the detector. The i′th row within the j′th one of these stacks (i=1, . . . , n) contains the i′th pixel value in the j′th projection of each of the cylinders in sequence. Hence (as will be seen below), the scalar product of such a row with the object vector {right arrow over (f)} gives the i′th pixel value in the j′th projection of the object.
(72) As explained above, each projection, hence each stack of n rows of H, is associated with a respective 4-tuple of parameters a.sub.L, a.sub.R, b, x.sub.0. These parameters are invoked in order to perform the decompression and thereby recover the pixel values in the projection of each of the cylinders.
(73) Recall that the process of generating the image vector {right arrow over (g)} is represented by
{right arrow over (g)}=H{right arrow over (f)},
(the additive noise factor {right arrow over (n)} is suppressed here) where n is the number of pixels in the detector, m is the number of pixels times the total number of projections, {right arrow over (f)} is the discretized n×1 object representation, {right arrow over (g)} and {right arrow over (n)} have dimensions m×1, and H has dimensions m×n.
(74) Therefore, to decompress H, the stack of n rows of H corresponding to the j′th projection (for all cylinders) is decompressed and multiplied by {right arrow over (f)} to produce a stack of n entries of {right arrow over (g)} corresponding to the j′th projection and, by the same token, the j′th line of the resulting sinogram. To construct the entire sinogram, all rows of H are decompressed, taking them one stack of n rows at a time.
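The stack-by-stack decompression can be sketched as follows. The piecewise half-ellipse spline form is the same illustrative assumption used above, and all dimensions and parameter values are hypothetical.

```python
import numpy as np

def ellipse_spline(pix, aL, aR, b, x0):
    # Assumed piecewise half-ellipse form used for compression.
    y = np.zeros_like(pix, dtype=float)
    left = (pix >= x0 - aL) & (pix < x0)
    right = (pix >= x0) & (pix <= x0 + aR)
    y[left] = b * np.sqrt(1.0 - ((pix[left] - x0) / aL) ** 2)
    y[right] = b * np.sqrt(1.0 - ((pix[right] - x0) / aR) ** 2)
    return y

def sinogram_from_params(P, f, n_pixels):
    # P has shape (numProj, 4, numCyl): one 4-tuple (a_L, a_R, b, x_0)
    # per projection per cylinder.  Each stack of n_pixels rows of H is
    # decompressed in turn and applied to the object vector f, yielding
    # one line of the sinogram at a time.
    num_proj, _, num_cyl = P.shape
    pix = np.arange(n_pixels, dtype=float)
    sino = np.zeros((num_proj, n_pixels))
    for j in range(num_proj):
        # Decompress the j-th stack: one column per cylinder.
        block = np.column_stack(
            [ellipse_spline(pix, *P[j, :, k]) for k in range(num_cyl)]
        )
        sino[j] = block @ f      # j-th line of the sinogram
    return sino

# Hypothetical compressed operator: 3 projections of 2 cylinders.
P = np.zeros((3, 4, 2))
P[:, :, 0] = [5.0, 5.0, 1.0, 20.0]
P[:, :, 1] = [4.0, 6.0, 1.5, 40.0]
sino = sinogram_from_params(P, np.array([1.0, 0.5]), 64)
```

Because only one n-row stack is decompressed at a time, the full m×n operator never needs to reside in memory at once, which is the point of storing H in compressed form.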
Example 1: Ideal H-CT Data
(75) In a set of numerical trials, we constructed a system matrix H from ideal, analytically generated H-CT data, and we used it to create a sinogram of a (virtual) test object. The sinogram was reconstructed using the FDK algorithm with knowledge of the projection system geometry, but without a priori knowledge of the system matrix or of the object in the FOV. These trials were carried out to test our compression, extrapolation, interpolation, and decompression methods, with the objective of verifying that we could create an ideal system operator that would emulate a real-world system operator.
(76) Results of the first trial are shown in
(77) The parameters in P.sub.10×10 describing all projections of a 10×10 grid of cylinders with radius 0.8 cm were then extrapolated from the parameters corresponding to the cylinders of larger radii. Next, the parameters in {circumflex over (P)}.sub.20×20 for all projections of a 20×20 grid of cylinders with radius 0.8 cm were interpolated from the extrapolated parameters in P.sub.10×10. Finally, the interpolated parameters in {circumflex over (P)}.sub.20×20 were decompressed and applied to a random 400×1 object vector {right arrow over (f)} to produce the image, given as a sinogram.
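Interpolation from a coarse parameter grid to a finer one covering the same field of view can be illustrated with a minimal sketch over a single parameter field; the generic bilinear interpolation used here is an assumption, not necessarily the exact scheme employed.

```python
import numpy as np
from scipy.interpolate import RegularGridInterpolator

def upsample_param_grid(P_coarse, fine_shape):
    """Interpolate one parameter field (e.g. the profile height b), measured
    on a coarse grid of cylinder positions, onto a finer grid spanning the
    same normalized field of view."""
    nx, ny = P_coarse.shape
    interp = RegularGridInterpolator(
        (np.linspace(0.0, 1.0, nx), np.linspace(0.0, 1.0, ny)),
        P_coarse)
    fx = np.linspace(0.0, 1.0, fine_shape[0])
    fy = np.linspace(0.0, 1.0, fine_shape[1])
    FX, FY = np.meshgrid(fx, fy, indexing="ij")
    pts = np.stack([FX.ravel(), FY.ravel()], axis=-1)
    return interp(pts).reshape(fine_shape)
```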
(80) To test the validity of the results, a true system operator for the ideal system was created by constructing a full H matrix from direct measurements of all projections for a 20×20 grid of cylinders with radius 0.8 cm; no compression, extrapolation, or interpolation was applied to the true system operator. The error was calculated, pixel-by-pixel, by squaring the difference in pixel value between the true system-operator sinogram and the parameterized system operator sinogram. A similar error calculation was performed for the reconstructed images. The mean squared error is shown in Table 1, in units that are a measure of pixel intensity.
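The pixel-by-pixel error metric described above reduces to a few lines; this sketch simply implements the stated definition.

```python
import numpy as np

def mse_and_std(true_img, test_img):
    """Pixel-by-pixel squared difference between two images (e.g. the
    true-operator and parameterized-operator sinograms), reported as the
    mean squared error and the standard deviation of the squared errors."""
    sq = (np.asarray(true_img, float) - np.asarray(test_img, float)) ** 2
    return sq.mean(), sq.std()
```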
(81) In the second trial, we created two system operators, H and {tilde over (H)}. To create H, the parameters in P.sub.128×128 describing a single projection for each cylinder in a 128×128 grid of cylinders of radius 0.08 cm were extrapolated from two 128×128 grids of cylinders with radii 0.7 cm and 0.5 cm, respectively. The parameters in {circumflex over (P)}.sub.128×128 for all projections of a 128×128 grid of cylinders with radius 0.08 cm were interpolated from the extrapolated parameters in P.sub.128×128.
(82) To create {tilde over (H)}, the parameters in P.sub.64×64 describing a single projection for each cylinder in a 64×64 grid of cylinders of radius 0.08 cm were extrapolated from two 64×64 grids of cylinders with radii 0.7 cm and 0.5 cm, respectively. The parameters in {tilde over ({circumflex over (P)})}.sub.128×128 for all projections of a 128×128 grid of cylinders with radius 0.08 cm were interpolated from the extrapolated parameters in P.sub.64×64.
(83) For both H and {tilde over (H)}, the interpolated parameters in {circumflex over (P)}.sub.128×128 and in {tilde over ({circumflex over (P)})}.sub.128×128, respectively, were decompressed and applied to {right arrow over (f)}, a flattened Shepp-Logan phantom given as a 16384×1 vector, shown in
(86) TABLE 1. Error, 20×20 Sampling, Ideal Data

  Image            Mean Squared Error    Std. Deviation
  Sinograms        5.99 × 10.sup.−5      3.40 × 10.sup.−4
  Reconstructions  2.98 × 10.sup.−8      1.22 × 10.sup.−7
(87) TABLE 2. Error, 128×128 Sampling, Ideal Data

  Image            Mean Squared Error    Std. Deviation
  Sinograms        1.34 × 10.sup.−8      2.20 × 10.sup.−7
  Reconstructions  4.55 × 10.sup.−12     1.25 × 10.sup.−11
Example 2: Simulated H-CT Data
(88) To more closely emulate a real-world system, we performed a trial using H-CT data generated with the PHITS simulation tool, which included noise. To match our laboratory H-CT system, which has a pixel resolution of 0.08 cm, the system operators created with PHITS data also used a sampling of cylinders with radius 0.08 cm.
(89) The trial was performed with the same radii and number of cylinders as the second trial described above in Example 1. Two PHITS simulations were run to collect the data used in the extrapolation. For each simulation, a 128×128 grid of copper cylinders was distributed in the FOV, and one projection was taken for each cylinder. The cylinders in the first simulation had radii of 0.7 cm, and the cylinders in the second simulation had radii of 0.5 cm. The PHITS data were used to extrapolate the parameters in P.sub.128×128 describing a 128×128 grid of cylinders with radius 0.08 cm.
(90) Due to the noisy nature of the PHITS data, additional pre-processing was required before interpolating the parameters. Outliers in P.sub.128×128 were detected and replaced with a cubic spline interpolation of neighboring points.
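The outlier-replacement pre-processing can be sketched as follows. The median-absolute-deviation detection rule is an assumption, since the detection criterion is not specified above; only the cubic-spline replacement of flagged points follows the description.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def despike(values, thresh=3.0):
    """Replace outliers in a 1-D parameter sequence with a cubic-spline
    interpolation through the remaining (inlier) points. Outliers are
    flagged by a median/MAD rule, which is an assumed criterion."""
    v = np.asarray(values, dtype=float)
    x = np.arange(v.size)
    med = np.median(v)
    mad = np.median(np.abs(v - med)) or 1.0    # guard against zero MAD
    good = np.abs(v - med) <= thresh * mad     # inlier mask
    spline = CubicSpline(x[good], v[good])     # spline through neighbors
    out = v.copy()
    out[~good] = spline(x[~good])              # fill flagged points
    return out
```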
(91) As in the second trial described above in Example 1, two system operators were created based on the PHITS data: an "ideal" system operator H and an "interpolated" system operator {tilde over (H)}. H was constructed by interpolating all projections of a 128×128 grid of cylinders of radius 0.08 cm based on the extrapolated parameters in P.sub.128×128. To create {tilde over (H)}, a 64×64 subset of P.sub.128×128 was taken to form a 64×64 set of extrapolated parameters, denoted P.sub.64×64. {tilde over (H)} was then constructed by interpolating all projections of a 128×128 grid of cylinders of radius 0.08 cm based on the extrapolated parameters in P.sub.64×64.
(92) As in the trial described above in Example 1, a flattened Shepp-Logan phantom, given as a 16384×1 vector, was chosen as {right arrow over (f)} and multiplied by H and by {tilde over (H)}, respectively.
(95) The mean squared error between the sinograms, and between the reconstructions, produced respectively with the ideal and interpolated system operators is shown in Table 3, in units that are a measure of pixel intensity. The values listed in Table 3 are scaled by attenuation value, and hence are not directly comparable with the values listed in the preceding tables.
(96) TABLE 3. Error, 128×128 Sampling, Simulated Data

  Image            Mean Squared Error    Std. Deviation
  Sinograms        4.74                  10.7
  Reconstructions  1.20 × 10.sup.−3      4.40 × 10.sup.−3
Example 3: Image Reconstruction Using a Priori System Information
(97) We performed trials to assess whether our compressed system operator could be used as a surrogate representation of the imaging system. Our measure of success was whether a 128×128 voxel Shepp-Logan phantom could be accurately reconstructed in a reasonable amount of time from a numerically generated sinogram {right arrow over (g)}, using an iterative algorithm in which the compressed system matrix provides the a priori system information.
(98) The iterative algorithm that we chose was the Maximum-Likelihood Expectation-Maximization (MLEM) reconstruction algorithm. The MLEM algorithm is given by:
(99) {circumflex over (f)}.sub.j.sup.n+1 = [{circumflex over (f)}.sub.j.sup.n/Σ.sub.i H.sub.ij] Σ.sub.i H.sub.ij [g.sub.i/Σ.sub.k H.sub.ik {circumflex over (f)}.sub.k.sup.n],
where n is the iteration number, {circumflex over (f)}.sub.j.sup.n is the value of the reconstructed image at pixel j for the n′th iteration, i is the pixel number in the sinogram {right arrow over (g)}, and H.sub.ij is the system matrix entry at row i, column j.
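A dense-matrix version of the MLEM update can be sketched as follows; here H is stored explicitly for clarity, whereas a compressed-operator implementation would decompress entries of H on demand.

```python
import numpy as np

def mlem(H, g, n_iter=50, eps=1e-12):
    """Maximum-Likelihood Expectation-Maximization reconstruction.

    H : (m, N) system matrix (dense here, for illustration only)
    g : (m,)   measured sinogram
    Returns the reconstructed image estimate, shape (N,).
    """
    m, N = H.shape
    sens = H.sum(axis=0)                    # sensitivity: sum_i H_ij
    f = np.ones(N)                          # flat nonnegative initial estimate
    for _ in range(n_iter):
        proj = H @ f                        # forward projection: sum_k H_ik f_k
        ratio = g / np.maximum(proj, eps)   # measured / estimated
        f *= (H.T @ ratio) / np.maximum(sens, eps)  # multiplicative update
    return f
```

On consistent, noise-free data the iterates converge to the true object, which makes the update easy to sanity-check on a toy system before scaling up.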
(100) To apply the above equation, it is necessary to decompress the compressed system operator each time one of its entries needs to be extracted. Although the decompression can be computationally intensive, it is not difficult to perform efficiently in a massively parallel computing environment such as a graphics processor. Thus, for example, our implementation was developed in CUDA and run on an Nvidia Titan V graphics processing unit (GPU). CUDA is Nvidia's programming platform for general-purpose computation on Nvidia GPUs.
(102) It is evident from the figure that our compressed system operator is useful for image reconstruction. We believe it also has potential for other applications such as forward projection to simulate synthetic images that could be produced by the modeled imaging system.