METHOD AND APPARATUS FOR PROTON COMPUTERISED TOMOGRAPHY

20170221230 · 2017-08-03

    Abstract

    A method of reconstructing a 3-dimensional image in a proton transmission computerized tomography (CT) apparatus is disclosed. The method comprises the creation of a reconstruction matrix. The matrix is created by directing a plurality of particles to traverse the object; and for each particle, measuring the trajectory and energy of each particle before and after it has traversed the object; for each particle, calculating the water-equivalent path length within the object; and for each particle, calculating the positions at which it entered and exited the object; and adding the water-equivalent path length, entry and exit positions to the reconstruction matrix. This procedure is repeated from a plurality of angular positions surrounding an object to be imaged. Then, a spatially varying 2-dimensional filter function is applied to the reconstruction matrix. Subsequently, a correction factor is applied to the filtered reconstruction matrix to at least partially correct for the finite extent of the matrix.

    Claims

    1. A method of 3-dimensional image reconstruction in computerised tomography, the method comprising: a. creating an image reconstruction matrix by, from a plurality of scan angles surrounding an object to be imaged: i. directing a plurality of particles to traverse the object; ii. measuring a trajectory and energy of each particle before and after it has traversed the object; iii. for each particle, calculating an equivalent path length (EPL) within the object; iv. calculating an entry position at which each particle entered the object and an exit position at which each particle exited the object; and v. adding the EPL and the entry and exit positions of each particle to the image reconstruction matrix or a subset of elements thereof; b. applying a spatially varying 2-dimensional filter function to the image reconstruction matrix to generate a filtered reconstruction matrix; and c. applying a correction factor to the filtered reconstruction matrix to at least partially correct for a finite extent of the matrix.

    2. The method of claim 1, wherein the particles comprise protons, other hadrons or ions.

    3. The method of claim 1, wherein the particles comprise protons.

    4. The method of claim 1, wherein the EPL is an EPL in a reference material.

    5. The method of claim 4, wherein the EPL is a water-equivalent path length.

    6. The method of claim 1, wherein the trajectories of individual particles before traversal of the object are recorded by a multiplicity of position-sensitive detectors that are encountered by the particles before they encounter the object.

    7. The method of claim 1, wherein the trajectories of individual particles after traversal of the object are recorded by a multiplicity of position-sensitive detectors that are encountered by the particles after they have encountered the object.

    8. The method of claim 1, wherein each measured particle trajectory is traced back to create the image reconstruction matrix, and there is deposited at each location within this image reconstruction matrix corresponding values of the integral: $$b(x,y)=\int_{-R}^{R}\int_{0}^{\pi}\mathrm{d}t\,\mathrm{d}\theta\;p(t,\theta)\,\delta(x\cos\theta+y\sin\theta-t)=\int_{0}^{\pi}\mathrm{d}\theta\;p(t,\theta)\Big|_{t=x\cos\theta+y\sin\theta}.$$

    9. The method of claim 8, wherein creating the image reconstruction matrix is completed prior to any filtration being applied to its contents.

    10. The method of claim 1, wherein the image reconstruction matrix is subject to a band-limited convolution kernel in which a sampling pitch of the matrix is imposed on the convolution kernel in a frequency domain.

    11. The method of claim 10, wherein an additional smoothing filter is applied by sampling an appropriate function in the frequency domain and multiplying the convolution kernel.

    12. The method of claim 11, wherein the additional smoothing filter is a Shepp-Logan or a Gaussian filter.

    13. The method of claim 1, wherein a back-projected image matrix is given by: $$b(i,j,k)=\frac{\pi}{L}\sum_{m=1}^{L}\frac{\sum_{n=1}^{K}\lambda_{n,m}(i,j,k)\,\mathrm{WEPL}_{n,m}}{\sum_{n=1}^{K}\lambda_{n,m}(i,j,k)}$$ where λ_{n,m}(i, j, k) is a path length of an n-th proton in an m-th projection through a voxel (i, j, k).

    14. The method of claim 13, wherein step b further comprises convolving the back-projected image matrix with a 2-dimensional filter function.

    15. The method of claim 14, wherein the 2-dimensional filter function has the form: $$k(r)=\frac{1}{4\pi^{2}r^{3}}\left[\left(\frac{\pi r}{\tau}\right)^{2}J_{1}\!\left(\frac{\pi r}{\tau}\right)-\Phi\!\left(\frac{\pi r}{\tau}\right)\right].$$

    16. The method of claim 1, wherein the correction factor is $$\Delta=\left(\tau^{2}\sum_{\text{pixels}\in\Omega_{f}}f(i,j)\right)\left(\tau^{2}\sum_{\text{pixels}\notin\Omega_{b}}\frac{k(N/2-i,\,N/2-j)}{\tau\sqrt{i^{2}+j^{2}}}\right).$$

    17. The method of claim 1, wherein a further correction factor is applied to further reduce residual errors in the stopping powers of the filtered reconstruction matrix that arise from trajectories of particles exiting the object that are not parallel to a collimation axis of an incident particle beam.

    18. A computerised tomography apparatus, comprising: a source of particles; a first multiplicity of position-sensitive detectors configured to detect a position and energy of particles prior to the particles traversing an object; a second multiplicity of position-sensitive detectors configured to detect the position and energy of the particles after the particles have traversed the object; and a computer system configured to reconstruct an image of the object by analysis of output of the first and second multiplicities of position-sensitive detectors using the method of claim 1.

    19-20. (canceled)

    Description

    [0043] An embodiment of the invention will now be described in detail, by way of example, and with reference to the accompanying drawings, in which:

    [0044] FIG. 1 shows an arrangement of a typical proton computerised tomography system;

    [0045] FIG. 2 is a diagrammatic illustration of the CT process;

    [0046] FIG. 3 shows alternative approaches to the production of multiple samples during the CT process.

    [0047] FIG. 4 is a diagrammatic illustration of the multiple samples during the CT process;

    [0048] FIG. 5 is a comparison between FBP and BPF forms of CT;

    [0049] FIG. 6 is a diagram showing a proton beam passing through an object to be imaged;

    [0050] FIG. 7 shows a hypothetical path of a proton through position-sensitive detectors in an embodiment of the invention during imaging of a person;

    [0051] FIG. 8 is a flow diagram that describes the steps carried out in a process embodying the invention;

    [0052] FIG. 9 illustrates the points of interaction between a proton, position-sensitive detectors, and a person being imaged in an embodiment of the invention;

    [0053] FIG. 10 illustrates a possible path of a proton and corresponding path models during imaging of a person in an embodiment of the invention; and

    [0054] FIG. 11 illustrates a path of a proton through an object, and the model of the path using a spline.

    INTRODUCTION TO COMPUTERISED TOMOGRAPHY

    [0055] FIG. 1 shows the arrangement of a typical proton CT system. A beam of protons 10 passes through one or more position-sensitive detectors 12 and from there impinges upon an object or patient 14 to be imaged. If the energy of the protons is high enough, they will most likely pass through the object or patient 14 and emerge from the far side with some residual energy, E_R. The protons then pass through one or more position-sensitive detectors 16. The residual energy E_R of the protons is then measured by an energy-sensitive detector 18. Data is collected from the detectors 12, 16, 18 and analysed by an associated computer system 20. The positions of individual protons, as well as their residual energies E_R, are recorded for each orientation of the patient relative to the system.

    [0056] Either the patient or the instrument is rotated by a small scan angle, typically 1°-2°, about a central y-axis. The process is repeated over many scan angles.

    [0057] If p is the transmittance of the object, then p is given by the line integral:

    [00005] $$p=\int_{L}f(x,y)\,\mathrm{d}u,$$

    where ƒ(x, y) is some function of the object at the point (x, y) and L is the line along which the beam travels. CT scanners obtain p for various lines L, and use this information to compute an approximation to ƒ(x, y) throughout the object. The principle is illustrated in FIG. 2, where an object 22 is illuminated by a beam 24 to produce a projection image 28.

    [0058] In the two-dimensional case, the line L can be represented by the parameters r and θ, where θ measures the angle of the line from the vertical, and r measures the distance of the line from the origin of the (x, y) plane. The above formula is used to define a transform which maps a function ƒ(x, y) to a function p(r, θ), where p(r, θ) is the line integral of ƒ(x, y) over the line defined by r and θ. This transform is known as the Radon transform and denoted by ℛ. The basic challenge of tomography is to reconstruct an image from its Radon transform. Fortunately, the Radon transform, when properly defined, has a well-defined inverse. In order to invert the transform, projection data spanning 180° is needed.
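    The line-integral definition of p can be checked numerically. The following NumPy sketch is an illustration only (the disc phantom, sampling density and line parametrisation are choices made here, not taken from the patent); it compares a sampled line integral with the analytic chord length through a uniform disc:

```python
import numpy as np

def radon_line_integral(f, r, theta, extent=1.0, n_samples=2001):
    """Approximate p(r, theta): the line integral of f(x, y) along the
    line at angle theta and signed distance r from the origin."""
    u = np.linspace(-extent, extent, n_samples)   # arc length along the line
    x = r * np.cos(theta) - u * np.sin(theta)
    y = r * np.sin(theta) + u * np.cos(theta)
    du = u[1] - u[0]
    return np.sum(f(x, y)) * du

# Uniform disc of radius R: its Radon transform is the chord length
# 2*sqrt(R^2 - r^2) for |r| < R, independent of theta.
R = 0.5
disc = lambda x, y: (x**2 + y**2 < R**2).astype(float)

p = radon_line_integral(disc, r=0.3, theta=0.7)
analytic = 2.0 * np.sqrt(R**2 - 0.3**2)
```

    For r = 0.3 the numerical value agrees with the analytic chord length to the sampling resolution, independently of the angle chosen.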

    [0059] The conventional approach to CT reconstruction is the Filtered Back Projection (FBP) method. The concept is explained in FIG. 3 (i), where the forward projected beam 24 and attenuated beam 26 of the object 22 is recorded as a projection image 28. FIGS. 3 (ii) and (iii) show how other projection angles are obtained by either rotating the object with a fixed source and detector (FIG. 3 (ii)), or rotating the source and detector in tandem with a stationary object (FIG. 3 (iii)).

    [0060] An arrangement that uses a finite number of (in this case, 8) sets of individual projection data from different scan angles then is as shown in FIG. 4. The individual projection images 28 are shown projected back through the image to obtain an estimate of the internal structures of the original object 22. The individual projections will interact constructively in regions that correspond to features in the original object 22.

    [0061] A problem that arises from the arrangement of FIG. 4 is that blurring and star-like artifacts occur in other parts of the reconstructed image. A high-pass or ramp filter is used to eliminate this blurring. The combination of filtering and back-projection provides this method with its customary name: “Filtered Back-Projection” (FBP).

    [0062] There are numerous variations on the basic FBP approach. Since any practical input data will contain noise, and a ramp filter will accentuate this noise, some form of low-pass filtering is introduced to reduce the noise. There is a trade-off between the ramp filter characteristics and the filtering for noise suppression. The mean-squared optimal solution to this problem is to employ a Wiener filter.

    [0063] Hence, the essential steps in producing a CT image using FBP are: [0064] 1. Record with uniform sampling multiple individual projections through a 2-dimensional slice of an object to produce a 1-dimensional array. [0065] 2. Convolve this array of data with a 1-dimensional deblurring kernel (or the equivalent filtering in the spatial-frequency domain). [0066] 3. Back-project the convolved ray-projections through a 2-dimensional reconstruction matrix. [0067] 4. Repeat over many scan angles.
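    The four FBP steps above can be sketched in a few lines of NumPy. This is a minimal illustration under simplifying assumptions (parallel-beam geometry, an unwindowed ramp filter and nearest-pixel back-projection), not a production implementation:

```python
import numpy as np

def fbp_reconstruct(sinogram, thetas):
    """Filtered Back-Projection: ramp-filter each 1-D projection in the
    frequency domain, then smear it back across the image grid.
    sinogram: (n_angles, n_det) array of parallel-beam projections."""
    n_angles, n_det = sinogram.shape
    # Step 2: ramp filter applied to each projection in frequency space.
    ramp = np.abs(np.fft.fftfreq(n_det))
    filtered = np.real(np.fft.ifft(np.fft.fft(sinogram, axis=1) * ramp, axis=1))
    # Steps 3-4: back-project the filtered projections over all scan angles.
    image = np.zeros((n_det, n_det))
    centre = (n_det - 1) / 2.0
    ys, xs = np.mgrid[0:n_det, 0:n_det] - centre
    for p, theta in zip(filtered, thetas):
        # Detector coordinate t = x cos(theta) + y sin(theta) per pixel.
        t = xs * np.cos(theta) + ys * np.sin(theta)
        idx = np.clip(np.round(t + centre).astype(int), 0, n_det - 1)
        image += p[idx]
    return image * np.pi / n_angles
```

    Back-projecting a sinogram that is a spike at the central detector element for every angle reconstructs a point at the centre of the image.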

    [0068] An alternative to FBP is known in which the order of some operations is reversed, that is, back-projection is performed before filtering; this can be termed Back-Projection Filtered (BPF), an example being disclosed in S. Suzuki and S. Yamaguchi, Comparison between an image reconstruction method of filtering backprojection and the filtered backprojection method, Applied Optics 27(14):2867-2870 (1988). The two approaches are compared in FIG. 5. The BPF approach has not been used in practice because it requires a computationally more complex 2-dimensional filter rather than a 1-dimensional one, and, without the techniques presented by the present invention, has been shown to be quantitatively less accurate.

    The CT Proposal of this Embodiment

    [0069] The embodiment of this invention implements BPF using an analytic technique to solve the reconstruction problem of going from the Radon transform of an object (that is, the measurements) to the reconstructed object (that is, the 2-dimensional image slice), using each element of the data only once.

    [0070] For parallel beam tomography the projections can be expressed as the Radon transform of the object that is to be reconstructed. The Radon transform is defined as:

    [00006] $$p(t,\theta)=\mathcal{R}(f)=\int_{-\infty}^{\infty}\int_{-\infty}^{\infty}f(x,y)\,\delta(x\cos\theta+y\sin\theta-t)\,\mathrm{d}x\,\mathrm{d}y$$

    that is, the line integral along a line at an angle θ from the y-axis and at a distance |t| from the origin. Rotating the coordinate system:


    t = x cos θ + y sin θ


    u = −x sin θ + y cos θ

    [0071] Then:

    [00007] $$p(t,\theta)=\int_{-\infty}^{\infty}f(t\cos\theta-u\sin\theta,\;t\sin\theta+u\cos\theta)\,\mathrm{d}u.$$

    [0072] The approach to reconstructing the image is to take the inverse Radon transform of the projections. This involves two steps: the image is back-projected and then filtered with a 2-dimensional filter.

    [0073] The back-projection operator ℬ is defined as:

    [00008] $$\mathcal{B}(p)=\int_{0}^{\pi}p(x\cos\theta+y\sin\theta,\;\theta)\,\mathrm{d}\theta$$

    [0074] This operator represents the accumulation of the projections that pass through a point (x, y).

    [0075] Hence, the essential steps in producing a CT image using BPF in this embodiment are: [0076] 1. Record (with not necessarily uniform sampling) individual projections through a 2-dimensional slice of an object to produce a 1-dimensional array. It is possible to relax the strict condition that proton paths stay solely within a 2-dimensional plane. [0077] 2. Back-project the projections through a 2-dimensional reconstruction matrix. [0078] 3. Convolve (or filter) this 2-dimensional data matrix with a 2-dimensional deblurring kernel. [0079] 4. Repeat over many scan angles. The scan angles need not be uniformly distributed.
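    The reversed order of operations can be sketched as follows: the raw projections are accumulated into the 2-dimensional matrix first, and a single 2-dimensional ramp filter ρ = ∥(u, v)∥ is applied at the end, per the relation f = IFT{FT{b}·ρ}. This is a minimal NumPy illustration with an idealised geometry, not the full band-limited and finite-matrix-corrected procedure of the embodiment:

```python
import numpy as np

def bpf_reconstruct(sinogram, thetas):
    """Back-Projection then Filtering: accumulate the *unfiltered*
    projections into a 2-D matrix first, then deblur once with the
    2-D ramp filter rho = |(u, v)| in frequency space."""
    n_angles, n_det = sinogram.shape
    b = np.zeros((n_det, n_det))
    centre = (n_det - 1) / 2.0
    ys, xs = np.mgrid[0:n_det, 0:n_det] - centre
    # Step 2: back-project the raw projections over all scan angles.
    for p, theta in zip(sinogram, thetas):
        t = xs * np.cos(theta) + ys * np.sin(theta)
        idx = np.clip(np.round(t + centre).astype(int), 0, n_det - 1)
        b += p[idx]
    b *= np.pi / n_angles
    # Step 3: one 2-D deblurring filter applied to the whole matrix.
    u = np.fft.fftfreq(n_det)
    rho = np.hypot(*np.meshgrid(u, u, indexing='ij'))
    return np.real(np.fft.ifft2(np.fft.fft2(b) * rho))
```

    With the same spike sinogram used to test FBP, the sharpened back-projection again peaks at the centre of the image.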

    [0080] A formal treatment of the BPF approach embodying the invention will now be presented.

    [0081] Suppose a 2-dimensional slice of a body has some property described by the function ƒ(x, y), with a compact support, Ω_f = {(x, y) ∈ ℝ²: x² + y² < R²}. In the case of proton CT, the appropriate property is the proton relative stopping power. A line integral through the object, at an angle θ and position t (see FIG. 6), can be written as:

    [00009] $$p(t,\theta)=\int_{-R}^{R}\int_{-R}^{R}\mathrm{d}x\,\mathrm{d}y\;f(x,y)\,\delta(x\cos\theta+y\sin\theta-t).\qquad(1)$$

    [0082] This line integral is a ray projection through a slice of an object being imaged, and is related to the path length in a reference material, typically the water-equivalent path length (WEPL). The presence in a patient's body of many different tissues, each with its own density, represents a challenge to image reconstruction. The concept of WEPL circumvents this problem, because the stopping powers of body tissues relative to water vary only weakly with proton energy, so each tissue's effect on the proton can be expressed as an equivalent thickness of water. The transform from the space (x, y) to (θ, t) is the Radon transform.

    [0083] The back-projection operation is defined as:

    [00010] $$b(x,y)=\int_{-R}^{R}\int_{0}^{\pi}\mathrm{d}t\,\mathrm{d}\theta\;p(t,\theta)\,\delta(x\cos\theta+y\sin\theta-t)=\int_{0}^{\pi}\mathrm{d}\theta\;p(t,\theta)\Big|_{t=x\cos\theta+y\sin\theta}\qquad(2)$$

    [0084] This is equivalent to tracing back along the ray-projection path and depositing, at each location through which it passes, the value of the line integral. The resulting back-projection function, b(x, y), resembles a blurred representation of the density function, ƒ(x, y). In fact, they are related through:

    [00011] $$b(x,y)=f(x,y)**\frac{1}{r},\qquad(3)$$

    where ** denotes a 2-dimensional convolution, and r = ∥(x, y)∥. Note that the function b(x, y) has non-compact support: Ω = {(x, y) ∈ ℝ²}. That is, it is non-zero everywhere.

    [0085] This convolution relation can be expressed as a filtering operation in 2-dimensional Fourier space. Let the Fourier conjugate variables of (x, y) be (u, v) and ρ = ∥(u, v)∥. Then, it is possible to write:

    [00012] $$f(x,y)=\mathrm{FT}^{-1}\left\{\frac{\mathrm{FT}\{b\}}{\mathrm{FT}\{r^{-1}\}}\right\}=\mathrm{IFT}\left\{\mathrm{FT}\{b\}\cdot\rho\right\},\qquad(4)$$

    where FT and IFT indicate the 2-dimensional forward and inverse Fourier transforms respectively; and it is known that, in 2 dimensions:


    FT{r⁻¹} = ρ⁻¹

    [0086] In a practical implementation, a discrete formulation for the back-projected image, b, needs to be used. The 2-dimensional matrix for b can be calculated from a set of acquired measurements in several ways. One possible scheme will now be described.

    [0087] Imagine that L projections are acquired over a set of different scan angles:

    [00013] $$\left\{\varphi_{l}=\pi\,\frac{l-1}{L}:\;l=1,2,\ldots,L\right\}.$$

    At each scan angle, K ray-projections are sampled (not necessarily uniformly), with a set of locations {t_k: k = 1, 2, . . . , K}. Now consider a discrete 2-dimensional back-projection matrix, b(i, j), with corresponding pixel centres {x_i = (2i − N − 1)τ/2: i = 1, 2, . . . , N} and {y_j = (2j − N − 1)τ/2: j = 1, 2, . . . , N}, with τ being the matrix pitch. This matrix can be expressed in terms of the discrete Radon transform, p_{n,m}, as follows:

    [00014] $$b(i,j)=\frac{\pi}{L}\sum_{m=1}^{L}\frac{\sum_{n=1}^{K}\lambda_{n,m}(i,j)\,p_{n,m}}{\sum_{n=1}^{K}\lambda_{n,m}(i,j)},\qquad(5)$$

    where λ_{n,m}(i, j) is the path length of the n-th ray-projection of the m-th scan angle through the pixel (i, j). The 3-dimensional version, with the 2-dimensional line integral replaced with the measured WEPL, is:

    [00015] $$b(i,j,k)=\frac{\pi}{L}\sum_{m=1}^{L}\frac{\sum_{n=1}^{K}\lambda_{n,m}(i,j,k)\,\mathrm{WEPL}_{n,m}}{\sum_{n=1}^{K}\lambda_{n,m}(i,j,k)}$$

    [0088] This version relaxes the requirement for linear in-plane paths.
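    The weighted accumulation of equation (5) and its WEPL variant can be sketched directly. In this hypothetical interface each proton is reduced to the list of pixels it crosses, the path length λ through each pixel, and its measured WEPL; the pixel value is the path-length-weighted mean of all contributions, scaled by π/L:

```python
import numpy as np

def backproject_wepl(protons, shape, n_angles):
    """Discrete back-projection of Eqn. (5): each proton deposits its
    measured WEPL into every pixel it crosses, weighted by the path
    length lambda through that pixel; each pixel keeps the weighted
    mean of all contributions, scaled by pi / L.
    protons: list of (pixel_indices, path_lengths, wepl) tuples."""
    num = np.zeros(shape)   # sum of lambda * WEPL per pixel
    den = np.zeros(shape)   # sum of lambda per pixel
    for pixels, lams, wepl in protons:
        for (i, j), lam in zip(pixels, lams):
            num[i, j] += lam * wepl
            den[i, j] += lam
    b = np.zeros(shape)
    hit = den > 0           # leave untraversed pixels at zero
    b[hit] = (np.pi / n_angles) * num[hit] / den[hit]
    return b
```

    Two protons crossing the same pixel with equal path lengths and WEPLs of 2 and 4 leave the weighted mean 3 (times π/L) in that pixel.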

    [0089] The filtering operation of this embodiment will now be described.

    [0090] Two constraints must be imposed on the filtering operation: the sampling pitch and the finite size of the back-projection matrix. The first must be imposed on the filter in the frequency domain and then the second on the kernel in the spatial domain. The band-limit on spatial-frequencies is imposed by modifying the ramp filter representation of the kernel in frequency space:

    [00016] $$K(\rho)=\begin{cases}\rho & \text{if }|\rho|<\dfrac{1}{2\tau}\\[4pt]0 & \text{otherwise}\end{cases}$$

    [0091] The relationship between the convolution kernel and filter involves a 2-dimensional Fourier transform (in contrast with the FBP procedure, which involves a 1-dimensional Fourier transform).

    [0092] The 2D convolution kernel, k(r), is now derived as:

    [00017] $$k(r)=\mathrm{IFT}\{K(\rho)\}=\int_{0}^{\frac{1}{2\tau}}\!\int_{-\pi}^{\pi}\rho\,\mathrm{d}\rho\,\mathrm{d}\varphi\;\rho\,e^{j2\pi\rho r\cos(\theta-\varphi)}=2\pi\int_{0}^{\frac{1}{2\tau}}\mathrm{d}\rho\;\rho^{2}J_{0}(2\pi r\rho)=\frac{1}{4\pi^{2}r^{3}}\left[\left(\frac{\pi r}{\tau}\right)^{2}J_{1}\!\left(\frac{\pi r}{\tau}\right)-\Phi\!\left(\frac{\pi r}{\tau}\right)\right]\qquad(6)$$

    where

    [00018] $$\Phi(x)=\frac{\pi x}{2}\left[J_{1}(x)\mathbf{H}_{0}(x)-J_{0}(x)\mathbf{H}_{1}(x)\right]$$

    and J_n and H_n are the n-th order Bessel and Struve functions, respectively. Note that this kernel remains finite at zero argument since:

    [00019] $$k(r=0)=\frac{\pi}{12\tau^{3}}$$
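    The closed form above uses Bessel and Struve functions; equivalently, the kernel can be evaluated by direct quadrature of its defining integral k(r) = 2π∫₀^{1/(2τ)} ρ² J₀(2πrρ) dρ, which needs only NumPy. The sketch below (the quadrature orders are arbitrary choices, not from the patent) reproduces the finite zero-argument limit k(0) = π/(12τ³):

```python
import numpy as np

def j0(z):
    """Bessel function J0 via its integral representation,
    J0(z) = (1/pi) * integral_0^pi cos(z sin t) dt, by the midpoint rule."""
    t = (np.arange(2000) + 0.5) * (np.pi / 2000)
    return np.cos(np.multiply.outer(z, np.sin(t))).mean(axis=-1)

def bpf_kernel(r, tau, n=4000):
    """Band-limited 2-D deblurring kernel of Eqn. (6), evaluated as
    k(r) = 2*pi * integral_0^{1/(2 tau)} rho^2 J0(2 pi r rho) d rho."""
    h = 1.0 / (2.0 * tau * n)             # quadrature step in rho
    rho = (np.arange(n) + 0.5) * h        # midpoint abscissae
    return 2.0 * np.pi * h * np.sum(rho**2 * j0(2.0 * np.pi * r * rho))
```

    For τ = 1 the quadrature value at r = 0 agrees with the analytic limit π/12, and the kernel falls away from its central maximum as expected of a deblurring kernel.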

    [0093] The finite size of the back-projection matrix and kernel must be considered in order to find the appropriate filter in the discrete Fourier space. The correct filter to use is a Discrete Fourier Transform (DFT) of the finite band-limited kernel derived above. This is:


    K(I, J)=DFT{k(i, j):i=−N, . . . , N−1; j=−N, . . . , N−1}

    where I and J are the discrete Fourier conjugate variables. The filtering operation is performed by multiplying the DFT of the back-projection matrix, b(i, j) by the DFT of k(i, j). That is:

    [00020] $$f_{\mathrm{bpf}}=\Re\left\{\mathrm{DFT}^{-1}\left\{\mathrm{DFT}\{b(i,j):\,i=1,\ldots,N;\;j=1,\ldots,N;\;\text{zero-padded to }2N\}\cdot\mathrm{DFT}\{k(i,j):\,i=-N,\ldots,N-1;\;j=-N,\ldots,N-1\}\right\}\right\}\qquad(8)$$
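    This filtering step can be sketched with NumPy FFTs: the back-projection matrix is zero-padded from N to 2N so that the circular convolution implied by the DFT behaves like a linear one, the kernel samples are shifted so their origin sits at element (0, 0), and the real part of the inverse transform is cropped back to N × N. A minimal sketch, not the patent's implementation:

```python
import numpy as np

def filter_backprojection(b, kernel_2n):
    """Apply the 2-D deblurring kernel to the back-projection matrix by
    multiplication in discrete Fourier space.
    kernel_2n: k(i, j) sampled on the 2N x 2N grid i, j = -N .. N-1,
    with the kernel origin at the array centre (N, N)."""
    n = b.shape[0]
    b_pad = np.zeros((2 * n, 2 * n))
    b_pad[:n, :n] = b                     # zero-pad b from N to 2N
    # Shift the kernel so its origin sits at element (0, 0), as the DFT expects.
    k_shift = np.fft.ifftshift(kernel_2n)
    f = np.real(np.fft.ifft2(np.fft.fft2(b_pad) * np.fft.fft2(k_shift)))
    return f[:n, :n]                      # crop back to the N x N image region
```

    As a sanity check, a kernel that is a unit impulse at its origin leaves the back-projection matrix unchanged.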

    [0094] The process of finite matrix correction will now be described.

    [0095] The mathematical formulation given above described the BPF process and how to implement it. However, this formulation may not allow the reconstruction of quantitatively accurate images of proton stopping power. Although ƒ(x, y) is clearly zero outside of the desired reconstruction region, the function b(x, y) is non-zero everywhere. In practice, the support for back-projection must be bounded: Ω_b = {(x, y) ∈ ℝ²: x² < R², y² < R²}. The finite matrix size chosen for back-projection will result in truncation of the data.

    [0096] Previously, the only attempt to deal with this problem has been to make the size of the back-projection matrix (N×N), much larger than the desired image matrix (M×M) with, say, N=4M. Such an approach is wasteful for computer memory and increases computation, as well as only asymptotically improving accuracy.

    [0097] The effect of missing data, due to a finite back-projection matrix, closely resembles a zero-frequency offset in the reconstructed stopping-power ƒ(i, j). The effect of distant data on the reconstruction region will only manifest itself at low-spatial frequencies. A constant-shift offset correction can be arrived at in several ways.

    [0098] Consider the following: the distant values of the back-projection matrix (if far enough away from the central region) can be approximated as:

    [00021] $$b(x,y)=f(x,y)**\frac{1}{r}\approx\int_{\Omega_{f}}\mathrm{d}x'\,\mathrm{d}y'\,\frac{f(x',y')}{\sqrt{(x-x')^{2}+(y-y')^{2}}},$$

    where r ≫ r′. The contribution to ƒ(x, y) of the missing data is therefore approximately:

    [00022] $$\Delta(x,y)=\int_{\notin\Omega_{b}}\mathrm{d}x'\,\mathrm{d}y'\;b_{r\gg r'}(x',y')\,k(x-x',y-y')=\left(\int_{\Omega_{f}}\mathrm{d}x'\,\mathrm{d}y'\,f(x',y')\right)\left(\int_{\notin\Omega_{b}}\mathrm{d}x'\,\mathrm{d}y'\,\frac{k(x-x',y-y')}{r'}\right).\qquad(9)$$

    [0099] Evaluated at the central pixel, (N/2, N/2), this becomes:

    [00023] $$\Delta=\left(\tau^{2}\sum_{\text{pixels}\in\Omega_{f}}f(i,j)\right)\left(\tau^{2}\sum_{\text{pixels}\notin\Omega_{b}}\frac{k(N/2-i,\,N/2-j)}{\tau\sqrt{i^{2}+j^{2}}}\right).\qquad(10)$$

    [0100] The first summation is the sum of all the pixel values in the reconstructed image slice (∈ Ω_f). The second sum is a summation over all pixels outside the back-projection region (∉ Ω_b).

    [0101] This is in principle an infinite sum, but can be evaluated inexpensively to a desired accuracy. This correction, C = Δ, calculated for one pixel, is then applied to all pixels in the image as a constant offset, that is:


    $$f_{\mathrm{Cor}}(i,j)=f_{\mathrm{bpf}}(i,j)+C.\qquad(11)$$
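    The constant-offset correction of equations (10) and (11) can be sketched as follows. The kernel is passed in as a callable in pixel units (a hypothetical interface for illustration), and the in-principle infinite outer sum is truncated a fixed number of matrix widths out:

```python
import numpy as np

def finite_matrix_correction(f_bpf, tau, kernel, margin=4):
    """Finite-matrix correction of Eqns. (10)-(11): estimate, at the
    central pixel, the contribution of the back-projection data truncated
    outside the N x N matrix, then add it to every pixel as a constant.
    kernel: callable k(di, dj) in pixel units (hypothetical interface);
    the infinite outer sum is truncated margin*N pixels out."""
    n = f_bpf.shape[0]
    first = tau**2 * f_bpf.sum()          # sum of pixel values in the slice
    outer = 0.0
    lim = margin * n
    for i in range(-lim, lim + 1):        # centred pixel indices
        for j in range(-lim, lim + 1):
            if abs(i) < n // 2 and abs(j) < n // 2:
                continue                  # inside the back-projection matrix
            outer += kernel(i, j) / (tau * np.hypot(i, j))
    c = first * tau**2 * outer
    return f_bpf + c                      # Eqn. (11): constant offset
```

    With a zero kernel, or a zero image, the correction vanishes and the input is returned unchanged.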

    [0102] It should be noted that the above formulation for reconstruction assumes linear in-plane ray-projections (in this case, proton paths). As such, each slice of the 3D volume should factorize into a separate 2D problem with distinct data. This is the essence of classical tomography. In reality, due to beam angular dispersion and straggling, protons will not typically remain in the same axial plane as they progress through an object. However, both the angular dispersion and straggling of protons are typically small enough to be considered a perturbation. Typical root-mean-square figures for beam divergence before and after a patient might be 5 mrad and 50 mrad respectively (Bruzzi M, et al., 2007, Prototype Tracking Studies for Proton CT, IEEE Trans. Nucl. Sci. 54, 140-145), although these values depend on the beam nozzle design, proton energy and size of the patient. We therefore propose the following adaptation to non-linear, out-of-plane paths. Firstly, the back-projection is performed as expressed in equation 5, except that ray paths are back-projected along non-linear trajectories through the 3D volume and may pass through multiple axial slices. Secondly, the filtering operation, as described in equation 6, is performed slice by slice as if the protons had progressed linearly, parallel and in-plane.

    [0103] The main elements of the proton CT system embodying the invention are shown in FIG. 7. In this example, there are two sets of position-sensitive detectors 30, 32 in front of a patient 22 (the “object” that is being imaged), and two sets of position-sensitive detectors 34, 36 after the patient 22. The input vector to the BPF algorithm, for an individual proton with initial energy E_I, consists of four sets of positional data obtained from the sets of detectors 30 . . . 36, {(x_1, y_1), . . . , (x_4, y_4)}, and the residual energy of the proton, E_R, as recorded by the energy-sensitive detector 18 of FIG. 1. The path of the proton shown in FIG. 7 is a diagrammatic example only.

    [0104] With reference to FIG. 8, the sequence of operations for the generation of a proton CT image in an embodiment of the invention will now be described.

    [0105] In Step 1, the number of rotations, N, is set and the Count initialised to zero. The values of the CT image matrix are also set to zero.

    [0106] In Step 2, data is acquired from the instrument (at the initial scan angle). This individual projection data consists of the recorded vector for each identified individual proton path and its residual energy, E_R, that is:

    [00024] $$\begin{bmatrix}x_{1},y_{1}\\x_{2},y_{2}\\x_{3},y_{3}\\x_{4},y_{4}\\E_{R}\end{bmatrix}$$

    [0107] Proton energies are converted in Step 3 into water-equivalent path lengths (WEPL), to preserve the uniform dimensionality of all parameters. There are several ways to achieve this conversion. In this example:


    WEPL_R = R_H₂O(E_I) − R_H₂O(E_R)


    where R_H₂O(E) is the range in water of a proton of energy E (for example, R_H₂O(100) is the range in water of a 100 MeV proton), and E_I is the initial energy of the proton.

    [0108] As a first option,


    $$\mathrm{WEPL}_{R}=-\int_{E_{I}}^{E_{R}}\mathrm{SP}_{\mathrm{H_2O}}(e)^{-1}\,\mathrm{d}e,$$

    where SP_H₂O is the stopping power for protons in water.

    [0109] Or as a second option,


    WEPL_R = ƒ(E_I, E_R),

    where ƒ(E_I, E_R) returns the most probable value for WEPL, given the initial energy E_I and the residual-energy estimate, E_R, from the energy-sensitive detector. The function ƒ(E_I, E_R) can be calculated from Monte Carlo simulations or from calibration experiments.

    [0110] It has been found that these three different approaches should provide answers that differ by less than 1%. The vector is now:

    [00025] $$\begin{bmatrix}x_{1},y_{1}\\x_{2},y_{2}\\x_{3},y_{3}\\x_{4},y_{4}\\\mathrm{WEPL}_{R}\end{bmatrix}.$$
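    The first conversion option, WEPL_R = R_H₂O(E_I) − R_H₂O(E_R), reduces to a range-energy table lookup. In the sketch below the tabulated ranges are illustrative placeholder values (roughly the right magnitude for protons in water, but not calibration data), interpolated linearly:

```python
import numpy as np

# Illustrative water range-energy table (placeholder values, not measured
# calibration data): proton energy in MeV -> range in water in mm.
energies = np.array([50.0, 100.0, 150.0, 200.0, 250.0])
ranges = np.array([22.0, 77.0, 158.0, 260.0, 379.0])

def wepl(e_in, e_res):
    """WEPL_R = R_H2O(E_I) - R_H2O(E_R), with the range-energy relation
    R_H2O interpolated from the table above."""
    r = lambda e: np.interp(e, energies, ranges)
    return r(e_in) - r(e_res)
```

    A proton entering at 200 MeV and leaving at 100 MeV would, under this table, have a WEPL of 260 − 77 = 183 mm; a proton that loses no energy has a WEPL of zero.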

    [0111] Step 4 is to estimate the path of each proton through the patient 22. As illustrated in FIG. 9, this is done by projecting forward and backward from the respective (x, y) locations of the appropriate detectors 30 . . . 36 to determine the entry and exit voxels 40, 42 for the reconstruction. Protons experience multiple Coulomb scattering (MCS), so superior final image quality can be achieved if this is taken into account instead of assuming a straight-line path. Several methods are available to estimate the most likely path, for example: [0112] Most likely/probable trajectory: for each of the two lateral directions, estimate the most likely displacement and direction vector for each proton at each z location along the beam axis (R. W. Schulte, et al., A maximum likelihood proton path formalism for application in proton computed tomography, Med. Phys. 35, 4849, 2008). [0113] Most likely/probable position: for each of the two lateral directions, estimate the most likely displacement for each proton at each z location along the beam axis (D. Wang, T. R. Mackie and W. A. Tomé, On the use of a proton path probability map for proton computed tomography reconstruction, Med. Phys. 37, 4138, 2010). [0114] Empirical approximation: approximate the path using a smooth transition between the initial and final proton vectors, e.g. a cubic spline (D. C. Williams, The most likely path of an energetic charged particle through a uniform medium, Phys. Med. Biol. 49, 2899, 2004).

    [0115] Any of the above methods, or another, can be used to estimate the most likely path of an individual proton through the reconstruction matrix. A smooth (non-linear) path 46 or a piecewise linear representation 48 of the actual path 50 can be used (illustrated in FIG. 10) with any of the above methods.

    [0116] For example, an n-knot spline may be employed to model the proton path 50 inside the object. For each proton, n knot points K1, K2, K3 are equally spaced between the phantom entry and exit points 54, 56. Inside the object, back-projection is assumed linear between any two knot points, as shown in FIG. 11. Outside the object, the protons are projected back from the surface with trajectories parallel to the beam axis (at an angle φ in the x-y plane). This prevents the back-projection rays “spreading out” as they diverge from the object. Back-projecting away from the object into empty space along the actual entry and exit trajectories can result in a small quantitative error, of up to 1%, in the reconstructed stopping powers.
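    One way to realise the n-knot model (a sketch of the cubic-spline option, not the patent's exact formalism) is a cubic Hermite curve that matches the measured entry and exit positions and direction vectors, sampled at n equally spaced interior knots; back-projection is then linear between consecutive knots:

```python
import numpy as np

def path_knots(p_in, d_in, p_out, d_out, n_knots=3):
    """Sample a cubic Hermite curve matching the measured entry/exit
    positions (p_in, p_out) and unit direction vectors (d_in, d_out)
    at n equally spaced interior knot points."""
    p_in, d_in = np.asarray(p_in, float), np.asarray(d_in, float)
    p_out, d_out = np.asarray(p_out, float), np.asarray(d_out, float)
    s = np.linspace(0.0, 1.0, n_knots + 2)[1:-1]   # interior knots only
    scale = np.linalg.norm(p_out - p_in)           # tangent magnitude
    # Cubic Hermite basis functions on [0, 1].
    h00 = 2*s**3 - 3*s**2 + 1
    h10 = s**3 - 2*s**2 + s
    h01 = -2*s**3 + 3*s**2
    h11 = s**3 - s**2
    return (h00[:, None] * p_in + h10[:, None] * scale * d_in
            + h01[:, None] * p_out + h11[:, None] * scale * d_out)
```

    When the entry and exit vectors already lie on one straight line, the Hermite curve degenerates to that line, so the knots fall at equal fractions of the entry-to-exit segment.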

    [0117] In Step 5, the associated WEPL.sub.R values are back-projected into the image matrix. The relevant formulation is Eqn. (5).

    [0118] Steps 2 to 6 are repeated until Count=N.

    [0119] In step 7, the filtering operation given by Eqn. (8) is performed on the completed back-projection matrix, b(x, y).

    [0120] In step 8, the Finite Matrix Correction factor, C, (Eqn. 10) is calculated and applied in accordance with Eqn. (11).

    [0121] Finally, in step 9, the proton CT image can be displayed.

    [0122] Although embodiments of the invention have been described with reference to the use of protons, applications of the invention are not limited to tomography using protons. Alternative embodiments might use any particles that lose energy quasi-continuously as they pass through matter, such as helium or heavier ions. The finite matrix correction and kernel for BPF would also be applicable to x-ray CT, single-photon emission computerised tomography and positron emission tomography.

    [0123] Throughout the description and claims of this specification, the words “comprise” and “contain” and variations of the words, for example “comprising” and “comprises”, mean “including but not limited to”, and are not intended to (and do not) exclude other moieties, additives, components, integers or steps.

    [0124] Throughout the description and claims of this specification, the singular encompasses the plural unless the context otherwise requires. In particular, where the indefinite article is used, the specification is to be understood as contemplating plurality as well as singularity, unless the context requires otherwise.

    [0125] Features, integers or characteristics described in conjunction with a particular aspect, embodiment or example of the invention are to be understood to be applicable to any other aspect, embodiment or example described herein unless incompatible therewith.