RADIATION DETECTION WITH NON-PARAMETRIC DECOMPOUNDING OF PULSE PILE-UP

20220137111 · 2022-05-05

    Abstract

    A method of determining a spectrum of energies of individual quanta of radiation received in a radiation detector is disclosed. Spectrum sensitive statistics are computed from a time series of digital observations from the radiation detector, defining a mapping from a density of amplitudes of the pulses to the spectrum sensitive statistics. The spectrum is determined by estimating the density of amplitudes of the pulses by applying an inversion of the mapping to the spectrum sensitive statistics. The statistics may be based on a first set of nonoverlapping time intervals of constant length L at least as long as a duration of the pulses without regard to entirety of clusters of the pulses; and a second set of nonoverlapping time intervals of constant length L1 less than L also without regard to entirety of clusters of the pulses. A method of estimating count rate is also disclosed.

    Claims

    1. A method of determining a spectrum of energies of individual quanta of radiation received in a radiation detector, the method comprising the steps of: (1) obtaining a time series of digital observations from the radiation detector comprising pulses corresponding to the detection of the individual quanta; (2) computing spectrum sensitive statistics from the detector signal, the spectrum sensitive statistics defining a mapping from a density of amplitudes of the pulses to the spectrum sensitive statistics; and (3) determining the spectrum by estimating the density of amplitudes of the pulses by applying an inversion of the mapping to the spectrum sensitive statistics.

    2. The method of claim 1 further comprising basing the spectrum sensitive statistics on a sum of the digital observations over a plurality of time intervals.

    3. The method of claim 2 further comprising defining the mapping using an approximate compound Poisson process.

    4. The method of claim 3, further comprising augmenting the approximate compound Poisson process by a modelled noise.

    5. The method of claim 4, further comprising expressing the mapping as a relation between characteristic functions of the amplitudes, the spectrum sensitive statistics and the modelled noise.

    6. The method of claim 5, further comprising computing the characteristic functions of the spectrum sensitive statistics by applying an inverse Fourier transform to a histogram of the sum of the digital observations.

    7. The method of claim 5, further comprising computing the characteristic functions of the amplitudes with a low pass filter.

    8. The method of claim 2, further comprising selecting each of the plurality of time intervals to encompass zero or more approximately entire clusters of the pulses, and defining the plurality of time intervals to be nonoverlapping and have a constant length L.

    9. The method of claim 8, further comprising requiring a maximum value of the detector signal at a beginning and end of each time interval.

    10. The method of claim 8, further comprising defining the approximate compound Poisson process as a sum of the amplitudes of the pulses in each time interval.

    11. (canceled)

    12. (canceled)

    13. The method of claim 2, further comprising selecting the plurality of intervals to include: a first set of nonoverlapping time intervals of constant length L without regard to entirety of clusters of the pulses; and a second set of nonoverlapping time intervals of constant length L1 less than L also without regard to entirety of clusters of the pulses; wherein L is at least as long as a duration of the pulses.

    14. The method of claim 13, further comprising selecting L1 to be less than the duration of the pulses.

    15. (canceled)

    16. (canceled)

    17. The method of claim 1, further comprising using a data driven strategy selected to result in a near optimal choice for a kernel parameter which minimizes an integrated square of errors of an estimated probability density function of the energies of the individual quanta of radiation.

    18. A method of estimating count rate of individual quanta of radiation received in a radiation detector, the method comprising the steps of: (1) obtaining a time series of digital observations from the radiation detector comprising pulses corresponding to the detection of the individual quanta; (2) computing spectrum sensitive statistics from the detector signal, based on a sum of the digital observations over a plurality of time intervals, the spectrum sensitive statistics defining a mapping from a density of amplitudes of the pulses to the spectrum sensitive statistics using an approximate compound Poisson process, the plurality of time intervals including: a first set of nonoverlapping time intervals of constant length L selected without regard to entirety of clusters of the pulses; and a second set of nonoverlapping time intervals of constant length L1 less than L also selected without regard to entirety of clusters of the pulses; wherein L is at least as long as a duration of the pulses; (3) determining an estimate of a characteristic function of the approximate compound Poisson process using: {circumflex over (ϕ)}.sub.Y=G{circumflex over (ϕ)}.sub.X/({circumflex over (ϕ)}.sub.Z{circumflex over (ϕ)}.sub.X.sub.1) where G is a windowing function, {circumflex over (ϕ)}.sub.X is an estimate of a characteristic function of a sum of the digital observations over each nonoverlapping time interval in the first set, {circumflex over (ϕ)}.sub.Z is a characteristic function of a modelled noise process and {circumflex over (ϕ)}.sub.X.sub.1 is an estimate of a characteristic function of a sum of the digital observations over each nonoverlapping time interval in the second set; (4) estimating the count rate from the estimate of the characteristic function.

    19. The method of claim 18, further comprising estimating the count rate by using an optimization routine or other means to fit a curve, estimating a DC offset of a logarithm of the estimate of the characteristic function, or fitting a curve to the logarithm of the estimate of the characteristic function.

    Description

    3 DERIVATION OF ESTIMATOR OF THE FIRST EMBODIMENT

    [0015] The general approach we take to addressing pile-up is based on the following strategy: i) obtain statistics from s(t) that are sensitive to the distribution of incident photon energies, and estimate those statistics using the observed, finite-length sampled version of s(t); ii) obtain a mapping from the density of incident photon energies to the statistical properties of the observed statistics; iii) estimate the density of the incident photon energies by inverting the mapping. Section 3.1 describes our choice of statistics. Section 3.2 argues that these statistics (approximately) have the same distribution as a compound Poisson process. Section 3.3 introduces a decompounding technique for recovering the spectrum from these statistics. It is based on the decompounding algorithm in [18] but further developed to obtain near optimal performance in terms of the integrated square of error.

    3.1 Choice of Statistic

    [0016] We wish to obtain estimates of the photon energies from the observed signal given in (2). In typical modern spectroscopic systems, the detector output s(t) is uniformly sampled by an ADC. Without loss of generality, we assume the raw observations available to the algorithm are {s(k): k∈ℤ.sub.≥0}. Since identification of individual pulses can be difficult, we look instead for intervals of fixed length L∈ℤ.sub.>0 containing zero or more clusters of pulses. Precisely, we define these intervals to be [T.sub.j, T.sub.j+L) where

    [00004] T_0 = inf{k : |s(k)| ≤ ϵ, |s(k+L)| ≤ ϵ, k ≥ 0}   (7)

    T_j = inf{k : |s(k)| ≤ ϵ, |s(k+L)| ≤ ϵ, k ≥ T_{j−1}+L}.   (8)

    [0017] Here, ϵ is chosen as a trade-off between errors in the energy estimate and the probability of creating an interval. The value of ϵ should be sufficiently small to ensure the error in the estimate of total photon energy arriving within each interval is acceptably low, yet sufficiently large with respect to the noise variance to ensure a large number of intervals are obtained. Although the probability of partitioning the observed data into intervals approaches zero as the count-rate goes to infinity, this approach succumbs to paralysis at higher count-rates than pile-up rejection strategies based on individual pulses, since multiple photons are permitted to pile up within each interval. Section 4.2 describes the selection of L and ϵ for real data. Each interval contains an unknown, random number of pulses and may contain zero pulses.

    [0018] We estimate the total photon energy x.sub.j in the interval [T.sub.j, T.sub.j+L) using the sampled raw observations. Since the area under each pulse is proportional to the photon energy A.sub.j defined in (1), we let

    [00005] x_j = Σ_{k=T_j}^{T_j+L−1} s(k)   (9)

    [0019] The number of photon arrivals, the energy of each arriving photon and the detector output noise in each interval [T.sub.j, T.sub.j+L) are assumed to be random and independent of other intervals. For pulse shapes with exponential decay, a small amount of the photon energy arriving in an interval may be recorded in the next interval. The amount of leakage is proportional to ϵ, and is negligible for sufficiently small ϵ. Consequently, the estimates x.sub.1, x.sub.2, . . . may be treated as the realization of a weakly-dependent, stationary process where each estimate is identically distributed according to the random variable X. This relationship is illustrated in FIG. 1 for the noise free case using a typical pulse shape.
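    The interval selection of (7)-(8) and the interval sums of (9) can be sketched as follows. This is a minimal illustration: the synthetic exponential pulse shape, the noise level and the values of L and ϵ are assumptions for the demo, not values from the text.

```python
import numpy as np

def interval_sums(s, L, eps):
    """Scan for nonoverlapping intervals [T_j, T_j + L) with |s| < eps at both
    endpoints (eqs. 7-8) and return the sum of samples over each one (eq. 9)."""
    sums, k, K = [], 0, len(s)
    while k + L < K:
        if abs(s[k]) < eps and abs(s[k + L]) < eps:
            sums.append(s[k:k + L].sum())
            k += L                     # next interval starts at or after T_j + L
        else:
            k += 1
    return np.array(sums)

# Synthetic detector trace: unit-area exponential pulses plus Gaussian noise.
rng = np.random.default_rng(0)
K, n_pulses = 20_000, 200
t = np.arange(100)
pulse = np.exp(-t / 5.0)
pulse /= pulse.sum()                   # unit area, so the sum estimates energy
s = np.zeros(K)
arrivals = rng.choice(K - 200, size=n_pulses, replace=False)
for a, amp in zip(arrivals, rng.uniform(1.0, 3.0, size=n_pulses)):
    s[a:a + 100] += amp * pulse
s += rng.normal(0.0, 1e-3, K)

x = interval_sums(s, L=150, eps=0.01)
print(len(x), x.mean())                # number of intervals, mean energy per interval
```

    Each returned sum plays the role of one x.sub.j; an interval may contain zero, one or several piled-up pulses.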

    3.2 Approximation with Compound Poisson Process

    [0020] In this subsection we describe the distribution of X in terms of ƒ.sub.A(x). We will then invert this in section 3.3, to obtain an estimator for the density ƒ.sub.A(x). Using (9), (2), (1) and the fact that Φ(t) is causal, we have

    [00006] x_j = Σ_{ℓ: τ_ℓ < T_j} a_ℓ Σ_{k=T_j}^{T_j+L−1} Φ(k−τ_ℓ) + Σ_{ℓ: T_j ≤ τ_ℓ < T_j+L} a_ℓ Σ_{k=T_j}^{T_j+L−1} Φ(k−τ_ℓ) + Σ_{ℓ: τ_ℓ ≥ T_j+L} a_ℓ Σ_{k=T_j}^{T_j+L−1} Φ(k−τ_ℓ) + Σ_{k=T_j}^{T_j+L−1} w(k)   (10)

    [0021] As justified below, this simplifies to

    [00007] x_j ≈ y_j + z_j   (11)

    where

    y_j = Σ_{ℓ: T_j ≤ τ_ℓ < T_j+L} a_ℓ   (12)

    and

    z_j = Σ_{k=T_j}^{T_j+L−1} w(k).   (13)

    [0022] Both y.sub.j and z.sub.j are i.i.d. sequences of random variables. We denote their distributions by Y and Z. The distribution of Z is fully determined from the distribution of w(t), which is assumed zero-mean Gaussian with known variance σ.sup.2. Moreover, Y is a compound Poisson process since the number of terms in the summation (number of photon arrivals in an interval of length L) has Poisson statistics. Equations (11)-(13) are justified as follows. The first term of (10) represents leakage from earlier intervals and is approximately zero. This is easily shown for Gaussian noise by performing a Taylor expansion about ϵ=0

    [00008] Pr(|s(k)| < ϵ) = ½ erf((r(k)+ϵ)/(√2 σ)) − ½ erf((r(k)−ϵ)/(√2 σ))   (14)

    ≈ 0.79788 (ϵ/σ) e^{−r(k)²/(2σ²)} + O(ϵ³).   (15)
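    As a quick numeric sanity check of (14)-(15) (with illustrative values for r(k), σ and ϵ, not taken from the text):

```python
import math

# Exact probability from eq. (14) versus the first-order term of eq. (15).
r, sigma, eps = 0.5, 1.0, 0.01
exact = 0.5 * math.erf((r + eps) / (math.sqrt(2) * sigma)) \
      - 0.5 * math.erf((r - eps) / (math.sqrt(2) * sigma))
approx = 0.79788 * (eps / sigma) * math.exp(-r**2 / (2 * sigma**2))
print(exact, approx)   # agree to within O(eps^3)
```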

    [0023] Thus there is a finite but small probability that some energy belonging to a previous interval will be included in the current estimate. In practice, this contribution is comparable to the noise for sufficiently small ϵ. The third term in (10) is zero since Φ(t) is causal. The second term in (10) can be written as

    [00009] Σ_{ℓ: T_j ≤ τ_ℓ < T_j+L} a_ℓ Σ_{k=T_j}^{T_j+L−1} Φ(k−τ_ℓ) ≈ Σ_{ℓ: T_j ≤ τ_ℓ < T_j+L} a_ℓ   (16)

    where we assume the pulse shapes Φ(t) are sufficiently smooth such that

    [00010] Σ_{k=T_j}^{T_j+L−1} Φ(k−τ_ℓ) ≈ ∫ Φ(t) dt = 1.

    It approximates the total energy of all the photons arriving in the interval [T.sub.j, T.sub.j+L). Let ν.sub.j designate the number of photon arrivals in the interval [T.sub.j, T.sub.j+L). We assume ν.sub.j is a realization of a homogeneous Poisson process with rate parameter λ, where λ is expressed in terms of the expected number of photons per interval of length L. Henceforth we shall assume that (11) holds exactly, and write

    [00011] X = Y + Z ( 17 )

    [0024] Finally, we write x.sub.j as

    [00012] x_j ≈ y_j + z_j   (18)

    where we assume Z has known variance σ.sup.2. In this subsection we model the statistic of section 3.1 using a compound Poisson process. This allows us to derive an estimator for the density ƒ.sub.A(x) in terms of observable quantities. The number of photons arriving in the interval [T.sub.j, T.sub.j+L) is a Poisson random variable which we designate ν.sub.j. The total energy in the interval Y can be modelled as a compound Poisson process i.e.,

    [00013] y_j = { Σ_{k=0}^{ν_j−1} a_{ℓ_{j,1}+k},  ν_j > 0;  0,  ν_j = 0 }   (19)

    ν_j ∼ Pn(λ)   (20)

    where ℓ_{j,1} = min{ℓ : T.sub.j ≤ τ_ℓ < T.sub.j+L} is the index of the first photon arrival time in the interval, the arrival times are assumed ordered, and the a_ℓ representing photon energy are independent realizations of the random variable A with density function ƒ.sub.A(x). The {ν.sub.j} form a homogeneous Poisson process with rate parameter λ. The Poisson rate λ is expressed in terms of the expected number of photons per interval of length L.

    [00014] y_j ≈ Σ_{k=T_j}^{T_j+L} r(k).   (21)

    [0025] The relationship between realizations of Y and the sampled detector response is illustrated in FIG. 1. The observed x.sub.j can be approximated in terms of the y.sub.j by substituting (2) into (9),

    [00015] x_j = Σ_{k∈[T̂_j, T̂_j+L]} (r(k) + w(k))   (22)

    = Σ_{k=T_j}^{T_j+L} r(k) + Σ_{k=T_j}^{T_j+L} w(k)   (23)

    ≈ y_j + z_j   (24)

    where y.sub.j is the realization of the unobservable random variable Y.sub.j that represents the photon energy in an interval of the discrete-time detector response,

    [00016] Y_j = Σ_{k∈[T_j, T_j+L]} R_k   (25)

    [0026] where z.sub.j is a realization of Z, an independent random variable representing errors in the sampling process and estimation of T.sub.j. We assume Z has known variance σ.sup.2. With these definitions of X and Y, the number of intervals which can be found in a finite length of detector output is a random variable N. At high count-rates this approach succumbs to paralysis, as the probability of being able to partition the observed data into intervals approaches zero. The onset of paralysis occurs at higher count-rates compared to pile-up rejection based strategies, since multiple photons are permitted to pile up within each interval. Assume the time-series defined in (3)-(6) has been sampled uniformly. Without loss of generality, assume unit sample intervals beginning at t.sub.0=0 i.e., t.sub.k=k, 0≤k<K. Let R be a discrete-time random process representing the sampled detector response of (1). Let Y={Y.sub.j: 0≤j<N} be a discrete-time random process whose components Y.sub.j represent the total photon energy arriving during a fixed time interval. A compound Poisson process can be used to model Y, i.e.,

    [00017] Y_j = { Σ_{k=1}^{ν_j} A_k,  ν_j > 0;  0,  ν_j = 0 }   (26)

    ν_j ∼ Pn(λ)   (27)

    where ν.sub.j is an independent Poisson random variable, and A.sub.k are independent identically distributed random variables with density function ƒ.sub.A(x). The {ν.sub.j} form a homogeneous Poisson process with rate parameter λ. The process Y is not directly observable. Assume the pulse shape Φ(t) has finite support. Let 𝟙.sub.A(t) be the indicator function for the set A. Let the pulse length ℓ.sub.Φ be given by ℓ.sub.Φ=sup({t: Φ(t)>0})−inf({t: Φ(t)>0}). Let S=R+W={s(k): 0≤k<K} be a discrete-time random process representing the observed detector output given by (2). It consists of the detector response R corrupted by a noise process W. Without loss of generality, we assume unit sample intervals. From the observations S we form the process X, where

    [00018] X j = Y j + Z j ( 28 )

    and where Z.sub.j is a random variable from an independent noise process of known variance σ.sup.2. A simple model for testing theory is obtained when we let the pulse shape Φ(t)=𝟙.sub.(0,1)(t) in (1), in which case we let X.sub.j=S.sub.j, and N is simply the sample length K. Obtaining X.sub.j from S is more complicated for real data. In that case we partition the process S into non-overlapping blocks of length L, where L>ℓ.sub.Φ (the pulse length). The Poisson rate λ is expressed in photons per block. The start of each block T.sub.j∈ℤ is chosen such that the total energy of any pulse is fully contained within the block in which it arrives

    [00019] T_j = min{k : R_k = 0, R_{k+L} = 0, T_{j−1}+L < k < K−L}   (29)

    FIG. 1 shows that

    [00020] Y_j = Σ_{k=T_j}^{T_j+L} R_k.

    We let

    [0027] [00021] X_j = Σ_{k=T̂_j}^{T̂_j+L} S_k   (30)

    where {circumflex over (T)}.sub.j is an estimate of T.sub.j. Section 4.2 describes the selection of L and ϵ for real data. With this definition of X.sub.j, the number of components in Y becomes a random variable for a given sample length K. At high count-rates this approach succumbs to paralysis, as the probability of being able to create a block approaches zero. The onset of paralysis occurs at higher count-rates compared to pile-up rejection based strategies, since multiple photons are permitted to pile up within each block. Let Y={Y.sub.j: 0≤j<N} be a discrete-time random process whose components Y.sub.j are given by

    [00023] Y_j = Σ_{k=T_j}^{T_j+L} R_k   (31)

    T_j = min{k : R_k < d, R_{k+L} < d, k > T_{j−1}+L}   (32)

    [0028] where L∈ℤ is a constant chosen such that L exceeds the pulse length, and d is a small threshold value close to zero. The random variable Y.sub.j thus represents the total photon energy arriving during a fixed time interval of length L. The value of d ensures the signal associated with photon arrivals is very small at the start and end of each interval. This is illustrated in FIG. 1. A compound Poisson process can be used to model Y, i.e.,

    [00024] Y_j = { Σ_{k=1}^{ν_j} A_k,  ν_j > 0;  0,  ν_j = 0 }   (33)

    ν_λ = {ν_j : 0 ≤ j < N}   (34)

    ν_j ∼ Pn(λ)   (35)

    where ν.sub.λ is a homogeneous Poisson process with rate parameter λ, and A.sub.k are independent identically distributed random variables with density function ƒ.sub.A(x). Let S=R+W be a discrete-time random process representing the sampled detector output given by (2). It consists of the detector response R corrupted by a noise process W. The process Y is not directly observable. Using (2), (25) and (32), we model observations by the process X={X.sub.j: 0≤j<N}, i.e.,

    [00025] X_j = Σ_{k=T_j}^{T_j+L} S_k   (36)

    = Σ_{k=T_j}^{T_j+L} (R_k + W_k)   (37)

    = Y_j + Σ_{k=T_j}^{T_j+L} W_k   (38)

    ≜ Y_j + Z_j   (39)

    [0029] where Z is a noise process of known variance σ.sup.2. All the random variables (ν.sub.j, A.sub.1, . . . , A.sub.ν.sub.j, Z.sub.j) involved in modelling a given observation X.sub.j are assumed independent. Let X.sub.1, X.sub.2, . . . , X.sub.N be N independent, identically distributed observations. Let X, Y, Z, A be the collections of X.sub.j, Y.sub.j, Z.sub.j, A.sub.j: 0≤j<N. Let the corresponding characteristic functions be ϕ.sub.X, ϕ.sub.Y, ϕ.sub.Z, ϕ.sub.A.

    3.3 Basic Form of Estimator

    [0030] We seek to invert the mapping from the distribution of photon energy A to the distribution of X. Our strategy is to first obtain the characteristic function of X in terms of ƒ.sub.A, then invert the mapping assuming the count-rate and noise characteristics are known. Let ϕ.sub.X, ϕ.sub.Y, ϕ.sub.Z, ϕ.sub.A be the characteristic functions of X, Y, Z, A. It is well known [15] that for the compound Poisson process Y with rate λ,

    [00026] ϕ_Y(u) = e^{−λ} e^{λϕ_A(u)}   (40)

    and since X=Y+Z then

    [00027] ϕ_X(u) = ϕ_Y(u) ϕ_Z(u).   (41)
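    Equations (40) and (41) are straightforward to verify by simulation. The sketch below draws compound Poisson sums with an assumed exponential amplitude law (chosen only because its characteristic function has a simple closed form, here in the plain E[e^{iuY}] convention) and compares the empirical characteristic function with e^{−λ}e^{λϕ_A(u)}:

```python
import numpy as np

# Monte Carlo check of eq. (40) for a compound Poisson sum Y.
rng = np.random.default_rng(1)
lam, n = 2.0, 200_000
nu = rng.poisson(lam, n)                                  # arrivals per interval
y = np.array([rng.exponential(1.0, k).sum() for k in nu]) # compound sums

u = 0.7                                            # an arbitrary evaluation point
phi_A = 1.0 / (1.0 - 1j * u)                       # CF of Exp(1) amplitudes
phi_Y_theory = np.exp(-lam) * np.exp(lam * phi_A)  # eq. (40)
phi_Y_mc = np.mean(np.exp(1j * u * y))             # empirical CF of Y
print(abs(phi_Y_mc - phi_Y_theory))                # small Monte Carlo error
```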

    [0031] Given the observations x.sub.j we can form an empirical estimate {circumflex over (ϕ)}.sub.X of the characteristic function of X. Treating this as the true characteristic function, we can invert (40), (41) to obtain the characteristic function of A and then take the Fourier transform to find the amplitude spectrum ƒ.sub.A. Specifically, using (40), (41) and exploiting the assumption that Z is Gaussian to ensure ϕ.sub.Z(u) will be non-zero ∀u∈ℝ, we let γ: ℝ→ℂ be the curve described by

    [00028] γ(u) = ϕ_X(u) / (e^{−λ} ϕ_Z(u))   (42)

    = e^{λϕ_A(u)}   (43)

    [0032] Temporarily assuming ∀u, γ(u)≠0, after taking the distinguished logarithm of (43) and rearranging we have

    [00029] ϕ_A(u) = (1/λ) dlog(γ)(u).   (44)

    [0033] Ideally, ƒ.sub.A is recovered by taking a Fourier transform

    [00030] ƒ_A(x) = ∫_{−∞}^{∞} e^{−i2πux} ϕ_A(u) du   (45)
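    As a sanity check of the inversion (45), the density of an assumed single Gaussian amplitude law can be recovered by discretizing the integral; the grid limits and spacing are illustrative choices, and the convention ϕ_A(u)=E[e^{i2πuA}] matches (45):

```python
import numpy as np

# Recover f_A at one point by a truncated, discretized version of eq. (45).
mu, sig = 5.0, 1.0
u = np.linspace(-4.0, 4.0, 2001)
du = u[1] - u[0]
phi_A = np.exp(1j * 2 * np.pi * mu * u - 2 * np.pi**2 * sig**2 * u**2)

x = mu                                            # evaluate at the peak
f_est = np.real(np.sum(np.exp(-1j * 2 * np.pi * u * x) * phi_A) * du)
f_true = 1.0 / (np.sqrt(2 * np.pi) * sig)         # N(mu, sig^2) density at x = mu
print(f_est, f_true)
```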

    [0034] The basic form of our proposed estimator is given in (88) and is derived from (45) via a sequence of steps. First, ϕ.sub.X is estimated from the data (Step 1). Simply substituting this estimate for ϕ.sub.X in (42) does not produce an ISE optimal estimate of γ. The approximate ISE is obtained from an approximate estimate of the error distribution of ϕ.sub.X (Step 2). We then determine a sensible windowing function G(u) (in Step 3) and estimate γ by

    [00031] γ̂(u) = G(u) ϕ̂_X(u) / (e^{−λ} ϕ_Z(u)).   (46)

    [0035] The windowing function G(u) is designed to minimise the approximate ISE between ƒ.sub.A and our estimate of ƒ.sub.A based on (44), (45) and (46), but with γ in (44) replaced by (46). A similar idea is used for estimating ϕ.sub.A from (44): a weighting function H(u) is found (in Step 4) such that replacing ϕ.sub.A in (45) by

    [00032] ϕ̂_A(u) = H(u) (1/λ) dlog(γ̂)(u)   (47)

    produces a better estimate of ƒ.sub.A than using the unweighted estimate 1/λd log ({circumflex over (γ)}). Finally, the weighting function H(u) is modified (in Step 5) to account for the integral in (45) having to be replaced by a finite sum in practice. The following subsections expand on these five steps.

    3.4 Estimating ϕ.sub.X

    [0036] An estimate of ϕ.sub.X(u) is required to estimate γ(u). In this subsection we define a histogram model and describe our estimation of ϕ.sub.X(u) based on a histogram of the x.sub.j values. Assume N intervals (and corresponding x.sub.j values) have been obtained from a finite length data sample. Although the empirical characteristic function

    [00033] ϕ̂_X^emp(u) = (1/N) Σ_{j=0}^{N−1} e^{iux_j}   (48)

    provides a consistent, asymptotically normal estimator of the characteristic function [21], it has the disadvantage of rapid growth in computational burden as the number of data points N and the required number of evaluation points u∈ℝ increases. Instead, we use a histogram based estimator that has a lower computational burden. Assume that a histogram of the observed X values is represented by the 2M×1 vector n, where the count in the mth bin is given by

    [00034] n_m = Σ_{k=0}^{N−1} 𝟙_{[m−0.5, m+0.5)}(x_k),  m ∈ {−M, …, M−1}.   (49)

    [0037] All bins of the histogram have equal width. The bin-width is chosen in relation to the magnitude of the x.sub.j values. Since the effect of choosing a different bin width is simply equivalent to scaling the x.sub.j values, we assume the bin-width to be unity without loss of generality. The bins are apportioned equally between non-negative and negative data values. The number of histogram bins 2M influences the estimator in various ways, as discussed in later subsections. For now, it is sufficient to assume that 2M is large enough to ensure the histogram includes all x.sub.j values. We estimate ϕ.sub.X(u) by forming a histogram of scaled x.sub.j values and taking the inverse discrete Fourier transform, i.e.,

    [00035] ϕ̂_X(u) = Σ_{m=−M}^{M−1} (n_m/N) e^{i2πum/(2M)}.   (50)

    [0038] This is a close approximation of the empirical characteristic function but where the x.sub.j terms have been rounded to the nearest histogram bin centre (and u contracted by a factor of 2π). The term n.sub.m simply counts the number of rounded terms with the same value. Clearly, this function can be efficiently evaluated at the discrete points u ∈ {−M, …, M−1} using the fast Fourier transform (FFT).
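    A minimal sketch of (49)-(50), with synthetic stand-ins for the x.sub.j: build the histogram on unit-width bins and evaluate the sum in (50); the result matches the empirical characteristic function (48) applied to bin-centre-rounded data (an FFT gives the same values more efficiently).

```python
import numpy as np

# Histogram-based estimate of phi_X (eq. 50) versus the empirical CF (eq. 48)
# evaluated on values rounded to bin centres. The x_j are synthetic stand-ins.
rng = np.random.default_rng(2)
x = rng.normal(10.0, 3.0, 50_000)
M = 64                                               # 2M unit-width bins
m = np.arange(-M, M)
n_m, _ = np.histogram(x, bins=np.arange(-M, M + 1) - 0.5)
N = n_m.sum()

u = np.arange(-M, M)                                 # evaluation points
phase = np.exp(1j * 2 * np.pi * np.outer(u, m) / (2 * M))
phi_hat = phase @ (n_m / N)                          # eq. (50)

phi_emp = np.array([np.mean(np.exp(1j * 2 * np.pi * uu * np.round(x) / (2 * M)))
                    for uu in u])                    # eq. (48) on rounded x_j
print(np.max(np.abs(phi_hat - phi_emp)))             # essentially zero
```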

    3.5 Error Distribution of {circumflex over (ϕ)}.sub.X

    [0039] The design of the filters G(u) and H(u) in (46) and (47) relies on the statistics of the errors between {circumflex over (ϕ)}.sub.X and the true characteristic function. In this subsection we define and describe the characteristics of these errors. We assume the density function ƒ.sub.X is sufficiently smooth (i.e., |d.sup.nƒ.sub.X(u)/du.sup.n|≤C.sub.n<∞ ∀n∈ℕ) and that the width of the histogram bins is sufficiently small (relative to the standard deviation of the additive noise Z) such that the errors introduced by rounding x.sub.j values to the centre of each histogram bin are approximately uniformly distributed across each bin, have zero mean and are small relative to the peak spreading caused by Z. In other words, the source of error arising from the binning of x.sub.j values is considered negligible. Due to both the statistical nature of Poisson counting and the expected count in each bin being non-integer (𝔼[n.sub.m]∈ℝ.sub.≥0), discrepancies exist between the observed number of counts in any given histogram bin and the expected number of counts for that bin. We combine these two sources of error in our model and refer to it as 'histogram noise'. We emphasize that this noise is distinct from the additive noise Z modelled in (11), which causes peak spreading in the histogram. Let the probability that a realization of X falls in the m-th bin be

    [00036] px_m = Pr(m−0.5 ≤ X < m+0.5)   (51)

    [0040] Let the normalized histogram error ϵ.sub.m in the m-th bin be the difference between the observed count n.sub.m and the expected count 𝔼[n.sub.m]=N px.sub.m in the mth bin, relative to the total counts in the histogram N, i.e.,

    [00037] ϵ_m = (n_m − N px_m)/N   (52)

    [0041] Using (50), (51) and (52) we have

    [00038] ϕ̂_X(u) = Σ_{m=−M}^{M−1} (n_m/N) e^{i2πum/(2M)}   (53)

    = Σ_{m=−M}^{M−1} px_m e^{i2πum/(2M)} + Σ_{m=−M}^{M−1} ϵ_m e^{i2πum/(2M)}   (54)

    ≈ ϕ_X(u) + ϕ_ϵ(u)   (55)

    [0042] If the histogram is modelled as a Poisson vector, it can be shown that

    [00039] 𝔼[ϵ_i] = 0   (56)

    𝔼[ϵ_i ϵ_j] = { px_j/N,  i = j;  0,  i ≠ j }   (57)

    𝔼[|ϕ_ϵ|²] = 1/N.   (58)

    [0043] Since the characteristics of the histogram noise can be expressed in terms of the total number of observed intervals N, the impact of using observation data of finite length may be accounted for by incorporating this information into the design of G(u) and H(u).
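    The moments (56)-(58) are easy to check by simulation, modelling the bin counts as independent Poisson variables; the smooth bin probabilities below are an arbitrary illustrative choice.

```python
import numpy as np

# Monte Carlo check of eq. (58): with Poisson bin counts n_m of mean N*px_m,
# the normalized errors eps_m of eq. (52) give E[|phi_eps(u)|^2] = 1/N.
rng = np.random.default_rng(3)
M, N, trials = 16, 2000, 4000
m = np.arange(-M, M)
px = np.exp(-0.5 * (m / 4.0) ** 2)
px /= px.sum()                                      # smooth bin probabilities

u = 5                                               # an arbitrary frequency index
w = np.exp(1j * 2 * np.pi * u * m / (2 * M))
counts = rng.poisson(N * px, size=(trials, 2 * M))  # simulated histograms
eps = (counts - N * px) / N                         # normalized errors, eq. (52)
phi_eps = eps @ w                                   # histogram-noise CF at u
print(np.mean(np.abs(phi_eps) ** 2), 1.0 / N)      # both close to 1/N
```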

    3.6 Estimating γ

    [0044] Having obtained {circumflex over (ϕ)}.sub.X, the next task is to estimate γ. Rather than substitute {circumflex over (ϕ)}.sub.X(u) for ϕ.sub.X(u) in (42), we instead use (46) as the estimator, which requires us to choose a windowing function G(u). In this subsection we attempt to find a function G(u) that is close to optimal. When the distribution of errors in {circumflex over (ϕ)}.sub.X(u) are considered, the windowing function G(u)=G.sub.opt(u) that results in the lowest ISE estimator of the form given in (46) is

    [00040] G_opt(u) = 1 / (1 + e^{−2λℜ{ϕ_A(u)}} / (N e^{−2λ} |ϕ_Z(u)|²))   (59)

    where ℜ{z} denotes the real component of z∈ℂ. We cannot calculate G.sub.opt(u) since ϕ.sub.A(u) is unknown, so instead we attempt to find an approximation. We let

    [00041] G(u) = 1 / (1 + 1/(N e^{−2λ} |ϕ_Z|²(u))).   (60)

    [0045] This is justified by considering the magnitude of the relative error between the functions g.sub.opt(u) and g.sub.1(u) where

    [00042] g_opt(u) = 1 + e^{−2λℜ{ϕ_A(u)}} / (N e^{−2λ} |ϕ_Z|²(u))   (61)

    g_1(u) = 1 + 1/(N e^{−2λ} |ϕ_Z|²(u)).   (62)

    [0046] The magnitude of the relative error is given by

    [00043] |(g_opt − g_1)/g_1| = |e^{−2λℜ{ϕ_A}} − 1| / (N e^{−2λ} |ϕ_Z|² + 1).   (63)

    [0047] Since ℜ{ϕ.sub.A}∈[−1,1], we see the right hand side of (63) is maximized when ℜ{ϕ.sub.A(u)}=−1. The relative error is thus bounded by

    [00044] |(g_opt − g_1)/g_1| ≤ (e^{2λ} − 1) / (N e^{−2λ} |ϕ_Z|² + 1)   (64)

    which justifies the approximation when λ is small, or when N|ϕ.sub.Z|.sup.2(u)>>e.sup.4λ. Furthermore, we note that the above bound is quite conservative. The distribution of photon energies in spectroscopic systems can typically be modelled as a sum of K Gaussian peaks, where the kth peak has location μ.sub.k and scale σ.sub.k i.e.,

    [00045] ƒ_A(x) = Σ_{k=0}^{K−1} α_k (1/(√(2π) σ_k)) e^{−(x−μ_k)²/(2σ_k²)}   (65)

    where

    Σ_{k=0}^{K−1} α_k = 1.   (66)

    [0048] Consequently, the characteristic function will have the form

    [00046] ϕ_A(u) = Σ_{k=0}^{K−1} α_k e^{−2π²σ_k²u²} e^{i2πμ_k u}.   (67)

    i.e., oscillations within an envelope that decays as e^{−cu²} for some c>0. The upper bound given by (64) is quite conservative since |ℜ{ϕ.sub.A}|<<1 for most values of u. The approximation error will be significantly smaller at most evaluation points across the spectrum. Having chosen G(u), we can form an estimate of γ using (46). The windowing function reduces the impact of histogram noise arising from the finite number of data samples. For large values of Ne.sup.−2λ|ϕ.sub.Z(u)|.sup.2, the impact of windowing is negligible and the estimator is essentially the same as using (42) directly. However, in the regions where

    [00047] ln N < 2λ + 4π²σ²u²   (68)

    the windowing becomes significant, and acts to bound our estimate of γ. Using the fact that the noise Z is Gaussian (so ϕ.sub.Z(u)∈ℝ and hence |ϕ.sub.Z|²=ϕ.sub.Z²), and since e.sup.−2λ>0, we see that

    [00048] |γ̂(u)| = |ϕ̂_X(u) / (e^{−λ} ϕ_Z(u)) · 1/(1 + 1/(N e^{−2λ} ϕ_Z²(u)))|   (69)

    = |ϕ̂_X(u)| e^{−λ} ϕ_Z(u) / (e^{−2λ} ϕ_Z²(u) + 1/N)   (70)

    < √N.   (71)

    [0049] This ensures the argument to the distinguished logarithm in (47) remains finite even though lim.sub.u→∞ ϕ.sub.Z(u)=0.
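    The effect of the window on the estimate (46) can be illustrated numerically. Here λ, N, σ and the stand-in for ϕ̂_X are assumed values, and G is written as 1/(1+e^{2λ}/(Nϕ_Z²)), which is algebraically equivalent to (60):

```python
import numpy as np

# Illustration of eqs. (46) and (60): the window G keeps gamma-hat bounded
# even as phi_Z(u) -> 0. All parameters and the stand-in phi_X are assumptions.
lam, N, sigma = 0.5, 10_000, 1.0
u = np.linspace(-3.0, 3.0, 601)
phi_Z = np.exp(-2 * np.pi**2 * sigma**2 * u**2)          # Gaussian noise CF
phi_A = np.exp(-u**2)                                    # assumed amplitude CF
phi_X_hat = np.exp(-lam) * np.exp(lam * phi_A) * phi_Z   # noiseless stand-in, eqs. (40)-(41)

G = 1.0 / (1.0 + np.exp(2 * lam) / (N * phi_Z**2))       # eq. (60), rearranged
gamma_hat = G * phi_X_hat / (np.exp(-lam) * phi_Z)       # eq. (46)
print(np.max(np.abs(gamma_hat)))                         # finite, despite phi_Z -> 0
```

    Without G, dividing by e^{−λ}ϕ_Z(u) would blow up wherever ϕ_Z(u) underflows; with G the estimate stays well below √N.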

    3.7 Estimating ϕ.sub.A

    [0050] Once {circumflex over (γ)} has been obtained, we proceed to estimate ϕ.sub.A using (47). This requires another windowing function H(u). In this subsection we find a function H(u) for estimating ϕ.sub.A that is close to ISE optimal. We begin by defining a function ψ(u) for notational convenience

    [00049] ψ(u) = (1/G(u)) e^{−λ} ϕ_Z(u).   (72)

    [0051] The ISE is minimized when H(u)=H.sub.opt(u), where the optimal filter H.sub.opt(u) is given by

    [00050] H_opt(u) = ϕ_A(u)/ϕ̂_A(u) = [(1/λ) dlog(ϕ_X/(e^{−λ}ϕ_Z))(u)] / [(1/λ) dlog((ϕ_X + ϕ_ϵ)/ψ)(u)]   (73)

    = ϕ_A(u) / (ϕ_A(u) + (1/λ) dlog(ϕ̂_X/ϕ_X)(u) + (1/λ) dlog(G)(u))   (74)

    [0052] Again, we cannot calculate the optimal filter by using (73)-(74) since ϕ.sub.X(u), ϕ.sub.A(u) and ϕ.sub.ϵ(u) are unknown. We instead make the following observations to obtain an approximation of the ISE-optimal filter.

    3.7.1 Initial Observations

    [0053] The optimal filter remains close to unity as long as the estimated ϕ̂_A(u) remains close to the true value of ϕ_A(u). This will invariably be the case for small values of u, since

    [00051]  𝔼[|ϕ_ϵ(u)|] = √(π/(4N))   (75)
     ≪ |ϕ_X(u)| ≈ 1 for small u   (76)

    [0054] Furthermore, equation (73) shows that if |ϕ_ϵ(u)| ≪ |ϕ_X(u)|, then ϕ̂_X(u) = ϕ_X(u) + ϕ_ϵ(u) ≈ ϕ_X(u), so H_opt(u) ≈ 1. For larger values of u, when |ϕ_X(u)| becomes comparable to or less than |ϕ_ϵ(u)|, the estimator

    [00052]  ϕ̂_A(u) = (1/λ) d log((ϕ_X + ϕ_ϵ)/ψ)(u)

    is dominated by noise and no longer provides useful estimates of ϕ_A(u). In the extreme case |ϕ_X(u)| ≪ |ϕ_ϵ(u)|, so |ϕ̂_X(u)| ≈ |ϕ_ϵ(u)| and hence

    [00053]  ℜ{ϕ̂_A} ≈ (1/λ) ln |ϕ_ϵ/ψ|   (77)

    [0055] The window H(u) should exclude these regions from the estimate, as the bias introduced in doing so will be less than the variance of the unfiltered noise. Unfortunately, the estimate of ϕ_A(u) can be severely degraded well before this boundary condition is reached, so (77) is not particularly helpful. A more useful method for detecting when noise begins to dominate is as follows.

    3.7.2 Filter Design Function

    [0056] Further manipulation of (67) shows that, for typical spectroscopic systems, the magnitude of ϕ_A will have the form

    [00054]  |ϕ_A|²(u) = Σ_{k=0}^{K−1} α_k² e^{−4π²σ_k²u²} + Σ_{k=0}^{K−1} Σ_{j=0, j≠k}^{K−1} α_k α_j cos(2π(μ_k − μ_j)u) e^{−2π²(σ_k²+σ_j²)u²}   (78)

    i.e., a mean component that decays according to the peak widths σ_k, and a more rapidly decaying oscillatory component that varies according to the locations of the spectral peaks μ_k. In designing the window H(u), we are interested in attenuating the regions of |ϕ̂_A| where |ϕ_A|² ≲ |ϕ_ϵ/ψ|², i.e., where the signal power is less than the histogram noise that has been enhanced by the removal of ϕ_Z during the estimation of γ. To obtain an estimate of |ϕ_A|, a low-pass, Gaussian-shaped filter H_lpf(u) is convolved with |ϕ̂_A| to attenuate all but the slowly varying, large-scale features of |ϕ̂_A|. We denote this |ϕ̂_Asmooth|(u)

    [00055]  |ϕ̂_Asmooth|(u) = |(1/λ) d log(γ̂)(u)| ⋆ H_lpf(u).   (79)

    [0057] We see that |ϕ_ϵ(u)| has a Rayleigh distribution with scale parameter

    [00056]  σ_Ray = 1/√(2N).

    Consequently

    [0058] [00057]  |ϕ_ϵ(u)|/(λψ(u)) ~ Rayleigh( σ_Ray = 1/(λψ(u)√(2N)) ).   (80)

    [0059] It is well known that the cumulative distribution function of a Rayleigh-distributed random variable X_Ray is given by

    [00058]  F_Ray(x; σ_Ray) = Pr(X_Ray < x; σ_Ray)   (81)
     = 1 − e^{−x²/(2σ_Ray²)}.   (82)

    [0060] Hence, to assist with computing the window H(u), we will make use of the function

    [00059]  α_min(u) = 1 − e^{−N λ² ψ(u)² |ϕ̂_Asmooth(u)|²}   (83)
     ≈ Pr( |ϕ_ϵ(u)|/(λψ(u)) < |ϕ̂_Asmooth|(u) )   (84)

    to control the shape of H(u). The function α_min(u) provides an indication of how confident we can be that the estimate ϕ̂_A(u) contains more signal energy than noise energy. The approximation in (84) arises from the fact that |ϕ̂_Asmooth| is itself a random variable slightly affected by the noise ϵ. On occasion, particularly for larger values of |u|, the histogram noise may result in sufficiently large values of α_min(u) to give a false sense of confidence, and potentially allow noisy results to corrupt the estimate of ϕ_A. To overcome this problem, the function was modified to be uni-modal in u

    [00060]  α_mod(u) = inf{ α_min(υ) : |υ| ≤ |u| }   (85)

    [0061] This modification was justified on the assumption that Gaussian noise causes ϕ_Z(u) to be decreasing in |u|. Consequently we expect 𝔼[|ϕ_ϵ(u)|/ψ(u)] to be increasing in |u|. If we ignore the local oscillations in ϕ_A(u) that are due to peak locations in ƒ_A(x), the envelope approximated by the smoothed |ϕ_Asmooth|(u) will be non-increasing in |u|. Equation (74) indicates the optimal window has the form λϕ_A(u)/(λϕ_A(u) + d log(ϕ̂_X/ϕ_X)(u) + d log(G)(u)), so the overall window shape will be decreasing in |u|. Hence, if the estimated characteristic function in the region of some u_0 (where the signal-to-noise ratio is high) has determined that the window value should be H(u_0) < 1, then it is reasonable to reject the suggestion that H(u_1) > H(u_0) in the region u_1 > u_0 (where the signal-to-noise ratio will be worse). Using the knowledge that |H_opt(u)| should be close to unity for small |u|, close to zero for large |u|, and should 'roll off' as the signal-to-noise ratio decreases, we consider two potential windowing functions as approximations of H_opt(u).
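    The confidence function (83) and its uni-modal modification (85) are straightforward to compute on the sampled grid. The sketch below assumes the input vector is ordered by increasing |u| from u = 0 (a simplifying assumption relative to the {0, …, M−1, −M, …, −1} ordering used later); the names are illustrative, not from the source.

```python
import numpy as np

def alpha_min(phi_A_smooth_mag, lam, psi, N):
    # (83)-(84): probability that the Rayleigh-distributed noise level
    # |phi_eps|/(lam*psi) lies below the smoothed signal level.
    return 1.0 - np.exp(-N * (lam * psi * np.asarray(phi_A_smooth_mag)) ** 2)

def alpha_mod(a_min):
    # (85): a running infimum outward from u = 0 makes the confidence
    # non-increasing in |u|, preventing noise at high |u| from giving a
    # false sense of confidence.
    return np.minimum.accumulate(np.asarray(a_min))
```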

    3.7.3 Rectangle Window

    [0062] The indicator function provides a very simple windowing function

    [00061]  H(u) = 1{α_mod(u) > α_0}(u).   (86)

    [0063] The threshold value α_0 determines the point at which cut-off occurs, and can be selected manually as desired (e.g., α_0 = 0.95). Once the threshold is chosen, the estimator exhibits similar ISE performance regardless of peak locations in the incident spectra. Rather than requiring the user to select a window width depending on the incident spectrum¹, the width of the window is automatically selected by the data via α_mod(u). While simplicity is the primary advantage of the rectangular window, the abrupt transition region provides a poor model for the roll-off region of the optimal filter. The second filter shape attempts to improve on that.
    ¹ Gugushvili [18] proposed a non-parametric estimator for the general decompounding problem in which a rectangular windowing scheme was used. It requires manual selection of window width, which varies with ϕ_A.

    3.7.4 Logistic Window

    [0064] A window based on the logistic function attempts to model a smoother roll-off. It is given by

    [00062]  H(u) = (1 + e^{−β_0(1.0−α_0)}) / (1 + e^{−β_0(α_mod(u)−α_0)})   (87)

    where α_0 again acts as a threshold of acceptance of the hypothesis that the signal energy is greater than the noise energy in the estimate ϕ̂_A(u). The rate of filter roll-off in the vicinity of the threshold region is controlled by β_0 > 0. This provides a smoother transition region than the rectangle window, reducing Gibbs oscillations in the final estimate of ϕ_A. Once again, although the parameters α_0 and β_0 are chosen manually, they are much less dependent on ϕ_A and can be used to provide close-to-optimal filtering for a wide variety of incident spectra. Typical values used were α_0 = 0.95 and β_0 = 40.0. The performance of the rectangle and logistic window functions is compared in section 4.
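    Both window shapes are simple element-wise functions of α_mod. The following sketch implements (86) and (87) with the typical parameter values quoted above; the function names are illustrative.

```python
import numpy as np

def rect_window(a_mod, alpha0=0.95):
    # (86): pass a frequency point only where the confidence exceeds alpha0.
    return (np.asarray(a_mod) > alpha0).astype(float)

def logistic_window(a_mod, alpha0=0.95, beta0=40.0):
    # (87): smoother roll-off around the same threshold; the numerator
    # normalizes the window so that H = 1 when a_mod = 1.
    a = np.asarray(a_mod)
    return (1.0 + np.exp(-beta0 * (1.0 - alpha0))) / (1.0 + np.exp(-beta0 * (a - alpha0)))
```

    In the limit β_0 → ∞ the logistic shape reduces to the rectangle window.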

    3.8 Estimating ƒ_A

    [0065] Having designed a window function H(u) and thus an estimator ϕ̂_A(u), the final task is to estimate ƒ_A(x) by inverting the Fourier transform. This sub-section describes several issues that arise with numerical implementation. Firstly, it is infeasible to evaluate ϕ̂_X, γ̂(u) and ϕ̂_A numerically on the whole real line. Instead we estimate them at discrete points over a finite interval. The finite interval is chosen sufficiently large that a tolerably small error is incurred as a result of excluding signal values outside the interval. This is justified for ƒ_A(x) being a Gaussian mixture, since the magnitudes of ϕ_X and ϕ_A will decay as e^{−cu²} for some c > 0. The Fast Fourier Transform (FFT) is used to evaluate ϕ̂_X at discrete points, and hence also determines the points where γ̂(u) and ϕ̂_A are evaluated. Likewise, the FFT is used to evaluate the final estimate ƒ̂_A at discrete points. In order to use the FFT, the signals outside the interval should be sufficiently small to reduce the impact of aliasing. The evaluation points also need to be sufficiently dense to avoid any 'phase wrap' ambiguity when evaluating d log(γ̂)(u). Both these objectives can be achieved by increasing the number of bins 2M in the histogram (zero-padding) until a sufficiently large number of bins is attained. As M increases, the sampling density of γ̂ increases, which allows phase wrapping to be detected and managed. A larger M also allows aliasing (caused by the Gaussian-shaped tails of |ϕ_X|) to be negligible. Typically, M was chosen as the smallest power of two sufficiently large that the non-zero values of the histogram were confined to the 'lower half' indexes, i.e., M = min{M : n_m = 0, |m| ∈ {M/2, . . . , M}, M = 2^N, N ∈ ℕ}. Secondly, the distinguished logarithm in (47) is undefined if γ̂(u) = 0. In estimating γ(u) from the data, there is a small but non-zero probability that the estimate will be zero. In this case, the distinguished logarithm in (47) is undefined and the technique fails. As |u| increases, |ϕ_X|(u) decreases and may approach |ϕ_ϵ|(u). When |ϕ_X|(u) and |ϕ_ϵ|(u) have similar magnitudes, the probability of |ϕ_X + ϕ_ϵ| (and hence γ̂) being close to zero can become significant. The filter H(u) should roll off faster than |ϕ_X|(u) approaches |ϕ_ϵ|(u) to reduce the impact this may have on the estimate. Ideally H(u) should be zero in regions where noise may result in |γ̂|(u) being close to zero. Gugushvili has shown [18] that, for a rectangular window, the probability of inversion failure approaches zero as the length of the data set increases, N → ∞.
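    The rule for selecting M may be sketched as follows, for the simplified case of a histogram indexed by non-negative bins only (negative bins are ignored in this sketch); the function name is illustrative.

```python
def choose_M(counts):
    # Smallest M = 2**N such that every non-zero histogram count lies in the
    # 'lower half' indexes m < M/2, leaving bins M/2..M empty as zero padding
    # so that aliasing and phase-wrap ambiguity are kept small.
    top = max((m for m, c in enumerate(counts) if c != 0), default=0)
    M = 1
    while M // 2 <= top:
        M *= 2
    return M
```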

    3.9 Discrete Notation

    [0066] We digress momentarily to introduce additional notation. Throughout the rest of the paper, bold font will be used to indicate a 2M×1 vector corresponding to a discretely sampled version of the named function, e.g., ϕ̂_A represents a 2M×1 vector whose values are given by the characteristic function ϕ̂_A(u) evaluated at the points u ∈ {0, 1, . . . , M−1, −M, . . . , −2, −1}. Square bracket notation [k] is used to index a particular element in the vector, e.g., ϕ̂_A[M−1] has the value of ϕ̂_A(M−1). We also use negative indexes for accessing elements of a vector, in a manner similar to the Python programming language. Negative indexes should be interpreted relative to the length of the vector, i.e., ϕ̂_A[−1] refers to the last element in the vector (which is equivalent to ϕ̂_A[2M−1]).

    3.10 Summary of Estimator

    [0067] The estimation procedure we use may be summarized in the following steps.
    [0068] 1. Partition the sampled time series into intervals using (8).
    [0069] 2. Calculate the x_j value for each interval according to (9).
    [0070] 3. Generate the histogram n from the x_j values.
    [0071] 4. Calculate ϕ̂_X using the inverse FFT to efficiently evaluate (50) at various sample points.
    [0072] 5. Calculate ϕ_Z and G at the appropriate points.
    [0073] 6. Calculate γ̂ via (46) using ϕ̂_X, G and ϕ_Z.
    [0074] 7. Calculate |ϕ_Asmooth(u)|, a low-pass filtered version of

    [00063]  |(1/λ) d log(γ̂)(u)|.

    [0075] 8. Calculate α_mod via (83) and (85).
    [0076] 9. Calculate H using α_mod and either (86) or (87).
    [0077] 10. Calculate ϕ̂_A via (47) using γ̂ and H. If any element of γ̂ is zero and the corresponding element of H is non-zero, the estimation has failed, as the distinguished logarithm is undefined.
    [0078] 11. Calculate ƒ̂_A using the FFT of ϕ̂_A according to

    [00064]  ƒ̂_A[k] = (1/(2M)) Σ_{m=−M}^{M−1} ϕ̂_A[m] e^{−i2πmk/(2M)}.   (88)
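    The steps above can be exercised end to end on synthetic data. The sketch below makes strong simplifying assumptions so each step stays one line: there is no additive detector noise (ϕ_Z = 1, so steps 5, 8 and 9 degenerate and G = H = 1), and every pulse has the same amplitude a_0, so the true ƒ_A is a single spike while the observed interval sums pile up at multiples of a_0. All names are illustrative, not from the source.

```python
import numpy as np

rng = np.random.default_rng(0)
a0, lam, N, n_bins = 10, 1.0, 20000, 256     # n_bins plays the role of 2M

# Steps 1-3: each interval sum is a0 times a Poisson count; histogram them.
x = a0 * rng.poisson(lam, size=N)
p_X = np.bincount(x, minlength=n_bins).astype(float)
p_X /= p_X.sum()

# Step 4: characteristic-function estimate via the inverse FFT (phi_X[0] == 1).
phi_X = np.fft.ifft(p_X) * n_bins

# Step 6: gamma_hat = phi_X / e^{-lam}, since phi_Z = G = 1 here.
gamma = np.exp(lam) * phi_X

# Step 10: distinguished logarithm = log magnitude + unwrapped phase.
phi_A = (np.log(np.abs(gamma)) + 1j * np.unwrap(np.angle(gamma))) / lam

# Step 11: invert per (88); the estimate should spike at bin a0, with the
# pile-up at 2*a0, 3*a0, ... removed.
f_A = np.real(np.fft.fft(phi_A)) / n_bins
```

    With these inputs the recovered density concentrates essentially all of its mass at bin a_0, even though the observed histogram has pile-up peaks at 2a_0, 3a_0 and so on.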

    3.11 Performance Measures

    [0079] The performance of the estimator is measured using the integrated square of the error (ISE). The ISE measures the global fit of the estimated density.

    [00065]  ISE(ƒ̂_A, ƒ_A) = ∫_{−∞}^{∞} (ƒ̂_A(x) − ƒ_A(x))² dx   (89)

    [0080] The discrete ISE measure is given by

    [00066]  ISE(p̂_A, p_A) = Σ_{m=−M}^{M−1} (p̂_A[m] − p_A[m])²   (90)

    where p_A is a 2M×1 vector whose elements contain the probability mass in the region of each histogram bin, i.e.,

    [00067]  p_A[m] = ∫_{m−0.5}^{m+0.5} ƒ_A(x) dx.   (91)

    [0081] The vector p̂_A represents the corresponding estimated probability mass vector.
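    The discrete measure (90) is a one-line computation; the following sketch (with an illustrative function name) is included for completeness.

```python
import numpy as np

def discrete_ise(p_hat, p):
    # (90): sum of squared differences between the estimated and true
    # probability-mass vectors.
    diff = np.asarray(p_hat, dtype=float) - np.asarray(p, dtype=float)
    return float(np.sum(diff ** 2))
```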

    4 NUMERICAL RESULTS OF THE FIRST EMBODIMENT

    [0082] Experiments were performed using simulated and real data.

    4.1 Simulations

    [0083] The ideal density used by Trigano et al. [11] was used for these simulations. It consists of a mixture of six Gaussian distributions and one gamma distribution to simulate Compton background. The mixture density is given by

    [00068]  ƒ ∝ 0.5g + 10𝒩(40, 1) + 10𝒩(112, 1) + 1𝒩(50, 2) + 1𝒩(63, 1) + 2𝒩(140, 1)   (92)

    where 𝒩(μ, σ²) is the density of a normal distribution with mean μ and variance σ². The density of the gamma distribution is given by g(x) = (0.5 + x/200)e^{−(0.5+x/200)}. The density was sampled at 8192 equally spaced integer points to produce the discrete vector p_A of probability mass. The FFT was taken to obtain ϕ_A, a sampled vector of ϕ_A values. [0084] A particular count rate λ was chosen for an experiment, corresponding to the expected number of events per observation interval. The expected pile-up density was obtained via (40), i.e., the discrete vector ϕ_A was scaled by λ, exponentiated, then scaled by e^{−λ}, and finally an FFT was applied

    [00069]  p_Y[m] = FFT(e^{−λ} e^{λϕ_A})[m].   (93)

    [0085] Equation (93) was convolved with a Gaussian to simulate the effect of noise Z smearing out the observed spectrum

    [00070]  p_X = p_Y ⋆ (1/(√(2π)σ)) e^{−m²/(2σ²)}.   (94)
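    The construction of the expected observed density via (93) and (94) may be sketched as follows. The sketch treats the convolution in (94) as circular (an assumption that is harmless when the histogram carries sufficient zero padding); the function and variable names are illustrative.

```python
import numpy as np

def expected_observed_density(p_A, lam, sigma):
    # (93): scale the amplitude characteristic function by lam, exponentiate,
    # scale by e^{-lam}, then apply an FFT to obtain the pile-up density p_Y.
    n = len(p_A)
    phi_A = np.fft.ifft(p_A) * n
    p_Y = np.real(np.fft.fft(np.exp(-lam) * np.exp(lam * phi_A))) / n
    # (94): smear p_Y with a normalized Gaussian kernel (circular convolution
    # implemented in the frequency domain).
    m = np.arange(n)
    d = np.minimum(m, n - m).astype(float)
    kernel = np.exp(-d ** 2 / (2.0 * sigma ** 2))
    kernel /= kernel.sum()
    return np.real(np.fft.ifft(np.fft.fft(p_Y) * np.fft.fft(kernel)))

p_A = np.zeros(64)
p_A[5] = 1.0                               # a single spectral line at bin 5
p_X = expected_observed_density(p_A, lam=1.0, sigma=0.8)
```

    For this single-line input, p_X shows the Poisson-weighted pile-up replicas at bins 0, 5, 10, …, each smeared by the Gaussian noise kernel.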

    [0086] This represents the expected density of the observed spectrum, including pile-up and additive noise. Observation histograms were created using random variables that were distributed according to (94). Experiments were parameterized by the pair (N, λ), where N ∈ {10⁴, 10⁵, 10⁶, 10⁷, 10⁸, 10⁹} and λ ∈ {1.0, 3.0, 5.0}. For each parameter pair (N, λ), one thousand observed histograms were made. Estimates of the probability mass vector p_A were made using (88), with both (86) and (87) used for H(k). A threshold value of α_0 = 0.95 was used for both window shapes, and β_0 = 40.0 for the logistic shape. The discrete ISE measure of the error between each estimate p̂_A and the true vector p_A was recorded. For comparison with asymptotic bandwidth results, estimates were made using a rectangular window whose bandwidth was selected according to the condition 1.3 specified by Gugushvili in [18], i.e., h_N = (ln N)^{−β} where β < ½. We emphasize that the β of Gugushvili's filter is not to be confused with the β_0 of (87). The asymptotic bandwidth criterion was implemented by using

    [00071]  H[k] = 1{|k| < α_0}[k]   (95)
    where α_0 = M/(π(ln N)^β).   (96)

    [0087] Three values for Gugushvili's β were trialed, namely β = ½, ⅓, ¼. [0088] Estimates were also made using a rectangular filter (95) with fixed bandwidths of various values α_0/M ∈ {0.2, 0.4, 0.6, 0.8}. Finally, time-series data was created according to (1) with an idealised rectangular pulse shape and 10⁷ pulses whose energies were distributed according to (92). The pulse length and count rate were chosen to give a Poisson rate Δ = 1.0. The algorithm described by Trigano et al. [11] was used to estimate the underlying amplitude density from a bi-dimensional histogram containing 32×1024 (duration × energy) bins, this choice of bins reportedly giving the best accuracy and reasonable execution times. The performance and processing time of the core algorithm were recorded for comparison with our proposed algorithm. FIG. 2 plots a typical estimate p̂_A made by the data-driven logistic-shaped filter for an experiment with parameter pair (N = 10⁶, λ = 3.0). The true vector p_A (thin solid line) and the observed histogram p̂_X (lower curve containing some noise) are also plotted. Pile-up peaks can be clearly seen in the observed histogram. Although the estimated density suffers from ringing (due to the Gibbs phenomenon), it otherwise estimates the true density and corrects the pile-up that was present in the observed histogram. FIG. 3 plots a typical estimate made at the same operating point as FIG. 2, but with an estimator having a rectangular filter where the bandwidth was selected using (96) and β = ¼. This corresponds to the operating region in FIG. 6 where the performance of the fixed bandwidth filter (β = ¼) approaches that of the data-driven filters. It is evident that, while also correcting pile-up, the resulting estimate contains more noise. FIG. 4 shows the distribution densities of ISE measures as a function of sample count using a rectangular filter and various fixed bandwidths. Lines were plotted between distribution means (MISE) to assist visualization. The results for the data-driven rectangular filter (86) were also plotted, connected with a thicker curve. This clearly illustrates the weakness of fixed bandwidth filtering. For any fixed bandwidth, the ISE decreases as sample count increases, eventually asymptoting as the bias becomes the dominant source of error. At that point (which is noise and bandwidth dependent) the ISE remains largely constant despite increases in sample count. The fixed bandwidth excludes the use of some estimates ϕ̂_A[k] in the final calculation, even when they have a high signal-to-noise ratio (SNR). FIG. 4 also shows the results given by the rectangular filter with our proposed data-driven bandwidth selection. This curve lies close to the inflection point of each fixed bandwidth curve. This indicates the bandwidth selected for the data-driven rectangular filter is close to the optimal bandwidth value (for a rectangular filter) across the range of sample counts. FIG. 5-FIG. 7 show the distribution densities of the ISE measure as a function of the total number of estimates N in each histogram at three count rates λ ∈ {1.0, 3.0, 5.0}. The MISE curves for the logistic and rectangular filters are lower than those obtained using the bandwidth given by (96) for much of the region of application interest. There are various regions where the non-data-driven bandwidth (β = ¼) gives similar performance to the data-driven bandwidths; however, this is not maintained across the whole range of sample counts. The logistic filter shape has slightly better performance than the rectangular filter shape, although the differences between the two filters appear relatively minor to the ISE measure. Table 1 compares the results between the proposed algorithm and the algorithm recently described in [11]. The ISE for both methods was similar at the operating point under test (λ = 1.0, N = 10⁷); however, our proposed algorithm requires considerably less computation.

    TABLE-US-00001
    TABLE 1  Comparison With Algorithm Described in [11]

      Algorithm                                           Avg. ISE     Avg. Time (sec)
      Fast Trigano Algorithm,
        32 × 1024 (duration × energy) bins                1.3 × 10⁻⁵   3.19
      Proposed Algorithm                                    1 × 10⁻⁵   0.019

    4.2 Real Data

    [0089] The estimator was applied to real data to assess its usefulness in practical applications. The threshold value ϵ found in (8) was chosen to be approximately one half the standard deviation of the additive noise w(t). This ensured a reasonably high probability of creating intervals, yet ensured errors in the estimation of interval energy were low. A value for the interval length L was chosen to be approximately four times the 'length' of a typical pulse, that is, four times the length of the interval {t : Φ(t) > ϵ}. An energy histogram was obtained from a manganese sample, with a photon flux rate of nominally 10⁵ events per second. A slight negative skew was present in the shape of the main peaks of the observed histogram, suggesting a complicated noise source had influenced the system. This is barely visible in FIG. 8. The noise was modelled as a bimodal Gaussian mixture rather than a single Gaussian peak. A simple least-squares optimization routine was used to fit bimodal Gaussian parameters Z ~ α₁𝒩(μ₁, σ₁²) + α₂𝒩(μ₂, σ₂²) to the noise peak located around bin index zero. A suitable value for λ was chosen manually. The logistic filter with data-driven bandwidth was used to estimate the true density. FIG. 8 shows plots of the observed and estimated probability mass vectors. The main peaks (bins 450-600) have been enhanced while the pile-up has been attenuated, though not fully removed. The first order pile-up peaks have been reduced. The peak-to-pile-up ratio (the ratio of the height of the main peak to that of the first pile-up peak) has increased from around 6 to around 120. These improvements are comparable to other state of the art systems (e.g., [11]). There are several possible reasons the estimator fails to fully resolve pile-up. The accuracy of the estimator depends on correctly modelling the Gaussian noise peak. The bimodal Gaussian mixture modelled the noise peak such that the maximum error was less than 1% of the noise density peak. Given that the residual pile-up peaks in the estimated spectrum are below 1% of the main peak, the sensitivity of the estimator to errors in noise modelling may have contributed, at least in part, to the residual pile-up. A second reason for the unresolved pile-up may be the uncertainty in the estimation of the observed spectrum. Several of the residual pile-up peaks are relatively close to the floor of the observed histogram. The residual peaks may simply be a noise-induced artefact of the estimator. Finally, the mathematical model may be an overly simple approximation of the observed spectrum. The detection process includes numerous second-order effects that have not been included in the model (e.g., ballistic deficit, supply charge depletion, correlated noise, non-linearities, etc.). These minor effects may limit the accuracy of the pile-up correction estimator.

    5 SUMMARY OF THE FIRST EMBODIMENT

    [0090] We have taken the estimator proposed by Gugushvili [18] for decompounding under Gaussian noise, and adapted it for correcting pulse pile-up in X-ray spectroscopy. We have proposed a data-driven bandwidth selection mechanism that is easily implemented, and that provides a significant reduction in ISE/MISE across a broad range of sample counts of interest to spectroscopic applications (10⁴-10⁹ counts). The data-driven rectangular bandwidth selection is close to optimal (for rectangular filters), and over the range of interest it outperforms bandwidth selection based on asymptotic results or fixed bandwidths. [0091] Although initial results appear promising, further work is required to improve the performance for practical implementations. The estimation still contains 'ringing' artefacts associated with the Gibbs phenomenon. The logistic filter shape attempts to reduce this, but there may be other shapes that are closer to MSE optimal.

    6 SECOND EMBODIMENT

    [0092] This section gives a summary of the spectrum estimator of the second embodiment. The second embodiment solves a problem of the first embodiment, which requires entire clusters to be approximately encompassed in each interval. In the second embodiment, the entire data series can be used if desired, and the overlap is compensated for by the introduction of two different interval lengths, L and L1. [0093] We need to include a few additional terms not mentioned in the first embodiment, in particular ϕ̂_X1. The spectrum estimator is based on

    [00072]  ƒ̂_A(x) = (1/(2πλ)) ∫_{−∞}^{∞} e^{−iux} H(u) d log(γ)(u) du.   (97)

    [0094] The introduction of the filter H(u; α) allows us to address several implementation issues that arise. The estimation procedure we use may be summarized in the following steps.
    [0095] 1. Partition the sampled time-series into fixed length intervals [T_j, T_j+L), j ∈ ℤ.
    [0096] 2. Calculate the x_j value for each interval according to x_j = Σ_{k∈[T_j, T_j+L)} s(k).
    [0097] 3. Generate the histogram n from the x_j values.
    [0098] 4. Calculate ϕ̂_X using the inverse FFT of n.
    [0099] 5. Partition the sampled time series into a different set of intervals with length L_1, and follow similar calculations to obtain ϕ̂_X1.
    [0100] 6. Calculate ϕ_Z and G.
    [0101] 7. Calculate γ̂ using ϕ̂_X, G, ϕ_Z and ϕ̂_X1,

    [00073]  γ̂ = ϕ̂_X / (ϕ_Z e^{−λ} ϕ̂_X1)   (98)

    [0102] 8. Calculate |ϕ_Asmooth(u)|, a low-pass filtered version of

    [00074]  |(1/λ) d log(γ̂)(u)|.

    [0103] 9. Calculate α_mod.
    [0104] 10. Calculate H using α_mod.
    [0105] 11. Calculate ϕ̂_A using γ̂ and H. If any element of γ̂ is zero and the corresponding element of H is non-zero, the estimation has failed, as the distinguished logarithm is undefined.
    [0106] 12. Calculate ƒ̂_A using the FFT of ϕ̂_A according to

    [00075]  ƒ̂_A[k] = (1/(2M)) Σ_{m=−M}^{M−1} ϕ̂_A[m] e^{−i2πmk/(2M)}.   (99)

    6.1 Algorithm Details

    [0107] Partition the detector output stream into a set of non-overlapping intervals of length L, i.e., [T_j, T_j+L), T_0 ∈ ℤ_{≥0}, T_{j+1} ≥ T_j + L, j ∈ ℤ_{≥0}. Let x_j be the sum of the detector output samples in the jth interval, i.e.,

    [00076]  x_j = Σ_{k=T_j}^{T_j+L−1} s(k)   (100)

    [0108] Assuming L is greater than a pulse length, the jth interval may contain 'complete' pulses as well as pulses which have been truncated by the ends of the interval. It can be shown that x_j consists of a superposition of the energies of 'complete' pulses, which we denote y_0j, the energies of truncated pulses, which we denote y_1j, and noise z_j. [0109] Let the detector output stream be partitioned into a second set of non-overlapping intervals [T_1j, T_1j+L_1), T_{1,0} ∈ ℤ_{≥0}, T_{1,j+1} ≥ T_{1,j} + L_1, j ∈ ℤ_{≥0}, where L_1 < L. Let x_1j be given by

    [00077]  x_1j = Σ_{k=T_1j}^{T_1j+L_1−1} s(k)   (101)
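    The two interval sums (100) and (101) may be computed by simple reshaping when the intervals are taken back to back (T_{j+1} = T_j + L, the tightest spacing permitted); the function and variable names are illustrative.

```python
import numpy as np

def interval_sums(s, L):
    # (100)/(101): sums over non-overlapping, back-to-back intervals of
    # length L; trailing samples that do not fill an interval are dropped.
    n = (len(s) // L) * L
    return np.asarray(s[:n], dtype=float).reshape(-1, L).sum(axis=1)

s = np.arange(12)          # stand-in detector output samples s(k)
x = interval_sums(s, 4)    # x_j over length-L intervals (L = 4)
x1 = interval_sums(s, 3)   # x_1j over the shorter intervals (L_1 = 3)
```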

    [0110] If L_1 is chosen to be slightly less than the pulse length, the x_1j term will contain no 'complete' pulses, but will consist of a superposition of only the energies of truncated pulses y_1j and noise z_j. The number of truncated pulses in any interval has a Poisson distribution. We have

    [00078]  X_1 = Y_1 + Z_1,   (102)
     ϕ_X1 = ϕ_Y1 ϕ_Z1.   (103)

    [0111] We can decompose the total energy in the interval [T_j, T_j+L) into the energy contribution Y_1 from pulses that have been truncated and the energy contribution Y_0 from pulses that are fully contained in the interval [T_j, T_j+L), i.e.,

    [00079]  X = Y_0 + Y_1 + Z_0 + Z_1   (104)

    where Z_0 represents noise in the regions where pulses are fully contained in the interval (a length of L−L_1), and Z_1 represents noise in the regions where pulses are truncated (a length of L_1). Hence,

    [00080]  ϕ_X = ϕ_Y0 ϕ_Y1 ϕ_Z0 ϕ_Z1.   (105)

    [0112] By combining (103) with (105) we have

    [00081]  ϕ_X = ϕ_X1 ϕ_Y0 ϕ_Z0   (106)

    [0113] Rearranging gives

    [00082]  ϕ_Y0 = ϕ_X/(ϕ_X1 ϕ_Z0)   (107)
     = e^{−λ_0} e^{λ_0 ϕ_A(u)}   (108)

    [0114] We can estimate ϕ_X1 in a similar manner to the way we estimated ϕ_X, or by some other method, e.g., via the empirical characteristic function or by performing an FFT on the normalized histogram of x_1j values. [0115] When performing the decompounding operation, the Poisson rate λ_0 for the reduced interval length L−L_1 is used to account for the sub-interval over which the compound Poisson process Y_0 occurs.
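    One of the options mentioned in [0114], estimating a characteristic function by an FFT of the normalized histogram of integer-quantized interval sums, may be sketched as follows; the function name and conventions are illustrative.

```python
import numpy as np

def ecf_from_histogram(values, n_bins):
    # The FFT of the normalized histogram gives the characteristic function
    # sampled on the same grid used for phi_X; phi[0] == 1 by construction.
    counts = np.bincount(np.asarray(values, dtype=int), minlength=n_bins)
    p = counts / counts.sum()
    return np.fft.ifft(p) * n_bins
```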

    6.2 Visualization of Internal Quantities

    [0116] To aid the reader's understanding, FIG. 9 plots various quantities obtained during the estimation process. The upper blue curve (with a value around 0.3 at bin zero) plots |ϕ̂_X|, the estimated characteristic function of the observed spectrum. A brown curve is used to show the true value of |ϕ_ƒ|, which is distinctly visible as the lower curve with periodic nulls in the region [6000, 10000]. The quantity |ϕ_ϵ|/(λϕ_Z e^{−λ}) is shown in transparent red and appears as 'noise' whose average density peaks around bin #8000. The expected value of |ϕ_ϵ|/(λϕ_Z e^{−λ}) is shown with a black dashed line. This is obtained using (75), the known value of λ, and assuming Gaussian noise with known σ to obtain ϕ_Z(k). The quantity |ϕ̂_ƒ| is shown with a transparent blue curve. This is barely visible, as it coincides closely with |ϕ_ƒ| in the intervals [0, 4000] and [12000, 16000], and closely with |ϕ_ϵ|/(λϕ_Z e^{−λ}) in the interval [5000, 11000]. Note that the colour of |ϕ_ϵ|/(λϕ_Z e^{−λ}) appears to change from red to purple in the interval [5000, 11000] as both transparent plots overlap. A solid black line shows |ϕ̂_fsmooth|, a low-pass filtered version of |ϕ̂_ƒ|. The low-pass filtering removes any local oscillations in |ϕ̂_ƒ(k)| due to the peak localities, as described in the paragraph on smoothing at the beginning of this section. The term |ϕ̂_fsmooth(k)| serves as an estimate of 𝔼[|ϕ_ƒ(k)|]. It can be seen that |ϕ̂_ƒ| provides a reasonably good estimate of |ϕ_ƒ| in the region where |ϕ̂_fsmooth| ≫ 𝔼[|ϕ_ϵ|/(λϕ_Z e^{−λ})]. As these two quantities approach each other, the quality of the estimate deteriorates until it is eventually dominated by noise. The filter H(k) should include good estimates of |ϕ_ƒ| while excluding poor estimates. To find the regions where good estimates of |ϕ_ƒ| are obtained, we address the question: given 𝔼[|ϕ_ϵ/(λϕ_Z e^{−λ})|], what is the probability that the calculated values of ϕ̂_ƒ in a local region arise largely from noise?

    7 COUNT RATE ESTIMATION

    [0117] The previous estimator assumed λ was known. An estimate of λ can be obtained without prior knowledge as follows. [0118] 1. Using ϕ̂_X, ϕ̂_X1, ϕ_Z and G from the previous section, calculate

    [00083]  ϕ̂_Y = G ϕ̂_X/(ϕ_Z ϕ_X1)   (109)

    [0119] 2. Using ϕ̂_Y, estimate the count rate. This can be done in a number of ways. [0120] 3. One way is to use an optimization routine or some other means to fit a curve to ϕ̂_Y. The fitted parameters can be used to obtain an estimate of the count rate. [0121] 4. Another way involves estimating the DC offset of Ψ = d log(ϕ̂_Y). This can be done by averaging a suitable number of points of Ψ. The points obtained by filtering by H(u) in the previous section are usually suitable, although fewer points may also produce an adequate estimate. [0122] 5. Another way involves using an optimization engine or some other means to fit a curve to Ψ = d log(ϕ̂_Y). A suitable parameterized curve to fit d log(ϕ̂_Y) is given by

    [00084] f ( u ; λ , α , σ , μ ) = - λ + λ .Math. k = 0 K - 1 α k G ( σ k u ) e - j 2 π u μ k ( 110 ) where α = ( α 0 , .Math. , α K - 1 ) ( 111 ) σ = ( σ 0 , .Math. , σ K - 1 ) ( 112 ) μ = ( μ 0 , .Math. , μ K - 1 ) ( 113 ) [0123] and where K∈custom-character is chosen to allow the curve fit to sufficient accuracy. The parameter λ provides an estimate of the count rate. The optimization engine is not required to give equal weighting to each point in Ψ.
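Option 4 above can be sketched numerically. Under a compound-Poisson model, log ϕ.sub.Y(u) = λ(ϕ.sub.ƒ(u) − 1), which matches the form of (110) with the summation term playing the role of λϕ.sub.ƒ(u); wherever that term has decayed, the curve sits at −λ. The sketch below (Python/NumPy; the function name and the `keep` mask are illustrative, not from the text, and Ψ is interpreted as log ϕ̂.sub.Y) averages the retained points and negates the result:

```python
import numpy as np

def estimate_count_rate(phi_Y_hat, keep):
    """Estimate lambda from the DC offset of log(phi_Y_hat).

    Under a compound-Poisson model, log phi_Y(u) = lambda*(phi_f(u) - 1),
    so at frequencies where phi_f(u) has decayed towards zero, log|phi_Y(u)|
    approaches -lambda.  Averaging log|phi_Y_hat| over a set of suitable
    bins (the boolean mask `keep`, e.g. the bins passed by H(u)) and
    negating yields an estimate of the count rate.
    """
    psi = np.log(np.abs(np.asarray(phi_Y_hat, dtype=complex)))
    return -float(np.mean(psi[np.asarray(keep, dtype=bool)]))
```

The curve-fitting alternatives of steps 3 and 5 could instead be implemented with a general-purpose optimizer such as `scipy.optimize.least_squares`, fitting (110) to Ψ and reading {circumflex over (λ)} off the fitted parameter vector.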

    8 DESCRIPTION OF FIGURES

    [0124] The following figures are to aid understanding of the process.

    FIG. 1 shows one possible scheme used to partition the detector output. The illustration depicts the sampled detector response to three incident photons. To aid clarity of the figure, the effects of noise have been removed. The output response has been partitioned into several regions of equal length (L). The number of pulses arriving in each region is unknown to the processing system. One pulse has arrived in the first interval, two pulses in the second interval, and no pulses in the third interval. The total photon energy arriving within each interval is calculated as the statistic of interest, being the sum of all sample values in each interval. Intervals are not temporally aligned with pulse arrivals.

    FIG. 2 illustrates the output of the estimation procedure. The true probability density of incident photon energy is plotted as a solid black line. The photon arrival rate is such that three photons on average arrive during any given interval [T.sub.j, T.sub.j+L). The standard deviation of additive noise in the detector output signal s(t) is equal to one histogram bin width. One million intervals were collected. A histogram was made of the total energy in each interval; this is plotted with the blue line. The effects of pile-up are clearly evident, particularly around bins 75, 150 and 225. The red trace plots the estimate of the true incident energy spectrum after the data has been processed by the system. Although some noise appears in the estimate, the effects of pile-up have been removed. The estimate is expected to correctly recover the true incident spectrum on average. This result was obtained using an internal filter whose bandwidth was determined automatically from the data.

    FIG. 3 illustrates the same quantities as FIG. 2 under the same operating conditions; however, in this instance the bandwidth of the internal filter has been determined using asymptotic results from the literature. Although the estimated probability density of incident energies has been recovered, the variance is significantly greater compared to FIG. 2.

    FIG. 8 illustrates the operation of the system on real data. The blue trace plots the probability density of observed energy values, while the red trace plots the estimated true probability density of incident photon energies. There is no black trace as the true probability density is unknown. In this experiment, X-ray fluorescence of a manganese sample was used as a photon source. The photon arrival rate was around 10.sup.5 photons per second. The interval length was chosen such that the average time between photons corresponded to the length of two intervals. Sufficient data was collected and partitioned to form 5.9×10.sup.6 intervals. The standard deviation of the additive noise corresponds to 4.7 histogram bins. The estimation process has clearly reduced the pile-up peaks and enhanced the true peaks.

    FIG. 9 illustrates various quantities obtained during the simulation of the system described in the 2nd embodiment. It is described in section 5.1 Visualization of Internal Quantities. FIGS. 9-12 relate to the 2nd embodiment.

    FIG. 10 illustrates the observed and true probability density of input photon energies for the experiment from which FIGS. 9-13 were derived. The black trace plots the true probability density. The red trace plots the density expected to be observed when three photons on average arrive during a given interval length. The blue trace plots the actual observed density. Up to tenth-order pile-up can be seen in the observed density. FIG. 10 includes several plots arising from a typical spectroscopic system. The actual incident photon density (‘Ideal Density’) is plotted with a solid dark line. An observed histogram obtained by partitioning the time-series data is shown in dark blue. Distortion of the spectrum caused by pulse pile-up is evident.

    FIG. 13 plots various internal quantities using a logarithmic vertical axis. The dark blue curve that dips in the centre of the plot is |{circumflex over (ϕ)}.sub.X|. The green quantity that crosses the plot horizontally is |{circumflex over (ϕ)}.sub.Y|. The upper cyan curve that dips in the centre of the plot is |ϕ.sub.Z|.

    FIG. 11 illustrates the trajectory of the curve γ in the complex plane.

    FIG. 12 illustrates internal quantities similar to FIG. 9; however, there are some additional signals. The horizontal red trace that is largely noise, and the corresponding black dashed line, represent |ϕ.sub.ϵ|, the magnitude of the characteristic function of the histogram noise. The transparent green plot that forms the ‘noisy peak’ in the center of the figure is the estimate |{circumflex over (ϕ)}.sub.ƒ|. This quantity was plotted in blue in FIG. 9, and was barely visible as it was obscured by |ϕ.sub.ϵ|/(λϕ.sub.Ze.sup.−λ), which is not shown in FIG. 12. The horizontal trace with an average value of −3 is a plot of |ϕ.sub.Y|. The cyan trace that begins with a value of zero at bin zero, and dips to a minimum around bin 8000, is |ϕ.sub.Z|, the magnitude of the characteristic function of the additive Gaussian noise.

    FIG. 13 relates to the 2nd broad aspect of calculating the count rate. It illustrates internal quantities used in the calculation of {circumflex over (λ)}. The cyan trace that begins with a value of zero at bin zero, and dips to a minimum around bin 8000, is |ϕ.sub.Z|, the magnitude of the characteristic function of the additive Gaussian noise. The dark blue trace that dips to a minimum in the center of the Figure is |{circumflex over (ϕ)}.sub.X|, the estimate of the characteristic function of the observed data. The yellow/green horizontal trace with an average value of −3 is the estimate of |ϕ.sub.Y|.
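The partitioning scheme of FIG. 1 and the observed histogram of FIG. 2 can be sketched as follows. This is an illustrative reconstruction in Python/NumPy of the description above, not the patented implementation; the function names are invented:

```python
import numpy as np

def interval_sums(samples, L):
    """Sum the detector samples over consecutive, non-overlapping intervals
    of L samples each (the per-interval 'total energy' statistic of FIG. 1).
    Any trailing partial interval is discarded."""
    s = np.asarray(samples, dtype=float)
    n = (len(s) // L) * L          # length rounded down to a whole number of intervals
    return s[:n].reshape(-1, L).sum(axis=1)

def observed_histogram(samples, L, bins=256):
    """Histogram of the interval sums: the observed (pile-up distorted)
    energy density plotted in blue in FIG. 2."""
    sums = interval_sums(samples, L)
    counts, edges = np.histogram(sums, bins=bins)
    return counts, edges
```

Note that, as stated for FIG. 1, the intervals are laid down without regard to pulse arrivals, so an interval may contain zero, one, or several pulses; it is this histogram of interval sums that the decompounding procedure subsequently inverts.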
