Method for tracking the position of an irradiating source
11353599 · 2022-06-07
CPC classification
G01T1/167
PHYSICS
G01T1/1603
PHYSICS
International classification
G01T1/167
PHYSICS
Abstract
Method for producing a reconstruction image, the reconstruction image showing a position of irradiating sources in an environment, the reconstruction image being established on the basis of gamma images acquired by a gamma camera, which is sensitive to ionizing electromagnetic radiation, and movable relative to at least one irradiating source between two different measurement times, the gamma camera being joined to a visible camera, which is configured to form a visible image of the environment, the gamma camera and the visible camera defining an observation field, the method comprising establishing a reconstruction image, showing a position of at least one irradiation source in the observation field, the gamma camera and the visible camera being moved between at least two measurement times.
Claims
1. A method for forming a reconstruction image, the reconstruction image showing a position of irradiating sources in an observation field, the method using a device comprising a gamma camera joined to a visible camera, wherein: the gamma camera and the visible camera define the observation field; the visible camera is configured to form a visible image of the observation field; the gamma camera comprises pixels, which are configured to detect ionizing electromagnetic radiation generated by an irradiating source potentially present in the observation field, the pixels lying in a detection plane; the gamma camera is configured to form a gamma image, allowing a position, in the field of observation, of irradiating sources, the radiation of which is detected by the pixels, to be estimated; the method comprising the following iterative steps: a) at an initial measurement time, acquiring an initial gamma image with the gamma camera and acquiring an initial visible image with the visible camera; b) on the basis of the initial gamma image and of the initial visible image, initializing the reconstruction image; c) acquiring a gamma image with the gamma camera and acquiring a visible image with the visible camera, at a measurement time; d) comparing the visible image at the measurement time with a visible image at a prior time, the prior time being the initial measurement time or a preceding measurement time, then, depending on the comparison, estimating a field of movement between the visible images acquired at the prior time and the measurement time, respectively; e) updating the reconstruction image, using: a reconstruction image established at the prior time; the movement field resulting from step d); and the gamma image acquired at the measurement time; f) optionally superposing the reconstruction image, or a portion of the reconstruction image, and the visible image; g) reiterating steps c) to f) while incrementing the measurement time, until the iterations are 
stopped; wherein the method includes moving the device between at least two measurement times, such that one observation field corresponds to each measurement time.
2. The method according to claim 1, wherein at each measurement time, each pixel having detected radiation generated by an irradiating source is associated with a position on an object surface lying facing the detection plane; the reconstruction image corresponds to a spatial distribution of the irradiating sources inside the observation field at the measurement time over the object surface.
3. The method according to claim 2, wherein the object surface is an object plane lying parallel to the detection plane, each position within the object surface being coplanar.
4. The method according to claim 1, wherein the reconstruction image, at the measurement time, is defined only in an intersection between the observation field at the measurement time and the object surface.
5. The method according to claim 1, wherein at each measurement time, step e) comprises i) estimating a reconstruction image, at the measurement time, using the reconstruction image established at the prior time and of the movement field obtained in step d); ii) estimating the gamma image at the measurement time using the estimated reconstruction image; iii) comparing the gamma image estimated at the measurement time and the gamma image acquired at the measurement time; iv) depending on the comparison, updating the reconstruction image at the measurement time.
6. The method according to claim 1, comprising determining a registration function representative of a spatial offset between the gamma camera and the visible camera, the registration function being used in step e).
7. The method according to claim 1, wherein step d) comprises selecting noteworthy points in the visible images acquired at the prior time and at the measurement time, respectively; so that the movement field comprises two-dimensional movements of the selected noteworthy points respectively.
8. The method according to claim 7, wherein the noteworthy points are considered to belong to the same surface.
9. The method according to claim 1, wherein step d) comprises: extracting first noteworthy points from the visible image acquired at the measurement time; extracting second noteworthy points from the visible image acquired at the prior time; matching first noteworthy points and second noteworthy points, so as to form pairs of matched points, each pair being formed from a first noteworthy point and from a second noteworthy point; for each pair of matched points, determining a movement; obtaining the movement field on the basis of the movements determined for each pair of matched points.
10. The method according to claim 9, comprising: generating a first mesh of the observation field, at the measurement time, using first noteworthy points, the latter forming first vertices of the first mesh; generating a second mesh of the observation field, at the prior time, using second noteworthy points, the latter forming second vertices of the second mesh; depending on the movements determined for various pairs of noteworthy points, determining, by interpolation, movements at points belonging to the first mesh and to the second mesh; obtaining the movement field on the basis of the movement of each matched vertex and of the interpolated movements.
11. The method according to claim 1, wherein: the device is coupled to a movement sensor, so as to estimate a movement of the device between two successive times; step d) takes into account the estimated movement to estimate or validate the movement field.
12. The method according to claim 1, wherein the gamma camera is configured to simultaneously acquire, in steps a) and c), gamma images in various energy bands, steps b) and d) to f) being implemented for each energy band respectively, so as to obtain, at each measurement time, a reconstruction image in various energy bands.
13. The method according to claim 1, wherein the gamma camera is configured to simultaneously acquire, in steps a) and c), gamma images in various energy bands, the method comprising a linear combination of various gamma images respectively acquired, at the same measurement time, in various energy bands, so as to obtain a combined gamma image, steps b) and d) to f) being carried out on the basis of a combined gamma image formed at each measurement time.
14. The method according to claim 13, wherein a combined image is formed by a weighted sum of gamma images acquired at various energies, the weighted sum using weighting factors, the weighting factors being determined on the basis of an emission spectrum of a radioactive isotope, such that each reconstruction image is representative of a spatial distribution of the activity of the isotope in the observation field corresponding to the measurement time.
15. A device, comprising: a gamma camera, comprising pixels, defining a gamma observation field, the gamma camera being configured to determine a position of sources of X-ray or gamma irradiation in the gamma observation field; a visible camera, allowing a visible image of a visible observation field to be formed; the intersection of the gamma observation field and of the visible observation field being nonzero, and defining an observation field of the device; a processing unit, configured to receive, at various measurement times: a gamma image, formed by the gamma camera; a visible image formed by the visible camera; wherein the processing unit is configured to carry out steps b) and d) to g) of the method according to claim 1.
Description
DESCRIPTION OF PARTICULAR EMBODIMENTS
(8) The gamma imager may be a Compton gamma camera, a pinhole-collimator gamma camera or a coded-aperture gamma camera. Non-exhaustively, it may also be a gamma camera the collimator of which comprises parallel channels, convergent channels or divergent channels. Thus, the term gamma camera corresponds to an imager having an observation field and configured to form an image allowing irradiating sources to be located in the observation field. Whatever the type of gamma imager, it allows a gamma image comprising pixels to be formed, each pixel corresponding to one elementary spatial region of the observed environment. Certain gamma imagers have a spectrometric function, in the sense that they allow the detected radiation to be separated into various spectral bands. When this type of imager is used, it is possible to form various gamma images of a given observation field, each image corresponding to one spectral band. The image may likewise be established by considering a combination of spectral bands, which correspond to the emission spectrum of an isotope. The combination may be a weighted sum. The image is then representative of a spatial distribution of the activity of the isotope in question.
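Such a weighted combination of energy bands may be sketched as follows. This is a minimal NumPy illustration, not the patented implementation; the weighting values are hypothetical, standing in for the relative line intensities of an isotope's emission spectrum.

```python
import numpy as np

def combine_energy_bands(gamma_images, weights):
    """Weighted sum of gamma images acquired in several energy bands.

    gamma_images: array of shape (n_bands, Nx, Ny)
    weights: length-n_bands weighting factors, e.g. derived from the
             emission spectrum of the isotope of interest.
    """
    gamma_images = np.asarray(gamma_images, dtype=float)
    weights = np.asarray(weights, dtype=float)
    # Contraction over the band axis: sum_b w_b * M_b
    return np.tensordot(weights, gamma_images, axes=(0, 0))

# Illustrative two-band case with hypothetical weights
bands = np.stack([np.full((4, 4), 2.0), np.full((4, 4), 1.0)])
combined = combine_energy_bands(bands, [0.85, 0.15])
```

The combined image then plays the role of a single gamma image in the steps that follow.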
(9) With certain gamma imagers, in particular Compton gamma cameras or coded-aperture gamma cameras, the image acquired by the imager does not allow the emitting sources in the observation field to be viewed directly. The acquired image undergoes processing, taking into account a response function of the camera, so as to obtain a gamma image in which the brightness of each pixel corresponds to the emission intensity of the elementary spatial region of the observation field associated with that pixel.
(10) It is conventional for gamma cameras to be associated with visible cameras. These are standard cameras, allowing a visible image of the observation field to be formed. The device shown in
(11) The visible camera 3 is usually placed at a small distance from the gamma camera, such that their respective optical axes are close together, and preferably parallel to each other. Thus, beyond a certain distance, generally smaller than 1 m, or even than 50 cm, the observation field of the gamma camera is included in the observation field of the visible camera. The intersection of the two observation fields forms the observation field Ω of the device.
(12) Preferably, the optical axes of the gamma camera 2 and of the visible camera 3 are aligned. The gamma camera and the visible camera are calibrated to correct a parallax error due to the offset of the two optical axes, and to take into account geometric distortions due to the lenses of the visible camera, in particular at the field edge. The calibration also takes into account a difference in the number and size of pixels between the visible camera and gamma camera. The calibration allows a registration function g to be defined.
(13) A processing unit 4 receives the images generated by the gamma camera 2 and the visible camera 3. The image-processing unit is notably configured to merge the images obtained from the visible camera 3 and the images obtained from the gamma camera 2, notably by taking into account the registration function. The objective is to obtain a composite image corresponding to the visible image, but on which the irradiating sources detected by the gamma camera appear. The processing unit 4 is connected to a memory in which image-processing instructions are coded. The processing unit 4 comprises a processor, a microprocessor for example, configured to implement instructions corresponding to the steps described below.
(14) As mentioned in relation to the prior art, the acquisition time of a gamma image is generally several seconds, or even several tens of seconds. This is due to the low collection efficiency combined with the low detection efficiency.
(18) A visible image acquired by the visible camera is associated with each acquisition of a gamma image. The visible camera 3 acquires a visible image of the observed scene. The observation field Ω of the imaging device, i.e. at the intersection of the respective observation fields of the gamma camera and of the visible camera, defines an object space. The object space contains points (x, y), each point being associated with one pixel (u, v) of the gamma image. The pixels belong to a detection plane P, in which the gamma image is formed. The points (x, y) of the observation field have two-dimensional coordinates, and correspond to pixels of the visible image, after the registration function has been taken into account.
(19) An important element of the invention is that the points of the object frame of reference are considered to belong to the same object projection plane P.sub.o. The angular observation field Ω.sub.2 of the gamma camera, which extends about the optical axis Δ.sub.2, describes a segment of a sphere S (see
(20) After image processing, allowing an image merger to be performed (thresholding and/or superposition with one image visible through the other), and taking into account the registration function, the irradiation levels associated with the points in the object frame of reference, and that are considered to be significant, are superposed on the visible image V.sub.k in the form of a colour code.
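The thresholding and superposition described above may be sketched as follows. This is an illustrative fragment only; the function name, threshold value and red colour code are assumptions, and the reconstruction is supposed already registered and resampled to the visible-image grid.

```python
import numpy as np

def superpose(visible_rgb, reconstruction, threshold=0.2):
    """Overlay significant irradiation levels on a visible image.

    visible_rgb: (H, W, 3) float image with values in [0, 1]
    reconstruction: (H, W) irradiation map on the visible-image grid
    Pixels whose normalized irradiation exceeds `threshold` are tinted
    red, with opacity proportional to the level (a simple colour code).
    """
    out = visible_rgb.astype(float).copy()
    level = reconstruction / max(reconstruction.max(), 1e-12)  # normalize
    mask = level > threshold                                   # thresholding
    alpha = np.clip(level, 0.0, 1.0)[..., None]                # per-pixel opacity
    red = np.zeros_like(out)
    red[..., 0] = 1.0
    out[mask] = (1 - alpha[mask]) * out[mask] + alpha[mask] * red[mask]
    return out
```

Any other colour map could be substituted; only the principle of superposing significant levels on V.sub.k matters here.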
(21) Thus, the observation field Ω of the device bounds a portion of the object plane, in which portion the reconstruction is performed.
(22) The correspondence between each point (x, y) of the object plane P.sub.O and each pixel (u, v) of the gamma image depends on a spatial response function of the gamma camera. When gamma photons are emitted from a point (x, y) of the object frame of reference, toward the gamma camera, the trace that they form in the gamma image results from a spatial response function F of the camera. When a gamma camera based on a pinhole collimator is employed, the spatial response function takes into account the aperture of the pinhole. It may be approximated by an image of an irradiating source centred in the field of observation, which is used by way of impulse response. When a coded-aperture gamma camera is employed, the response function takes into account the geometry of the mask. When a Compton gamma camera is employed, the response function depends on the detected energy and on the position of pixels having detected radiation, in the detection plane P. Thus, in a Compton gamma camera, the response function may vary at each measurement time.
(23) The correspondence between a point in the observation field and a pixel of the gamma image may for example be determined via a convolution product of the gamma image and the response function (notably in the case of a gamma camera equipped with a pinhole collimator), or via a rectilinear projection (notably in the case of a gamma camera equipped with a coded-aperture collimator). The projection may also be of another type, for example, and nonlimitingly, stereographic or orthographic.
(24) Thus, the response function F makes it possible to pass from the detection plane, in which the gamma image is acquired, to the object plane P.sub.O, which corresponds to the observed scene, and in which the position of irradiating sources is sought. In the rest of the description, the notation F corresponds to a projection of an image formed in the object plane toward the detection plane. The notation F.sup.− corresponds to a retroprojection of an image formed in the detection plane P toward the object plane P.sub.o. In a first approach, F is a linear operator and F.sup.− corresponds to a transpose of F.
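In this first approach, F may be made concrete, for a tiny image, as an explicit matrix acting on the flattened object plane, with the retroprojection F.sup.− taken as its transpose. The sketch below is an illustration only: the Gaussian response is a stand-in for a real impulse response such as the image of a centred source behind a pinhole.

```python
import numpy as np

def build_F(n, sigma=1.0):
    """Explicit (n*n, n*n) projection matrix from a Gaussian response.

    Each column describes how a unit source at one object-plane point
    spreads over the detection plane; columns are normalized to unit mass.
    """
    coords = np.array([(i, j) for i in range(n) for j in range(n)], float)
    d2 = ((coords[:, None, :] - coords[None, :, :]) ** 2).sum(-1)
    F = np.exp(-d2 / (2 * sigma ** 2))
    return F / F.sum(axis=0, keepdims=True)

N = 4                                   # Nx = Ny = 4 for the sketch
F = build_F(N)
I_obj = np.zeros(N * N)
I_obj[5] = 1.0                          # one point source in the object plane
M = F @ I_obj                           # forward projection: gamma image
I_back = F.T @ M                        # retroprojection, F⁻ taken as Fᵀ
```

For realistic image sizes the operator would of course be applied implicitly (e.g. as a convolution) rather than stored as a dense matrix.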
(25) If M.sub.k is a gamma image acquired at a time t.sub.k, and I.sub.k is an image showing a reconstruction of the irradiation sources in the field Ω.sub.k observed at the time t.sub.k, the following is obtained:
I.sub.k(x, y)=F.sup.−[M.sub.k(u, v)] (1)
where: (x, y) are coordinates, in the object plane, parallel to the detection plane, and corresponding to coordinates (u, v) in the detection plane; I.sub.k(x, y) corresponds to an image of the distribution of irradiating sources in the observation field. It is a two-dimensional matrix, each term of which represents an emission rate, in a spectral band, or in a plurality of combined spectral bands, of an irradiating source in the object plane, at coordinates (x, y), at the measurement time t.sub.k. As described above, when a plurality of spectral bands of the emission spectrum of an isotope are combined, the image I.sub.k is representative of a spatial distribution of the activity of the isotope. Generally, I.sub.k corresponds to an estimation of the distribution of the irradiation after at least one gamma image acquired by the gamma camera has been taken into account. Each term of the image I.sub.k(x, y) corresponds to an estimation of an emission rate of gamma (or X-ray) photons, or of an activity, at the point (x, y) in the object plane. The size of the image I.sub.k is Nx × Ny, where Nx and Ny designate the number of pixels of the gamma camera along a horizontal axis and a vertical axis, respectively. The size of the matrix I.sub.k is identical to that of a gamma image acquired by the gamma camera.
(26) F.sup.− is a retroprojection operator, allowing passage to the object plane P.sub.o from the detection plane P.
(27) Representing the position of the irradiating sources in a two-dimensional spatial distribution (or map) allows the reconstruction image to be superposed on a visible image acquired by the visible camera, simply.
(28) This method, the main steps of which are schematically shown in
(29) Step 100: Initialization
(30) For a first acquisition, the measuring device 1 is placed at a first position, at a first time t.sub.1. A gamma image M.sub.1 is acquired. Generally, the acquisition time of each gamma image is comprised between 1 ms and 5 s, and preferably comprised between 50 ms and 500 ms, and is for example 100 ms. In the gamma image, each irradiating source present in the field takes the form of a trace, the brightness of which depends on the irradiation, generated by the irradiating source, and detected by the gamma camera.
(31) When the gamma camera is able to perform a spectrometric function, a gamma image is acquired in a given energy spectral band ΔE, or in a combination of energy bands, as described above.
(32) A visible image V.sub.1 is acquired at the first time t.sub.1, or at a time considered to be concomitant with the first time t.sub.1. The main objective of the initial visible image V.sub.1 is to provide noteworthy points in the observed visible scene. The noteworthy points, and the use made thereof, are described in more detail in relation to step 130.
(33) The gamma image M.sub.1 is divided by an estimation image {circumflex over (M)}.sub.1. The estimation image {circumflex over (M)}.sub.1 corresponds to an estimation of the gamma image M.sub.1. For the first acquisition, the estimation image {circumflex over (M)}.sub.1 is a predetermined image. It is for example uniform and made up of 1s.
(34) The ratio
U.sub.1(u, v)=M.sub.1(u, v)/{circumflex over (M)}.sub.1(u, v)
(35) corresponds to an error term in the measurement with respect to the estimation. The ratio U.sub.1 is computed term-by-term, for each pixel (u, v) of the gamma image M.sub.1 and of its estimate {circumflex over (M)}.sub.1.
(36) Step 110: Back-Propagation of the Error and Update of the Reconstruction
(37) The error term U.sub.1 is propagated to the object plane, so as to update a reconstruction image I.sub.1 such that:
W.sub.1(x, y)=F.sup.−[U.sub.1(u, v)] (2)
and
I.sub.1(x, y)=I.sub.0(x, y)×W.sub.1(x, y) (3)
(38) The image I.sub.0(x, y) is a predetermined initialization image that for example contains only real positive numbers, only 1s for example.
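Steps 100 and 110, i.e. expressions (2) and (3), may be sketched as follows. The retroprojection F.sup.− is passed as a callable, and an identity stand-in is used for illustration only; with real operators it would be the adjoint of the projection.

```python
import numpy as np

def initialize_reconstruction(M1, backproject):
    """Steps 100-110: initialize the reconstruction image.

    M1: first acquired gamma image, shape (Nx, Ny)
    backproject: callable implementing the retroprojection F⁻
    """
    M1_hat = np.ones_like(M1, dtype=float)  # predetermined estimate, all 1s
    U1 = M1 / M1_hat                        # error ratio, term by term
    W1 = backproject(U1)                    # eq. (2): back-propagate the error
    I0 = np.ones_like(M1, dtype=float)      # initialization image, all 1s
    return I0 * W1                          # eq. (3): I1 = I0 × W1

# With an identity retroprojection (a stand-in for F⁻), I1 simply equals M1:
M1 = np.array([[0.0, 2.0], [1.0, 0.0]])
I1 = initialize_reconstruction(M1, backproject=lambda U: U)
```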
(39) Following steps 100 to 110, steps 120 to 190 are carried out iteratively. To each iteration corresponds an iteration rank k. k is an integer comprised between 2 and K. K corresponds to the iteration rank when the iterations are stopped.
(40) Step 120: Acquisition of a Gamma Image M.sub.k and of a Visible Image V.sub.k at a Time t.sub.k.
(41) A visible image V.sub.k is acquired at a time t.sub.k. A gamma image M.sub.k is also acquired at the time t.sub.k, or at an acquisition time considered to be simultaneous with the time t.sub.k. Preferably, the acquisition durations of the gamma images M.sub.k are identical in each iteration k.
(42) Step 130: Estimation of a Movement Field D.sub.k
(43) The objective of this step is to form a movement field D.sub.k representative of a two-dimensional movement of the visible image V.sub.k−1 with respect to the visible image V.sub.k. The image V.sub.k−1 is a visible image acquired in a preceding iteration or, when k=2, in step 100. The movement field D.sub.k comprises, at various coordinates (x, y) in the object plane, a movement vector d.sub.k, corresponding to a movement along the X-axis and a movement along the Y-axis. Each movement vector is a vector of size equal to 2. Thus, only a two-dimensional movement, in a plane parallel to the object plane Po, or coincident with the latter, is taken into account.
(44) The device 1 may have been moved or reoriented between the acquisitions of the images V.sub.k−1 and V.sub.k. Alternatively, certain elements of the object space may have been moved between the two acquisitions. This is notably the case when an irradiating source moves in the object frame of reference.
(45) In each image V.sub.k−1 and V.sub.k, noteworthy points are identified. The noteworthy points are points that are easily identifiable via conventional image processing. These are, for example, points forming an outline or edge of an object, or points of particularly high contrast in brightness or colour. Thus, a noteworthy point may be a point corresponding to a high Laplacian or brightness gradient.
(46) The number of noteworthy points detected in each image V.sub.k−1 and V.sub.k is preferably comprised between a few tens and a few hundred or even is more than 1000. The noteworthy points detected in the two images V.sub.k−1 and V.sub.k form a set E.sub.k−1 and a set E.sub.k, respectively.
(47) The noteworthy points in an image may be detected by implementing an algorithm, for example a Harris corner detector. Following their detection, the noteworthy points are characterized, so as to allow their potential identification in the two images V.sub.k−1 and V.sub.k. The characterization describes each noteworthy point and its environment. This may be achieved using feature-description algorithms known to those skilled in the art, such as DAISY, LUCID or FREAK. With each noteworthy point is associated a descriptor vector, allowing it to be recognized in the two images V.sub.k−1 and V.sub.k.
(48) On the basis of their description and of their characterization, the noteworthy points identified in the images V.sub.k−1 and V.sub.k are matched, so as to establish pairs of noteworthy points, each pair associating a noteworthy point of the image V.sub.k−1 with a noteworthy point of the image V.sub.k, the matched noteworthy points having descriptor vectors that are considered to be identical. The matched noteworthy points correspond to the same point of the observed scene, this point appearing in both visible images V.sub.k−1 and V.sub.k.
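The matching of noteworthy points may be sketched as follows. A real implementation would use a Harris detector and descriptors such as DAISY or FREAK (e.g. via an image-processing library); here the pairing step alone is illustrated, on synthetic descriptor vectors, with a mutual nearest-neighbour cross-check standing in for "descriptor vectors considered to be identical".

```python
import numpy as np

def match_descriptors(desc_prev, desc_curr):
    """Match noteworthy points of V_{k-1} and V_k by descriptor distance.

    desc_prev: (n_prev, d) descriptor vectors from V_{k-1}
    desc_curr: (n_curr, d) descriptor vectors from V_k
    Returns index pairs (i_prev, i_curr) that are mutual nearest
    neighbours, i.e. whose descriptors are considered identical.
    """
    # Pairwise squared Euclidean distances between descriptor vectors
    d2 = ((desc_prev[:, None, :] - desc_curr[None, :, :]) ** 2).sum(-1)
    nn_curr = d2.argmin(axis=1)  # best match in V_k for each point of V_{k-1}
    nn_prev = d2.argmin(axis=0)  # best match in V_{k-1} for each point of V_k
    # Keep only mutual nearest neighbours (cross-check)
    return [(i, j) for i, j in enumerate(nn_curr) if nn_prev[j] == i]

# Synthetic 2-D descriptors: point 0 of V_{k-1} matches point 1 of V_k,
# and point 1 of V_{k-1} matches point 0 of V_k
prev = np.array([[0.0, 0.0], [10.0, 10.0]])
curr = np.array([[10.1, 9.9], [0.2, 0.1]])
pairs = match_descriptors(prev, curr)  # → [(0, 1), (1, 0)]
```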
(49) Preferably, the noteworthy points of each image are considered to be coplanar: they lie in a plane parallel to the object plane.
(50) On the basis of the matched noteworthy points, the images V.sub.k−1 and V.sub.k are meshed, on the basis of the sets E.sub.k−1 and E.sub.k, respectively. Delaunay mesh generation may be used, this type of mesh generation defining triangular mesh cells, each mesh cell lying between three noteworthy points that are different from one another and matched in the two images V.sub.k−1 and V.sub.k. For each vertex of the mesh present in both images V.sub.k−1 and V.sub.k, a two-dimensional movement d.sub.k is estimated. For an optimal implementation of the invention, it is preferable for the movement of the device, or of the elements forming the object frame of reference, to be relatively slow, so as to maximize the number of vertices of the mesh present in both images V.sub.k−1 and V.sub.k. A sufficiently slow and fluid movement and a sufficient light level also prevent blurring of the visible image. Preferably, the observation fields before and after a movement overlap by at least 50%, or even at least 80% or 90%.
(51) An estimation of a movement vector d.sub.k is thus obtained for each vertex of the mesh present in both images V.sub.k−1 and V.sub.k. Since the movement vector is defined only at the vertices present in the images V.sub.k−1 and V.sub.k, an interpolation, for example a linear interpolation, is performed, so as to obtain an estimation of a movement d.sub.k at each point (x, y) of the observation field Ω.sub.k of the device, at least within the mesh. More precisely, the movement is determined, at least in a mesh established in the intersection of the observation fields Ω.sub.k−1 and Ω.sub.k corresponding to the visible images V.sub.k−1 and V.sub.k acquired at the measurement times k−1 and k, respectively. The movement field D.sub.k is then formed, each term of which corresponds to the movement vector d.sub.k(x, y) determined at least at each point (x, y) included in the mesh.
(52) The movement field D.sub.k, outside the mesh, may be estimated by extrapolation, on the basis of the movement vectors d.sub.k(x, y) established inside the mesh. This allows a movement field vector D.sub.k to be obtained for the entire observation field of the image. In one embodiment, the device comprises a movement sensor 5, allowing an angular movement of the device between the acquisitions of the visible images V.sub.k−1 and V.sub.k to be estimated. The movement sensor may be a magneto-inertial measurement unit, or comprise at least one gyrometer. In this case, the movement field, outside the mesh, may be estimated by combining the movement field obtained inside the mesh with the movement of the device between the two images.
(53) Other methods allow a movement field D.sub.k between two successive images to be estimated. Optical-flow methods, for example, allow a small movement between two successive images to be estimated.
(54) The movement field D.sub.k is at least partially determined by observing a movement of objects present in the visible images V.sub.k−1 and V.sub.k. According to a first possibility, the objects remain stationary in the observed scene: the movement field is established by observing a movement of objects, or portions of objects, present in both images V.sub.k−1 and V.sub.k. In this case, the movement field corresponds to the movement of the camera with respect to the objects present in the observation field of said camera. According to a second possibility, the objects move even though the device has not been moved between the acquisition of the images V.sub.k−1 and V.sub.k. In this case, the movement field D.sub.k corresponds to the movement of the objects relative to the visible camera. More generally, the movement field D.sub.k corresponds to the relative movement of objects, present in the observation field of the visible camera, relative to the latter.
(55) When it is not possible to determine the movement field outside of the mesh, by extrapolation or using movement measurements, the movement field outside the mesh is considered to be uniform, and of constant value—a zero vector may for example be used.
(56) At the end of this step, a movement field D.sub.k of (2, Nx, Ny) size is obtained. At each point (x, y) in question, a field D.sub.k.sup.(X) of movement along the X-axis, of (Nx, Ny) size, and a field D.sub.k.sup.(Y) of movement along the Y-axis, of (Nx, Ny) size, are established.
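The construction of the (2, Nx, Ny) movement field may be sketched as follows, using SciPy's `griddata`, which performs Delaunay-based linear interpolation inside the mesh; outside the mesh the zero vector is used, as in the simplest case described above. The function name is illustrative.

```python
import numpy as np
from scipy.interpolate import griddata

def movement_field(vertices_prev, vertices_curr, shape):
    """Build the movement field D_k of size (2, Nx, Ny).

    vertices_prev, vertices_curr: (n, 2) matched vertex positions in
    V_{k-1} and V_k; the per-vertex movement is their difference.
    Inside the mesh, the movement is interpolated linearly; outside,
    it is taken as the zero vector.
    """
    nx, ny = shape
    d = vertices_curr - vertices_prev               # movement at each vertex
    gx, gy = np.meshgrid(np.arange(nx), np.arange(ny), indexing="ij")
    pts = np.stack([gx.ravel(), gy.ravel()], axis=1)
    Dx = griddata(vertices_prev, d[:, 0], pts, method="linear", fill_value=0.0)
    Dy = griddata(vertices_prev, d[:, 1], pts, method="linear", fill_value=0.0)
    return np.stack([Dx.reshape(nx, ny), Dy.reshape(nx, ny)])

# Four matched vertices surrounding the grid, all translated by (+1, 0):
prev = np.array([[-1.0, -1.0], [-1.0, 4.0], [4.0, -1.0], [4.0, 4.0]])
curr = prev + np.array([1.0, 0.0])
D = movement_field(prev, curr, shape=(4, 4))
```

A movement sensor, when present, could replace the zero fill outside the mesh.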
(57) Step 140: Estimation of the Reconstruction Image Î.sub.k
(58) On the basis of the reconstruction image I.sub.k−1(x, y) resulting from a preceding iteration, or resulting from the initialization, and knowing the movement field D.sub.k, a reconstruction image Î.sub.k is estimated, taking into account the registration function. This step is an important element of the invention.
(59) This step is based on the assumption that the movement of the position of the irradiating sources, in the reconstruction image I.sub.k, between the times k−1 and k, may be obtained by considering the movement field D.sub.k measured between two successive visible images associated with the times k−1 and k, respectively, the visible images V.sub.k−1 and V.sub.k being associated with the gamma images M.sub.k−1 and M.sub.k, respectively.
Thus: Î.sub.k=(B*I.sub.k−1).sup.1−α×(I.sub.k−1∘g(D.sub.k)).sup.α (4)
where: × is a term-by-term multiplication operator (Hadamard product); ∘ denotes the application of the registered movement (a change of variables, detailed below); I.sub.k−1 corresponds to the reconstruction image resulting from a preceding iteration. It is a matrix, of the same size as a gamma image M.sub.k, each point of which corresponds to an emission rate, in a given spectral band or in a combination of spectral bands, or to an activity of an isotope.
(60) B is a filter allowing the image I.sub.k−1 to be smoothed. It may be a low-pass filter, a Gaussian filter for example. The convolution of the image I.sub.k−1 and the smoothing filter B allows the reconstruction image I.sub.k−1 to be blurred. Recourse to such a filter allows an a priori on the position of the irradiating sources such as described by the reconstruction image I.sub.k−1 to be established. According to one alternative, the filtering may be performed according to the expression:
Î.sub.k=B*[(I.sub.k−1).sup.1−α×(I.sub.k−1∘g(D.sub.k)).sup.α] (4′)
(61) g corresponds to the registration function; g(D.sub.k) is a matrix function taking into account movement in the object plane, while taking into account the registration function g of the visible and gamma cameras.
(62) I.sub.k−1∘g(D.sub.k) is the reconstruction image I.sub.k−1 after application of the registered movement field.
(63) At each point (x, y) in the observation field, D.sub.k allows a movement D.sub.k.sup.(X)(x, y) along the X-axis and a movement D.sub.k.sup.(Y)(x, y) along the Y-axis to be defined. The matrix function g(D.sub.k) allows a movement, in the object plane, to be established depending on the movement D.sub.k. It allows the coordinates (x.sub.k, y.sub.k) at the time k to be estimated depending on coordinates (x.sub.k−1, y.sub.k−1) at the time k−1, such that:
x.sub.k=x.sub.k−1+g.sub.X[D.sub.k.sup.(X)(x, y)] (5.1)
y.sub.k=y.sub.k−1+g.sub.Y[D.sub.k.sup.(Y)(x, y)] (5.2)
where g.sub.X and g.sub.Y correspond to components of the registration function established along the X-axis and along the Y-axis, respectively.
(64) Thus, g(D.sub.k) is a matrix function allowing passage between (x.sub.k−1, y.sub.k−1) and (x.sub.k, y.sub.k), as explained by expressions (5.1) and (5.2). It is a change of variables.
(65) The exponent α is strictly positive and lower than or equal to 1: 0<α≤1. The exponent α allows a memory effect to be achieved: the closer α gets to 1, the more the memory of preceding gamma-image acquisitions is preserved.
(66) The movement field D.sub.k is established by observing a movement of objects between the visible images V.sub.k−1 and V.sub.k, in the observation field of the visible camera. Thus, the movement of the irradiating sources between the two reconstruction images I.sub.k−1 and Î.sub.k is based on the detection of objects, forming noteworthy points in the visible images V.sub.k−1 and V.sub.k, the matching thereof, and the computation of their respective movements between the two visible images. These movements allow a movement field D.sub.k in the visible image to be established, which is then applied to the reconstruction image, after application of the registration function.
(67) Thus, to within the registration function, the movement of the irradiating sources is considered determinable from the movement field computed on the basis of the visible images. An important point of the invention is that the movements of objects observed in the visible image are considered representative of the movements of the gamma sources, the latter being shown in the reconstruction image. This significantly limits the complexity of the computations and the resources required to perform them. The method requires neither complex techniques, such as triangulation, nor an estimate of the three-dimensional position of the irradiating sources in the environment.
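A minimal stand-in for this motion estimation, assuming a purely translational movement between two visible images, is phase correlation. A real implementation would instead detect and match noteworthy points (e.g. corners) and build a dense field D.sub.k from their individual displacements; the sketch below only recovers a single global shift.

```python
import numpy as np

def estimate_translation(v_prev, v_curr):
    """Estimate a global integer translation (dx, dy) between two
    visible images by phase correlation (FFT cross-power spectrum).
    """
    F1 = np.fft.fft2(v_prev)
    F2 = np.fft.fft2(v_curr)
    cross = F2 * np.conj(F1)
    cross /= np.maximum(np.abs(cross), 1e-12)  # keep phase only
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # FFT wrap-around: peaks past the midpoint are negative shifts
    h, w = v_prev.shape
    if dy > h // 2:
        dy -= h
    if dx > w // 2:
        dx -= w
    return int(dx), int(dy)
```

The recovered (dx, dy) would then feed the movement field used to warp the reconstruction image, after application of the registration function.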
(68) Step 150: Estimation of the Gamma Image {circumflex over (M)}.sub.k at the Time k
(69) The estimated reconstruction image Î.sub.k, obtained in step 140, is projected into the detection plane, so as to estimate the gamma image {circumflex over (M)}.sub.k corresponding to iteration k. The gamma image {circumflex over (M)}.sub.k is estimated using the expression
{circumflex over (M)}.sub.k(u, v)=F[Î.sub.k(x, y)] (6)
where F corresponds to a projection operator describing the projection between the object plane P.sub.o and the detection plane P.
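Since the projection between the object plane and the detection plane is linear, the operator F can be represented by a system matrix A mapping the flattened object plane to the flattened detection plane. The matrix itself depends on the camera (collimator, coded mask, etc.) and is assumed given in this sketch.

```python
import numpy as np

def project(A, I_hat):
    """Estimate the gamma image, expression (6): M_hat = F[I_hat].

    A is an (m, n) system matrix; I_hat is the estimated
    reconstruction image with n pixels. Returns the m detection-plane
    values as a flat array.
    """
    return A @ I_hat.ravel()
```

With `A = np.eye(n)` the detection plane simply reproduces the object plane, which is useful for checking the surrounding pipeline.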
(70) Step 160: Determination of an Error Term
(71) In this step, on the basis of the gamma image M.sub.k acquired at the time k, and resulting from step 120, and of the estimation {circumflex over (M)}.sub.k resulting from step 150, the measurement error is computed:
(72) U.sub.k(u, v)=M.sub.k(u, v)/{circumflex over (M)}.sub.k(u, v) (7)
(73) Step 170: Back-Propagation of the Error
(74) The error term U.sub.k is back-propagated to the object plane using the expression:
W.sub.k(x, y)=F.sup.−[U.sub.k(u, v)] (8)
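When the projection F is represented by a system matrix A (an assumption, as above), a common choice for the back-projection operator F.sup.− of expression (8) is the sensitivity-normalized transpose of A; this is a sketch of that convention, not necessarily the patent's exact operator.

```python
import numpy as np

def backproject(A, U, obj_shape):
    """Back-propagate the error to the object plane, expression (8):
    W_k = F^-[U_k], implemented here as A^T with per-pixel
    sensitivity normalization.
    """
    sensitivity = A.sum(axis=0)  # total response of each object pixel
    W = (A.T @ U.ravel()) / np.maximum(sensitivity, 1e-12)
    return W.reshape(obj_shape)
```

With an identity system matrix, back-projection returns the error image unchanged, which makes the multiplicative update of step 180 easy to verify.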
(75) Step 180: Update of the Reconstruction Image
(76) In this step, the reconstruction image, corresponding to the time k, is updated according to the expression,
I.sub.k=I.sub.k−1×W.sub.k (9)
in which × designates a term-by-term multiplication.
(77) According to one embodiment, a non-linear function h is taken into account, such that
I.sub.k=I.sub.k−1×h(W.sub.k) (10)
(78) Thus, at each measurement time k, the reconstruction image I.sub.k is updated by taking into account the gamma image M.sub.k acquired at the measurement time. The gamma image M.sub.k is taken into account via the back-propagated error W.sub.k.
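Steps 150 to 180 can be sketched end-to-end for a static scene (no movement field) and a linear projection matrix A. The ratio error used in the middle line is an MLEM-style assumption standing in for expression (7); the matrix A and the normalization are likewise assumptions of this sketch.

```python
import numpy as np

def update_step(I_prev, M_k, A):
    """One iteration of the reconstruction update (steps 150-180).

    I_prev : reconstruction image I_{k-1} (2-D array, n pixels)
    M_k    : gamma image acquired at time k (2-D array, m values)
    A      : (m, n) system matrix representing the projection F
    """
    M_hat = A @ I_prev.ravel()                        # step 150, (6)
    U = M_k.ravel() / np.maximum(M_hat, 1e-12)        # step 160, ratio error (assumed)
    W = (A.T @ U) / np.maximum(A.sum(axis=0), 1e-12)  # step 170, (8)
    return I_prev * W.reshape(I_prev.shape)           # step 180, (9)
```

As expected of a multiplicative update, an estimate that already explains the measured gamma image is a fixed point: if A@I_prev equals M_k, then W is 1 everywhere and I_k = I_{k−1}.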
(79) The reconstruction image I.sub.k is defined in an intersection between the object plane P.sub.O and the observation field Ω.sub.k at the measurement time k. Preferably, the reconstruction image I.sub.k is not defined outside of the observation field Ω.sub.k at the measurement time k. This allows the reconstruction process to be simplified, by not taking into account irradiation sources potentially present outside of the observation field Ω.sub.k. The reconstruction image then contains only the positions of the irradiating sources present in the observation field at the measurement time k, obtained on the basis of gamma images M.sub.k acquired at the measurement time or of gamma images acquired at prior times, preceding the measurement time. On each movement of the measuring device 1, the spatial extent of the reconstruction image is modified, so as to limit it to the intersection between the object plane P.sub.O and the observation field Ω.sub.k at the measurement time k.
(80) The reconstruction image may be of a size smaller than or equal to the size of each gamma image.
(81) Step 180: Superposition of the Images.
(82) The reconstruction image I.sub.k is given a certain level of transparency, then is superposed on the visible image V.sub.k, the superposition being carried out without taking into account the registration function. This allows a correspondence to be achieved between the objects observed in the visible image and the irradiating sources shown in the reconstruction image.
(83) Step 190:
(84) Reiteration of steps 130 to 190, while incrementing the iteration index k.
(85) According to one embodiment, the device comprises a movement sensor 5, for example an inertial measurement unit. The movement sensor comprises a gyrometer, optionally complemented by an accelerometer and/or a magnetometer. Generally, the movement sensor allows an angular movement between two successive acquisition times to be obtained. Information obtained from the inertial measurement unit may be taken into account in step 130, so as to limit the risk of error in the matching of the noteworthy points. It may also be used to complete the movement field D.sub.k outside of the mesh formed between the matched noteworthy points. The translational movement of the camera and the positioning of the objects may also be estimated using other devices such as LIDAR, GPS positioning systems, fixed radiofrequency beacons or an inertial navigation system.
(86) According to one embodiment, the gamma camera is able to perform a spectrometric function. Steps 100 to 180 are carried out simultaneously, in each energy band ΔE. It is then possible to obtain as many reconstruction images as there are considered energy bands.
(87) Steps 100 to 180 may also be carried out on the basis of gamma images that are respectively acquired in various energy bands, then combined, as described above, so as to take into account an emission spectrum of a radioactive isotope selected beforehand. In this case, the reconstruction image may be likened to a spatial distribution of the activity of the radioactive isotope, in the observation field.
(88) The invention is applicable to various nuclear installations, or, more generally, to operations of searching for and characterizing radioactive sources.