MOTION COMPENSATION OF POSITRON EMISSION TOMOGRAPHIC DATA
20230008263 · 2023-01-12
Inventors
CPC classification
G06T11/008
PHYSICS
G01T1/2985
PHYSICS
International classification
Abstract
A method for compensating motion in positron emission tomographic, PET, data comprising coincident lines of response from positron-emitting position markers, includes: detecting a slippage of one or more of the position markers; determining slippage correction parameters based on the detected slippage; and applying motion correction to the PET data by taking into account the slippage correction parameters, thereby obtaining motion-compensated PET data.
Claims
1.-12. (canceled)
13. A method for compensating motion in positron emission tomographic, PET, data comprising coincident lines of response from positron-emitting position markers, the method comprising: detecting a slippage of one or more of the position markers; deriving a plurality of images from the PET data and identifying therefrom, based on the detected slippage, a first set of images before the detected slippage and a second set of images after the detected slippage; motion correcting the first and second sets of images, thereby obtaining a first and a second motion-compensated set of images; determining slippage correction parameters based on the first and second motion-compensated sets of images; and applying motion correction to the PET data by taking into account the slippage correction parameters, thereby obtaining motion-compensated PET data.
14. The method according to claim 13, wherein the identifying comprises determining locations of the position markers in the images and calculating therefrom inter-point distances between the position markers in the images.
15. The method according to claim 14, wherein the identifying further comprises clustering the calculated inter-point distances between the position markers in the images into a first set before slippage and a second set after slippage.
16. The method according to claim 15, wherein the clustering is based on a k-means algorithm or an expectation-maximization algorithm.
17. The method according to claim 13, further comprising: deriving motion correction parameters for the first and second set of images; applying the derived motion compensation parameters to the respective set of images, thereby obtaining a first and a second motion-compensated set of images from the respective first and second set of images.
18. The method according to claim 17, wherein the determining slippage correction parameters comprises masking the position markers in the first and second motion-compensated set of images and registering the first and second motion-compensated and masked set of images.
19. The method according to claim 18, further comprising correcting the motion correction parameters with the slippage correction parameters.
20. The method according to claim 13, further comprising correcting scatter in the motion-compensated PET data.
21. The method according to claim 20, further comprising: masking the position markers in the motion-compensated PET data, thereby deriving motion-compensated and masked PET data; deriving a PET scatter sinogram from the motion-compensated and masked PET data and scaling the PET scatter sinogram, thereby obtaining scatter corrected and motion-compensated PET data.
22. A data processing system comprising means for carrying out the method according to claim 13.
23. A computer program product comprising computer-executable instructions for causing a system for correcting motion in positron emission tomographic, PET, data comprising coincident lines of response from positron-emitting position markers to perform the method according to claim 13.
24. A computer readable storage medium comprising the computer program product according to claim 23.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0045] Some example embodiments will now be described with reference to the accompanying drawings.
DETAILED DESCRIPTION OF EMBODIMENT(S)
[0059] The memory may be a volatile memory, a non-volatile memory or a combination thereof. The memory may store program instructions, scan data generated by the PET imaging system 100 and any other data needed by the imaging system 100.
[0060] Algorithms to operate the coincidence processing module 102, the list-mode data acquisition module 104, the position marker tracking and correction module 106, the motion correction module 108, the scatter correction module 110, the imaging reconstruction module 112 and the image output module 114 are provided as software instructions stored in the memory 116. The processor 118 executes the instructions stored in the memory 116 and controls the operation of the PET imaging system 100, i.e. the operation of the coincidence processing module 102, the list-mode data acquisition module 104, the position marker tracking and correction module 106, the motion correction module 108, the scatter correction module 110, the imaging reconstruction module 112 and the image output module 114.
[0061] In some embodiments, the position marker tracking and correction module 106 and the motion correction module 108 may be provided internally or externally to the PET imaging system 100. Similarly, the scatter correction module 110 may be provided internally or externally to the PET imaging system 100. The image output module 114 may also be provided externally or internally to the PET imaging system 100. In such embodiments, the position tracking and correction module 106, the motion correction module 108, the scatter correction module 110 and the image output module 114 may include a separate memory and processor.
[0062] The method for compensating motion in PET data, according to an example embodiment of the present disclosure, will be explained below with reference to the accompanying drawings.
[0063] External emitting sources 121-123, such as positron-emitting sources, herein referred to as position markers, are placed on the object of interest, e.g. the patient's head, as shown in the accompanying drawings.
[0064] The position marker tracking and correction module 106 uses the list-mode data from the list-mode acquisition module 104 to determine the coincident lines of response from the coincidence processing module 102 corresponding to the pairs of annihilation photons from the position markers 121-123. From the corresponding coincident lines of response, the tracking and correction module 106 tracks the position of the markers 121-123 in a three-dimensional space and records these locations throughout the course of the scan. This way, the locations of the position markers in the PET data are detected (step 401) and then stored in the memory 116. Conventional image processing techniques suitable for detecting the locations of the position markers 121-123 are well-known. For example, intensity-based image processing techniques can be applied to detect the local maxima points in the images. Due to the presence of noise, low-intensity local maxima points may also be detected. However, as the local maxima points corresponding to the position markers exhibit much higher intensity, the local maxima points due to noise can easily be removed by thresholding. The resulting image will then contain only the local maxima points corresponding to the position markers.
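By way of illustration only, the intensity-based detection described above may be sketched as follows; the function name, the 3x3x3 neighbourhood and the threshold handling are assumptions for illustration, not the claimed implementation.

```python
import numpy as np
from scipy import ndimage

def detect_marker_locations(volume, threshold):
    """Illustrative sketch: find local maxima above an intensity
    threshold in a reconstructed PET volume, assuming the markers
    appear as isolated high-intensity peaks."""
    # A voxel is a local maximum if it equals the maximum of its
    # 3x3x3 neighbourhood.
    local_max = (volume == ndimage.maximum_filter(volume, size=3))
    # Thresholding removes the low-intensity maxima caused by noise.
    peaks = local_max & (volume > threshold)
    return np.argwhere(peaks)  # (N, 3) array of voxel indices
```

The returned voxel indices would then be converted to scanner coordinates and stored, as described above.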
[0065] The location of each position marker is represented by three coordinates along the x-, y- and z-axis. For example, a position marker located at position P1 has coordinates (x1, y1, z1), a position marker located at position P2 has coordinates (x2, y2, z2) and so on.
[0066] To derive the position of the position markers, the list-mode data needs to be converted into a volumetric image. The list-mode data is split into time frames of short duration, for example 32 ms, so that small movements can be identified. The shorter the duration, the finer the granularity and the higher the precision of the movement estimation, and therefore the more complex the subject's movements that can be identified. For example, in case of sudden and/or complex movements, time frames with a shorter duration are preferred.
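The splitting of the list-mode data into short time frames may be sketched as follows, assuming the events are represented by their timestamps in milliseconds; the helper name and return format are illustrative assumptions.

```python
import numpy as np

def split_into_frames(event_times_ms, frame_duration_ms=32.0):
    """Illustrative sketch: bin list-mode event timestamps (in ms)
    into consecutive short time frames; the 32 ms default follows
    the example duration given in the description."""
    event_times_ms = np.asarray(event_times_ms, dtype=float)
    frame_index = np.floor(event_times_ms / frame_duration_ms).astype(int)
    n_frames = frame_index.max() + 1
    # Return, per frame, the indices of the events falling in it.
    return [np.flatnonzero(frame_index == k) for k in range(n_frames)]
```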
[0067] The time frames are then reconstructed into volumetric images which together record the physiological and biochemical activity of the scanned subject over time. A complete reconstruction of the images is not required, as these reconstructed images are used only for the detection of the position markers. Therefore, it suffices to perform an approximate reconstruction by applying a limited number of iterations of the list-mode expectation-maximization algorithm, for example 3 iterations. The reconstruction is therefore very fast.
[0068] An example is shown in the accompanying drawings.
[0069] After determining the locations of the position markers, the tracking and correction module 106 calculates the inter-point distances between the position markers in the images and clusters them into a first set before slippage and a second set after slippage.
[0070] To detect a slippage of one or more of the position markers, the distances among the position markers in the images are determined (step 402), i.e. the inter-point distances d12i, d13i and d23i, where i denotes the image.
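The calculation of the inter-point distances d12i, d13i and d23i for one image may be sketched as follows; the helper name is an illustrative assumption.

```python
import numpy as np

def inter_point_distances(p1, p2, p3):
    """Illustrative sketch: pairwise Euclidean distances d12, d13 and
    d23 between the three marker positions (x, y, z) in one image."""
    p1, p2, p3 = (np.asarray(p, dtype=float) for p in (p1, p2, p3))
    d12 = np.linalg.norm(p1 - p2)
    d13 = np.linalg.norm(p1 - p3)
    d23 = np.linalg.norm(p2 - p3)
    return d12, d13, d23
```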
[0071] The determined inter-point distances are then clustered (step 403) into a set before slippage and a set after slippage.
[0072] The two clusters are each represented as a cloud, which indicates that the positions of the position markers within a cloud change due to the movement of the subject during the scanning. Depending on the movement of the subject, the clouds may be bigger (or broader) or smaller (or more compact). Depending on the amount of slippage, the clusters may be positioned closer to each other or further apart. If the same position marker slips twice during the scanning, three clusters will be identified. If, however, the same position marker slips from a first position to a second position and then slips again from the second position back to the first position, only two clusters will be identified although two slippages of the same position marker were observed: one cluster containing the distances among the position markers in the images before the first slippage and after the second slippage, and another cluster containing the distances among the position markers in the images after the first slippage and before the second slippage.
[0073] The clustering algorithm labels each image according to the similarities in the inter-point distances across all images (step 404). Any change in the inter-point distances greater than or equal to twice the spatial resolution of the PET scanner may be classified as a slippage; the differentiation between slippage and movement is therefore dependent on the spatial resolution of the PET scanner. The inter-point distances in all images within a cluster thus remain in an approximately rigid configuration, as there is no slippage among the images within the cluster. By labelling the images, two sets of images are identified, i.e. one set before slippage and another set after slippage.
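A minimal k-means-style clustering of the per-image inter-point distance vectors may be sketched as follows; the deterministic farthest-point initialisation and the fixed two-cluster case are illustrative assumptions (the description also permits an expectation-maximization algorithm).

```python
import numpy as np

def cluster_before_after_slippage(distance_vectors, n_iter=50):
    """Illustrative two-cluster k-means sketch: label each image by its
    inter-point distance vector (d12i, d13i, d23i); a slippage shows up
    as two well-separated clusters."""
    X = np.asarray(distance_vectors, dtype=float)
    # Deterministic initialisation: one centre at the first sample and
    # one at the sample farthest from it.
    far = np.linalg.norm(X - X[0], axis=1).argmax()
    centres = np.stack([X[0], X[far]])
    for _ in range(n_iter):
        # Assign each image to its nearest centre, then update centres.
        dists = np.linalg.norm(X[:, None, :] - centres[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        for j in range(2):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels
```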
[0074] The motion correction module 108 then estimates the motion correction parameters for a respective set of images (i.e. a cluster of images) by performing a registration. One way to do so is by performing a rigid registration, such as a point set registration, which involves calculating the singular value decomposition of the cross-covariance matrix between the positions of the position markers in each image within a respective set (or cluster) and the positions of the position markers selected as a reference. For example, the positions of the position markers in the first image within the set could be selected as the reference. Since the position markers in the images within a respective set did not slip with respect to each other, the registration will not suffer from position marker slippage artefacts. Therefore, the point set registration will derive the motion correction parameters for a respective cluster free from any artefacts due to slippage, and motion-compensated images per set will be obtained (step 405).
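The point set registration described above may be sketched via the singular value decomposition of the cross-covariance matrix (the Kabsch method); the reflection guard and the function signature are illustrative assumptions.

```python
import numpy as np

def rigid_registration(points, reference):
    """Illustrative sketch: find the rotation R and translation t that
    best map `points` onto `reference` (both (N, 3) marker positions),
    via SVD of the cross-covariance matrix of the centred point sets."""
    P = np.asarray(points, dtype=float)
    Q = np.asarray(reference, dtype=float)
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)               # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    d = np.sign(np.linalg.det(Vt.T @ U.T))  # guard against reflections
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = cq - R @ cp
    return R, t                              # q ≈ R @ p + t
```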
[0075] The tracking and correction module 106 then rigidly registers the motion-compensated sets of images to derive the slippage compensation parameters (step 407). To do so, however, the position markers need to be masked (step 406) in the motion-compensated images prior to registration. In other words, the grey value at the respective location of each position marker in the image is set to a predefined grey value. The tracking and correction module 106 retrieves the locations of the position markers from the memory 116 and masks the position markers in the motion-compensated sets of images, thereby obtaining motion-compensated and masked sets of images. These sets of images are then rigidly registered. The rigid transformation obtained therefrom corresponds to the positioning error due to the slippage and represents the slippage correction parameters.
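The masking step may be sketched as follows; the cubic neighbourhood, its radius and the helper name are illustrative assumptions, not the claimed implementation.

```python
import numpy as np

def mask_markers(image, marker_voxels, radius=2, fill_value=0.0):
    """Illustrative sketch: set the voxels within a small cube around
    each recorded marker location to a predefined grey value before
    the registration of the motion-compensated images."""
    masked = image.copy()
    for z, y, x in marker_voxels:
        masked[max(z - radius, 0):z + radius + 1,
               max(y - radius, 0):y + radius + 1,
               max(x - radius, 0):x + radius + 1] = fill_value
    return masked
```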
[0076] In the example where the same position marker slips twice, from a first position to a second and then back to the first position, the motion correction module 108 will perform two registrations, thus estimating two sets of motion correction parameters: one set of motion correction parameters for the first set of images, i.e. the images before the first slippage and the images after the second slippage, and another set for the second set of images, i.e. the images after the first slippage and before the second slippage. Similarly, the tracking and correction module 106 will derive the slippage correction parameters by registering the two motion-compensated and masked sets of images as described above. Thus, in such cases, the clustering allows grouping the distances among the position markers in an optimal way, limiting the required number of registrations, in this case to the number of registrations required for a single slippage.
[0077] Alternatively, the slippage correction parameters may be derived based on one or more images from a respective motion-compensated set of images. For example, the last image or the last few images from the first motion-compensated set of images may be rigidly registered to the first image or the first few images of the second motion-compensated set of images. Another way to derive the slippage correction parameters is to rigidly register a first mean image derived from the first motion-compensated set of images and a second mean image derived from the second motion-compensated set of images. Performing rigid registration based on the mean images is preferred, as the mean images are less noisy and the rigid registration is therefore faster.
[0078] The tracking and correction module 106 then corrects the motion correction parameters with the slippage correction parameters (step 408) and stores the updated motion correction parameters in the memory 116. The updated motion correction parameters are then used by the list-mode data acquisition module 104 to correct the list-mode data for the movement of the subject. Motion-compensated PET data is therefore obtained (step 409).
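The correction of the motion correction parameters with the slippage correction parameters can be illustrated as a composition of rigid transforms in homogeneous coordinates; the composition order shown is an assumption for illustration.

```python
import numpy as np

def to_homogeneous(R, t):
    """Pack a rotation matrix and translation vector into a 4x4 transform."""
    T = np.eye(4)
    T[:3, :3] = R
    T[:3, 3] = t
    return T

def correct_motion_parameters(motion_T, slippage_T):
    """Illustrative sketch: update the motion correction of the
    post-slippage set by composing it with the slippage correction,
    so that both sets map into a common reference frame."""
    return slippage_T @ motion_T
```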
[0079] The image reconstruction module 112 then reconstructs the motion-compensated PET data thereby creating a reconstructed image which is outputted by the image output module 114.
[0081] Typically, prior to image reconstruction, a scatter correction is performed by the scatter correction module 110. Scatter correction is done by estimating the occurrence of scatter events based on a simulation. The result is a scatter sinogram indicating the position and relative intensity of the scatter events. Sinograms are histograms in which each bin corresponds to a different line of response geometry, i.e. each having a different orientation and location. The intensity of the scatter sinogram is then scaled so that it corresponds to the intensity of the PET data. This is done by fitting the tails of the scatter sinogram to the PET sinogram.
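The scaling of the scatter sinogram by fitting its tails to the PET sinogram may be sketched as follows; the definition of the tail region (the outermost radial bins) and the use of a single global scale factor are illustrative assumptions.

```python
import numpy as np

def scale_scatter_sinogram(scatter_sino, pet_sino, tail_bins=20):
    """Illustrative sketch: scale the simulated scatter sinogram so
    that its tails (here assumed to be the first and last `tail_bins`
    radial bins, outside the object) match the measured PET sinogram."""
    n = scatter_sino.shape[-1]
    tails = np.r_[0:tail_bins, n - tail_bins:n]
    scale = pet_sino[..., tails].sum() / scatter_sino[..., tails].sum()
    return scale * scatter_sino
```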
[0082] Position markers used for tracking the motion of the subject are, however, seen as high-intensity regions in the PET sinogram. This causes the PET sinogram to present a high-intensity peak at the borders, as shown in the accompanying drawings.
[0083] To perform a correct scatter correction, it is therefore necessary to remove the intensity peak due to the position markers. The scatter correction module 110 according to an embodiment of the present disclosure is arranged to retrieve the locations of the position markers from the memory 116.
[0087] Although the present invention has been illustrated by reference to specific embodiments, it will be apparent to those skilled in the art that the invention is not limited to the details of the foregoing illustrative embodiments, and that the present invention may be embodied with various changes and modifications without departing from the scope thereof. The present embodiments are therefore to be considered in all respects as illustrative and not restrictive, the scope of the invention being indicated by the appended claims rather than by the foregoing description, and all changes which come within the scope of the claims are therefore intended to be embraced therein.
[0088] It will furthermore be understood by the reader of this patent application that the words “comprising” or “comprise” do not exclude other elements or steps, that the words “a” or “an” do not exclude a plurality, and that a single element, such as a computer system, a processor, or another integrated unit may fulfil the functions of several means recited in the claims. Any reference signs in the claims shall not be construed as limiting the respective claims concerned. The terms “first”, “second”, “third”, “a”, “b”, “c”, and the like, when used in the description or in the claims are introduced to distinguish between similar elements or steps and are not necessarily describing a sequential or chronological order. Similarly, the terms “top”, “bottom”, “over”, “under”, and the like are introduced for descriptive purposes and not necessarily to denote relative positions. It is to be understood that the terms so used are interchangeable under appropriate circumstances and embodiments of the invention are capable of operating according to the present invention in other sequences, or in orientations different from the one(s) described or illustrated above.