Detection of the position of a moving object and treatment method

09730654 · 2017-08-15

Assignee

Inventors

CPC classification

International classification

Abstract

The invention relates to a method for determining the position of an object moving within a body. Markers are connected to the body, and a movement signal is determined from the measured movement of the markers. Images of the object are taken using a camera or detector which is moved with respect to the object. It is then determined from which direction, range of angles or segment the most images corresponding to a predefined cycle of the movement signal were taken, and an image of the object is reconstructed using at least some or all of the images of the segment containing the most images for the specified movement cycle.

Claims

1. A non-transitory computer-readable storage medium on which a program is stored which, when running on a computer of an associated medical treatment apparatus, causes the computer of the associated medical treatment apparatus to perform a method for determining a treatment plan to be used for selective delivery of a therapeutic treatment beam to an anatomical body part moving in a body of an associated patient due to a breathing motion of the associated patient, the method comprising: operating the associated medical treatment apparatus at a first imaging frequency to obtain a first dataset describing a first sequence of images describing the anatomical body part of the associated patient which have been taken at different first times during the moving of the body part due to the breathing motion of the associated patient at the different first times, wherein the first imaging frequency defines a first cycling rate of an image acquisition beam directed at the anatomical body part during the breathing motion of the associated patient at the different first times, wherein the first sequence of images comprises a first plurality of segment bins of image sequence portions of the first sequence of images; determining, for each segment bin of image sequence portions of the first plurality of segment bins, a different respiratory state of the breathing motion of the associated patient at the different first times; operating the associated medical treatment apparatus at a second imaging frequency to obtain a second dataset describing a second sequence of images describing the anatomical body part of the associated patient which have been taken at different second times during a moving of the body part due to the breathing motion of the associated patient at the different second times, wherein the second imaging frequency defines a second cycling rate of the image acquisition beam directed at the anatomical body part during the breathing motion of the associated patient 
at the different second times, wherein the second sequence of images comprises a second plurality of segment bins of image sequence portions of the second sequence of images, each segment bin of image sequence portions of the second plurality of segment bins corresponding to a different unknown respiratory state of the breathing motion of the associated patient at the different second times, wherein the second imaging frequency is an integer multiple of the first imaging frequency; determining, by the processor, a match between two or more of the first plurality of segment bins of image sequence portions described by the first dataset and two or more of the second plurality of segment bins of image sequence portions described by the second dataset; assigning the determined respiratory state of the two or more of the first plurality of segment bins to the two or more of the second plurality of segment bins matching the two or more of the first plurality of segment bins; determining, by the processor, the treatment plan to be used for selective delivery of the therapeutic treatment beam to the anatomical body part during the second times in accordance with the assigned determined respiratory states; and generating, by the processor, an output signal representative of the determined treatment plan, the output signal being operative to control an associated source of the therapeutic treatment beam for the selective delivery of the therapeutic treatment beam to the anatomical body part of the associated patient during the different second times based on the treatment plan.

2. The storage medium according to claim 1, wherein the second dataset is shifted with respect to the first dataset in time to determine a correlation or matching value as the match.

3. The storage medium according to claim 1, wherein the first dataset is a 4D computer tomography (4D CT) dataset.

4. The storage medium according to claim 3, wherein a digital reconstructed radiograph (DRR) is reconstructed from each three-dimensional dataset of the 4D CT dataset.

5. The storage medium according to claim 1, wherein, when running on the computer, the program stored on the storage medium causes the computer to perform a further step comprising: assigning each of the plurality of segment bins of images to a different state of the breathing motion of the associated patient.

6. The storage medium according to claim 1, wherein, when running on the computer, the program stored on the storage medium causes the computer to perform a further step comprising: operating the associated medical treatment apparatus to generate, in accordance with the treatment plan, the therapeutic treatment beam for the selective delivery of the therapeutic treatment beam to the anatomical body part of the associated patient.

7. A medical treatment apparatus for determining a treatment plan to be used for selective delivery of a therapeutic treatment beam to an anatomical body part moving in a body of an associated patient due to a breathing motion of the associated patient, the medical treatment apparatus comprising: a processor; an imaging device operable by the processor of the medical treatment apparatus at a first imaging frequency to obtain a first dataset comprising a first sequence of images representative of the anatomical body part of the associated patient which have been taken at different first times during the moving of the body part due to the breathing motion of the associated patient at the different first times, wherein the first imaging frequency defines a first cycling rate of an image acquisition beam directed by the imaging device at the anatomical body part during the breathing motion of the associated patient at the different first times, wherein the first sequence of images comprises a first plurality of segment bins of image sequence portions of the first sequence of images, wherein the processor is operative to determine, for each segment bin of image sequence portions of the first plurality of segment bins, a different respiratory state of the breathing motion of the associated patient at the different first times; wherein the imaging device is operable at a second imaging frequency to obtain a second dataset comprising a second sequence of images representative of the anatomical body part of the associated patient which have been taken at different second times during a moving of the body part due to the breathing motion of the associated patient at the different second times, wherein the second imaging frequency defines a second cycling rate of the image acquisition beam directed by the imaging device at the anatomical body part during the breathing motion of the associated patient at the different second times, wherein the second sequence of images comprises a second plurality of
segment bins of image sequence portions of the second sequence of images, each segment bin of image sequence portions of the second plurality of segment bins corresponding to a different unknown respiratory state of the breathing motion of the associated patient at the different second times, wherein the second imaging frequency is an integer multiple of the first imaging frequency; a non-transient memory device storing the first and second datasets; wherein the processor is operative to determine a match between two or more of the first plurality of segment bins of image sequence portions described by the first dataset and two or more of the second plurality of segment bins of image sequence portions described by the second dataset; wherein the processor is operative to assign the determined respiratory state of the two or more of the first plurality of segment bins to the two or more of the second plurality of segment bins matching the two or more of the first plurality of segment bins; wherein the processor is operative to determine the treatment plan to be used for selective delivery of the therapeutic treatment beam to the anatomical body part during the second times in accordance with the assigned determined respiratory states; and a source of the therapeutic treatment beam, the source being responsive to an output signal generated by the processor to selectively generate and deliver the therapeutic treatment beam to the anatomical body part of the associated patient during the different second times based on the treatment plan.

8. The medical treatment apparatus according to claim 7, wherein the second dataset is shifted with respect to the first dataset in time to determine a correlation or matching value as the match.

9. The medical treatment apparatus according to claim 7, wherein the first dataset is a 4D computer tomography (4D CT) dataset.

10. The medical treatment apparatus according to claim 9, wherein a digital reconstructed radiograph (DRR) is reconstructed from each three-dimensional dataset of the 4D CT dataset.

11. The medical treatment apparatus according to claim 7, wherein: the imaging device is operable by the processor at the first and second imaging frequencies to obtain the first and second datasets such that the second imaging frequency is the same as the first imaging frequency or wherein the second imaging frequency is an integer multiple of the first imaging frequency.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) FIG. 1 is a diagrammatic illustration of a device used for radiotherapy controlled according to the invention;

(2) FIGS. 2A to 2C show a respiratory curve being divided into respiratory states;

(3) FIGS. 3A to 3C illustrate methods for DTS image reconstruction;

(4) FIG. 4 is a flowchart illustrating a method for determining the respiratory state;

(5) FIGS. 5A to 5C illustrate a registration procedure performed according to an embodiment of the invention;

(6) FIG. 6 shows the matching of a sequence to treatment bins;

(7) FIGS. 7A to 7C show the fine adjustment using intensity-based registration;

(8) FIG. 8 shows the fitting of a trajectory through sample points;

(9) FIG. 9 shows the segmentation of the trajectory of FIG. 8 into treatment bins;

(10) FIGS. 10A and 10B illustrate the generation of treatment parameters;

(11) FIG. 11 shows the contour-based detection of a planning target volume; and

(12) FIGS. 12A to 12C illustrate the reconstruction of object data.

DETAILED DESCRIPTION OF PREFERRED EMBODIMENTS

(13) As shown in FIG. 1, a patient is positioned on a treatment table. An irradiation device, such as a linear accelerator, can be moved with respect to the patient. An x-ray source positioned on one side of the patient emits x-rays in the direction of an x-ray detector positioned on the opposing side to obtain 2D images of a region of interest of the patient. The x-ray source and the x-ray detector can be connected to the beam source or linear accelerator or can be movable independently thereof.

(14) As shown in FIG. 1, external markers, such as reflecting spots, are attached or stuck to a surface of the patient, such as the chest. The reflections of the external markers can be detected by a tracking system, which generates as an output a respiratory curve as shown in FIG. 2.

(15) FIG. 2A shows a respiratory curve generated from a sequence of images referred to as sample points.

(16) As shown in FIGS. 2B and 2C, the respiratory curve can be segmented into several different states, being for example inhaled, nearly inhaled, intermediate 1, intermediate 2, nearly exhaled and exhaled.

(17) By moving the x-ray detector shown in FIG. 1 relative to the patient, a series of images is taken, wherein the position of the x-ray detector and the time at which the respective image is taken is recorded. Using the information from the respiratory curve acquired simultaneously with the image acquisition by the x-ray detector, a series of images taken from different positions or angles can be collected or stored for each respiratory state.

(18) FIGS. 3A to 3C show, as an exemplary embodiment, the respiratory state “nearly inhaled”: a series of images is taken at respective different angles whenever the respiratory state “nearly inhaled” recurs in a different cycle over several full breathing cycles. The circle representing the 360-degree range of camera positions shown in FIG. 3A is divided into 8 segments. After the image acquisition with the x-ray detector is finished, it is determined which of the 8 segments contains the biggest accumulation of images (shown as small circles).

(19) FIG. 3B shows the segment found to include the largest number of images; this is the segment from which the DTS is computed in the next step. The plane perpendicular to the bisector of the selected segment is the plane of the tomographic image to be computed, as shown in FIG. 3C.
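The segment-selection step of paragraphs (18) and (19) can be sketched in a few lines of Python. This is an illustrative toy, not the patented implementation; it assumes only that the acquisition angle of each image is known, and all names are invented:

```python
def select_densest_segment(angles_deg, n_segments=8):
    """Bin acquisition angles (degrees, 0-360) into equal segments and
    return (segment_index, bisector_angle, count) of the fullest segment."""
    width = 360.0 / n_segments
    counts = [0] * n_segments
    for a in angles_deg:
        counts[int((a % 360.0) // width)] += 1
    best = max(range(n_segments), key=lambda i: counts[i])
    bisector = best * width + width / 2.0  # viewing direction of the DTS plane
    return best, bisector, counts[best]

# Example: most images clustered between 90 and 135 degrees
angles = [10, 100, 110, 120, 130, 200, 350]
print(select_densest_segment(angles))  # → (2, 112.5, 4)
```

The returned bisector angle corresponds to the normal of the tomographic plane described with reference to FIG. 3C.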

(20) Thus, tomographic images can be computed for multiple respiratory states by repeating the steps explained with reference to FIG. 3 for every single respiratory state. Using the known camera parameters of every tomographic image (angle of bisector) and the segmentation data of the corresponding respiratory state (bin), the shape of the target can be computed and can be superimposed on the image. Deviations can be compensated for using an intensity-based registration to obtain the accurate position of the target in every tomographic image. Preferably intensity-based registration includes only a rigid transformation. However, it is also possible to perform an elastic registration.

(21) To ensure robust registration results, a second tomographic image, perpendicular to the existing one, can be taken into account for the same respiratory state, as shown in FIG. 3C by the arrow DTS 2. For example, if the main direction of the tumor motion is the same as the viewing direction of the reconstructed DTS image, it will be very difficult to obtain accurate registration results. But if a further image taken from another viewing angle (e.g. +90 degrees) is taken into account, this problem can be solved, so that 3D information is obtained.

(22) FIG. 4 shows a registration procedure to match a sequence of 2D images to a previously recorded dataset, such as a 4D volume scan of a patient.

(23) According to the shown embodiment, the 2D image sequence is acquired at a frequency matched to the 4D volume scan, so that the sequence can be matched to the 4D volume scan, as explained hereinafter with reference to FIG. 5.

(24) If the time span of an average respiratory cycle of a specific patient is for example about five seconds and a 4D volume scan consists of 8 bins, the images of the sequence should be taken every (5000 ms/(8×2−1))=333 ms.
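The timing arithmetic of paragraph (24) can be checked with a small helper (an illustrative snippet, not part of the described apparatus; the function name is invented):

```python
def sampling_interval_ms(cycle_ms, n_bins):
    """Image interval so that one respiratory cycle is sampled over the
    n_bins * 2 - 1 slots used in the example of the description."""
    return cycle_ms / (n_bins * 2 - 1)

# 5 s cycle, 8 bins: 5000 ms / 15 slots
print(round(sampling_interval_ms(5000, 8)))  # → 333 (ms)
```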

(25) FIG. 5 shows the registration method for matching the 2D image sequence Seq 1, Seq 2, Seq 3 to the 4D CT sequence Bin 1, Bin 2, Bin 3, Bin 4, Bin 3, Bin 2, . . . .

(26) The bold line shown below the respective designation of the sequence or bin symbolizes the state of the diaphragm, which is a possible indicator of the respiratory state.

(27) As can be seen in FIGS. 5A and 5B, there is no match between the respective sequence and the bins. The sequence is shifted with respect to the bins until a match is reached, as shown in FIG. 5C.
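The shifting of the sequence against the periodic bin pattern until a match is reached can be sketched as follows. This is a toy 1-D model (the "images" are single diaphragm-height values, echoing the diaphragm indicator of FIG. 5); the function and variable names are invented, and the similarity measure is a stand-in for the measures discussed later:

```python
def best_shift(sequence, bin_pattern, similarity):
    """Cyclically shift `sequence` against the periodic `bin_pattern` and
    return (shift, score) with the highest accumulated similarity."""
    period = len(bin_pattern)
    best = (float("-inf"), 0)
    for shift in range(period):
        score = sum(similarity(s, bin_pattern[(i + shift) % period])
                    for i, s in enumerate(sequence))
        best = max(best, (score, shift))
    return best[1], best[0]

# Toy 1-D "images": diaphragm height per respiratory state
pattern = [0, 1, 2, 3, 2, 1]      # Bin 1, 2, 3, 4, 3, 2 over one cycle
seq = [2, 3, 2, 1, 0]             # acquired sequence, starting mid-cycle
sim = lambda a, b: -abs(a - b)    # higher = more similar
print(best_shift(seq, pattern, sim))  # → (2, 0): exact match at shift 2
```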

(28) The registration is preferably performed 2D to 2D, i.e. pre-generated DRRs are matched to the n images of the sequence. The accumulated similarity measure values are optimised, and the best match sorts the images of the sequence to the respiratory states of the 4D volume scan.

(29) Similarity measures are known from the above-mentioned K. Berlinger, “Fiducial-Less Compensation of Breathing Motion in Extracranial Radiosurgery”, Dissertation, Fakultät für Informatik, Technische Universität München, which is incorporated by reference. Examples are correlation coefficients or mutual information.
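As an illustration of one of the named measures, a Pearson correlation coefficient over flat pixel lists can be written as below. This is a minimal sketch under the assumption that both images have equal size; it is not the dissertation's implementation:

```python
from math import sqrt

def correlation_coefficient(img_a, img_b):
    """Pearson correlation of two equally sized images given as flat
    pixel lists; 1.0 means a perfect linear intensity relationship."""
    n = len(img_a)
    mean_a = sum(img_a) / n
    mean_b = sum(img_b) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in zip(img_a, img_b))
    var_a = sum((a - mean_a) ** 2 for a in img_a)
    var_b = sum((b - mean_b) ** 2 for b in img_b)
    return cov / sqrt(var_a * var_b)

drr = [10, 20, 30, 40]
xray = [12, 22, 32, 42]   # same structure, uniformly shifted intensity
print(correlation_coefficient(drr, xray))  # → 1.0
```

A linear intensity offset between DRR and x-ray image leaves the coefficient at 1.0, which is why correlation-based measures are robust against global brightness differences.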

(30) When using stereo x-ray imaging, this procedure can be performed twice, i.e. for each camera, to further enhance the robustness by taking into account both results.

(31) Preferably, the two x-ray images of the pair of x-ray images are perpendicular to each other and are taken simultaneously. To perform the 2D/4D registration, several independent 2D/3D registration processes using e.g. DRRs can be performed. Both x-ray images are successively matched to all bins of the 4D CT and the best match yields the respiratory states.

(32) As shown in FIG. 2A, the images of the sequence and their positions in time on the corresponding respiratory curve are depicted. The respiratory curve from IR is used to select one image per treatment bin (respiratory state) and to sort the images by respiratory state, as shown in FIG. 2C. All points on the respiratory curve are sample points at which an x-ray image has been taken. The sample points marked with an “x” additionally serve as control points for segmenting the trajectory computed afterwards.

(33) The sequence is matched to the treatment bins, as shown in FIG. 6. The images of the sequence are moved synchronously over the treatment bins (DRRs) and the accumulated similarity measure is optimised.

(34) The result sorts every single image to a bin and therefore to a respiratory state. The isocenters of the bins, which were determined in the planning phase, serve as control points of the trajectory.

(35) If no 4D CT is available (3D case), the planning target volume (PTV) can be manually fitted to some well distributed single images. In the 3D and 4D case, the contour of the PTV can be interpolated geometrically over all images of the sequence.

(36) FIG. 7A shows an example, where the first and the last contour match is known and between these images the interpolation is performed, yielding an approximate match.

(37) Fine adjustment using intensity-based registration can be performed for every single image, so that no sequence matching is performed.

(38) FIG. 7B shows that the intensity of the target is now taken into account.

(39) FIG. 7C shows the perfect match reached thereby.

(40) Finally, visual inspection can be performed by the user and if necessary manual correction can be performed.

(41) Thus, the position of the PTV in every single image can be determined, which can be used to define a trajectory in the next step.

(42) For generating the parameters for treatment (4D), a trajectory is fitted through the sample points using the control points, as shown in FIG. 8, and the trajectory is divided into (breathing phase) segments, as shown in FIG. 9.

(43) Images located between two control points (marked as ‘x’ in FIGS. 8 and 9) are sorted to a respiratory state or control point by matching them to the two competing bins. Each image is assigned to the best-matching control point. After this sorting procedure is completed, the segments can be determined as visualized in FIG. 9. Each segment stands for a specific respiratory state and therefore for a treatment bin.
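The sorting of in-between images to the two competing bins can be sketched as a toy (1-D "images", invented names, generic similarity measure, not the patented implementation):

```python
def assign_to_bins(images, bin_a, bin_b, similarity):
    """Assign each in-between image to whichever of the two competing
    control-point bins it matches best ('A' or 'B')."""
    return ['A' if similarity(img, bin_a) >= similarity(img, bin_b) else 'B'
            for img in images]

sim = lambda a, b: -abs(a - b)   # toy 1-D similarity: higher = closer
print(assign_to_bins([1, 2, 4, 5], 1, 5, sim))  # → ['A', 'A', 'B', 'B']
```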

(44) To assist in the adding of trajectory segments to a chasing area (the chasing area is the area where the beam actually follows the target; outside this area the beam is switched off (gating)), the standard deviation of the sample points of a specific segment from the trajectory, taking into account the relative accumulation, should be minimized. It is advantageous to find the most “stable” or best reproducible trajectory or trajectories to be used for the later treatment by irradiation. Having determined the best reproducible trajectories, the treatment time can be minimized, since the beam can be focussed quite exactly while largely sparing healthy tissue.
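The stability criterion can be illustrated as below: per segment, compute the standard deviation of the sample-point distances to the fitted trajectory and pick the least scattered segment. A hedged sketch with invented names; the relative-accumulation weighting mentioned above is omitted for brevity:

```python
from math import sqrt

def segment_spread(residuals):
    """Standard deviation of sample-point distances to the fitted
    trajectory within one segment (lower = more reproducible)."""
    n = len(residuals)
    mean = sum(residuals) / n
    return sqrt(sum((r - mean) ** 2 for r in residuals) / n)

def most_stable_segment(segments):
    """Pick the segment whose samples scatter least around the trajectory."""
    return min(segments, key=lambda name: segment_spread(segments[name]))

segs = {"inhaled": [0.9, 1.4, 0.2], "exhaled": [0.5, 0.6, 0.5]}
print(most_stable_segment(segs))  # → "exhaled"
```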

(45) Regions neighboring critical bins (segments) are omitted. User control comprises the visualization of:
- the DRR of a specific bin with the organs at risk (OAR) and the isodoses drawn in;
- the treatment time;
- the expected positioning deviation (how “reproducible” a trajectory is).

(46) For generating the parameters for treatment (3D), the following steps are performed:
- fitting of a trajectory through the sample points;
- definition of the beam-on area in the IR respiratory curve;
- computation of the trajectory segment (chasing area) based on the sample points located in the beam-on area (see FIG. 10);
- display of trajectory segments with high standard deviations;
- display of the expected treatment time;
- display of the selected trajectory segment;
- manual readjustment to optimize the treatment time, the standard deviations and the chasing area;
- automatic determination of the isocenter (a sort of reference isocenter with respect to the chasing trajectory);
- if necessary, export to a treatment planning system (TPS) for a plan update.

(47) The treatment in the 3D and 4D case has as input: the gained correlation of the IR signal and the trajectory segment(s); and the isocenter.

(48) Procedure:
- positioning of the determined patient isocenter at the machine isocenter;
- continuous recording of the IR signal and transfer of the signal into a position on the trajectory;
- within the segment to treat: chasing; outside: gating;
- use of gating (beam off) if an error occurs in the above computations, e.g. the IR marker is not visible, the pattern of the marker geometry has changed, or there is no trajectory position corresponding to the current signal in the correlation model;
- optionally, taking verification shots and, based on the trajectory position, drawing in the planning target volume (PTV) to enable a visual inspection and, if necessary, an intervention;
- optionally, continuously taking images during treatment (yielding a sequence with lower frequency) in order to document the treatment, to permanently check and automatically update the trajectory, and to export information to the TPS for a possible plan update.

(49) Error handling, e.g. during treatment, can have as input: the old image sequence; and the new image sequence.

(50) Procedure:
A) Displaced respiratory curve / unchanged trajectory:
i. registration of the old and new sequence (the algorithm can be close to that described with reference to FIGS. 2C and 6, but instead of the DRR sequence the old sequence is used);
ii. showing the tumor positions of the old sequence in the new one, so that the PTV matches the new images and the correlation between the IR signal and the trajectory is updated.
B) Changed trajectory:
i. registration of the old and new sequence (see above);
ii. automatic detection of whether an update is necessary: the indicator is a similarity measure value falling towards inhalation (see e.g. K. Berlinger, “Fiducial-Less Compensation of Breathing Motion in Extracranial Radiosurgery”, Dissertation, Fakultät für Informatik, Technische Universität München; section 2.3.3), followed by automatic image fusion (image to image, not the whole sequence as described when generating the sample points of the treatment trajectory) to obtain the updated tumor positions and therefore the updated trajectory.

(51) Incremental setup of gating and/or chasing (for example, treatment on a different day):
A) First fraction: as described so far, the DRR sequence generated from the treatment bins is used for the initial sequence matching (as described when generating the sample points of the treatment trajectory; FIGS. 2C and 6).
B) Later fractions: instead of the DRR sequence, the sequence of the last fraction can be used for the initial registration procedure.

(52) For a plan update, the following can be done:
A) The recommended trajectory segment (chasing area) is different from the initially planned bin (when using 4D CT, a bin is equivalent to a trajectory segment):
a. selection of the recommended bin for treatment;
b. planning of a new beam configuration taking into account the changed relative position and orientation of the PTV and the OARs to each other.
B) Update of the planned dose distribution:
a. detection of the actual PTV position in the control images using intensity-based registration (as described when generating the sample points of the treatment trajectory);
b. computation of the dose distribution actually applied to the target;
c. taking these results into account, updating the beam configuration so as to reach the originally intended dose distribution.

(53) Image subtraction can be performed to enable a detection of the tumor in every single verification shot. Thus, there is no need for using implanted markers anymore. An initially taken image sequence of the respiratory cycle forms the basis of this approach. The thereby gained information is stored in an image mask. Applying this mask to any new verification shot yields an image which emphasizes the contour of the tumor. The moving object is separated from the background.

(54) There are two ways to generate the mask:

(55) 1. Compute a mean image of the sequence by averaging the pixel values of the sequence, i.e. for every pixel of the destination image:

I_Mask(x, y) = (1/n) · Σ_{i=1..n} Seq_i(x, y)

The average image has to be subtracted from the verification shot to obtain the image with the emphasized target contour.

2. Compute a maximum image of the sequence, i.e. for every pixel of the destination image:

I_Mask(x, y) = max_{i=1..n} Seq_i(x, y)

In this case the verification shot has to be subtracted from the maximum image to obtain the image with the emphasized target contour.
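The two mask formulas translate directly into code. A minimal sketch (images as flat pixel lists, invented names, not the patented implementation):

```python
def mean_mask(sequence):
    """Per-pixel mean over the image sequence: I_Mask = (1/n) * sum(Seq_i)."""
    n = len(sequence)
    return [sum(img[p] for img in sequence) / n
            for p in range(len(sequence[0]))]

def max_mask(sequence):
    """Per-pixel maximum over the image sequence: I_Mask = max(Seq_i)."""
    return [max(img[p] for img in sequence) for p in range(len(sequence[0]))]

def emphasize_target(shot, mask, subtract_shot_from_mask=False):
    """Subtract mask from shot (mean mask) or shot from mask (max mask)."""
    if subtract_shot_from_mask:
        return [m - s for s, m in zip(shot, mask)]
    return [s - m for s, m in zip(shot, mask)]

# Toy example: bright target moves over three pixels, background is static
seq = [[50, 10, 10], [10, 50, 10]]
shot = [10, 10, 50]
print(emphasize_target(shot, mean_mask(seq)))  # background suppressed
```

In the toy example the static background largely cancels while the pixel occupied by the target in the verification shot stands out.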

(56) For contour-based PTV detection, as shown in FIG. 11, the known contour of the target and an x-ray image containing the target are used as input. The procedure includes the steps of:
- applying an edge detector (e.g. a Canny edge detector) to the x-ray image;
- matching the contour to the edge image;
- optimizing the similarity measure value.
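The matching step can be illustrated with a toy sketch: a crude horizontal-gradient edge map stands in for a real Canny detector, and the contour is slid over the edge image to maximize the accumulated edge value. All names and the 1-D search are illustrative only:

```python
def edge_map(img):
    """Crude horizontal-gradient edge detector on a 2-D pixel grid
    (a stand-in for e.g. a Canny edge detector)."""
    return [[abs(row[x + 1] - row[x]) for x in range(len(row) - 1)]
            for row in img]

def match_contour(edges, contour, shifts):
    """Slide the contour (list of (y, x) points) horizontally over the edge
    image and return the x-shift with the highest accumulated edge value."""
    def score(dx):
        return sum(edges[y][x + dx] for y, x in contour
                   if 0 <= x + dx < len(edges[0]))
    return max(shifts, key=score)

img = [[0, 0, 9, 9],
       [0, 0, 9, 9]]              # bright target on the right
contour = [(0, 0), (1, 0)]        # planned contour located at x = 0
print(match_contour(edge_map(img), contour, range(3)))  # → 1
```

A practical system would search over 2-D shifts (and possibly rotation) and use a proper similarity measure; the sketch only shows the optimize-over-placements idea.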

(57) Cone-Beam raw data can be used for sequence generation, having as input raw images of Cone-Beam imaging with known camera positions, and the infrared signal. An image sequence with known respiratory states can be obtained: the images are not located in the same plane, but with the known camera parameters this sequence can be matched to a 4D CT, as described when generating the sample points of the treatment trajectory. Furthermore, the Cone-Beam volume is obtained as output.

(58) Cone-Beam imaging of moving objects can have as input raw images of Cone-Beam imaging with known camera positions, and the expected position of the PTV for every raw image (e.g. based on a 4D CT scan and the IR signal during the Cone-Beam acquisition).

(59) As output the reconstructed Cone Beam dataset can be obtained.

(60) The advantage of this reconstruction method is that an object that was moving during the acquisition of the raw images is properly displayed.

(61) During the acquisition of Cone-Beam raw images, the objects are projected onto the raw images. In FIG. 12A, the non-moving object (black circle) is at the same position C+D during the acquisition of two raw images; it is projected to positions C′ and D′ on the raw images. Another object (hollow circle) moves during acquisition: it is at different positions A and B during the acquisition of the two raw images and is projected to positions A′ and B′ in the raw images.

(62) During a conventional reconstruction, a mathematical algorithm solves the inverse equation to calculate the original density of the voxels. For non-moving objects like the filled black circle in FIG. 12B, the reconstruction result is of sufficient quality. If the object moves during the acquisition of the raw images, the reconstruction quality is degraded. The object at positions C′ and D′ is properly reconstructed to position C+D in the voxel set; accordingly, the Cone Beam data set will display the black circle (F). The hollow circle at positions A′ and B′ in the images is not properly reconstructed because positions A and B differ. The voxel set will show a distorted and blurred object E.

(63) The new reconstruction algorithm shown in FIG. 12C takes the position C+D during acquisition into account. It calculates the projection parameters of the Object (hollow circle) to the raw images. These parameters depend on the object's position during acquisition of the images. By doing this the beams through the object on the raw images (A′ and B′) will intersect at the corresponding voxel in the Cone Beam data set (A+B)). The object is reconstructed to the correct shape G. Instead the stationary object is now distorted to the shape H.