Guiding system for positioning a patient for medical imaging
10154823 · 2018-12-18
Assignee
Inventors
Cpc classification
A61B8/40
HUMAN NECESSITIES
A61B6/0492
HUMAN NECESSITIES
A61B6/04
HUMAN NECESSITIES
International classification
A61B6/04
HUMAN NECESSITIES
A61B6/00
HUMAN NECESSITIES
Abstract
The present invention relates to positioning guidance for image acquisition. In order to facilitate positioning of a patient for medical image acquisition, a guiding system (14) is provided. The system comprises a patient detecting device (22) and a patient position prescribing device (24). The patient detecting device is configured to detect an anatomy of interest (36) of a patient for image acquisition and to detect current spatial information of the anatomy of interest. The patient position prescribing device is configured to provide an initial target position (40) for the detected anatomy of interest, wherein the initial target position is provided as a reference for the image acquisition. The patient position prescribing device is further configured to register the initial target position with the current spatial information, and to determine an adapted target position (42) by adapting the initial target position based inter alia on the current spatial information.
Claims
1. A guiding system for positioning a patient for medical image acquisition, the system comprising: a patient position detecting device; and a patient position prescribing device; wherein the patient position detecting device is configured: to detect an anatomy of interest of a patient for the medical image acquisition; and to detect current spatial information of the anatomy of interest; and wherein the patient position prescribing device is configured: to provide an initial target position, being a virtual scene of a virtual patient in the correct position, for the detected anatomy of interest during the image acquisition and to register a subset of the initial target position with the current spatial information of the anatomy of interest insofar said subset is clinically irrelevant for the medical image acquisition thereby providing for an adapted target position.
2. Guiding system according to claim 1, wherein the patient position prescribing device is configured to generate image data of the adapted target position; and comprising a display device configured to display the generated image data of the adapted target position in an overlaid manner with the anatomy of interest.
3. Guiding system according to claim 2, wherein the adapted target position comprises augmented reality information; and wherein the augmented reality information comprises a 3D virtual model of the detected anatomy of interest.
4. Guiding system according to claim 2, wherein the display device is configured to display the adapted target position partly opaque.
5. Guiding system according to claim 4, wherein the display device comprises a head-wearable display, through which a user can view the anatomy of interest, wherein the adapted target position is provided in an overlaid manner with the anatomy of interest viewed through the head-wearable display; and/or a monitor, wherein the patient detecting device is configured to provide a visual representation of the anatomy of interest, and wherein the adapted target position is provided in an overlaid manner with said visual representation.
6. Guiding system according to claim 1, wherein the current spatial information comprises a current position, a current pose and/or a current size.
7. Guiding system according to claim 1, wherein the initial target position is associated with a predetermined examination type selected from: a list in a database that comprises a plurality of examination types; an electronic scheduling system; and/or a plurality of tokens, each token associated with a respective predetermined examination type.
8. A medical imaging arrangement, comprising: a guiding system according to claim 1; and an image acquisition system comprising a medical imaging source and a medical imaging detector; wherein the medical imaging source is configured to provide an imaging field detected by the medical imaging detector.
9. Medical imaging arrangement according to claim 8, wherein the medical imaging arrangement comprises at least one of the group of: an X-ray imaging arrangement; and an ultrasound imaging arrangement.
10. Medical imaging arrangement according to claim 9, wherein the medical imaging arrangement is further configured to provide and display: a graphical target source representation comprising augmented reality information indicating a target position of the medical imaging source for the image acquisition; and/or a graphical target detector representation comprising augmented reality information indicating a target position of the medical imaging detector for the image acquisition.
11. A method for guiding positioning of an anatomy of interest of a patient, comprising: detecting an anatomy of interest of a patient for a medical image acquisition; detecting current spatial information of the detected anatomy of interest; providing an initial target position, being a virtual scene of a virtual patient in the correct position, for the detected anatomy of interest during the medical image acquisition; adapting a subset of the initial target position with the current spatial information of the anatomy of interest insofar said subset is clinically irrelevant for the medical image acquisition; and determining an adapted target position based on said adaptation.
12. Method according to claim 11, further comprising: generating image data of the adapted target position; and displaying the generated image data of the adapted target position in an overlaid manner with the anatomy of interest.
13. Method according to claim 11, further comprising: providing and displaying a graphical target source representation comprising augmented reality information indicating a target position of the medical imaging source for the image acquisition, and/or a graphical target detector representation comprising augmented reality information indicating a target position of the medical imaging detector for the image acquisition.
14. A non-transitory computer readable medium having one or more executable instructions stored thereon, which when executed by a processor, cause the processor to perform a method for guiding positioning of an anatomy of interest of a patient, the method comprising: detecting an anatomy of interest of a patient for a medical image acquisition; detecting current spatial information of the detected anatomy of interest; providing an initial target position, being a virtual scene of a virtual patient in the correct position, for the detected anatomy of interest during the medical image acquisition; adapting a subset of the initial target position with the current spatial information of the anatomy of interest insofar said subset is clinically irrelevant for the medical image acquisition; and determining an adapted target position based on said adaptation.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
(1) Exemplary embodiments of the invention will be described in the following with reference to the drawings.
DETAILED DESCRIPTION OF EMBODIMENTS
(8) It should also be noted that although the following discussion is related to an X-ray imaging system, i.e. an example of the medical imaging arrangement 10, the principle is also applicable to other imaging modalities, for example, ultrasound, MRI, CT, etc. However, for simplicity, the guiding in other medical imaging modalities is not further discussed.
(9) The medical imaging arrangement 10 comprises an image acquisition system 12 (e.g. an X-ray acquisition system) and a guiding system 14 for positioning a patient 15 for the medical image acquisition.
(10) The image acquisition system 12 comprises a medical imaging source 16 (e.g. an X-ray source) and a medical imaging detector 18 (e.g. an X-ray detector), wherein the medical imaging source 16 is configured to provide an imaging field detected by the medical imaging detector 18.
(11) The guiding system 14 comprises a patient detecting device 22 and a patient position prescribing device 24.
(12) The patient detecting device 22 may be capable of detecting a patient's external contour. For example, the patient detecting device 22 may be a depth camera, an infrared camera, an ultrasound sensor, etc. The patient detecting device 22 may also be capable of detecting an inner anatomy of interest, such as bones or a target organ. For example, the patient detecting device 22 may be an X-ray image acquisition device, which acquires the image of the inner anatomy of interest by generating an X-ray pre-shot.
(13) The patient position prescribing device 24 may relate to a computing device, for example, a built-in computing unit, a processor, or a desktop computer.
(14) An optional display device 26 may be provided, which may relate to any suitable display for visualization of the information. Examples include monitors, hand-held devices and HMDs (head-mounted displays).
(15) The patient detecting device 22, the patient position prescribing device 24, and the display device 26 may be connected in any suitable way, including wireless communication (e.g. Bluetooth or WLAN (wireless local area network)) or wired communication (e.g. via cables).
(16) For example, the guiding system 14 may be provided as a head-wearable guiding system 28, in which an integrated camera 30 serves as the patient detecting device 22, a built-in computing unit serves as the patient position prescribing device 24, and a head-wearable display 34 serves as the display device 26.
(17) In a further example (not further shown), the head-wearable guiding system 28 comprises the integrated camera 30 and the head-wearable display 34, whereas the patient position prescribing device 24 is provided as a high-performance desktop computer communicating with the integrated camera 30 and the head-wearable display 34 via WLAN, for example. In this manner, the head-wearable guiding system 28 may provide direct volume rendering without sacrificing frame rate, due to the hardware support of the high-performance desktop computer.
(18) The patient detecting device 22 is configured to detect an anatomy of interest 36, such as a hand 38, of the patient 15 for image acquisition, and to detect current spatial information (e.g. position, pose, or size) of the anatomy of interest 36.
(19) The patient position prescribing device 24 is configured to provide an initial target position 40 for the detected anatomy of interest 36, wherein the initial target position 40 is provided as a reference for the image acquisition. The patient position prescribing device 24 is further configured to register the initial target position 40 with the current spatial information, and to determine an adapted target position 42 by adapting the initial target position based inter alia on the current spatial information.
(20) The initial target position 40 may also be referred to as the reference position, which refers to a virtual scene of a virtual patient in the correct position, or right pose. There are several methods to create the virtual patient in the right pose (i.e. to create the initial target position or reference position). In an example, a patient is modeled in 3D, as is done in computer-animated films. Another example is to record a correctly positioned patient (or model) with a camera (e.g. a 3D surface camera). The initial target position 40 of an inner anatomy of interest, such as a tumor, may be realized by using previously recorded data, e.g. from a planning CT scan.
(21) In an example, the initial target position 40 is associated with a predetermined examination type. During the positioning procedure, the predetermined examination type is selected from i) a list in a database that comprises a plurality of examination types, ii) an electronic scheduling system, and/or iii) a plurality of tokens, each token associated with a respective predetermined examination type.
(22) In other words, a user (e.g. a radiographer) may record scenes as initial target positions and add them to the database, when the patient is correctly positioned. The database may comprise a variety of anatomical structures of a patient including external anatomies (e.g. legs or hands) or inner anatomies (e.g. bones). The database may further comprise a set of virtual patients with different sex, weight, height, age, etc. Repeating the modeling or recording for each foreseen imaging situation may fill a database of virtual scenes.
(23) During the positioning procedure, a user (e.g. a radiographer) may select the planned examination type from a list, such as 'ankle, subtalar joint, right, oblique-lateral'. The examination type may also come automatically from a scheduler or an electronic scheduling system that integrates the radiology workflow. The virtual scene with the virtual patient may be retrieved from the database and displayed, e.g. in the head-wearable display. Alternatively, there may be different tokens (markers) for different examination types. To each token a different virtual scene is attached, which corresponds to the examination type. This means that, instead of interacting with a computer (e.g. selecting the planned examination type from a list), a radiographer may use physical objects, e.g. the tokens, to select the desired examination type.
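For illustration only (not part of the claimed system), the three selection routes above amount to a simple lookup: an examination type, chosen from a list, delivered by a scheduler, or attached to a physical token, retrieves the stored virtual scene. All names and scene identifiers below are hypothetical.

```python
# Hypothetical database of recorded virtual scenes (initial target positions),
# keyed by examination type. Identifiers are illustrative only.
SCENE_DATABASE = {
    "ankle_subtalar_right_oblique": "scene_ankle_right.obj",
    "hand_pa_left": "scene_hand_left.obj",
}

# Hypothetical mapping from physical token (marker) IDs to examination types.
TOKEN_TO_EXAM = {
    "token_17": "ankle_subtalar_right_oblique",
    "token_42": "hand_pa_left",
}

def initial_target_position(exam_type=None, token_id=None):
    """Retrieve the stored virtual scene for the planned examination.

    The examination type may be given directly (selected from a list or
    pushed by an electronic scheduling system) or via a recognized token.
    """
    if exam_type is None and token_id is not None:
        exam_type = TOKEN_TO_EXAM[token_id]  # physical token selects the exam
    return SCENE_DATABASE[exam_type]

print(initial_target_position(token_id="token_17"))  # scene_ankle_right.obj
```

In practice the database would also be keyed by patient attributes (sex, weight, height, age), as described above for the set of virtual patients.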
(24) The adapted target position 42 may be referred to as modeled target position. Since it may be the position actually observed by an observer during the alignment, the adapted target position 42 may also be referred to as observed position. Hence, the patient position prescribing device 24 is configured to bring the target position from the reference position (i.e. the initial target position 40) to the observed position (i.e. adapted target position 42), and the observed position better matches the current spatial information.
(25) The registration may be performed by analyzing the anatomy of interest 36 and estimating the positions of certain landmarks, e.g. body joints. These landmarks can be registered with the corresponding landmarks in the graphical target anatomy representation.
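A minimal sketch of such landmark-based registration, here reduced to a rigid 2D alignment (rotation plus translation) of reference landmarks onto detected landmarks; the function names are illustrative, and a real system would typically work in 3D with more robust estimation.

```python
import math

def register_landmarks(ref, obs):
    """Estimate a rigid 2D transform (rotation + translation) mapping
    reference landmarks (from the initial target position) onto observed
    landmarks (current spatial information), in a least-squares sense."""
    n = len(ref)
    cr = (sum(p[0] for p in ref) / n, sum(p[1] for p in ref) / n)  # ref centroid
    co = (sum(p[0] for p in obs) / n, sum(p[1] for p in obs) / n)  # obs centroid
    # accumulate dot/cross terms of the centred point sets
    s_cos = s_sin = 0.0
    for (rx, ry), (ox, oy) in zip(ref, obs):
        rx, ry = rx - cr[0], ry - cr[1]
        ox, oy = ox - co[0], oy - co[1]
        s_cos += rx * ox + ry * oy
        s_sin += rx * oy - ry * ox
    theta = math.atan2(s_sin, s_cos)  # optimal rotation angle
    return theta, cr, co

def apply_transform(theta, cr, co, p):
    """Rotate p about the reference centroid, then translate onto the observed one."""
    x, y = p[0] - cr[0], p[1] - cr[1]
    return (co[0] + x * math.cos(theta) - y * math.sin(theta),
            co[1] + x * math.sin(theta) + y * math.cos(theta))
```

For example, if the observed joints are the reference joints rotated by 90 degrees and shifted, `register_landmarks` recovers that rotation, and `apply_transform` brings any point of the virtual scene to the observed position.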
(26) Alternatively, the registration may be carried out by attaching tokens or markers on certain positions of the anatomy of interest or the patient. These tokens or markers may be tracked, e.g. by a camera. When the token or marker is moved (i.e. the pose of the anatomy is changed), the virtual patient adapts to the movement of the token or marker as well.
(27) Optionally, the patient position prescribing device 24 is further configured to generate image data of the adapted target position 42.
(28) The optional display device 26 is configured to display image data of the adapted target position 42 in an overlaid manner with the anatomy of interest 36 as graphical positioning information for alignment of the anatomy of interest 36 for the image acquisition.
(29) The image data of the adapted target position 42 may be visualized in several ways. In an example, the adapted target position (e.g. a 3D virtual model) is displayed with an opaque wireframe (or contour or outline) around it. In a further example, the adapted target position is an opaque surface model.
(30) In an example, the display device 26 is a head-wearable display 34, through which a user can view the anatomy of interest 36, wherein the adapted target position 42 is provided in an overlaid manner with the anatomy of interest 36 viewed through the head-wearable display 34.
(31) This has the advantage that the person preparing the patient for the acquisition is using his/her natural visual field.
(32) In a further example (not shown), the display device 26 is a monitor. The patient detecting device 22 is configured to provide a visual representation of the anatomy of interest in its current position. The adapted target position 42 is configured to be provided in an overlaid manner with the visual representation.
(33) The monitor may also be provided to show the patient from another perspective in addition to the head-wearable display. For example, the monitor may be placed at the operating device of the X-ray machine. A patient detecting device, such as a depth camera, may be attached to the X-ray machine at a fixed position. The user may have an occasional look at the monitor to assure proper patient positioning.
(34) The adapted target position 42 is provided for guiding in positioning the anatomy of interest 36 of the patient 15 relative to the medical imaging arrangement for image acquisition.
(35) It is noted that the display device 26 is only provided as an option. In a further example (not shown), the guiding system 14 is not provided with a display device. For example, the adapted target position 42 is determined and used to control a movable anatomy support (e.g. an imaging coil for MRI) to automatically re-arrange (re-align) the anatomy of interest for the image acquisition. In a further example, the adapted target position 42 is determined, and a voice command is provided for guiding in positioning the anatomy of interest, for example, 'move your forearm 2 cm to the left'. In such examples, the display of the adapted target position would not be necessary.
(37) As indicated above, the initial target position 40 is a virtual scene of a virtual patient in the correct position, or right pose. The initial target position 40 may be obtained by recording a correctly positioned patient (or model) with a camera (e.g. a 3D surface camera) and stored in a database. Hence, in the initial target position 40, the position of the virtual patient (or anatomy) may be fixed to a reference structure (e.g. an X-ray detector). Although the initial target position 40 may be used for guiding purposes, in practice this leads to some complications. For example, the patient would have to be positioned exactly like the virtual patient, which may not be necessary for the acquisition.
(38) For example, the initial target position 40 comprises a correctly posed virtual limb 44, whose position is fixed relative to the medical imaging detector 18.
(39) The adapted target position 42 (indicated with dotted lines) is brought to better match the current spatial information of the anatomy of interest 36, e.g. the patient's limb 46.
(40) In an example, the adapted target position 42 is determined by adapting the position of the initial target position 40 based on the current position of the anatomy of interest 36.
(41) The term position relates to the translational position (or linear position) as well as the angular position (or orientation or rotation) of a virtual patient, which may include one or a plurality of anatomies. For example, the correctly posed virtual limb 44 may be rotated around an axis perpendicular to the medical imaging detector 18 (e.g. an X-ray detector).
(42) In a further example, the adapted target position 42 is determined by adapting the pose of the initial target position 40 based on the current pose of the anatomy of interest 36.
(43) The term pose relates to the relative position between the anatomies inside a virtual patient. For example, the pose of the correctly posed virtual limb 44 may relate to the right-angle flexion at the ankle joint.
(44) It is noted that only those aspects of the patient pose being relevant for the acquisition are preserved (i.e. unchanged), whereas irrelevant aspects are registered (i.e. adapted) with the patient.
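The preserve-relevant/adapt-irrelevant rule described above can be sketched as a simple filter over pose parameters; the parameter names, the relevance set, and the values below are purely illustrative.

```python
# Hypothetical set of clinically relevant pose parameters for one examination
# type (e.g. the ankle flexion that defines the projection). Everything else
# (in-plane rotation, translation, ...) is irrelevant for the acquisition and
# is registered (adapted) with the detected patient.
RELEVANT = {"ankle_flexion_deg"}

def adapt_target(initial, detected):
    """Preserve relevant parameters of the initial target position; adapt
    irrelevant ones to the detected current spatial information."""
    adapted = {}
    for name, value in initial.items():
        adapted[name] = value if name in RELEVANT else detected[name]
    return adapted

initial  = {"ankle_flexion_deg": 90, "inplane_rotation_deg": 0,  "x_mm": 0}
detected = {"ankle_flexion_deg": 70, "inplane_rotation_deg": 25, "x_mm": 40}
print(adapt_target(initial, detected))
# {'ankle_flexion_deg': 90, 'inplane_rotation_deg': 25, 'x_mm': 40}
```

Here the prescribed 90-degree flexion is kept (the patient still has to achieve it), while the limb's in-plane rotation and translation follow the patient.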
(45) Taking the limb as an example, the right-angle flexion at the ankle joint is relevant for the acquisition and is therefore preserved, whereas the rotation of the limb around an axis perpendicular to the medical imaging detector 18 is irrelevant and is therefore registered (adapted) with the patient.
(47) In a further example, the adapted target position 42 is determined by adapting the size of the initial target position 40 based on the current size of the anatomy of interest 36.
(48) In other words, the adapted target position 42 may be obtained with some scaling parameters based on the size of the detected anatomy of interest 36. For example, the adapted target position of a limb may be rendered larger in the case of a large patient.
(49) This allows the positioning information to better represent patients with different shapes, such as tall or small patients, stout or slim patients. Even for the same patient, changes to patient geometry are likely to occur as their weight changes. Therefore, when the virtual patient and the real patient agree in the size of the anatomy of interest (e.g. the patient's limb 46), the alignment of the anatomy of interest is facilitated.
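A minimal sketch of such size adaptation, assuming a uniform scale factor derived from a measured reference length (e.g. limb length in the stored scene versus the detected limb); names and units are illustrative.

```python
def scale_model(vertices, ref_length_mm, detected_length_mm):
    """Uniformly scale the vertices of the virtual model so that its size
    matches the detected anatomy of interest, e.g. rendering the virtual
    limb larger for a larger patient."""
    s = detected_length_mm / ref_length_mm  # illustrative scaling parameter
    return [(s * x, s * y, s * z) for (x, y, z) in vertices]
```

A real system might use separate scale factors per axis or a statistical shape model instead of a single uniform factor.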
(50) As a further option, the image data of the adapted target position 42 is partly opaque such that the relative position in depth between the adapted target position 42 and the anatomy of interest 36 is perceptible. This provides occlusion, which is a visual cue for an observer to estimate depth, especially the relative depth of two objects (the adapted target position 42 and the anatomy of interest 36) to each other, even in monoscopic view. Here, the patient pose is determined in the scene and thus the depth information is available for it. This allows scene rendering in which the patient surface and the model surface occlude each other locally, according to their relative positions. Such rendering allows more precise visual guidance for positioning than transparent augmented reality alone. In addition, it does not rely on stereoscopic depth perception as the only depth cue. This is an important advantage, since a portion of the population has no or impaired stereoscopic vision.
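A minimal sketch of such occlusion-aware compositing, assuming per-pixel depth maps are available for the patient surface and the virtual model; the data layout (nested lists, single "pixel" values) is an illustrative simplification of real renderer behavior.

```python
def composite_with_occlusion(patient_depth, model_depth, patient_px, model_px):
    """Per-pixel compositing: whichever surface (real patient or virtual
    model) is closer to the viewer occludes the other, providing the
    monoscopic depth cue described above.

    Depth maps are same-sized 2D lists of distances from the viewer;
    None in model_depth means the model does not cover that pixel.
    """
    out = []
    for r in range(len(patient_depth)):
        row = []
        for c in range(len(patient_depth[r])):
            md = model_depth[r][c]
            if md is not None and md < patient_depth[r][c]:
                row.append(model_px)    # model in front -> model occludes patient
            else:
                row.append(patient_px)  # patient in front, or no model here
        out.append(row)
    return out
```

A production renderer would achieve the same effect with a depth buffer and partial opacity rather than a hard per-pixel switch, but the depth comparison is the same.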
(51) As a further option, the medical imaging arrangement 10 is further configured to provide and display a graphical target source representation 48 comprising augmented reality information indicating a target position of the medical imaging source 16 for the image acquisition, and/or a graphical target detector representation (not further shown) comprising augmented reality information indicating a target position of the medical imaging detector 18 for the image acquisition. For example, the graphical target source representation comprises an arrow 50, shown as part of the virtual scene describing the correct position of the medical imaging source 16.
(52) Although not illustrated, the graphical target source representation 48 may also comprise a 3D virtual model of the medical imaging source 16 (e.g. an X-ray imaging source).
(53) A method for guiding positioning of an anatomy of interest of a patient comprises the following steps: in step a), an anatomy of interest of a patient is detected for a medical image acquisition; in step b), current spatial information of the detected anatomy of interest is detected; in step c), an initial target position is provided for the detected anatomy of interest; in step d), the initial target position is registered with the current spatial information; in step e), an adapted target position is determined by adapting the initial target position based on the current spatial information; in an optional step f), image data of the adapted target position is generated; and in an optional step g), the generated image data is displayed in an overlaid manner with the anatomy of interest.
(54) Step a) may also be referred to as the detecting or monitoring step.
(55) Step b) may also be referred to as the tracking step, which relates to the spatial information estimation (position estimation, pose estimation, and/or size estimation) of the anatomy of interest.
(56) Steps c) and d) together may also be referred to as the registration step.
(57) Step e) may also be referred to as the adaptation step.
(58) Step g) may also be referred to as the visualization and data representation step.
(59) In an example, the adapted target position comprises augmented reality information. The augmented reality information comprises a 3D virtual model of the detected anatomy of interest.
(62) In another exemplary embodiment of the present invention, a computer program or a computer program element is provided that is characterized by being adapted to execute the method steps of the method according to one of the preceding embodiments, on an appropriate system.
(63) The computer program element might therefore be stored on a computing unit, which might also be part of an embodiment of the present invention. This computing unit may be adapted to perform or induce a performing of the steps of the method described above. Moreover, it may be adapted to operate the components of the above described apparatus. The computing unit can be adapted to operate automatically and/or to execute the orders of a user. A computer program may be loaded into a working memory of a data processor. The data processor may thus be equipped to carry out the method of the invention.
(64) This exemplary embodiment of the invention covers both a computer program that uses the invention right from the beginning and a computer program that, by means of an update, turns an existing program into a program that uses the invention.
(65) Further on, the computer program element might be able to provide all necessary steps to fulfil the procedure of an exemplary embodiment of the method as described above.
(66) According to a further exemplary embodiment of the present invention, a computer readable medium, such as a CD-ROM, is presented wherein the computer readable medium has a computer program element stored on it which computer program element is described by the preceding section. A computer program may be stored and/or distributed on a suitable medium, such as an optical storage medium or a solid state medium supplied together with or as part of other hardware, but may also be distributed in other forms, such as via the internet or other wired or wireless telecommunication systems. However, the computer program may also be presented over a network like the World Wide Web and can be downloaded into the working memory of a data processor from such a network. According to a further exemplary embodiment of the present invention, a medium for making a computer program element available for downloading is provided, which computer program element is arranged to perform a method according to one of the previously described embodiments of the invention.
(67) It has to be noted that embodiments of the invention are described with reference to different subject matters. In particular, some embodiments are described with reference to method type claims whereas other embodiments are described with reference to the device type claims. However, a person skilled in the art will gather from the above and the following description that, unless otherwise notified, in addition to any combination of features belonging to one type of subject matter also any combination between features relating to different subject matters is considered to be disclosed with this application. However, all features can be combined providing synergetic effects that are more than the simple summation of the features.
(68) While the invention has been illustrated and described in detail in the drawings and foregoing description, such illustration and description are to be considered illustrative or exemplary and not restrictive. The invention is not limited to the disclosed embodiments. Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing a claimed invention, from a study of the drawings, the disclosure, and the dependent claims.
(69) In the claims, the word 'comprising' does not exclude other elements or steps, and the indefinite article 'a' or 'an' does not exclude a plurality. A single processor or other unit may fulfil the functions of several items recited in the claims. The mere fact that certain measures are recited in mutually different dependent claims does not indicate that a combination of these measures cannot be used to advantage. Any reference signs in the claims should not be construed as limiting the scope.