IMAGE PROCESSING DEVICE, IMAGE PROCESSING METHOD, AND IMAGE PROCESSING PROGRAM
20230096694 · 2023-03-30
Abstract
A processor derives a first composition image representing a first composition included in a subject including three or more compositions from at least one radiation image acquired by imaging the subject, derives at least one removal radiation image obtained by removing the first composition from the at least one radiation image by using the first composition image, derives a plurality of other composition images representing a plurality of other compositions different from the first composition included in the subject by using the at least one removal radiation image, and derives a composite image obtained by synthesizing the first composition image and the plurality of other composition images at a predetermined ratio.
Claims
1. An image processing device comprising: at least one processor, wherein the processor derives a first composition image representing a first composition included in a subject including three or more compositions from at least one radiation image acquired by imaging the subject, derives at least one removal radiation image obtained by removing the first composition from the at least one radiation image by using the first composition image, derives a plurality of other composition images representing a plurality of other compositions different from the first composition included in the subject by using the at least one removal radiation image, and derives a composite image obtained by synthesizing the first composition image and the plurality of other composition images at a predetermined ratio.
2. The image processing device according to claim 1, wherein the processor acquires a first radiation image and a second radiation image acquired by imaging the subject with radiation having different energy distributions, and derives the first composition image by performing weighting subtraction on the first radiation image and the second radiation image.
3. The image processing device according to claim 1, wherein the processor acquires a first radiation image and a second radiation image acquired by imaging the subject with radiation having different energy distributions, and derives the first composition image from the first radiation image or the second radiation image by using a derivation model that has been subjected to machine learning to derive the first composition image from a radiation image.
4. The image processing device according to claim 2, wherein the processor derives a first removal radiation image and a second removal radiation image obtained by removing the first composition from the first radiation image and the second radiation image by using the first composition image, and derives the plurality of other composition images by performing weighting subtraction on the first removal radiation image and the second removal radiation image.
5. The image processing device according to claim 1, wherein the processor derives the first composition image from one radiation image by using a first derivation model that has been subjected to machine learning to derive the first composition image from the radiation image, derives at least one removal radiation image obtained by removing the first composition from the at least one radiation image by using the first composition image, and derives the plurality of other composition images from one removal radiation image by using a second derivation model that has been subjected to machine learning to derive the plurality of other composition images from the removal radiation image.
6. The image processing device according to claim 1, wherein the processor is able to change the predetermined ratio.
7. The image processing device according to claim 1, wherein the first composition is an artificial object, and the other compositions are a bone part and a soft part.
8. The image processing device according to claim 1, wherein the first composition is an artificial object, and the other compositions are a bone part, fat, and muscle.
9. An image processing method comprising: deriving a first composition image representing a first composition included in a subject including three or more compositions from at least one radiation image acquired by imaging the subject; deriving at least one removal radiation image obtained by removing the first composition from the at least one radiation image by using the first composition image; deriving a plurality of other composition images representing a plurality of other compositions different from the first composition included in the subject by using the at least one removal radiation image; and deriving a composite image obtained by synthesizing the first composition image and the plurality of other composition images at a predetermined ratio.
10. A non-transitory computer-readable storage medium that stores an image processing program causing a computer to execute: a procedure of deriving a first composition image representing a first composition included in a subject including three or more compositions from at least one radiation image acquired by imaging the subject; a procedure of deriving at least one removal radiation image obtained by removing the first composition from the at least one radiation image by using the first composition image; a procedure of deriving a plurality of other composition images representing a plurality of other compositions different from the first composition included in the subject by using the at least one removal radiation image; and a procedure of deriving a composite image obtained by synthesizing the first composition image and the plurality of other composition images at a predetermined ratio.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
[0031] Hereinafter, embodiments of the present disclosure will be described with reference to the drawings.
[0032] The imaging apparatus 1 is an imaging apparatus that performs energy subtraction by a so-called one-shot method, in which radiation, such as X-rays, emitted from a radiation source 3 and transmitted through a subject H is emitted to a first radiation detector 5 and a second radiation detector 6 with the energy distribution changed between the two detectors. During imaging, as shown in the drawings, the radiation transmitted through the subject H reaches the first radiation detector 5 and then the second radiation detector 6.
[0033] As a result, in the first radiation detector 5, a first radiation image G1 of the subject H by low-energy radiation including so-called soft rays is acquired. In addition, in the second radiation detector 6, a second radiation image G2 of the subject H by high-energy radiation from which the soft rays are removed is acquired. The first and second radiation images are input to the image processing device 10.
[0034] The first and second radiation detectors 5 and 6 can perform recording and reading-out of the radiation image repeatedly. A so-called direct-type radiation detector that directly receives irradiation with the radiation and generates an electric charge may be used, or a so-called indirect-type radiation detector that converts the radiation into visible light and then converts the visible light into an electric charge signal may be used. In addition, as a method of reading out a radiation image signal, it is desirable to use a so-called thin film transistor (TFT) readout method in which the radiation image signal is read out by turning a TFT switch on and off, or a so-called optical readout method in which the radiation image signal is read out by irradiation with read out light. However, other methods may also be used without being limited to these methods.
[0035] Then, the image processing device according to the first embodiment will be described. First, a hardware configuration of the image processing device according to the first embodiment will be described with reference to
[0036] The storage 13 is realized by a hard disk drive (HDD), a solid state drive (SSD), a flash memory, and the like. An image processing program 12 installed in the image processing device 10 is stored in the storage 13 as a storage medium. The CPU 11 reads out the image processing program 12 from the storage 13, expands the read out image processing program 12 to the memory 16, and executes the expanded image processing program 12.
[0037] Note that the image processing program 12 may be stored in a storage device of a server computer connected to a network or in a network storage in a state of being accessible from the outside, and may be downloaded and installed in the computer that configures the image processing device 10 in response to a request. Alternatively, the image processing program 12 may be distributed in a state of being recorded on a recording medium, such as a digital versatile disc (DVD) or a compact disc read only memory (CD-ROM), and installed in the computer that configures the image processing device 10 from the recording medium.
[0038] Then, a functional configuration of the image processing device according to the first embodiment will be described.
[0039] The image acquisition unit 21 acquires, for example, the first radiation image G1 and the second radiation image G2 which are the front images of the periphery of the crotch of the subject H from the first and second radiation detectors 5 and 6 by causing the imaging apparatus 1 to perform energy subtraction imaging of the subject H. In a case in which the first radiation image G1 and the second radiation image G2 are acquired, imaging conditions, such as an imaging dose, a radiation quality, a tube voltage, a source image receptor distance (SID) which is a distance between the radiation source 3 and surfaces of the first and second radiation detectors 5 and 6, a source object distance (SOD) which is a distance between the radiation source 3 and a surface of the subject H, and the presence or absence of a scattered ray removal grid are set.
[0040] The SOD and the SID are used to calculate a body thickness distribution as described below. It is preferable that the SOD be acquired by, for example, a time of flight (TOF) camera. It is preferable that the SID be acquired by, for example, a potentiometer, an ultrasound range finder, a laser range finder, or the like.
[0041] The imaging conditions need only be set by an operator via the input device 15.
[0042] Here, each of the first radiation image G1 and the second radiation image G2 includes a scattered ray component based on the radiation scattered in the subject H in addition to a primary ray component of the radiation transmitted through the subject H. Therefore, the image acquisition unit 21 removes the scattered ray component from the first radiation image G1 and the second radiation image G2. For example, the image acquisition unit 21 may remove the scattered ray component from the first radiation image G1 and the second radiation image G2 by applying a method disclosed in JP2015-043959A. In a case in which a method disclosed in JP2015-043959A or the like is used, the derivation of the body thickness distribution of the subject H and the derivation of the scattered ray component for removing the scattered ray component are performed at the same time. Note that the removal of the scattered ray component may be performed by the first derivation unit 22 described below.
[0043] Hereinafter, the removal of the scattered ray component from the first radiation image G1 will be described, but the removal of the scattered ray component from the second radiation image G2 can also be performed in the same manner. First, the image acquisition unit 21 acquires a virtual model of the subject H having an initial body thickness distribution T0(x,y). The virtual model is data virtually representing the subject H of which a body thickness in accordance with the initial body thickness distribution T0(x,y) is associated with a coordinate position of each pixel of the first radiation image G1. Note that the virtual model of the subject H having the initial body thickness distribution T0(x,y) may be stored in the storage 13 of the image processing device 10 in advance. In addition, the image acquisition unit 21 may calculate a body thickness distribution T(x,y) of the subject H based on the SID and the SOD included in the imaging conditions. In this case, the initial body thickness distribution T0(x,y) can be obtained by subtracting the SOD from the SID.
[0044] Next, based on the virtual model, the image acquisition unit 21 generates an estimated image that estimates the first radiation image G1 obtained by imaging the subject H, by synthesizing an estimated primary ray image, which estimates a primary ray image obtained by imaging the virtual model, and an estimated scattered ray image, which estimates a scattered ray image obtained by imaging the virtual model.
[0045] Next, the image acquisition unit 21 corrects the initial body thickness distribution T0(x,y) of the virtual model such that a difference between the estimated image and the first radiation image G1 is small. The image acquisition unit 21 repeatedly performs the generation of the estimated image and the correction of the body thickness distribution until the difference between the estimated image and the first radiation image G1 satisfies a predetermined termination condition. The image acquisition unit 21 derives the body thickness distribution in a case in which the termination condition is satisfied as the body thickness distribution T(x,y) of the subject H. In addition, the image acquisition unit 21 removes the scattered ray component included in the first radiation image G1 by subtracting the scattered ray component in a case in which the termination condition is satisfied from the first radiation image G1. Note that, in the following description, it is regarded that the scattered ray component is removed from the first radiation image G1 and the second radiation image G2.
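As an illustration (not part of the disclosed embodiment), the iterative estimation of paragraphs [0044] and [0045] can be sketched as follows. This is a minimal toy model under stated assumptions: the attenuation coefficient `MU`, the unattenuated dose `I0`, the scatter fraction `K_SCATTER`, and the box-blur scatter kernel are all illustrative stand-ins, not the actual imaging physics.

```python
import numpy as np

MU = 0.02        # assumed effective attenuation coefficient (1/mm)
I0 = 1000.0      # assumed dose reaching the detector without a subject
K_SCATTER = 0.2  # assumed scatter fraction

def box_blur(img, r=2):
    """Crude separable box blur standing in for a scatter kernel."""
    kernel = np.ones(2 * r + 1) / (2 * r + 1)
    for axis in (0, 1):
        img = np.apply_along_axis(
            lambda m: np.convolve(m, kernel, mode="same"), axis, img)
    return img

def estimate_image(t):
    """Estimated image = estimated primary ray image + estimated scattered ray image."""
    primary = I0 * np.exp(-MU * t)
    scatter = K_SCATTER * box_blur(primary)
    return primary + scatter, scatter

def derive_body_thickness(g1, t0, n_iter=30):
    """Repeatedly correct the body thickness distribution so that the
    estimated image approaches the measured image G1."""
    t = t0.astype(float).copy()
    for _ in range(n_iter):
        est, scatter = estimate_image(t)
        # Where the estimate is too bright, the subject must be thicker.
        t += (est - g1) / (MU * np.maximum(est, 1.0))
    return t, scatter
```

Once the loop terminates, the scattered ray component can be removed as `g1 - scatter`, mirroring the subtraction described in paragraph [0045].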
[0046] The first derivation unit 22 derives an artificial object image Ga that represents a region of an artificial object included in the subject H from the first radiation image G1 or the second radiation image G2 acquired by the image acquisition unit 21. Examples of the artificial object include a metal embedded in the subject H, such as a screw for connecting bones, a catheter inserted into the subject H, a surgical tool, such as gauze forgotten in the body after surgery, and a cast attached to the outside of the subject H. In the present embodiment, the artificial object image Ga representing the metal, such as the screw, attached to a vertebra of the subject H in order to fix the vertebra is derived.
[0048] Therefore, the first derivation unit 22 detects a region in which the brightness value is saturated in the first radiation image G1 or the second radiation image G2 as an artificial object region. In the present embodiment, it is regarded that the artificial object region is detected in the first radiation image G1. The first derivation unit 22 removes the detected artificial object region from the first radiation image G1, interpolates the removed artificial object region by the pixel values of the surrounding regions, and derives a first interpolated radiation image Gh1. Then, the first derivation unit 22 derives the artificial object image Ga, obtained by extracting only the artificial object included in the first radiation image G1, by deriving a difference between the corresponding pixels of the first radiation image G1 and the first interpolated radiation image Gh1.
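As an illustrative sketch (the saturation value of 4095 for a 12-bit detector and the neighbor-propagation interpolation are assumptions for the example, not the disclosed implementation), the saturation-based derivation of the artificial object image Ga can be written as:

```python
import numpy as np

SATURATION = 4095.0  # assumed saturation brightness of a 12-bit detector

def fill_from_surroundings(img, mask):
    """Replace masked pixels by values propagated in from surrounding pixels."""
    out = img.astype(float).copy()
    known = ~mask
    while not known.all():
        acc = np.zeros_like(out)
        cnt = np.zeros_like(out)
        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            k = np.roll(known, (dy, dx), (0, 1))
            acc += np.where(k, np.roll(out, (dy, dx), (0, 1)), 0.0)
            cnt += k
        newly = ~known & (cnt > 0)
        out[newly] = acc[newly] / cnt[newly]
        known |= newly
    return out

def derive_artifact_image(g1, sat=SATURATION):
    """Detect the saturated artificial object region, interpolate it away
    (first interpolated radiation image Gh1), and take the pixel difference."""
    mask = g1 >= sat
    gh1 = fill_from_surroundings(g1, mask)
    ga = g1.astype(float) - gh1
    return ga, mask
```

The returned `ga` is nonzero only in the artificial object region, matching the description of the artifact-only image.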
[0049] Note that it is also possible to extract the artificial object region by using a difference in the absorption of radiation of different energies by the artificial object. In this case, the first derivation unit 22 derives reaching doses IH_0 and IL_0 of the radiation in a direct radiation region (that is, a region of the radiation detectors 5 and 6 that is irradiated with the radiation without the radiation being transmitted through the subject H) in the first radiation image G1 and the second radiation image G2. In addition, reaching doses IH_h and IL_h of the radiation in the subject region in the first radiation image G1 and the second radiation image G2 are derived.
[0050] Moreover, the first derivation unit 22 derives a radiation absorption amount CH by the subject H obtained from the first radiation image G1 by CH=IH_0−IH_h, and derives a radiation absorption amount CL by the subject H obtained from the second radiation image G2 by CL=IL_0−IL_h. Note that, for the reaching dose, the pixel values of the first and second radiation images G1 and G2 are used.
[0051] Here, a ratio CL/CH of the radiation absorption amount between the second radiation image G2 and the first radiation image G1 is larger in the metal than in the tissue of the human body. Therefore, the first derivation unit 22 extracts, as the artificial object region, a region in the first radiation image G1 or the second radiation image G2 in which the ratio CL/CH of the radiation absorption amount is larger than a predetermined threshold value Th1. Note that the threshold value Th1 may be a fixed value, or may be determined in accordance with the imaging conditions or the body thickness of the subject H. In this case, the ratio of the radiation absorption amount of the artificial object and the ratio of the radiation absorption amount of the bone part may be derived in advance in accordance with the body thickness, and an intermediate value thereof may be used as the threshold value Th1.
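The ratio-based extraction of paragraphs [0050] and [0051] can be sketched as follows; the specific doses and the threshold value in the usage below are invented for the example, and the threshold Th1 would in practice be chosen as described above.

```python
import numpy as np

def extract_artifact_region(g1, g2, ih0, il0, th1):
    """Flag pixels whose absorption ratio CL/CH exceeds the threshold Th1.

    CH = IH_0 - IH_h uses the pixel values of the first radiation image G1;
    CL = IL_0 - IL_h uses the pixel values of the second radiation image G2.
    """
    ch = ih0 - g1.astype(float)  # radiation absorption amount from G1
    cl = il0 - g2.astype(float)  # radiation absorption amount from G2
    ratio = np.divide(cl, ch, out=np.zeros_like(cl), where=ch > 0)
    return ratio > th1
```

For example, with direct-region doses of 1000 and 900 and a threshold of 0.7, a strongly absorbing metal pixel yields a ratio near 0.9 and is flagged, while ordinary tissue yields a ratio near 0.5 and is not.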
[0052] Moreover, also in this case, the first derivation unit 22 removes the extracted artificial object region from the first radiation image G1, interpolates the removed region by the pixel values of the surrounding regions to derive the first interpolated radiation image Gh1, and derives the artificial object image Ga, obtained by extracting only the artificial object included in the first radiation image G1, by deriving a difference between the corresponding pixels of the first radiation image G1 and the first interpolated radiation image Gh1.
[0053] Note that the first derivation unit 22 may derive the artificial object image Ga from the first radiation image G1 or the second radiation image G2 by using a derivation model that has been subjected to machine learning to derive the artificial object image Ga from the radiation image. In particular, since many artificial objects in the subject H have a specific shape, the artificial object image Ga can be derived from the first radiation image G1 or the second radiation image G2 by constructing the derivation model to extract a specific shaped region. For example, surgical gauze is woven with radiation absorption threads impregnated with a contrast medium, and in a case in which the surgical gauze is present in the body, the radiation absorption threads are included in the radiation image of the subject while having a characteristic shape. Therefore, by constructing the derivation model to extract the characteristic shape of the radiation absorption threads, it is possible to derive the artificial object image Ga representing the gauze from the radiation image.
[0054] In addition, the first derivation unit 22 may derive the artificial object image Ga obtained by extracting only the artificial object included in the first radiation image G1 and the second radiation image G2 by performing weighting subtraction between the corresponding pixels of the first radiation image G1 and the second radiation image G2, as shown in Expression (1). Note that, in Expression (1), μa is a weighting coefficient derived in accordance with the radiation attenuation coefficient of the metal at each radiation energy.
(x,y) are coordinates of each pixel of each image.
Ga(x,y)=G1(x,y)−μa×G2(x,y) (1)
[0055] The removal unit 23 derives a first removal radiation image Gr1 and a second removal radiation image Gr2 by removing the artificial object region from each of the first and second radiation images G1 and G2. Specifically, as shown in Expressions (2) and (3), the first removal radiation image Gr1 and the second removal radiation image Gr2 obtained by removing the artificial object region from the first radiation image G1 and the second radiation image G2 are derived by performing the weighting subtraction between the corresponding pixels of the first and second radiation images G1 and G2 and the artificial object image Ga. Note that α1(x,y) and α2(x,y) are weighting coefficients, and are set to values at which the artificial object region can be removed from the first radiation image G1 and the second radiation image G2. In addition, the weighting coefficients are set to 0 in a region outside the artificial object region.
Gr1(x,y)=G1(x,y)−α1(x,y)×Ga(x,y) (2)
Gr2(x,y)=G2(x,y)−α2(x,y)×Ga(x,y) (3)
[0057] The second derivation unit 24 derives a bone part image Gb obtained by extracting only the bone part of the subject H included in the first radiation image G1 and the second radiation image G2 and a soft part image Gs obtained by extracting only the soft part by performing the weighting subtraction between the corresponding pixels of the first removal radiation image Gr1 and the second removal radiation image Gr2, as shown in Expressions (4) and (5). Note that μb and μs in Expressions (4) and (5) are weighting coefficients derived in accordance with the radiation attenuation coefficients of the bone part and the soft part at each radiation energy.
[0058] (x,y) are coordinates of each pixel of each image.
Gb(x,y)=Gr1(x,y)−μb×Gr2(x,y) (4)
Gs(x,y)=Gr1(x,y)−μs×Gr2(x,y) (5)
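As a self-contained illustration of Expressions (2) through (5) (the numeric coefficient values are assumptions for the example; the actual μb and μs would be derived from the attenuation coefficients at each radiation energy):

```python
import numpy as np

MU_B, MU_S = 0.7, 0.4  # assumed weighting coefficients for bone and soft tissue

def remove_artifact(g1, g2, ga, alpha1, alpha2):
    """Expressions (2) and (3): subtract the artificial object image Ga,
    with per-pixel weights that are 0 outside the artificial object region."""
    gr1 = g1 - alpha1 * ga
    gr2 = g2 - alpha2 * ga
    return gr1, gr2

def separate_bone_soft(gr1, gr2, mu_b=MU_B, mu_s=MU_S):
    """Expressions (4) and (5): weighting subtraction of the removal images."""
    gb = gr1 - mu_b * gr2  # bone part image
    gs = gr1 - mu_s * gr2  # soft part image
    return gb, gs
```

Because the α weights vanish outside the artificial object region, the removal step leaves all other pixels of G1 and G2 unchanged.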
[0059] Note that the second derivation unit 24 may derive the bone part image Gb and the soft part image Gs from the first removal radiation image Gr1 and the second removal radiation image Gr2 by using a derivation model that has been subjected to machine learning to derive the bone part image Gb and the soft part image Gs from the first removal radiation image Gr1 and the second removal radiation image Gr2. In this case, the derivation model that derives the bone part image Gb and the soft part image Gs from the first removal radiation image Gr1 and the second removal radiation image Gr2 can be constructed by training a neural network using teacher data including the first and second radiation images G1 and G2 that do not include the artificial object acquired by the energy subtraction imaging, and the bone part image and the soft part image derived from the first and second radiation images G1 and G2 that do not include the artificial object by the energy subtraction processing.
[0060] The synthesis unit 25 derives a composite image GC0 obtained by synthesizing the artificial object image Ga, the bone part image Gb, and the soft part image Gs at a predetermined ratio. In the present embodiment, the predetermined ratio can be changed in accordance with the purpose of imaging in accordance with an imaging site. For example, in an orthopedic system, in some cases, it is observed whether or not a fixing tool is loose after performing surgery of fixing the bone, such as a thoracic vertebra, a lumbar vertebra, and a femur. For such a purpose of imaging, the synthesis unit 25 derives the composite image GC0 by adding the artificial object image Ga, the bone part image Gb, and the soft part image Gs at a ratio of artificial object image Ga:bone part image Gb:soft part image Gs=100%:100%:0% such that the soft part does not interfere with the fixing tool. In this case, as shown in Expression (6), the composite image GC0 is derived by performing weighting addition between the pixels of the artificial object image Ga, the bone part image Gb, and the soft part image Gs.
GC0(x,y)=1×Ga(x,y)+1×Gb(x,y)+0×Gs(x,y) (6)
[0061] Note that, by lowering a synthesis ratio of the artificial object image Ga, it is possible to reduce the glare of the composite image GC0 caused by the overexposure of the artificial object region, particularly in a case in which the artificial object is a metal. In this case, the synthesis unit 25 need only derive the composite image GC0 by adding the artificial object image Ga, the bone part image Gb, and the soft part image Gs at a ratio of artificial object image Ga:bone part image Gb:soft part image Gs=20%:100%:0%. Note that, in this case, it is preferable that the synthesis ratio can be changed by using the input device 15.
[0062] In addition, after performing surgery on the abdomen, in some cases, it is confirmed whether or not a surgical tool, such as gauze used in the surgery, remains in the body. In such a case, in order to prevent the surgical tool from being difficult to see due to the bone part, the synthesis unit 25 derives the composite image GC0 by adding the artificial object image Ga, the bone part image Gb, and the soft part image Gs at a ratio of artificial object image Ga:bone part image Gb:soft part image Gs=100%:0%:100%. In addition, in order to make it easier to see the surgical tool while grasping a positional relationship of the bones, the composite image GC0 may be derived at a ratio of artificial object image Ga:bone part image Gb:soft part image Gs=100%:20%:100%. Further, in order to enhance the surgical tool, the synthesis ratios of the bone part image Gb and the soft part image Gs may be made relatively low with respect to the artificial object image Ga. For example, the composite image GC0 may be derived at a ratio of artificial object image Ga:bone part image Gb:soft part image Gs=100%:10%:50%. In this case, as shown in Expression (7), the composite image GC0 is derived by performing the weighting addition between the pixels of the artificial object image Ga, the bone part image Gb, and the soft part image Gs.
GC0(x,y)=1×Ga(x,y)+0.1×Gb(x,y)+0.5×Gs(x,y) (7)
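The weighting addition of Expressions (6) and (7) can be sketched as a single function whose ratio tuple corresponds to the percentages discussed above:

```python
import numpy as np

def synthesize(ga, gb, gs, ratios):
    """Expressions (6)/(7): weighting addition of the artificial object image,
    the bone part image, and the soft part image at a chosen synthesis ratio."""
    ra, rb, rs = ratios
    return ra * ga + rb * gb + rs * gs
```

For example, `synthesize(ga, gb, gs, (1.0, 1.0, 0.0))` reproduces Expression (6) for checking a bone-fixing tool, and `synthesize(ga, gb, gs, (1.0, 0.1, 0.5))` reproduces Expression (7) for enhancing a surgical tool; changing the tuple corresponds to changing the predetermined ratio via the input device 15.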
[0063] In addition, in some cases, a lesion, such as a mass in a lung field, is observed for the subject in which the artificial object is embedded. In this case, the lesion can be more easily observed by using only the soft part image Gs that does not include the bone part and the artificial object. Therefore, the synthesis unit 25 derives the composite image GC0 at a ratio of artificial object image Ga:bone part image Gb:soft part image Gs=0%:0%:100%. Note that the bone part image Gb may be included to the extent that it does not interfere with the interpretation of the lesion such that a positional relationship between the lesion and the bone can be grasped. In this case, the synthesis unit 25 need only derive the composite image GC0 at a ratio of artificial object image Ga:bone part image Gb:soft part image Gs=0%:20%:100%.
[0064] In addition, in some cases, it is confirmed whether or not the catheter inserted into the trachea is at a correct position in the trachea. In such a case, in order to prevent the catheter from being difficult to see due to the bone part, the synthesis unit 25 derives the composite image GC0 at a ratio of artificial object image Ga:bone part image Gb:soft part image Gs=100%:0%:100%. In addition, in order to make it easier to see the catheter while grasping a positional relationship of the bones, the composite image GC0 may be derived at a ratio of artificial object image Ga:bone part image Gb:soft part image Gs=100%:20%:100%.
[0065] The display controller 26 displays the composite image GC0 on the display 14.
[0066] Then, processing performed in the first embodiment will be described.
[0067] Subsequently, the second derivation unit 24 derives the bone part image Gb obtained by extracting only the bone part of the subject H included in the first radiation image G1 and the second radiation image G2 and the soft part image Gs obtained by extracting only the soft part by performing the weighting subtraction between the corresponding pixels, on the first removal radiation image Gr1 and the second removal radiation image Gr2 (step ST4). Then, the synthesis unit 25 derives the composite image GC0 obtained by synthesizing the artificial object image Ga, the bone part image Gb, and the soft part image Gs at the predetermined ratio (step ST5). Moreover, the display controller 26 displays the composite image GC0 on the display 14 (step ST6), and the processing is terminated.
[0068] As described above, in the first embodiment, since the composite image GC0 is derived by synthesizing the images of the compositions at the predetermined ratio, a desired composition in the composite image GC0 can be easily visually recognized.
[0069] Then, a second embodiment of the present disclosure will be described.
[0070] The third derivation unit 27 separates a muscle tissue and a fat tissue in the soft tissue of the subject H by using a difference in the energy characteristics of the muscle tissue and the fat tissue, and derives a muscle image Gm and a fat image Gf. As shown in the drawings, the attenuation characteristics of the muscle tissue and the fat tissue with respect to the radiation energy differ from each other.
[0071] Therefore, the third derivation unit 27 separates the muscle tissue and the fat tissue from the soft part image Gs by using the difference in the energy characteristics of the muscle tissue and the fat tissue described above, and derives the muscle image and the fat image from the soft part image Gs.
[0072] Note that a specific method by which the third derivation unit 27 separates muscle and fat from the soft part image Gs is not limited, but as an example, the third derivation unit 27 according to the present embodiment derives the muscle image from the soft part image Gs by Expression (8) and Expression (9). Specifically, first, the third derivation unit 27 derives a muscle percentage rm(x,y) at each pixel position (x,y) in the soft part image Gs by Expression (8). Note that, in Expression (8), μm is the weighting coefficient in accordance with the attenuation coefficient of the muscle tissue, and μf is the weighting coefficient in accordance with the attenuation coefficient of the fat tissue. T(x,y) is the body thickness of the subject H derived in a case in which the scattered ray component described above is removed. In addition, Δ(x,y) represents a concentration difference distribution. The concentration difference distribution is a distribution of a concentration change on the image, which is seen from a concentration obtained by making the radiation reach the first radiation detector 5 and the second radiation detector 6 without being transmitted through the subject H. The distribution of the concentration change on the image is calculated by subtracting the concentration of each pixel in the region of the subject H from the concentration of the direct radiation region in the soft part image Gs.
rm(x,y)={μf−Δ(x,y)/T(x,y)}/(μf−μm) (8)
[0073] Moreover, the third derivation unit 27 derives the muscle image Gm from the soft part image Gs by Expression (9). Note that x and y in Expression (9) are coordinates of each pixel of the muscle image Gm.
Gm(x,y)=rm(x,y)×Gs(x,y) (9)
[0074] Further, the third derivation unit 27 derives the fat image Gf from the soft part image Gs and the muscle image Gm by Expression (10). Note that x and y in Expression (10) are coordinates of each pixel of the fat image Gf.
Gf(x,y)=Gs(x,y)−Gm(x,y) (10)
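Expressions (8) through (10) can be sketched as follows; the coefficient values `MU_M` and `MU_F` are assumptions for the example (the actual values would follow from the attenuation coefficients of the muscle and fat tissue), and `delta` and `t` stand for the concentration difference distribution Δ(x,y) and the body thickness T(x,y).

```python
import numpy as np

MU_M = 0.021  # assumed weighting coefficient for muscle attenuation
MU_F = 0.018  # assumed weighting coefficient for fat attenuation

def separate_muscle_fat(gs, delta, t, mu_m=MU_M, mu_f=MU_F):
    """Derive the muscle image Gm and fat image Gf from the soft part image Gs."""
    rm = (mu_f - delta / t) / (mu_f - mu_m)  # Expression (8): muscle percentage
    gm = rm * gs                             # Expression (9): muscle image
    gf = gs - gm                             # Expression (10): fat image
    return gm, gf
```

By construction, Gm and Gf always sum back to the soft part image Gs, which is why the second embodiment excludes Gs itself from the synthesis targets.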
[0075] Note that the third derivation unit 27 may derive the muscle image Gm and the fat image Gf from the soft part image Gs by using a derivation model that has been subjected to machine learning to derive the muscle image Gm and the fat image Gf from the soft part image Gs. In this case, the derivation model that derives the muscle image Gm and the fat image Gf from the soft part image Gs can be constructed by training the neural network using teacher data including the soft part image Gs derived from the first and second radiation images G1 and G2 that do not include the artificial object acquired by the energy subtraction imaging, and the muscle image Gm and the fat image Gf derived from the soft part image Gs as described above.
[0076] In the second embodiment, the synthesis unit 25 derives the composite image GC0 obtained by synthesizing the artificial object image Ga, the bone part image Gb, the muscle image Gm, and the fat image Gf at the predetermined ratio. Note that the composite image GC0 obtained by synthesizing the muscle image Gm and the fat image Gf at a ratio of 100%:100% is the soft part image Gs. Therefore, in the second embodiment, the synthesis unit 25 excludes the soft part image Gs from a target of synthesizing.
[0077] Also in the second embodiment, the predetermined ratio need only be set in accordance with the purpose of imaging. For example, in a case of observing fat mass or muscle mass of the abdomen, the artificial object and the bone part interfere with the observation. In a case of observing the fat mass, the synthesis unit 25 need only derive the composite image GC0 by adding the artificial object image Ga, the bone part image Gb, the muscle image Gm, and the fat image Gf at a ratio of artificial object image Ga:bone part image Gb:muscle image Gm:fat image Gf=0%:0%:0%:100%.
[0078] In addition, in a case of observing the muscle mass, the synthesis unit 25 need only derive the composite image GC0 at a ratio of artificial object image Ga:bone part image Gb:muscle image Gm:fat image Gf=0%:0%:100%:0%. In addition, in a case of observing the fat mass, in order to grasp a positional relationship with the bone and the organ (mainly muscle), the composite image GC0 may be derived at a ratio of artificial object image Ga:bone part image Gb:muscle image Gm:fat image Gf=0%:10%:20%:100%.
[0079] In addition, in some cases, muscle that supports the bone is evaluated. For example, in a case in which muscle around the hip joint is well developed, dislocation of the hip joint is unlikely to occur, and thus muscle around the hip joint is evaluated in some cases. In this case, the synthesis unit 25 need only derive the composite image GC0 at a ratio of artificial object image Ga:bone part image Gb:muscle image Gm:fat image Gf=0%:100%:100%:0%. Alternatively, the muscle tissue can be seen more easily by lowering the synthesis ratio of the bone part image Gb, for example, by deriving the composite image GC0 at a ratio of artificial object image Ga:bone part image Gb:muscle image Gm:fat image Gf=0%:50%:100%:0%.
[0080] In addition, as the bone density decreases with aging, the contrast of the muscle becomes relatively larger than that of the bone. Therefore, it is preferable to suppress the contrast of the muscle such that both the bone and the muscle can be easily seen. In this case, the synthesis unit 25 need only derive the composite image GC0 at a ratio of artificial object image Ga:bone part image Gb:muscle image Gm:fat image Gf=0%:100%:50%:0%.
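The synthesis at the predetermined ratio described in paragraphs [0076] to [0080] can be sketched as a per-pixel weighted sum. The function name and the assumption that synthesis is a simple linear combination of the composition images are illustrative, since the embodiment specifies only the ratios.

```python
import numpy as np

def synthesize(Ga, Gb, Gm, Gf, ratios):
    """Synthesize the composition images at a predetermined ratio.

    Illustrative sketch; `ratios` gives percentages (ra, rb, rm, rf)
    for the artificial object image Ga, bone part image Gb, muscle
    image Gm, and fat image Gf, e.g. (0, 100, 50, 0) for the case in
    paragraph [0080].
    """
    ra, rb, rm, rf = (r / 100.0 for r in ratios)
    # per-pixel weighted sum of the four composition images
    return ra * Ga + rb * Gb + rm * Gm + rf * Gf
```

For example, `synthesize(Ga, Gb, Gm, Gf, (0, 0, 0, 100))` reproduces the fat-only composite of paragraph [0077], and `(0, 100, 100, 0)` the bone-and-muscle composite of paragraph [0079].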
[0081] Then, processing performed in the second embodiment will be described.
[0082] Subsequently, the second derivation unit 24 performs the weighting subtraction between the corresponding pixels of the first removal radiation image Gr1 and the second removal radiation image Gr2 to derive the bone part image Gb, obtained by extracting only the bone part of the subject H included in the first radiation image G1 and the second radiation image G2, and the soft part image Gs, obtained by extracting only the soft part (step ST14). Further, the third derivation unit 27 derives the muscle image Gm and the fat image Gf from the soft part image Gs (step ST15). Then, the synthesis unit 25 derives the composite image GC0 obtained by synthesizing the artificial object image Ga, the bone part image Gb, the muscle image Gm, and the fat image Gf at the predetermined ratio (step ST16). Moreover, the display controller 26 displays the composite image GC0 on the display 14 (step ST17), and the processing is terminated.
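The weighting subtraction of step ST14 can be sketched as follows under common energy subtraction assumptions. The coefficients `alpha` and `beta`, chosen so that the bone or the soft tissue respectively cancels between the two energy images, are hypothetical parameters whose values depend on the imaging conditions and are not given in the embodiment.

```python
import numpy as np

def weighting_subtraction(Gr1, Gr2, alpha, beta):
    """Derive bone part and soft part images by weighting subtraction.

    Gr1, Gr2 : first and second removal radiation images (the
               artificial object has already been removed).
    alpha    : hypothetical coefficient that cancels the bone,
               leaving the soft part image Gs.
    beta     : hypothetical coefficient that cancels the soft
               tissue, leaving the bone part image Gb.
    """
    Gs = Gr1 - alpha * Gr2  # soft part image
    Gb = Gr1 - beta * Gr2   # bone part image
    return Gb, Gs
```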
[0083] Note that, in each of the embodiments described above, in some cases, the artificial object image Ga, the bone part image Gb, the soft part image Gs, the muscle image Gm, and the fat image Gf are derived by using the derivation model. In this case, only one radiation image may be acquired by imaging. As only one radiation image, it is preferable to use the radiation image acquired from the radiation detector 5 in the imaging apparatus 1 shown in
[0084] In addition, in each of the embodiments described above, the first and second radiation images G1 and G2 are acquired by the one-shot method in a case in which the energy subtraction processing is performed, but the present disclosure is not limited to this. The first and second radiation images G1 and G2 may be acquired by a so-called two-shot method in which imaging is performed twice by using only one radiation detector. In a case of the two-shot method, there is a possibility that a position of the subject H included in the first radiation image G1 and the second radiation image G2 shifts due to a body movement of the subject H. Therefore, it is preferable to perform the processing according to the present embodiment after registration of the subject H between the first radiation image G1 and the second radiation image G2 is performed.
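The registration mentioned for the two-shot method can be sketched with phase correlation, which estimates a rigid translation between the two exposures. Real body movement may also involve rotation and deformation, so the translation-only model and the function name are simplifying assumptions of the sketch.

```python
import numpy as np

def estimate_shift(ref, moving):
    """Estimate the integer pixel shift between two radiation images.

    Minimal phase-correlation sketch (translation only). Returns the
    (dy, dx) by which `moving` should be rolled to align with `ref`.
    """
    # normalized cross-power spectrum of the two images
    F = np.fft.fft2(ref) * np.conj(np.fft.fft2(moving))
    corr = np.fft.ifft2(F / (np.abs(F) + 1e-12)).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # wrap shifts larger than half the image size to negative offsets
    if dy > ref.shape[0] // 2:
        dy -= ref.shape[0]
    if dx > ref.shape[1] // 2:
        dx -= ref.shape[1]
    return dy, dx
```

Applying `np.roll(moving, (dy, dx), axis=(0, 1))` then brings the two exposures into registration before the weighting subtraction is performed.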
[0085] In addition, in each of the embodiments described above, the composite image GC0 is derived by using the first and second radiation images acquired by the system that images the subject H by using the first and second radiation detectors 5 and 6, but the composite image GC0 may be derived from the first and second radiation images G1 and G2 acquired by using an accumulative phosphor sheet instead of the radiation detector. In this case, the first and second radiation images G1 and G2 need only be acquired by stacking two accumulative phosphor sheets, emitting the radiation transmitted through the subject H, accumulating and recording radiation image information of the subject H in each of the accumulative phosphor sheets, and photoelectrically reading the radiation image information from each of the accumulative phosphor sheets. Note that the two-shot method may also be used in a case in which the first and second radiation images G1 and G2 are acquired by using the accumulative phosphor sheet.
[0086] In addition, the radiation in each of the embodiments described above is not particularly limited, and α-rays or γ-rays can be used in addition to X-rays.
[0087] In addition, in each of the embodiments described above, various processors shown below can be used as the hardware structure of processing units that execute various pieces of processing, such as the image acquisition unit 21, the first derivation unit 22, the removal unit 23, the second derivation unit 24, the synthesis unit 25, the display controller 26, and the third derivation unit 27. As described above, the various processors include, in addition to the CPU that is a general-purpose processor which executes software (program) and functions as various processing units, a programmable logic device (PLD) that is a processor whose circuit configuration can be changed after manufacture, such as a field programmable gate array (FPGA), and a dedicated electric circuit that is a processor having a circuit configuration which is designed exclusively to execute specific processing, such as an application specific integrated circuit (ASIC).
[0088] One processing unit may be configured by one of these various processors, or may be configured by a combination of two or more processors of the same type or different types (for example, a combination of a plurality of FPGAs or a combination of the CPU and the FPGA). In addition, a plurality of the processing units may be configured by one processor.
[0089] As an example of configuring the plurality of processing units by one processor, first, as represented by a computer, such as a client and a server, there is an aspect in which one processor is configured by a combination of one or more CPUs and software and this processor functions as a plurality of processing units. Second, as represented by a system on chip (SoC) or the like, there is an aspect of using a processor that realizes the function of the entire system including the plurality of processing units by one integrated circuit (IC) chip. In this way, as the hardware structure, the various processing units are configured by using one or more of the various processors described above.
[0090] Moreover, as the hardware structure of these various processors, more specifically, it is possible to use an electrical circuit (circuitry) in which circuit elements, such as semiconductor elements, are combined.