Medical imaging apparatus, medical image processing apparatus, and image processing program
11819351 · 2023-11-21
CPC classification
A61B8/5223
HUMAN NECESSITIES
A61B5/055
HUMAN NECESSITIES
G06F18/217
PHYSICS
G06V10/454
PHYSICS
A61B6/5217
HUMAN NECESSITIES
A61B5/7275
HUMAN NECESSITIES
International classification
A61B6/00
HUMAN NECESSITIES
A61B5/00
HUMAN NECESSITIES
A61B5/055
HUMAN NECESSITIES
A61B8/00
HUMAN NECESSITIES
G06F18/21
PHYSICS
G06V10/44
PHYSICS
Abstract
To obtain a predictive model that provides a diagnostic prediction result with higher accuracy and high medical validity. A medical imaging apparatus includes an imaging unit that collects an image signal of an inspection target, and an image processing unit that generates first image data from the image signal and performs image processing of the first image data. The image processing unit includes a feature quantity extraction unit that extracts a first feature quantity from the first image data, a feature quantity abstraction unit that abstracts the first feature quantity to extract a second feature quantity, a feature quantity conversion unit that converts the second feature quantity into a third feature quantity extracted from second image data, and an identification unit that uses the converted third feature quantity to calculate a predetermined parameter value.
Claims
1. A medical imaging apparatus comprising: an imager that collects an image signal of an inspection target; and an image processor that generates first image data from the image signal and performs image processing of the first image data, wherein the image processor includes a feature quantity extractor that extracts a first feature quantity from the first image data, a feature quantity abstractor that uses a plurality of first feature quantities to extract a second feature quantity abstracted from the first feature quantities, a feature quantity converter that converts the second feature quantity into a third feature quantity extracted by second image data different in type from the first image data, an identifier that uses the third feature quantity converted by the feature quantity converter to calculate a predetermined parameter value capable of being determined from the second image data, wherein the feature quantity extractor includes a predictive model learned using the first image data acquired from a plurality of inspection targets, the feature quantity abstractor includes a predictive model learned by combining the plurality of first feature quantities, the feature quantity converter includes a feature quantity conversion model learned using a plurality of combinations of the second feature quantity and the third feature quantity, the identifier includes an identification model learned using a plurality of combinations of the third feature quantity and the parameter value, and wherein the feature quantity conversion model includes a model learned so that an error of a distance between feature quantities contributing more to identification accuracy in the second feature quantity and the third feature quantity mapped on a predetermined space is reduced by an error back propagation method using a predetermined error function.
2. The medical imaging apparatus according to claim 1, wherein the second image data is image data of a pathological image of the inspection target, and the third feature quantity includes a feature of the pathological image.
3. The medical imaging apparatus according to claim 1, wherein the image processor includes a patch processing unit that performs patch processing on image data, and the feature quantity extractor extracts the first feature quantity for each patch of the first image data processed by the patch processing unit.
4. The medical imaging apparatus according to claim 1, wherein the first image includes a plurality of images different in types of imaging apparatuses, imaging conditions, or image types, and the feature quantity extractor extracts the first feature quantity for each of the plurality of images.
5. The medical imaging apparatus according to claim 1, wherein at least one of the first image data and the second image data includes non-image information such as electronic medical record information, various text information, or vital data information.
6. The medical imaging apparatus according to claim 1, wherein the feature quantity conversion model includes two networks of an encoder and a decoder, and when the second feature quantity is input to the encoder, the decoder outputs the third feature quantity.
7. The medical imaging apparatus according to claim 1, wherein the feature quantity conversion model includes a model learned so that an error of a distance between the second feature quantity and the third feature quantity mapped on a predetermined space is reduced by an error back propagation method using a predetermined error function.
8. The medical imaging apparatus according to claim 1, wherein the feature quantity conversion model includes a model learned so that an error between an output of a parameter value calculated by the identifier and teacher data is reduced by an error back propagation method using a predetermined error function.
9. The medical imaging apparatus according to claim 1, wherein the feature quantity conversion model includes a model learned so that an error of a distance between the second feature quantity and the third feature quantity mapped on a predetermined space is reduced and an error between an output of a parameter value calculated by the identifier and teacher data is reduced by an error back propagation method using a predetermined error function.
10. The medical imaging apparatus according to claim 1, further comprising an output unit that displays an image processed by the image processor, wherein the output unit displays an image of the first image data and information based on the parameter value in a superimposed or parallel manner.
11. The medical imaging apparatus according to claim 1, further comprising a region of interest (ROI) setting unit that sets an ROI in image data of the inspection target, wherein the image processor processes image data in a region set by the ROI setting unit.
12. The medical imaging apparatus according to claim 1, wherein the imager is an MR imager that measures a magnetic resonance signal of an inspection target and acquires k-space data including the magnetic resonance signal, an ultrasonic imager that acquires an ultrasonic signal of an inspection target, or a CT imager that acquires an X-ray signal transmitting an inspection target.
13. The medical imaging apparatus according to claim 1, wherein learned models used by the feature quantity extractor, the feature quantity abstractor, and the feature quantity converter are stored in a cloud connected to the imager via a network.
14. A medical imaging apparatus comprising: an imager that collects an image signal of an inspection target; and an image processor that generates first image data from the image signal and performs image processing of the first image data, wherein the image processor includes a feature quantity extractor that extracts a first feature quantity from the first image data, a feature quantity abstractor that uses a plurality of first feature quantities to extract a second feature quantity abstracted from the first feature quantities, a feature quantity converter that converts the second feature quantity into a third feature quantity extracted by second image data different in type from the first image data, an identifier that uses the third feature quantity converted by the feature quantity converter to calculate a predetermined parameter value capable of being determined from the second image data, wherein the feature quantity extractor includes a predictive model learned using the first image data acquired from a plurality of inspection targets, the feature quantity abstractor includes a predictive model learned by combining the plurality of first feature quantities, the feature quantity converter includes a feature quantity conversion model learned using a plurality of combinations of the second feature quantity and the third feature quantity, the identifier includes an identification model learned using a plurality of combinations of the third feature quantity and the parameter value, and wherein the feature quantity conversion model includes a model learned so that an error of a distance between feature quantities contributing more to identification accuracy in the second feature quantity and the third feature quantity mapped on a predetermined space is reduced and an error between an output of a parameter value calculated by the identifier and teacher data is reduced by an error back propagation method using a predetermined error function.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION OF THE INVENTION
(26) The invention can be applied to various medical imaging apparatuses including an imaging unit that acquires a medical image and an image processing unit, such as an MRI apparatus, a CT apparatus, and an ultrasonic imaging apparatus. First, embodiments having configurations common to each modality will be described.
First Embodiment
(27) As illustrated in
(28) The imaging unit 100, which has a different configuration depending on the modality, acquires an image signal by measuring the subject and passes the acquired image signal to the image processing unit 200. The detailed configuration for each modality will be described in an embodiment described later.
(29) The image processing unit 200 includes an image reconstructing unit 210 that reconstructs an image (first image) from the image signal received from the imaging unit 100 and a diagnosis support processing unit 230 that performs a process for supporting image diagnosis using image data created by the image reconstructing unit 210. The image processing unit 200 may further include a correction processing unit 220 that performs a predetermined correction process such as noise processing on the image data (including creating a new image by another inter-image calculation) before inputting the image data created by the image reconstructing unit 210 to the diagnosis support processing unit 230, and
(30) As illustrated in
(31) The feature quantity A which is an output of the feature quantity extraction unit 232 is a feature quantity extracted from image data of an image (hereinafter referred to as an input image) obtained from an image signal acquired by the imaging unit 100, and is, for example, an output result of an intermediate layer in which brightness information of a lesion part is learned by the DL. The feature quantity B output by the feature quantity abstraction unit 233 is a result of learning by integrating the feature quantity A obtained from the brightness information of each lesion part and extracting a particularly important feature quantity component therefrom.
(32) The feature quantity C output from the feature quantity conversion unit 234 is a feature quantity extracted from the image data of the second image, which differs from the medical image (first image) obtained from the medical imaging apparatus. The second image is an image having more detailed information than the first image data for identifying a lesion, for example, a pathological image; the feature quantity C is, for example, an output result of the intermediate layer in which the DL learns information (features) in the pathological image of the same part as that of the input image. Parameters calculated by the identification unit 235 from the feature quantity C are, for example, the presence or absence of a tumor diagnosed from a pathological image, a grade thereof, malignancy of a disease, etc.
(33) The diagnosis support processing unit 230 normally does not use the image data as the input image of the feature quantity extraction unit 232 without change; instead, it divides the image data into patches of a predetermined size and performs processing for each patch. In such a case, a patch processing unit 231 that cuts out one or more patches from the image data received from the correction processing unit 220 is further included. As illustrated in
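The patch cutting described here can be sketched in ordinary Python. The image representation (a list of rows), the function name cut_patches, and the sizes are illustrative assumptions, not the patent's implementation.

```python
# Minimal sketch of the patch processing step: divide a 2-D image
# (here a plain list of rows) into non-overlapping square patches.

def cut_patches(image, patch_size):
    """Cut a 2-D image into non-overlapping patch_size x patch_size tiles."""
    rows = len(image)
    cols = len(image[0])
    patches = []
    for r in range(0, rows - patch_size + 1, patch_size):
        for c in range(0, cols - patch_size + 1, patch_size):
            patch = [row[c:c + patch_size] for row in image[r:r + patch_size]]
            patches.append(patch)
    return patches

# A toy 4x4 "image" cut into 2x2 patches yields 4 patches.
image = [[r * 4 + c for c in range(4)] for r in range(4)]
patches = cut_patches(image, 2)
```

Each patch is then fed separately to the feature quantity extraction unit.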
(34) Data and programs required for processing of the image processing unit 200 are stored in the storage device 130. The data necessary for this processing is the data used by the image reconstructing unit 210, the correction processing unit 220, and the diagnosis support processing unit 230; for the diagnosis support processing unit 230, the data and programs are, for example, the learning models described later, which are used for processing performed by the feature quantity extraction unit 232, the feature quantity abstraction unit 233, the feature quantity conversion unit 234, and the identification unit 235. The storage device 130 may be a server device of a workstation or a picture archiving and communication system (PACS) communicatively connected to the medical imaging apparatus 10 via a network, or may be a portable storage medium connectable to the medical imaging apparatus 10. In addition, instead of the storage device 130, a cloud connected to the imaging unit 100 via a network may be used as a mechanism for storing each piece of data.
(35) When the medical imaging apparatus 10 includes a CPU and a GPU as a calculation unit and a controller, a function of the image processing unit 200 is realized as software installed in the CPU or the GPU. In particular, the feature quantity extraction unit 232, the feature quantity abstraction unit 233, the feature quantity conversion unit 234, and the identification unit 235 are realized by a neural network having a learning function, and a publicly known software package such as the CNN can be used. In addition, some functions of the image processing unit 200 can be realized by hardware such as an application specific integrated circuit (ASIC) and a field programmable gate array (FPGA).
(36) Hereinafter, a description will be given of a specific configuration of the diagnosis support processing unit 230 of the image processing unit 200 of
(37) [Structure of Learning Model]
(38) The learning model of the present embodiment has four types of learning models used by the feature quantity extraction unit 232, the feature quantity abstraction unit 233, the feature quantity conversion unit 234, and the identification unit 235, respectively, and a CNN is used for each learning model.
(39) A first model is a predictive model with which the feature quantity extraction unit 232 extracts the feature quantity A from image data of an input image, a second model is a model with which the feature quantity abstraction unit 233 extracts the feature quantity B abstracted from the feature quantity A, a third model is a feature quantity conversion model with which the feature quantity conversion unit 234 converts the feature quantity B into the feature quantity C, and a fourth model is an identification model with which the identification unit 235 calculates a predetermined parameter value from the feature quantity C and performs a prediction. In addition, a predictive model for separately obtaining the feature quantity C, that is, a feature quantity extracted from an image different from the input image, is required; however, since this predictive model is the same as the model for extracting the feature quantities A and B, except that the input images are different, a redundant description is omitted. Note that even though each of the feature quantity extraction unit 232, the feature quantity abstraction unit 233, the feature quantity conversion unit 234, and the identification unit 235 uses a learned model (predictive model), a learning process of the learning model may be performed by the diagnosis support processing unit 230, or may be performed by another arithmetic unit (not illustrated) and the result stored in the storage device 130.
(40) First, a first predictive model will be described. This predictive model 232M is a model learned using a combination of an input image and a label such as the presence or absence (benign or malignant) of a lesion or a grade of lesion malignancy as learning data.
(41) As schematically illustrated in
(42) Learning is performed until an error between an output and teacher data falls within a predetermined range. An error function used at this time will be described after the structure of the learning model.
(43) Note that in
(44) The feature quantity A, which is the output of the predictive model 232M, expresses a plurality of classifications necessary for diagnosis of a feature of an image as a vector of a plurality of dimensions (for example, 1,024 dimensions), and a feature related to a parameter (for example, whether a tumor is benign or malignant) is extracted. Such a feature quantity A is obtained for each patch. Note that in
(45) With regard to a configuration of a CNN network, a typical architecture (AlexNet, VGG-16, VGG-19, etc.) may be used, or a model obtained by pre-learning the architecture using an ImageNet database, etc. may be used.
(46) Next, a description will be given of a second model used in the feature quantity abstraction unit 233, which is a predictive model 233M for extracting the feature quantity B abstracted from the feature quantity A.
(47) The predictive model 233M receives a feature quantity corresponding to the number of patches output from the feature quantity extraction unit 232 as an input, and extracts a main feature quantity that contributes to the presence or absence of a lesion (benign or malignant) or a grade of lesion malignancy. For example, when the number of patches is 200 and the feature quantity is 1,024 dimensions, a feature quantity obtained by connecting feature quantities of 1,024 dimensions×200 is input to this model, and a feature quantity B 420 that contributes most to the presence or absence of lesion (benign or malignant) is finally extracted. The dimension of the output feature quantity B is the same as the dimension of one patch (for example, 1,024 dimensions).
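As an illustration of what the abstraction step produces (a single feature vector of the same dimension as one patch, obtained from many per-patch vectors), the following stand-in uses an element-wise maximum. The patent itself trains a CNN for this step, so the reduction rule, names, and toy dimensions here are assumptions.

```python
# Illustrative sketch of the feature quantity abstraction step: reduce the
# per-patch feature vectors (all of the same dimension) to one vector of
# that dimension by keeping the strongest response per component.

def abstract_features(per_patch_features):
    dim = len(per_patch_features[0])
    return [max(f[i] for f in per_patch_features) for i in range(dim)]

# Three toy 4-dimensional "feature quantity A" vectors -> one "feature B".
features_a = [[0.1, 0.9, 0.3, 0.0],
              [0.4, 0.2, 0.8, 0.1],
              [0.0, 0.5, 0.2, 0.7]]
feature_b = abstract_features(features_a)
```

In the embodiment the input would be 200 patches of 1,024 dimensions and the output a 1,024-dimensional feature quantity B.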
(49) The CNN is trained so that a feature quantity that most contributes to the parameter is output, and is used as the predictive model 233M of the feature quantity abstraction unit 233.
(50) Next, a description will be given of a third model used in the feature quantity conversion unit 234, which is a feature quantity conversion model 234M for converting the feature quantity B into the feature quantity C.
(51) As illustrated in
(52) The feature quantity C for learning used in the feature quantity conversion model 234M is extracted from the learning pathological image by a CNN. For example, as illustrated in
(53) A process of obtaining the learning feature quantity C using such a CNN may be performed as a process in the image processing unit 200 (diagnosis support processing unit 230), or may be performed by an arithmetic unit different from the image processing unit 200. In the case of performing using the image processing unit 200, a second image processing unit is added to the configuration of
(54) When the feature quantity B is input to the encoder 60A of
(55) Note that even though
(56) For example, as illustrated in
(57) TABLE 1: Example of content of each treatment

Classification  Network  Activation  Output ch (depth  Convolution  Treatment
                         function    of feature map)   filter size
Encoder         Stage 1  ReLU        16                3            Convolution (1D); downsampling: MaxPooling (1D)
Encoder         Stage 2  ReLU        8                 3            Convolution (1D); downsampling: MaxPooling (1D)
Encoder         Stage 3  ReLU        8                 3            Convolution (1D); downsampling: MaxPooling (1D)
Decoder         Stage 1  ReLU        8                 3            Convolution (1D); upsampling: UpSampling (1D)
Decoder         Stage 2  ReLU        8                 3            Convolution (1D); upsampling: UpSampling (1D)
Decoder         Stage 3  ReLU        16                3            Convolution (1D); upsampling: UpSampling (1D)
Output          —        Sigmoid     1                 3            Convolution (1D)
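The stage layout of Table 1 (three convolution-plus-downsampling encoder stages, three convolution-plus-upsampling decoder stages) can be mimicked in plain Python to show how the sequence length is compressed and restored. The fixed kernel, single channel, and signal length are illustrative assumptions; a trained network would learn the weights.

```python
# Rough sketch of the Table 1 encoder-decoder: each encoder stage is a
# size-3 1-D convolution (with "same" zero padding) followed by max pooling
# (downsampling by 2); each decoder stage is a convolution followed by
# nearest-neighbour upsampling by 2.

def conv1d(x, kernel):
    k = len(kernel)
    pad = k // 2
    padded = [0.0] * pad + x + [0.0] * pad      # "same" zero padding
    return [sum(padded[i + j] * kernel[j] for j in range(k))
            for i in range(len(x))]

def relu(x):
    return [max(0.0, v) for v in x]

def max_pool(x):                                 # downsample by 2
    return [max(x[i:i + 2]) for i in range(0, len(x) - 1, 2)]

def upsample(x):                                 # nearest-neighbour, factor 2
    return [v for v in x for _ in range(2)]

kernel = [0.25, 0.5, 0.25]                       # illustrative fixed weights
signal = [0.0, 1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0]

h = signal
for _ in range(3):                               # encoder stages 1-3
    h = max_pool(relu(conv1d(h, kernel)))
code = h                                         # compressed representation
for _ in range(3):                               # decoder stages 1-3
    h = upsample(relu(conv1d(h, kernel)))
restored = h
```

With a length-8 input, the three pooling stages compress it to length 1 and the three upsampling stages restore length 8, mirroring the table's symmetric structure.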
(59) Next, a description will be given of a fourth identification model 235M used in the identification unit 235. This model calculates a predetermined parameter value from a feature quantity after conversion, and predicts the presence or absence of a lesion site, malignancy, etc. represented by the parameter value.
(60) For example, as illustrated in
(61) The identification model 235M is incorporated in the identification unit 235 such that such a CNN is trained using a plurality of combinations of the feature quantity after conversion (feature quantity C) and a grade of tumor malignancy as learning data, and a grade closest to a grade classified from the feature quantity C is extracted when the feature quantity C is input to the identification unit 235. In the example illustrated in
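A minimal sketch of the identification step, assuming a linear layer followed by a softmax over three illustrative grade classes; the weights and feature values are made-up constants. The patent's identification model 235M is a trained CNN, so this only shows how the closest grade is selected by arg-max over class probabilities.

```python
import math

# Hedged sketch of the identification step: linear scores -> softmax
# probabilities -> arg-max grade selection.

def softmax(scores):
    m = max(scores)                               # stabilise the exponentials
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def identify(feature_c, weights, classes):
    scores = [sum(w * f for w, f in zip(row, feature_c)) for row in weights]
    probs = softmax(scores)
    best = max(range(len(classes)), key=lambda i: probs[i])
    return classes[best], probs

feature_c = [0.2, 0.9, 0.4]                       # toy converted feature
weights = [[1.0, 0.0, 0.0],                       # one row per class
           [0.0, 1.0, 0.0],
           [0.0, 0.0, 1.0]]
grade, probs = identify(feature_c, weights, ["L0", "L1", "L2"])
```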
(62) [Design of Error Function]
(63) Next, a description will be given of an error function used when the predictive model or the identification model described above is created by learning of the CNN. The error function is used to evaluate a difference between an output and teacher data when the CNN is trained. The error function is generally based on an error propagation method represented by Formula (1).
(64)
[Equation 1]
E1 = (1/2)Σ_k (y_k − t_k)² (1)
t_k: Teacher data
y_k: Network output data
(65) Even though the error function of Formula (1) can be used in the present embodiment, any of the following error functions or a combination thereof can be used, which can improve the accuracy of the predictive model.
(66) 1. Predetermined spatial distance error
(67) 2. Identification model error
(68) 3. Medical knowledge incorporated error
(69) Hereinafter, these error functions will be described.
(70) 1. Predetermined Spatial Distance Error
(71) When data of the feature quantity A and data of the feature quantity B for learning are set to an input (teacher data) A_k and an output B_k, respectively, each of the teacher data A_k and the output B_k is dimensionally transformed, compressed, and mapped to a predetermined space ε as illustrated in
(72) By adding a distance r between the teacher data A_k and the output B_k on the space ε (for example, between the centers of gravity of the respective data sets) to the error function of Formula (1), an error function is set so that an error of the distance r on the space ε becomes small. For example, when a conversion function to the space ε is set to g and the center of gravity (average value of coordinates of each piece of data) on the space ε is represented by C, the error function is represented by the following Formula (2).
(73)
[Equation 2]
E2 = E1 + |C(g(A_k)) − C(g(B_k))| (2)
(74) The feature quantity abstraction unit 233 and the feature quantity conversion unit 234 carry out learning by an error back propagation method using Formula (2) as an error function.
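The center-of-gravity distance used in this error term can be sketched as follows; the 2-D point sets standing in for the mapped g(A_k) and g(B_k) on the space ε are toy values, and the base error E1 to which the distance is added is a placeholder.

```python
import math

# Sketch of the "predetermined spatial distance error": the distance
# between the centres of gravity of the mapped teacher and output
# feature sets, which is added to the base error of Formula (1).

def centre_of_gravity(points):
    n = len(points)
    return [sum(p[i] for p in points) / n for i in range(len(points[0]))]

def distance_error(mapped_teacher, mapped_output):
    ca = centre_of_gravity(mapped_teacher)
    cb = centre_of_gravity(mapped_output)
    return math.dist(ca, cb)

teacher_pts = [[0.0, 0.0], [2.0, 0.0], [1.0, 2.0]]   # g(A_k) on space ε
output_pts = [[3.0, 4.0], [5.0, 4.0], [4.0, 6.0]]    # g(B_k) on space ε
r = distance_error(teacher_pts, output_pts)           # distance r
e2 = 0.1 + r                                          # placeholder E1 = 0.1
```

Minimising this term during back propagation pulls the two mapped feature distributions together on ε.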
(75) 2. Identification Model Error
(76) As illustrated in
(77) In this method, first, a loss function is set using a difference between an output (probability score: Softmax layer output (0-1)) for each identification class in the identification unit 235 and teacher data as a loss value. When the number of classes of the output of the identification result is three as illustrated in
(78)
[Equation 3]
(y_L0, y_L1, y_L2) = (0.6, 0.2, 0.2) (3)
(79) Meanwhile, a teacher data vector (y0_L0, y0_L1, y0_L2) has values represented by the following Formula (4).
(80)
[Equation 4]
(y0_L0, y0_L1, y0_L2) = (1, 0, 0) (4)
(81) A vector error between the output vector and the teacher data vector can be defined as an error function such as the following Formula (5).
[Equation 5]
E3 = −Σ_{k=L0}^{L2} y0_k log y_k (5)
(82) When the values of the output vector and the teacher data vector are substituted, the value of Formula (5) becomes E3 = −(1 × log 0.6 + 0 × log 0.2 + 0 × log 0.2) = −(−0.22) = 0.22.
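A quick numeric check of this worked example; note that the stated value 0.22 corresponds to a base-10 logarithm, while the natural logarithm gives about 0.51 for the same vectors.

```python
import math

# Cross-entropy of Formula (5) for the worked example above.
output = [0.6, 0.2, 0.2]      # softmax outputs (y_L0, y_L1, y_L2)
teacher = [1, 0, 0]           # teacher data vector (y0_L0, y0_L1, y0_L2)

e3 = -sum(t * math.log10(y) for t, y in zip(teacher, output))   # ~0.22
e3_nat = -sum(t * math.log(y) for t, y in zip(teacher, output)) # ~0.51
```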
(83) 3. Medical Knowledge Incorporated Error
(84) This error function is a combination of the above-mentioned predetermined spatial distance error and medical knowledge. The predetermined spatial distance error defines an error function that brings the entire space closer, using a center of gravity of a feature quantity space as a parameter. In this error function, a space to be matched is weighted based on medical knowledge and importance. Specifically, as illustrated in
(85) In the feature quantity map illustrated in
(86) For example, in
(87)
[Equation 6]
E4 = α·r1 + β·r2 + γ·r3 (6)
r1, r2, r3: distances on the space ε between the corresponding feature quantity groups, weighted in descending order of medical importance
(88) Here, α, β, and γ are weighting factors, for example, α=0.5, β=0.4, and γ=0.1.
(89) By using the error function as described above, it is possible to reduce the error of the feature quantity conversion model or the identification model and realize a more accurate predictive model. Alternatively, the error functions (2) and (5) may be combined and weighted to form an error function represented by the following Formula (7).
[Equation 7]
E5 = w1*E2 + w2*E3 (7)
(90) Here, w1 and w2 are weighting factors (for example, w1=0.5, w2=0.5). Similarly, (5) and (6) may be combined.
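The weighted combination of Formula (7) is a one-line sum; the component error values below are placeholders, with w1 = w2 = 0.5 as given in the text.

```python
# Combined error of Formula (7): weighted sum of the spatial distance
# error E2 and the identification model error E3.

def combined_error(e2, e3, w1=0.5, w2=0.5):
    return w1 * e2 + w2 * e3

e5 = combined_error(0.30, 0.22)   # placeholder E2 and E3 values
```

Formulas (5) and (6) can be combined in the same way with their own weights.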
(91) The four models learned as described above are predictive models or identification models used in the diagnosis support processing unit 230. These four models can be incorporated in the diagnosis support processing unit 230 as one combined model, and in this case, each learned model portion of the combined model corresponds to each unit included in the diagnosis support processing unit.
(92) [Image Processing Operation]
(93) Next, a description will be given of a flow of operation of the image processing unit 200 in which the learned predictive model described above is incorporated with reference to
(94) Upon receiving an image signal from the imaging unit 100, the image processing unit 200 first prepares an input image to be processed by the diagnosis support processing unit 230. Specifically, the image reconstructing unit 210 generates image data of the input image from the image signal, the correction processing unit 220 corrects the image using the generated image data as necessary, and the corrected image data is passed to the diagnosis support processing unit 230 (S1). Further, the correction processing unit 220 sends the corrected image data to the output unit 120.
(95) Subsequently, the patch processing unit 231 cuts out all the image data to be processed into patches of a predetermined size (
(96) Subsequently, the feature quantity conversion unit 234 uses the feature quantity conversion model 234M (
(97) Through the above operation, as illustrated in
(98) A method of displaying the parameter value in the output unit 120 is not limited to a specific method as long as a user of the medical imaging apparatus 10 can recognize the parameter value, and examples thereof include a method of displaying a mark, a numerical value, an image, etc.
(99) When the parameter is the malignancy of the tumor, it is possible to form the image 1702 by superimposing a mark according to the malignancy on a site of the tumor in the image 1701. For example, in the image 1702 illustrated in
(100) As described above, according to the present embodiment, an input image is generated from the signal collected by the imaging unit 100, and the feature quantity A and the feature quantity B extracted from the input image can be converted into the feature quantity C of an image having more detailed information, so that a parameter value used for more accurate diagnosis can be calculated from the feature quantity C. In this way, it is possible to present more accurate diagnosis support information using the medical imaging apparatus. More specifically, the disease can be predicted based on the features of the pathological image merely by inputting an image acquired by the medical imaging apparatus, such as an MRI image, and the information collection cost can be reduced.
(101) Further, in the present embodiment, since a relationship between feature quantities of different images is learned, it is possible, for example, to show medically which part of an image of the medical imaging apparatus is used for determining a feature obtained in a pathological image. This allows the user to make a more accurate determination on a diagnosis result. In other words, the user can be made aware of a feature that is generally difficult to see in the image of the medical imaging apparatus and might otherwise be overlooked.
First Modification of First Embodiment
(102) In the first embodiment, the patches are cut out from the image data under the condition that the respective patches do not overlap each other. However, the patch processing unit 231 may cut out a patch 400P so that adjacent patches overlap each other as illustrated in
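The overlapping cut-out of this modification amounts to using a stride smaller than the patch size; the function name and sizes below are illustrative assumptions.

```python
# Sketch of overlapping patch cutting: a stride smaller than the patch
# size makes adjacent patches share rows/columns.

def cut_overlapping_patches(image, patch_size, stride):
    rows, cols = len(image), len(image[0])
    patches = []
    for r in range(0, rows - patch_size + 1, stride):
        for c in range(0, cols - patch_size + 1, stride):
            patches.append([row[c:c + patch_size]
                            for row in image[r:r + patch_size]])
    return patches

image = [[0] * 6 for _ in range(6)]
# 4x4 patches with stride 2 over a 6x6 image: 2x2 = 4 overlapping patches.
patches = cut_overlapping_patches(image, 4, 2)
```

With stride equal to patch_size this reduces to the non-overlapping cutting of the first embodiment.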
(103) When the feature quantity C is extracted from the second image 700, a patch 700P may be cut out so as to have an overlap as illustrated in
Second Modification of First Embodiment
(104) All the patches cut out from the image data by the patch processing unit 231 may be processed. However, only an image in an ROI may be processed.
(105) In this case, for example, it is possible to cause the output unit 120 to display a UI (ROI setting unit 140), etc. illustrated in
(106) As described above, according to the present modification, by omitting image processing of the part outside the ROI, it is possible to reduce the processing time as a whole.
Third Modification of First Embodiment
(107) In the first embodiment, an example in which a parameter (for example, tumor malignancy grade) is calculated from an input image has been described, but a type of parameter that can be output by the image processing unit is not limited to one type. For example, it is possible to store, in the storage device 130, a plurality of patterns of learning models such as a learning model according to an examination site of the subject such as breast cancer or gastric cancer, or a learning model according to various diseases other than the tumor. In this case, when the user inputs a diagnosis site, a disease name to be diagnosed, etc. from the input unit 110, a learning model used by the image processing unit 200 for processing is selected according to the input content, and a parameter is calculated using the selected learning model.
Second Embodiment
(108) In the first embodiment, in extraction of the feature quantity B and the feature quantity C, each feature quantity is extracted from one type of image information. However, the present embodiment is different in that a feature quantity abstracted by combining feature quantities of a plurality of types of images is extracted. A difference between the process of the first embodiment and the process of the present embodiment will be described with reference to
(109) In the process of the first embodiment, as illustrated in
(110) On the other hand, in the present embodiment, as illustrated in
(111) The feature quantity abstraction unit 233 inputs the feature quantity (the number of images×the number of patches) obtained by fusing the feature quantities A1 to A4 output from each feature quantity extraction unit 232, and outputs one feature quantity B. The fusion of the feature quantities A1 to A4 may be a simple combination thereof or addition may be performed. In this way, by inputting more information to the predictive model 233M of the feature quantity abstraction unit 233, it is possible to obtain a more reliable feature quantity B that is more effective for diagnosis.
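The fusion of the feature quantities A1 to A4 described above (simple combination, or alternatively addition) can be sketched as follows, with toy two-dimensional vectors standing in for the 1,024-dimensional features.

```python
# Two fusion variants mentioned in the text: simple concatenation of the
# per-image feature quantities, or element-wise addition.

def fuse_concat(feature_lists):
    fused = []
    for f in feature_lists:
        fused.extend(f)
    return fused

def fuse_add(feature_lists):
    dim = len(feature_lists[0])
    return [sum(f[i] for f in feature_lists) for i in range(dim)]

a1, a2 = [0.1, 0.2], [0.3, 0.4]      # toy feature quantities A1, A2
fused_cat = fuse_concat([a1, a2])    # length = sum of input lengths
fused_sum = fuse_add([a1, a2])       # same length as one input
```

The fused vector is then passed to the predictive model 233M of the feature quantity abstraction unit.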
(112) A process after obtaining the feature quantity B is similar to that in the first embodiment. However, when obtaining the feature quantity C, a plurality of images may be used as another image. For example, the feature quantity C is extracted by adding another stained image such as IHC stain in addition to an HE stained image of the pathological image. In this way, with respect to the second image, the feature quantity C in which the feature of the lesion, that is the diagnosis target, is appropriately extracted can be obtained. As a result, the reliability of the parameter, which is the processing result of the diagnosis support processing unit 230, can be improved.
(113) Note that even though
Embodiment of Image Processing Apparatus
(114)
(115) The image processing apparatus 20 is a medical image processing apparatus in which the function of the diagnosis support processing unit 230 among the functions of the image processing unit 200 illustrated in
(116) The image processing apparatus 20 receives the image data acquired by each medical imaging apparatus 10, and performs processing by each unit of the diagnosis support processing unit 230 illustrated in
(117) The operation of the diagnosis support processing unit 230 of the image processing apparatus 20 is similar to that of each of the above-described embodiments or the modifications thereof. In this operation, the image data sent from the medical imaging apparatus 10 is subjected to extraction and abstraction of the feature quantity and to feature quantity conversion, and finally a parameter that serves as diagnosis support is calculated by processing using the identification model. A processing result of the diagnosis support processing unit 230 may be output to the output unit 120 provided in the image processing apparatus 20, or may be sent to the medical imaging apparatus from which the image data was sent, a facility in which the medical imaging apparatus is placed, a database in another medical institution, etc.
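The four-stage processing chain described above (extraction, abstraction, conversion, identification) can be sketched as follows; the learned models are stood in for by fixed random linear maps, since the source does not specify their internal structure, and all shapes are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-ins for the four learned models; fixed random linear maps are
# used here purely to illustrate the data flow, not as a real
# implementation of the predictive/identification models.
W_extract = rng.standard_normal((16, 8))   # patch pixels -> feature quantity A
W_abstract = rng.standard_normal((8, 4))   # feature quantity A -> feature quantity B
W_convert = rng.standard_normal((4, 4))    # feature quantity B -> feature quantity C
W_identify = rng.standard_normal((4, 1))   # feature quantity C -> parameter value

def diagnosis_support(patches):
    """Run patches of shape (num_patches, 16) through extraction,
    abstraction, conversion, and identification, returning one
    parameter value per patch."""
    a = patches @ W_extract            # feature quantity extraction
    b = np.tanh(a) @ W_abstract        # abstraction of the feature quantity
    c = b @ W_convert                  # conversion to the other image's feature
    return (c @ W_identify).ravel()    # identification -> parameter values

params = diagnosis_support(rng.standard_normal((5, 16)))
```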
(118) Further, the conversion of the feature quantity in the feature quantity conversion unit 234 is not limited to two captured images, and can be applied to a plurality of different types of captured images. For example, in the case of using images of the imaging apparatuses 10A, 10B, and 10C, the relationships between the feature quantities of the images obtained from the respective imaging apparatuses are mutually learned, after which it is possible to perform mutual conversion from the feature quantity of the image of the imaging apparatus 10A necessary for diagnosis to the feature quantity of the image of the imaging apparatus 10B, the feature quantity of the image of the imaging apparatus 10C, etc. In other words, since a feature quantity of an image of one imaging apparatus can be converted into feature quantities of a plurality of different imaging apparatuses, it is possible to perform highly accurate image diagnosis while suppressing the information collection cost of one examination.
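The learning of the relationship between feature quantities of two image types can be sketched as a toy gradient-descent loop that reduces the squared distance between the converted feature quantity and the target feature quantity, in the spirit of the error back propagation described for the feature quantity conversion model; all data, shapes, and hyperparameters are illustrative:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy paired training data: feature quantity B (first image type) and
# the corresponding feature quantity C (second image type). All shapes,
# data, and hyperparameters are illustrative.
feat_b = rng.standard_normal((64, 4))
true_map = rng.standard_normal((4, 4))
feat_c = feat_b @ true_map + 0.01 * rng.standard_normal((64, 4))

W = np.zeros((4, 4))   # linear feature quantity conversion model
lr = 0.05
for _ in range(500):
    err = feat_b @ W - feat_c              # distance error between mapped features
    grad = feat_b.T @ err / len(feat_b)    # gradient of the mean squared distance
    W -= lr * grad                         # one back propagation (gradient) step

final_loss = float(np.mean((feat_b @ W - feat_c) ** 2))
```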
(119) In the first embodiment, a description has been given of an embodiment and a modification thereof that can be applied regardless of the type of imaging unit. An embodiment for each modality will be described below.
Third Embodiment
(120) An embodiment in which the invention is applied to the MRI apparatus will be described.
(121) As illustrated in
(122) The MR imaging unit 100B has the same configuration as a conventional MRI apparatus, measures a magnetic resonance signal of an inspection target, and acquires k-space data including the magnetic resonance signal. Specifically, the MR imaging unit 100B includes a static magnetic field generation unit 102 that generates a static magnetic field, a gradient magnetic field generation unit 103 including a gradient magnetic field coil 109 that generates a gradient magnetic field in three axis directions in a static magnetic field space, a transmitter 104 including a transmission coil 114a for applying a high frequency magnetic field to a subject 101 in the static magnetic field space, a receiver 105 including a reception coil 114b for receiving a nuclear magnetic resonance signal generated from the subject 101, and a sequencer 107 for controlling operations of the gradient magnetic field generation unit 103, the transmitter 104, and the receiver 105 according to a predetermined pulse sequence.
(123) The gradient magnetic field generation unit 103 is provided with a gradient magnetic field power supply 106 for driving the gradient magnetic field coil 109, and the transmitter 104 is provided with a high-frequency generator 111 that applies a predetermined high-frequency signal to the transmission coil 114a and irradiates an electromagnetic wave having a nuclear magnetic resonance frequency from the transmission coil 114a, an amplifier 113, a modulator 112, etc. In addition, the receiver 105 includes an amplifier 115 for amplifying a signal detected by the reception coil 114b, a quadrature phase detector 116, an A/D converter 117 for conversion into a digital signal, etc.
(124) The signal processing unit 150B includes an image processing unit 200B that performs a similar process to that of the image processing unit 200 of the first embodiment using a nuclear magnetic resonance signal (k-space data) acquired by the MR imaging unit 100B, an input unit 110 for inputting necessary commands and information to each unit, an output unit 120 for displaying a created image and UI, and a storage device 130 that stores the nuclear magnetic resonance signal acquired by the MR imaging unit 100B, data in a process of calculation, and numerical values such as parameters necessary for calculation.
(125) The functions of the signal processing unit 150B are implemented by software loaded into the memory and executed by the CPU or GPU. However, a part thereof may be configured by hardware.
(126) A configuration and function of the image processing unit 200B are similar to those of the image processing unit 200 of the first embodiment. Referring to
(127) For the feature quantity extraction unit 232 of the present embodiment, a learned predictive model (
(128) Upon imaging, the MR imaging unit 100B collects k-space data by an arbitrary imaging method and transmits the k-space data to the image processing unit 200B. The image processing unit 200B performs similar processing to that in the first embodiment. First, the image reconstructing unit 210 generates image data of an MR image in the real space from the k-space data, and the correction processing unit 220 performs correction processing on the generated MR image and inputs the MR image to the diagnosis support processing unit 230. The patch processing unit 231 performs patch processing on the input MR image, and the feature quantity extraction unit 232 extracts the feature quantity A for each patch from image data of the MR image for each patch. The feature quantity abstraction unit 233 converts the feature quantity A into a more abstract feature quantity B. The feature quantity conversion unit 234 further converts this feature quantity B into a feature quantity C extracted from another image (pathological image, etc.), and the identification unit 235 calculates a parameter value from the feature quantity C, integrates the patches into an MR image, and outputs the parameter value and MR image data to the output unit 120.
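As an illustrative sketch of the first two steps above, Cartesian k-space data can be reconstructed into a real-space magnitude image by an inverse 2-D FFT and then cut into patches; the sizes and helper name are assumptions, and the actual reconstruction of the MR imaging unit may differ:

```python
import numpy as np

def reconstruct_and_patch(kspace, patch=8):
    """Reconstruct a real-space MR magnitude image from Cartesian
    k-space data by an inverse 2-D FFT, then cut it into
    non-overlapping square patches for the feature quantity
    extraction unit."""
    image = np.abs(np.fft.ifft2(np.fft.ifftshift(kspace)))
    h, w = image.shape
    patches = [
        image[y:y + patch, x:x + patch]
        for y in range(0, h - patch + 1, patch)
        for x in range(0, w - patch + 1, patch)
    ]
    return image, np.stack(patches)

img, patches = reconstruct_and_patch(np.ones((32, 32), dtype=complex))
```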
(129) In the present embodiment, the modification of the first embodiment may be applied to perform the above-described processing of the image processing unit 200B (diagnosis support processing unit 230) only on a desired region (ROI) of the MR image, or cut out the patches by overlapping. Further, by applying the second embodiment, a plurality of MR images acquired by a plurality of imaging methods may be passed to the image processing unit 200B to predict a diagnostic parameter. At this time, additional text information may be input to the diagnosis support processing unit 230.
(130) According to the medical imaging apparatus (MRI apparatus) of the present embodiment, a parameter value used for highly accurate diagnosis can be calculated from an input image (MR image) of a subject, and thus an image showing a highly accurate diagnosis result can be obtained without performing a detailed examination other than the diagnosis using the medical imaging apparatus. In this way, when the MRI apparatus of the present embodiment is used, for example, a diagnosis equivalent to a pathological diagnosis can be performed without performing a pathological examination, and thus it is possible to perform a highly accurate diagnosis while reducing a physical burden on a patient.
Fourth Embodiment
(131) A description will be given of an embodiment in which the invention is applied to the ultrasonic imaging apparatus.
(132)
(133) The ultrasonic imaging unit 100C has a similar configuration to that of a conventional ultrasonic imaging apparatus, and includes an ultrasonic probe 901 that transmits ultrasonic waves to a subject 900, an ultrasonic wave transmitter 902 that transmits an ultrasonic wave drive signal to the probe 901, an ultrasonic wave receiver 903 that receives an ultrasonic wave signal (RF signal) from the probe 901, a phasing addition unit 905 that performs phasing addition (beamforming) on a signal received by the ultrasonic wave receiver 903, and an ultrasonic wave transmission and reception controller 904 that controls the ultrasonic wave transmitter 902 and the ultrasonic wave receiver 903.
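The phasing addition (delay-and-sum beamforming) performed by the phasing addition unit 905 can be sketched minimally as follows; integer-sample delays and the wrap-around of np.roll are simplifications of a real beamformer:

```python
import numpy as np

def delay_and_sum(rf, delays):
    """Minimal phasing addition (delay-and-sum) sketch: each channel's
    RF signal is shifted by its per-channel delay (in samples) and the
    aligned channels are summed into one beamformed signal. Integer
    delays and np.roll's wrap-around are simplifications."""
    aligned = np.stack([np.roll(ch, -d) for ch, d in zip(rf, delays)])
    return aligned.sum(axis=0)
```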
(134) The signal processing unit 150C includes an image processing unit 200C that generates an ultrasonic image from the ultrasonic signal acquired by the imaging unit 100C and performs similar processing to that of the image processing unit 200 of the first embodiment, the input unit 110, the output unit 120, and the storage device 130. The signal processing unit 150C may further include a Doppler processing unit (not illustrated). In the illustrated configuration example, the ultrasonic wave transmission and reception controller 904 and the image processing unit 200C are built in one CPU. However, the ultrasonic wave transmission and reception controller 904 may be built in a CPU different from the image processing unit 200C, or may be a combination of hardware such as a transceiver circuit and control software.
(135) A configuration and function of the image processing unit 200C are similar to those of the image processing unit 200 of the first embodiment, and the diagnosis support processing unit 230 thereof has a similar configuration to that illustrated in
(136) A model used by the feature quantity extraction unit 232, the feature quantity abstraction unit 233, the feature quantity conversion unit 234, and the identification unit 235 of the present embodiment is similar to that of the third embodiment except that an image input to the diagnosis support processing unit 230 is not an MR image but an ultrasonic image acquired as follows.
(137) In imaging, ultrasonic waves received by the probe 901 are phased and added in the ultrasonic imaging unit 100C, and an ultrasonic signal is transmitted to the image processing unit 200C. In the image processing unit 200C, the image reconstructing unit 210 first generates an ultrasonic image from the ultrasonic signal, and the correction processing unit 220 corrects the generated ultrasonic image and inputs the ultrasonic image to the diagnosis support processing unit 230. In the diagnosis support processing unit 230, the patch processing unit 231 performs patch processing on the input ultrasonic image, and the feature quantity extraction unit 232 extracts the feature quantity A for each patch from image data of the ultrasonic image. The feature quantity abstraction unit 233 extracts the abstracted feature quantity B obtained by fusing the feature quantities A of the patches. The feature quantity conversion unit 234 converts the feature quantity B into the feature quantity C. The identification unit 235 calculates a parameter value associated with a feature of the pathological image from the feature quantity C, and outputs the parameter value to the output unit 120. The output unit 120 outputs the parameter value and ultrasonic image data output from the diagnosis support processing unit 230 in a predetermined display mode.
(138) In the present embodiment, the modification described in the first embodiment and the second embodiment can be applied as appropriate.
(139) According to the ultrasonic imaging apparatus of the present embodiment, since it is possible to calculate a parameter value used for highly accurate diagnosis from an ultrasonic image, it is possible to obtain a highly accurate diagnostic result without performing a detailed examination other than the diagnosis using the ultrasonic imaging apparatus.
Fifth Embodiment
(140) A description will be given of an embodiment in which the invention is applied to the CT apparatus.
(141)
(142) The CT imaging unit 100D has a similar configuration to that of a conventional CT apparatus, and includes an X-ray source 801 that irradiates a subject 800 with X-rays, a collimator 803 that limits an X-ray emission range, an X-ray detector 806 that detects transmitted X-rays that have passed through the subject 800, a rotating plate 802 having an opening 804 at a center to support the X-ray source 801 and the X-ray detector 806 at opposite positions, a bed 805 for mounting the subject 800 in a space inside the opening 804, a data collection unit 807 that collects an output of the X-ray detector 806 for each piece of projection data, and a system controller 808 that controls an operation of each element included in the CT imaging unit 100D.
(143) The signal processing unit 150D includes an image processing unit 200D that performs similar processing to that of the image processing unit 200 of the first embodiment on a tomographic image (CT image) generated by the imaging unit 100D, the input unit 110, the output unit 120, and the storage device 130. Further, in the illustrated configuration example, the system controller 808 and the image processing unit 200D are built in one CPU. However, the system controller 808 may be built in a CPU different from the image processing unit 200D, or may be a combination of hardware and control software. Similarly, some of functions of the signal processing unit 150D can be configured by hardware.
(144) A configuration and function of the image processing unit 200D are similar to those of the image processing unit 200 of the first embodiment, and the diagnosis support processing unit 230 thereof has a similar configuration to that illustrated in
(145) A model used by the feature quantity extraction unit 232, the feature quantity abstraction unit 233, the feature quantity conversion unit 234, and the identification unit 235 of the present embodiment is similar to that of the third embodiment except that an image input to the diagnosis support processing unit 230 is not an MR image but a CT image acquired as follows.
(146) In imaging, the data collection unit 807 collects an X-ray signal of transmitted X-rays detected by the X-ray detector 806 in the CT imaging unit 100D, and transmits the X-ray signal to the image processing unit 200D. In the image processing unit 200D, the image reconstructing unit 210 first generates a CT image, and the correction processing unit 220 corrects the generated CT image and inputs the CT image to the diagnosis support processing unit 230. The patch processing unit 231 performs patch processing on the input CT image, and the feature quantity extraction unit 232 extracts the feature quantity A for each patch from the CT image. The feature quantity abstraction unit 233 integrates the feature quantities A of the patches and converts them into the abstracted feature quantity B. The feature quantity conversion unit 234 converts the feature quantity B into the feature quantity C, which is a feature of the pathological image. The identification unit 235 calculates a parameter value from the feature quantity C, and outputs the parameter value to the output unit 120. The output unit 120 outputs the parameter value and CT image data output from the diagnosis support processing unit 230 in a predetermined display mode.
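As a hypothetical sketch of how the per-patch parameter values might be reassembled for display over the CT image, each patch's value can be expanded back to image resolution; the helper name, grid shape, and patch size are illustrative:

```python
import numpy as np

def parameter_overlay(param_values, grid_shape, patch_size):
    """Expand per-patch parameter values back to image resolution, so
    the identification result can be displayed over the CT image in a
    predetermined display mode. Names and sizes are illustrative."""
    pmap = np.asarray(param_values, dtype=float).reshape(grid_shape)
    return np.kron(pmap, np.ones((patch_size, patch_size)))
```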
(147) In the present embodiment, the modification described in the first embodiment and the second embodiment can be applied as appropriate.
(148) According to the CT apparatus of the present embodiment, since it is possible to calculate a parameter value used for highly accurate diagnosis from a CT image, it is possible to obtain a highly accurate diagnostic result without performing a detailed examination other than the diagnosis using the CT apparatus.