Brain-function image data augmentation method

11369268 · 2022-06-28

Abstract

A brain-function image data augmentation method includes: a step of providing a target database including a plurality of image-data information; a step of, based on a plurality of image-data expected information in an expectation-value database, calculating a ratio of the plurality of image-data expected information with respect to different ages; a step of, based on the plurality of image-data information and the ratio, obtaining image-data ratio information with respect to an estimated age; a step of establishing a relationship for each pair of the image-data information and the image-data expected information; a step of, based on the relationship, the ratio and the image-data information, calculating an estimated image-data information with respect to the estimated age; and, a step of combining linearly the estimated image-data information and the image-data ratio information so as to generate an augmented image-data information with respect to the estimated age.

Claims

1. A brain-function image data augmentation method, comprising the steps of: (a) providing a target database, the target database including a plurality of image-data information, wherein the image-data information is at least one nuclear-medicine brain-function image data with respect to different ages; (b) based on a plurality of image-data expected information in an expectation-value database, calculating a ratio of the plurality of image-data expected information with respect to the different ages; (c) based on the plurality of image-data information and the ratio, obtaining image-data ratio information with respect to an estimated age; (d) establishing a relationship for each pair of the image-data information and the image-data expected information; (e) based on the relationship, the ratio and the image-data information, calculating an estimated image-data information with respect to the estimated age; and (f) combining linearly the estimated image-data information and the image-data ratio information so as to generate an augmented image-data information with respect to the estimated age.

2. The brain-function image data augmentation method of claim 1, wherein the step (f) includes the steps of: (f1) linearly combining the estimated image-data information and the image-data ratio information based on a first weighting value and a second weighting value; and (f2) adjusting the first weighting value and the second weighting value to generate different augmented image-data information.

3. The brain-function image data augmentation method of claim 1, wherein the step (b) includes a step of providing the expectation-value database, the expectation-value database including the plurality of image-data expected information, each of the plurality of image-data expected information being a mean expected image data with respect to the different ages, the mean expected image data being a voxel value greater than zero.

4. The brain-function image data augmentation method of claim 1, wherein the step (d) includes the steps of: (d1) averaging voxel values of the plurality of image-data information to obtain an averaged image-data information; and (d2) based on a linear regression model, establishing the relationship between the averaged image-data information and the plurality of image-data expected information with respect to the estimated age.

5. The brain-function image data augmentation method of claim 1, wherein the step (a) includes a step of capturing the nuclear-medicine brain-function image data for the different ages.

6. The brain-function image data augmentation method of claim 1, wherein the step (e) includes the steps of: (e1) based on the relationship, obtaining an approximate image-data expected information by plugging each of the image-data information into the relationship; (e2) based on the ratio and the approximate image-data expected information, obtaining an estimated image-data expected information with respect to the estimated age; and (e3) based on the relationship and the estimated image-data expected information, obtaining the estimated image-data information.

Description

BRIEF DESCRIPTION OF THE DRAWINGS

(1) The present disclosure will become more fully understood from the detailed description given herein below and the accompanying drawings which are given by way of illustration only, and thus are not limitative of the present disclosure and wherein:

(2) FIG. 1 is a schematic view of an embodiment of the brain-function image data augmentation method in accordance with this disclosure.

DETAILED DESCRIPTION

(3) In the following detailed description, for purposes of explanation, numerous specific details are set forth in order to provide a thorough understanding of the disclosed embodiments. It will be apparent, however, that one or more embodiments may be practiced without these specific details. In other instances, well-known structures and devices are schematically shown in order to simplify the drawing.

(4) Referring now to FIG. 1, an embodiment of the brain-function image data augmentation method in accordance with this disclosure is schematically shown. In this embodiment, the brain-function image data augmentation method S100 can be performed by hardware (such as a controller or an IC), software (such as program commands performed by the controller), or a combination of hardware and software. For example, a controller can be connected with a target database and an expectation-value database, so that the controller can execute program commands to perform the brain-function image data augmentation method S100 of this disclosure.

(5) It shall be explained that the following steps can be included while performing the brain-function image data augmentation method S100. Firstly, a standard normalization process shall be performed to overcome the inconsistency of nuclear-medicine images from different hospitals and different instruments. For example, normalization can be performed through the statistical parametric mapping (SPM) of a specific software program, or calibration can be performed through phantom experiments, so that the concerned inconsistency across brands of instruments and hospitals can be resolved. Then, digital normalization is executed on the images. For example, an image mask can firstly be introduced to erase background noises of the brain-function image, then an appropriate blurring function is applied to convolve the image, and finally an average value can be obtained by dividing the sum of all voxel values of the entire image by the number of non-zero voxels.
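The digital-normalization steps above (masking, blurring, averaging over non-zero voxels) can be sketched in Python with numpy; the simple box blur standing in for the blurring convolution and all array shapes are illustrative assumptions, not the disclosed implementation:

```python
import numpy as np

def digital_normalize(image, mask):
    """Sketch of the digital normalization: erase background via a mask,
    blur the image, then divide by the mean over non-zero voxels."""
    # Erase background noise outside the brain mask.
    masked = np.where(mask, image, 0.0)
    # Simple neighbour-averaging box blur as a stand-in for the blurring
    # convolution (np.roll wraps at the edges; acceptable for a sketch).
    blurred = masked.copy()
    for axis in range(masked.ndim):
        blurred += np.roll(masked, 1, axis) + np.roll(masked, -1, axis)
    blurred /= (1 + 2 * masked.ndim)
    # Average value = sum of all voxel values / number of non-zero voxels.
    nonzero = blurred[blurred > 0]
    return blurred / nonzero.mean() if nonzero.size else blurred
```

After this normalization, the positive voxels of every image have unit mean, which is what makes images from different instruments comparable in the later steps.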

(6) In this disclosure, the brain-function image data augmentation method S100 includes the following Step S110 to Step S160. In performing Step S110, a target database is provided, in which the target database includes a plurality of image-data information. In one embodiment of this disclosure, at least one nuclear-medicine brain-function image data for different ages is captured. Herein, f.sub.t(x) stands for the image-data information, in which f stands for the target database, x stands for all non-zero voxel positions, and t stands for a specific age or aging range. In this embodiment, the image-data information is at least one nuclear-medicine brain-function image data with respect to individual ages. For example, f.sub.76(x) stands for the nuclear-medicine brain-function image data with respect to a 76-year-old testee, and f.sub.74(x) stands for the nuclear-medicine brain-function image data with respect to a 74-year-old testee.

(7) Then, in performing Step S120, based on a plurality of image-data expected information in an expectation-value database, a ratio of the image-data expected information with respect to different ages is calculated. In detail, the following steps are included. Firstly, the expectation-value database is provided, in which the expectation-value database includes a plurality of image-data expected information. In one embodiment of this disclosure, g.sub.t(x) stands for the image-data expected information, in which g stands for the expectation-value database, x stands for all non-zero voxel positions, and t stands for a specific age or aging range. Each of the image-data expected information, as known information, is a mean expected image data with respect to a corresponding age or aging range, and the mean expected image data is a voxel value greater than zero. For example, g.sub.76˜85(x) stands for the mean expected image data with respect to the aging range of 76˜85 years old.

(8) In one embodiment of this disclosure, while managing the expectation-value database, the t value is dependent on the number of the image-data information f.sub.t(x), and an interpolation manipulation is applied to obtain the g.sub.t(x) with respect to the t. For example, if the current expectation-value database includes expectation values for the aging ranges of 55˜64, 65˜74 and 75˜84, then the image-data expected information would be seen as g.sub.55˜64(x), g.sub.65˜74(x) and g.sub.75˜84(x), and g.sub.60(x), g.sub.61(x), g.sub.79(x), g.sub.60˜61(x), g.sub.62˜63(x) and so on can be obtained through interpolations. In addition, a ratio between any two image-data expected information with respect to different ages or aging ranges can be calculated. For example, any two different ages or aging ranges t and t′ can be used to calculate a ratio r.sub.t/t′(x) of the g.sub.t(x) to the g.sub.t′(x) at each voxel x, i.e., r.sub.t/t′(x)=g.sub.t(x)/g.sub.t′(x). For example, r.sub.74/70(x)=g.sub.74(x)/g.sub.70(x).
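The interpolation and ratio calculation of Step S120 can be sketched with numpy; the three aging ranges, their midpoint ages, and the per-voxel expectation values below are hypothetical illustration data:

```python
import numpy as np

# Expectation-value database: per-voxel mean expected image data for three
# aging ranges (55~64, 65~74, 75~84), each represented by its midpoint age.
# Rows are aging ranges, columns are (flattened) non-zero voxel positions.
ages = np.array([59.5, 69.5, 79.5])
g_db = np.array([[1.00, 0.90, 0.80],   # g_{55~64}(x)
                 [0.95, 0.85, 0.75],   # g_{65~74}(x)
                 [0.90, 0.80, 0.70]])  # g_{75~84}(x)

def g_at(t):
    """Interpolate g_t(x) voxel-wise for an arbitrary age t."""
    return np.array([np.interp(t, ages, g_db[:, i])
                     for i in range(g_db.shape[1])])

def ratio(t, t_prime):
    """r_{t/t'}(x) = g_t(x) / g_{t'}(x), computed per voxel."""
    return g_at(t) / g_at(t_prime)

r_74_70 = ratio(74, 70)   # r_{74/70}(x) = g_74(x) / g_70(x)
```

Linear interpolation between the tabulated aging ranges is one plausible choice for the "interpolation manipulation"; the disclosure does not fix a particular interpolation scheme.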

(9) Then, in performing Step S130, based on each of the image-data information and each of the ratios, individual image-data ratio information with respect to a specific estimated age or aging range can be obtained. For example, f-ratio.sub.t′(x) stands for the image-data ratio information, derived by dividing f.sub.t(x) by r.sub.t/t′(x). Namely, f-ratio.sub.t′(x)=f.sub.t(x)/r.sub.t/t′(x). For example, with the f.sub.74(x) in the target database, i.e., the nuclear-medicine brain-function image data for the age of 74, if the nuclear-medicine brain-function image data for the age of 70 is wanted, it can be obtained by f-ratio.sub.70(x)=f.sub.74(x)/r.sub.74/70(x).
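Step S130 is a single voxel-wise division; a minimal numpy sketch with hypothetical voxel and ratio values:

```python
import numpy as np

# Hypothetical measured image at age 74 (flattened non-zero voxels) and the
# corresponding ratio r_{74/70}(x) obtained from the expectation-value database.
f_74 = np.array([100.0, 90.0, 80.0])
r_74_70 = np.array([0.95, 0.97, 0.96])

# Step S130: f-ratio_{70}(x) = f_74(x) / r_{74/70}(x)
f_ratio_70 = f_74 / r_74_70
```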

(10) Then, in performing Step S140, a relationship for each pair of the image-data information and the image-data expected information can be established by the following calculations. The calculations include the steps of: averaging voxel values of the image-data information so as to obtain an averaged image-data information, and then applying a machine learning algorithm to obtain the relationship between the averaged image-data information and the image-data expected information with respect to the estimated age or aging range.

(11) For example, a linear regression of the machine learning algorithm is introduced to establish the relationship between f.sub.t(x) and g.sub.t(x); namely, the relationship g.sub.t(x)=a.sub.t×avg(f.sub.t(x))+b.sub.t, in which avg(f.sub.t(x)) is the average value of the plurality of image-data information f.sub.t(x) at voxel x for each age or aging range t. Through the machine learning algorithm, a.sub.t and b.sub.t can be obtained. For example, with g.sub.70(x)=a.sub.70×avg(f.sub.70(x))+b.sub.70, a.sub.70 and b.sub.70 can be obtained.
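A minimal numpy sketch of this regression, fitting a.sub.t and b.sub.t by least squares over voxels; the per-subject images are hypothetical, and `numpy.polyfit` is one possible stand-in for the unspecified machine learning routine:

```python
import numpy as np

def fit_relationship(f_images_t, g_t):
    """Fit g_t(x) = a_t * avg(f_t(x)) + b_t by least squares over voxels.

    f_images_t: array (n_subjects, n_voxels) of images for age t;
    g_t: expected image (n_voxels,) for the same age from the
    expectation-value database.
    """
    avg_f = f_images_t.mean(axis=0)       # per-voxel average across subjects
    a_t, b_t = np.polyfit(avg_f, g_t, 1)  # slope a_t and intercept b_t
    return a_t, b_t
```

Repeating this fit for every age or aging range t yields the full family of pairs (a.sub.t, b.sub.t) used in the following steps.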

(12) Then, in performing Step S150, based on the aforesaid relationship, the ratio and the image-data information, an estimated image-data information with respect to the estimated age or aging range can be obtained. In detail, Step S150 includes a first step of obtaining an approximate image-data expected information by plugging each of the image-data information into the relationship. For example, have f.sub.t(x) plugged into the relationship so as to obtain an image g′.sub.t(x) resembling the g.sub.t(x): g′.sub.t(x)≈a.sub.t×f.sub.t(x)+b.sub.t, in which g′.sub.t(x) stands for the approximate image-data expected information. Then, based on the ratio and the approximate image-data expected information, an estimated image-data expected information with respect to the estimated age or aging range can be obtained. For example, use r.sub.t/t′(x) to transform g′.sub.t(x) according to g′.sub.t′(x)=g′.sub.t(x)/r.sub.t/t′(x), in which g′.sub.t′(x) stands for the estimated image-data expected information. Then, based on the relationship and the estimated image-data expected information, the corresponding estimated image-data information can be obtained. For example, with the equation g′.sub.t′(x)=a.sub.t′×f-guess.sub.t′(x)+b.sub.t′, by giving g′.sub.t′(x), a.sub.t′ and b.sub.t′, f-guess.sub.t′(x) can be calculated. Herein, f-guess.sub.t′(x) stands for a set of estimated image-data information for the following image augmentation.

(13) For example, if only the f.sub.74(x) exists in the target database, i.e., having only the nuclear-medicine brain-function image data for the age of 74, then if the image-estimated f-guess.sub.70(x) is wanted for the age of 70, following calculations can be performed: g′.sub.74(x)≈a.sub.74×f.sub.74(x)+b.sub.74, g′.sub.70(x)=g′.sub.74(x)/r.sub.74/70(x), and f-guess.sub.70(x)=(g′.sub.70(x)−b.sub.70)/a.sub.70.
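The three calculations above can be sketched as one small function; the regression coefficients and voxel values below are hypothetical, and a flattened voxel array stands in for the full image:

```python
import numpy as np

def estimate_image(f_t, a_t, b_t, a_tp, b_tp, r):
    """Steps (e1)-(e3) for a single image f_t(x).

    a_t, b_t are the regression coefficients for the source age t;
    a_tp, b_tp those for the estimated age t'; r is the ratio r_{t/t'}(x).
    """
    g_approx = a_t * f_t + b_t     # (e1) g'_t(x) ~ a_t * f_t(x) + b_t
    g_est = g_approx / r           # (e2) g'_{t'}(x) = g'_t(x) / r_{t/t'}(x)
    return (g_est - b_tp) / a_tp   # (e3) f-guess_{t'}(x)

# Example mirroring the text: estimate a 70-year-old image from f_74(x),
# with hypothetical coefficients a_74, b_74, a_70, b_70 and ratio r_{74/70}.
f_74 = np.array([100.0, 90.0, 80.0])
r_74_70 = np.array([0.95, 0.97, 0.96])
f_guess_70 = estimate_image(f_74, 0.01, 0.1, 0.011, 0.09, r_74_70)
```

Note that when t′ = t and r = 1, the function returns the input image unchanged, which is a quick sanity check on the three-step chain.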

(14) Then, in performing Step S160, the estimated image-data information and the image-data ratio information are linearly combined so as to generate an augmented image-data information with respect to the estimated age or aging range. In linearly combining the estimated image-data information and the image-data ratio information, the following steps are included: a step of linearly combining the estimated image-data information and the image-data ratio information based on a first weighting value and a second weighting value; and a step of adjusting the first weighting value and the second weighting value to generate different augmented image-data information.

(15) For example, in linearly combining the estimated image-data information and the image-data ratio information, the following steps are performed. Firstly, combine f-guess.sub.70(x) and f-ratio.sub.70(x) in a weighted manner, in which u(x) stands for the first weighting value, v(x) stands for the second weighting value, and both are adjustable per requirements. Through varying u(x) and v(x), more image data can be generated. By imposing u(x)+v(x)=1, then f-new.sub.t′(x)=u(x)×f-guess.sub.t′(x)+v(x)×f-ratio.sub.t′(x), in which f-new.sub.t′(x) stands for the augmented image-data information. In one embodiment of this disclosure, if u(x) and v(x) are both given by a constant 0.5, then a set of augmented image-data information approximating f.sub.70(x) can be generated according to f.sub.74(x).
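Step S160 is then a convex combination of the two estimates; a small numpy sketch, with hypothetical voxel values for the estimated and ratio-based images:

```python
import numpy as np

def augment(f_guess, f_ratio, u=0.5):
    """Step S160: f-new(x) = u(x)*f-guess(x) + v(x)*f-ratio(x), v = 1 - u.
    A scalar u stands in for the (generally voxel-wise) weighting value."""
    v = 1.0 - u
    return u * f_guess + v * f_ratio

# Hypothetical estimated and ratio-based images for the age of 70.
f_guess_70 = np.array([98.0, 91.0, 83.0])
f_ratio_70 = np.array([102.0, 89.0, 81.0])

# Sweeping the first weighting value generates several augmented images
# from the same source image, which is the data-augmentation payoff.
augmented = [augment(f_guess_70, f_ratio_70, u) for u in (0.25, 0.5, 0.75)]
```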

(16) Herein, f-new.sub.70(x) is not a practical or real image f.sub.70(x) for the same 70-year-old healthy testee; rather, different f-new.sub.70(x) are obtained by varying u(x) and v(x) according to a statistical and mathematical estimation in a rule-based manner. In the brain-function image data augmentation method S100 provided by this disclosure, statistical variety is achieved by introducing u(x) and v(x), so that the following deep learning can be performed smoothly without being affected by possible data imbalance. Empirically, even in a situation where ⅔ of the healthy-testee data are missing, the method can still provide accurate deep learning models. In a conventional image-training session with transfer learning, at least thousands or even millions of image data are required for a successful estimation. Since the major cost is spent on collecting healthy-testee data for the target database, ⅔ of the cost for the target database can be saved by implementing the method provided in this disclosure.

(17) In summary, in the brain-function image data augmentation method provided by this disclosure, the expectation-value database having given image-data expected information is introduced to expand the training data set of the deep learning.

(18) With respect to the above description then, it is to be realized that the optimum dimensional relationships for the parts of the disclosure, to include variations in size, materials, shape, form, function and manner of operation, assembly and use, are deemed readily apparent and obvious to one skilled in the art, and all equivalent relationships to those illustrated in the drawings and described in the specification are intended to be encompassed by the present disclosure.