Method for detecting image of esophageal cancer using hyperspectral imaging
11810300 · 2023-11-07
Assignee
Inventors
- Hsiang-Chen Wang (Chiayi, TW)
- Tsung-Yu Yang (Chiayi County, TW)
- Yu-Sheng Chi (Chiayi County, TW)
- Ting-Chun Men (Chiayi County, TW)
CPC classification
G06V10/76
PHYSICS
G06T3/40
PHYSICS
G06V10/754
PHYSICS
International classification
G06T3/40
PHYSICS
G06V10/75
PHYSICS
Abstract
This application provides a method for detecting images of a testing object using hyperspectral imaging. First, hyperspectral imaging information is obtained from a reference image and is used to convert an input image into a corresponding hyperspectral image and to extract corresponding feature values, which are then simplified by principal component analysis. Feature images are obtained with convolution kernels, and the image of the object under detection is positioned in the feature image by a default box and a boundary box. By comparison with esophageal cancer sample images, the image of the object under detection is classified as an esophageal cancer image or a non-esophageal cancer image. The convolutional neural network thus detects whether an input image from the image capturing device is an esophageal cancer image, helping the doctor interpret the image of the object under detection.
Claims
1. A method for detecting images of testing object using hyperspectral imaging, comprising steps of: a computing device obtaining hyperspectral image information according to a reference image, the reference image including at least an object reference image and a background image; an image capture unit capturing an inputted image and sending the inputted image to the computing device, the inputted image including at least a testing object image and the background image; the computing device converting the inputted image to obtain a hyperspectral image according to the hyperspectral image information; the computing device analyzing the hyperspectral image to obtain a plurality of first hyperspectral image vectors and first hyperspectral eigenvalues; the computing device performing a principal component analyzing process to the first hyperspectral eigenvalues to simplify the hyperspectral image and generate a plurality of corresponded second eigenvalues; the computing device performing at least a layer of convolution calculation on the corresponded second eigenvalues to filter out the background image and obtain a convolution result according to a plurality of convolution cores, for obtaining at least a selected image of the at least a testing object image according to the convolution result, wherein the convolution cores capture a plurality of selected eigenvalues and peripheral eigenvalues from the second eigenvalues after the filtering of the background image, the at least a testing object image includes a plurality of peripheral images and the at least a selected image, the plurality of peripheral images is adjacent to the at least a selected image, the at least a selected image corresponds to the selected eigenvalues, and the peripheral images correspond to the peripheral eigenvalues, the peripheral images surrounding the at least a selected image; the computing device generating at least a pre-set frame corresponding to an edge of the at least a selected image and on the 
inputted image according to the peripheral eigenvalues; the computing device generating a boundary frame on the inputted image and comparing a first center of the pre-set frame with a second center of the boundary frame to obtain a center offset between the pre-set frame and the boundary frame, wherein the boundary frame corresponds to an edge of the inputted image; the computing device performing a regression operation to obtain a regression operation result according to the center offset; the computing device aligning the testing object image according to the regression operation result and the pre-set frame wherein when the first center is moved toward the second center, the selected images are moved toward the second center; the computing device comparing the testing object image with at least a sample image to generate a comparing result; and the computing device judging if the inputted image is a target object image or not according to the comparing result.
2. The method for detecting the images of testing object using hyperspectral imaging of claim 1, in which in the step that the computing device performing the at least a layer of convolution calculation on the corresponded second eigenvalues to filter out the background image and obtain the convolution result according to the plurality of convolution cores, the computing device sets the convolution cores to m×n×p matrix and normalizes a plurality of pixel values of the inputted image to the normal pixel values, multiplies the normal pixel values by the convolution cores, and captures the second eigenvalues in a convolutional layer; where m=n, m is 1, 3, 5, 10, 19, or 38.
3. The method for detecting the images of testing object using hyperspectral imaging of claim 1, in which in the step that obtaining the at least a selected image of the at least a testing object image according to the convolution cores, the computing device obtains the entire area or a partial area where the testing object image is located according to the selected eigenvalues and obtains the at least a selected image from the second eigenvalues F2 corresponding to the inputted image.
4. The method for detecting the images of testing object using hyperspectral imaging of claim 1, in which in the step that the computing device follows a plurality of convolution cores making the at least a layer of convolution calculation on the second eigenvalues, the computing device follows a single multi-frame target detector model to perform convolution on each pixel of the inputted image and detect the second eigenvalues.
5. The method for detecting the images of testing object using hyperspectral imaging of claim 1, in which in the step of the computing device performing a regression operation according to the center offset, the computing device uses a first position of the at least a pre-set frame, a second position of the boundary frame, and a zooming factor to perform the regression operation and position the at least a testing object image.
6. The method for detecting the images of testing object using hyperspectral imaging of claim 1, in which in the step that the computing device compares the testing object image with at least a sample image, the computing device performs categorization and comparison of the testing object image and the at least a sample image on a fully connected layer.
7. The method for detecting the images of testing object using hyperspectral imaging of claim 1, in which in the step of determining that the inputted image is a target object image according to a comparison result, when the computing device cannot determine that the inputted image is a target object image according to the at least a sample image, the computing device follows the at least a sample image to make similarity comparison on the testing object image.
8. The method for detecting the images of testing object using hyperspectral imaging of claim 7, in which in the step that the computing device follows the at least a sample image to make similarity comparison on the testing object image, when the computing device judges that the similarity of the testing object image is greater than a similarity threshold, the computing device categorizes the inputted image to the target object image; otherwise, the computing device categorizes the inputted image to the non-target object image.
9. The method for detecting the images of testing object using hyperspectral imaging of claim 1, in which the hyperspectral image information corresponds to a plurality of white light images and a plurality of narrowband images, which include a plurality of color matching functions, a correction matrix, and a conversion matrix.
10. The method for detecting the images of testing object using hyperspectral imaging of claim 1, in which in the step that the computing device follows the aligned testing object image to make the matching comparison between the testing object image and the at least a sample image, the computing device reads the at least a sample image from a database and performs the matching comparison between the aligned testing object image and the at least a sample image.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
DETAILED DESCRIPTION
(9) To enable the Review Committee members to gain a deeper realization and understanding of the features and functions of this invention, the embodiments are explained in detail below:
(10) Because manual operation of the conventional endoscope is prone to negligence, and its complicated operation makes image recognition difficult, this invention proposes a method for detecting an object image with a hyperspectral image, aiming to solve these problems of the conventional endoscope technology.
(11) The following statements further explain the features of the method for detecting object images using hyperspectral imaging and the system disclosed by this invention:
(12) First, refer to
(13) Please refer to
(14) In step S05, as shown in
(15) Refer to
(16) Continuing the above, the first step of conversion should convert the reference image REF and the spectrometer to the same XYZ color space; the conversion equation of the reference image REF is as follows:
[XYZ]=[M.sub.A][T][ƒ(R) ƒ(G) ƒ(B)].sup.T
(18) Where ƒ(n) is a gamma function, T is the conversion matrix, and [M.sub.A] is the color adaptation matrix.
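The gamma linearization and matrix conversion described above can be sketched as follows. This is a minimal sketch: the standard sRGB gamma curve and the sRGB-to-XYZ matrix stand in for ƒ(n) and T, and [M.sub.A] is taken as the identity; the patent's actual matrices would be calibrated for the specific endoscope.

```python
import numpy as np

def gamma_inverse(n):
    """Gamma function f(n): linearize nonlinear RGB values in [0, 1]
    (the standard sRGB curve is assumed here)."""
    n = np.asarray(n, dtype=float)
    return np.where(n > 0.04045, ((n + 0.055) / 1.055) ** 2.4, n / 12.92)

# Conversion matrix T: standard sRGB (D65) to XYZ, as a stand-in for the
# patent's calibrated matrix
T = np.array([[0.4124, 0.3576, 0.1805],
              [0.2126, 0.7152, 0.0722],
              [0.0193, 0.1192, 0.9505]])

# Color adaptation matrix M_A: identity here (a real pipeline would use a
# chromatic adaptation transform such as Bradford)
M_A = np.eye(3)

def rgb_to_xyz(rgb):
    """[XYZ] = [M_A][T][f(R) f(G) f(B)]^T, scaled so white maps to Y = 100."""
    return 100.0 * (M_A @ T @ gamma_inverse(rgb))
```

With these stand-in matrices, pure white (1, 1, 1) maps to Y = 100, consistent with the brightness normalization used later in the pipeline.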
(21) The equation for converting the reflection spectrum data captured by the spectrometer to the XYZ color gamut space is as follows:
X=k∫.sub.380nm.sup.780nmS(λ)R(λ){tilde over (x)}(λ)dλ Equation (5)
Y=k∫.sub.380nm.sup.780nmS(λ)R(λ){tilde over (y)}(λ)dλ Equation (6)
Z=k∫.sub.380nm.sup.780nmS(λ)R(λ){tilde over (z)}(λ)dλ Equation (7)
Where k is shown in below Equation (8):
k=100/∫.sub.380nm.sup.780nmS(λ){tilde over (y)}(λ)dλ Equation (8)
(22) {tilde over (x)}(λ), {tilde over (y)}(λ), {tilde over (z)}(λ) are the color matching functions, and S(λ) is the spectrum of the light source taken by the endoscope. Since the Y value of the XYZ color space is proportional to the brightness, the upper limit of Y for the light source spectrum is specified to be 100, and the normalization ratio k of brightness is obtained from Equation (8). Equations (5) to (7) then give the XYZ values [XYZ.sub.Spectrum].
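The integrals of Equations (5) to (7) and the normalization ratio k can be sketched numerically as follows. The light-source spectrum, reflectance, and color matching functions below are hypothetical stand-ins; real code would load the measured spectra and the CIE 1931 observer tables.

```python
import numpy as np

def integrate(f, x):
    """Trapezoidal integral of sampled values f over the grid x."""
    return float(np.sum((f[:-1] + f[1:]) / 2 * np.diff(x)))

# Visible band used in the patent: 380-780 nm
wl = np.linspace(380.0, 780.0, 401)

# Hypothetical stand-ins: flat light-source spectrum S(lambda), flat sample
# reflectance R(lambda), and Gaussian approximations of the color matching
# functions x-bar, y-bar, z-bar
S = np.ones_like(wl)
R = np.ones_like(wl)
xbar = np.exp(-0.5 * ((wl - 600.0) / 40.0) ** 2)
ybar = np.exp(-0.5 * ((wl - 555.0) / 45.0) ** 2)
zbar = np.exp(-0.5 * ((wl - 450.0) / 25.0) ** 2)

# Normalization ratio k: scales the light source's own Y to 100
k = 100.0 / integrate(S * ybar, wl)

# Equations (5)-(7): tristimulus values of the reflected light
X = k * integrate(S * R * xbar, wl)
Y = k * integrate(S * R * ybar, wl)
Z = k * integrate(S * R * zbar, wl)
```

Because the stand-in reflectance is 1 everywhere, Y comes out exactly 100 here, which is the intended effect of the normalization.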
(23) In addition, the endoscope image can be further corrected through the correction matrix C of Equation (10):
[C]=[XYZ.sub.Spectrum]×pinv([V]) Equation (10)
(24) The variable matrix [V] is obtained by analyzing the factors that may cause errors in the endoscope: the nonlinear response of the endoscope, the dark current of the endoscope, the inaccurate color separation of the color filter, and color shift (for example, white balance). It is used to correct the XYZ values [XYZ.sub.Spectrum].
(25) Since the calculation results of the narrowband image and the white light image in the third-order operation are similar, the nonlinear response correction is performed by the third-order equation, and the nonlinear response of the endoscope is corrected by Equation (11):
V.sub.Non-linear=[X.sup.3 Y.sup.3 Z.sup.3 X.sup.2 Y.sup.2 Z.sup.2 X Y Z 1].sup.T Equation (11)
(26) Generally speaking, the dark current in the endoscope is a fixed value and does not change significantly with the amount of incoming light; the effect of dark current is therefore regarded as a constant, and the correction variable of dark current is defined as V.sub.Dark. Equation (12) is used to correct the influence of dark current:
V.sub.Dark=[α] Equation (12)
(27) The correction variable for the inaccurate color separation and color shift of the filter is defined as V.sub.Color. {tilde over (x)}(λ), {tilde over (y)}(λ), {tilde over (z)}(λ) are the color matching functions from RGB color space to XYZ color space; therefore, according to the correlation between them, the possible permutations of X, Y, and Z are listed in Equation (13) below, used to correct the inaccurate color separation and color shift of the endoscopic image in the color filter:
V.sub.Color=[XYZ XY YZ XZ X Y Z].sup.T Equation (13)
(28) From the above Equation (11) to Equation (13), the correction variable matrix V shown in Equation (14) is derived:
V=[X.sup.3 Y.sup.3 Z.sup.3 X.sup.2Y X.sup.2Z Y.sup.2Z XZ.sup.2 YZ.sup.2 XYZ X.sup.2 Y.sup.2 Z.sup.2 XY YZ XZ X Y Z α].sup.T Equation (14)
(29) By combining the above variable matrix V with the correction matrix C, the values of corrected X, Y, and Z [XYZ.sub.Correct] are obtained, as shown in Equation (15) below:
[XYZ.sub.Correct]=[C]×[V] Equation (15)
(30) The average error of the white light image between [XYZ.sub.Correct] and [XYZ.sub.Spectrum] is 1.40, and the average error of the narrowband image between [XYZ.sub.Correct] and [XYZ.sub.Spectrum] is 2.39.
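The correction of Equations (10), (14), and (15) can be sketched as a least-squares fit over a set of calibration samples. This is a minimal sketch; the exact ordering of the terms in the correction vector follows the terms listed above but is otherwise an assumption.

```python
import numpy as np

def correction_variable_vector(X, Y, Z, alpha=1.0):
    """Correction variable vector V (Equation (14)) for one XYZ sample.
    alpha is the dark-current constant V_Dark."""
    return np.array([
        X**3, Y**3, Z**3,
        X**2 * Y, X**2 * Z, Y**2 * Z, X * Z**2, Y * Z**2, X * Y * Z,
        X**2, Y**2, Z**2,
        X * Y, Y * Z, X * Z,
        X, Y, Z,
        alpha,
    ])

def fit_correction_matrix(xyz_camera, xyz_spectrum):
    """[C] = [XYZ_Spectrum] x pinv([V]) (Equation (10)), fitted over a set
    of calibration samples (e.g., the patches of a color checker)."""
    V = np.stack([correction_variable_vector(*p) for p in xyz_camera], axis=1)
    return np.asarray(xyz_spectrum).T @ np.linalg.pinv(V)  # shape (3, 19)

def correct_xyz(C, X, Y, Z):
    """[XYZ_Correct] = [C] x [V] (Equation (15)) for one sample."""
    return C @ correction_variable_vector(X, Y, Z)
```

Fitting on synthetic data where the camera and spectrometer values already agree recovers those values, since the linear X, Y, Z terms are included in V.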
(31) Since the above calculation uses the visible light wavelength range of 380 nm to 780 nm, the correction result of endoscope must be expressed in color difference, where [XYZ.sub.Correct] and [XYZ.sub.Spectrum] are converted to Lab color space corresponding to CIE DE2000. The color space conversion equations are shown in Equation (16) to Equation (18):
L*=116ƒ(Y/Y.sub.n)−16 Equation (16)
a*=500[ƒ(X/X.sub.n)−ƒ(Y/Y.sub.n)] Equation (17)
b*=200[ƒ(Y/Y.sub.n)−ƒ(Z/Z.sub.n)] Equation (18)
(33) Where ƒ(n) is shown in Equation (19) below:
ƒ(n)=n.sup.1/3 if n>(6/29).sup.3; otherwise ƒ(n)=(1/3)(29/6).sup.2 n+4/29 Equation (19)
(35) The average chromatic aberration of the white light image before correction is 11.4, the average chromatic aberration after correction is 2.84, and the average chromatic aberration of the narrowband image before correction is 29.14, the average chromatic aberration after correction is 2.58.
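The Lab conversion used for the chromatic aberration evaluation can be sketched as follows. This applies the standard CIELAB equations; the D65 white point used here is an assumed default, as the patent does not specify its reference white.

```python
import numpy as np

def f(n):
    """Piecewise cube-root function of the CIELAB conversion."""
    delta = 6.0 / 29.0
    n = np.asarray(n, dtype=float)
    return np.where(n > delta**3, np.cbrt(n), n / (3 * delta**2) + 4.0 / 29.0)

def xyz_to_lab(X, Y, Z, white=(95.047, 100.0, 108.883)):
    """Standard CIELAB conversion from XYZ, relative to a reference white
    (D65 assumed here). Returns (L*, a*, b*)."""
    Xn, Yn, Zn = white
    L = 116.0 * f(Y / Yn) - 16.0
    a = 500.0 * (f(X / Xn) - f(Y / Yn))
    b = 200.0 * (f(Y / Yn) - f(Z / Zn))
    return float(L), float(a), float(b)
```

The color difference between two converted points would then be evaluated with the CIE DE2000 formula, as the text describes.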
(36) In step S10, as shown in
(37) Following the above, in step S14, the host takes out a plurality of corresponding first hyperspectral eigenvalues F1 according to the hyperspectral image HYI. In step S16, the host 10 performs principal component analysis (PCA) calculations on the first hyperspectral eigenvalues F1 obtained in step S14. To simplify the calculation result and filter out lower-variance components, the hyperspectral image HYI is simplified and a plurality of second eigenvalues F2 are generated. The calculation equation of PCA is shown in Equation (20) below:
y.sub.i=a.sub.j1(x.sub.1i−x̄.sub.1)+a.sub.j2(x.sub.2i−x̄.sub.2)+ . . . +a.sub.jn(x.sub.ni−x̄.sub.n) Equation (20)
(38) Where x.sub.1i to x.sub.ni represent the spectral intensities of the first to the n.sup.th wavelengths, x̄.sub.1 to x̄.sub.n represent their mean values, and a.sub.j1 to a.sub.jn are the coefficients of the j.sup.th principal component.
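The PCA projection of Equation (20) can be sketched with the usual covariance-eigenvector route. The data below are random stand-ins for the hyperspectral pixels; 12 components are kept to match the embodiment described later.

```python
import numpy as np

# Hypothetical hyperspectral data: 1000 pixels x 401 wavelengths (380-780 nm)
rng = np.random.default_rng(0)
spectra = rng.normal(size=(1000, 401))

# PCA by the covariance route; Equation (20) is exactly this projection,
# y = a_j . (x - x_mean), written out component by component.
mean = spectra.mean(axis=0)
centered = spectra - mean
cov = centered.T @ centered / (len(spectra) - 1)

eigvals, eigvecs = np.linalg.eigh(cov)   # eigh returns ascending eigenvalues
order = np.argsort(eigvals)[::-1]        # re-sort by descending variance
components = eigvecs[:, order[:12]]      # keep 12 principal components

# The simplified second eigenvalues: one 12-dimensional score per pixel
scores = centered @ components
```

The first score column carries at least as much variance as the second, reflecting the "filter out lower changes" simplification in the text.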
(39) Furthermore, by using the correction value [XYZ.sub.Correct] obtained above with the reflection spectrum data corresponding to the above 24 color checker [R.sub.Spectrum], the corresponding conversion matrix M is obtained from Equation (21) below:
[M]=[Score]×pinv([V.sub.Color]) Equation (21)
[S.sub.Spectrum].sub.380-780=[EV][M][V.sub.Color] Equation (22)
(40) Where [Score] is the matrix of principal component scores obtained from the reflectance spectrum data [R.sub.Spectrum] through principal component analysis, and [EV] is the matrix of principal components (eigenvectors). In this embodiment, 12 sets of principal components with better explanatory ability are used (the weight percentages are 88.0417%, 8.2212%, 2.6422%, 0.609%, 0.22382%, 0.10432%, 0.054658%, 0.0472%, 0.02638%, 0.012184%, 0.010952%, and 0.0028714%) to perform the dimensionality-reduction operation and obtain the simulated spectrum [S.sub.Spectrum].sub.380-780. The error between [S.sub.Spectrum].sub.380-780 and the inputted image IMG [XYZ.sub.Spectrum] is corrected from 11.6 to 2.85 in the white light image, and from 29.14 to 2.60 in the narrowband image. The resulting color error cannot be easily recognized by the naked eye, giving the user better color reproduction performance when color reproduction is required. Therefore, a better hyperspectral image in the visible light band is simulated.
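Equations (21) and (22) can be sketched directly with a pseudo-inverse fit. The matrix shapes below (12 components, 7 V.sub.Color terms, 24 color-checker patches) follow the embodiment but are otherwise assumptions.

```python
import numpy as np

def fit_conversion_matrix(score, v_color):
    """[M] = [Score] x pinv([V_Color]) (Equation (21)).
    score: principal component scores of the calibration patches,
    shape (n_components, n_patches); v_color: V_Color vectors of the
    corrected XYZ values, shape (n_terms, n_patches)."""
    return score @ np.linalg.pinv(v_color)

def simulate_spectrum(ev, M, v_color):
    """[S_Spectrum]_380-780 = [EV][M][V_Color] (Equation (22)).
    ev: principal component eigenvectors, shape (n_wavelengths, n_components)."""
    return ev @ M @ v_color
```

If the scores were generated exactly by some matrix M acting on V.sub.Color, the pseudo-inverse fit recovers that matrix, which is the sense in which Equation (21) inverts Equation (22).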
(41) In step S20, as shown in
(42) Refer to
(43) In step S30, as shown in
(44) The side length s.sub.k of the pre-set frame in the k.sup.th characteristic map is obtained from Equation (1):
s.sub.k=s.sub.min+((s.sub.max−s.sub.min)/(m−1))(k−1), k∈[1,m] Equation (1)
Where s.sub.min and s.sub.max are the minimum and maximum frame scales, and m is the number of characteristic maps.
(45) From Equation (2) and Equation (3), the height and width are calculated according to the side length s.sub.k:
h.sub.k=s.sub.k/√{square root over (a.sub.r)} Equation (2)
w.sub.k=s.sub.k√{square root over (a.sub.r)} Equation (3)
(46) The h.sub.k represents the frame height of the rectangle in the k.sup.th characteristic map under prior inspection, w.sub.k represents the frame width of the rectangle in the k.sup.th characteristic map under prior inspection, and a.sub.r represents the aspect ratio of the pre-set frame D, a.sub.r>0.
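The pre-set frame geometry above can be sketched as follows. This is a minimal sketch: the interpolation of the side length s.sub.k and the hyperparameter values (m, s.sub.min, s.sub.max, and the aspect ratios) follow the common single-shot multibox detector convention and are assumptions here; note the reciprocal square-root factors, so frames of every aspect ratio keep the same area s.sub.k².

```python
import numpy as np

def default_box_sizes(k, m=6, s_min=0.2, s_max=0.9,
                      aspect_ratios=(1.0, 2.0, 0.5)):
    """Heights and widths of the pre-set frames for the k-th characteristic
    map: s_k interpolates linearly between s_min and s_max across the m maps,
    then h_k = s_k / sqrt(a_r) and w_k = s_k * sqrt(a_r) for each aspect
    ratio a_r > 0. Returns a list of (height, width) pairs."""
    s_k = s_min + (s_max - s_min) * (k - 1) / (m - 1)
    return [(s_k / np.sqrt(a_r), s_k * np.sqrt(a_r)) for a_r in aspect_ratios]
```

For a_r = 1 the frame is square; larger a_r produces wider, shorter frames of identical area.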
(47) In step S40, as shown in
Pre-set frame D location, d=(d.sup.cx, d.sup.cy, d.sup.w, d.sup.h) Equation (4)
Boundary frame B location, b=(b.sup.cx, b.sup.cy, b.sup.w, b.sup.h) Equation (5)
Zooming factor, l=(l.sup.cx, l.sup.cy, l.sup.w, l.sup.h) Equation (6)
b.sup.cx=d.sup.w l.sup.cx+d.sup.cx Equation (7)
b.sup.cy=d.sup.h l.sup.cy+d.sup.cy Equation (8)
b.sup.w=d.sup.w exp(l.sup.w) Equation (9)
b.sup.h=d.sup.h exp(l.sup.h) Equation (10)
(48) First, align the center coordinates of the boundary frame B with the center coordinates of the predictive detection frame D, which means "translating" the center point of the boundary frame B to the center point of the predictive detection frame D; that is, the first center Dc and the second center Bc in
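The regression decoding of Equations (7) to (10) can be sketched as follows; boxes are represented as (cx, cy, w, h) tuples, an assumed but conventional layout.

```python
import numpy as np

def decode_box(d, l):
    """Recover the boundary frame b from the pre-set frame d and the zooming
    factors l, per Equations (7)-(10):
        b_cx = d_w * l_cx + d_cx    b_cy = d_h * l_cy + d_cy
        b_w  = d_w * exp(l_w)       b_h  = d_h * exp(l_h)
    Each box is a (cx, cy, w, h) tuple."""
    d_cx, d_cy, d_w, d_h = d
    l_cx, l_cy, l_w, l_h = l
    return (d_w * l_cx + d_cx,
            d_h * l_cy + d_cy,
            d_w * float(np.exp(l_w)),
            d_h * float(np.exp(l_h)))
```

With zero offsets and zero log-scale factors, the boundary frame coincides with the pre-set frame, which is the reference point of the regression.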
(49) To accurately define the position of the testing object image O1, it further works with the loss equation, as shown in Equation (11) below:
L.sub.loc(x,l,g)=Σ.sub.i∈Pos.sup.NΣ.sub.m∈{cx,cy,w,h}x.sub.ij.sup.k smooth.sub.L1(l.sub.i.sup.m−ĝ.sub.j.sup.m) Equation (11)
(50) It thus has verified the error between the locations of the pre-set predictive detection frame D and the testing object image O1.
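The localization loss above can be sketched as follows. The boolean mask stands in for the match indicator x.sub.ij.sup.k, an assumed simplification of the per-category matching.

```python
import numpy as np

def smooth_l1(x):
    """smooth_L1(x) = 0.5*x^2 for |x| < 1, |x| - 0.5 otherwise."""
    x = np.asarray(x, dtype=float)
    return np.where(np.abs(x) < 1.0, 0.5 * x**2, np.abs(x) - 0.5)

def localization_loss(l, g_hat, matched):
    """L_loc: sum of smooth_L1(l_i^m - g_hat_j^m) over the four offsets
    m in {cx, cy, w, h} of the positively matched pre-set frames.
    l, g_hat: (N, 4) predicted and encoded ground-truth offsets;
    matched: boolean mask of the positively matched frames."""
    diff = np.asarray(l)[matched] - np.asarray(g_hat)[matched]
    return float(smooth_l1(diff).sum())
```

The loss is zero exactly when the predicted offsets of every matched frame equal the encoded ground truth, which is the verification of position error the text describes.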
(51) In step S60, as shown in
(53) In summary, the method for detecting object images using hyperspectral imaging disclosed in this invention provides a host to obtain hyperspectral image information, and then converts the inputted image into a hyperspectral image according to that information before running the convolution program. The host constructs a convolutional neural network to convolve the inputted image from the image capture unit and filter out the area to be detected. It can therefore set up a predictive detection frame on the inputted image, use the regression calculation with the boundary frame to determine the location of the testing object image, and compare the testing object image with the sample images, using the comparison result to categorize the target object image and the non-target object image.