METHOD FOR EXTRACTING SPECTRAL INFORMATION OF A SUBSTANCE UNDER TEST
20220207856 · 2022-06-30
Inventors
- Min LIU (Shenzhen, CN)
- Zhe REN (Shenzhen, CN)
- Xingchao YU (Shenzhen, CN)
- Jinbiao HUANG (Shenzhen, CN)
- Bin GUO (Shenzhen, CN)
CPC classification
G01N2021/1765
PHYSICS
G01N21/31
PHYSICS
H04N5/30
ELECTRICITY
G01N2021/555
PHYSICS
G06V10/25
PHYSICS
International classification
G06V10/25
PHYSICS
Abstract
A method for extracting spectral information of a substance under test includes: identifying a pixel region A(x, y) occupied by an object under test from an acquired hyperspectral image; extracting a specular reflection region A.sub.q and a diffuse reflection region A.sub.r from the pixel region A(x, y), and calculating a representative spectrum I.sub.q(ω) of the specular reflection region A.sub.q and a representative spectrum I.sub.r(ω) of the diffuse reflection region A.sub.r, respectively; and, by comparing each element in the representative spectrum I.sub.q(ω) of the specular reflection region A.sub.q with each element in the representative spectrum I.sub.r(ω) of the diffuse reflection region A.sub.r, separating information of a light source from spectral information of the object to obtain a first spectral invariant C(ω). This method does not require additional spectral information of the light source, which improves the analysis efficiency.
Claims
1. A method for extracting spectral information of a substance under test, comprising the following steps: S1: identifying a pixel region A(x, y) occupied by an object under test from an acquired hyperspectral image; S2: extracting a specular reflection region A.sub.q and a diffuse reflection region A.sub.r from the pixel region A(x, y), and calculating a representative spectrum I.sub.q(ω) of the specular reflection region A.sub.q and a representative spectrum I.sub.r(ω) of the diffuse reflection region A.sub.r, respectively; S3: by comparing each element in the representative spectrum I.sub.q(ω) of the specular reflection region A.sub.q with each element in the representative spectrum I.sub.r(ω) of the diffuse reflection region A.sub.r, separating information of a light source from spectral information of the object to obtain a first spectral invariant C(ω).
2. The method for extracting spectral information of a substance under test according to claim 1, further comprising the following step: S4: performing linear transformation processing on the first spectral invariant C(ω) to obtain a second spectral invariant R(ω), the second spectral invariant R(ω) being used for spectral analysis.
3. The method for extracting spectral information of a substance under test according to claim 1, wherein, in step S1 a first region selection method is used to identify the substance under test and select the pixel region A(x, y), the first region selection method comprising manual labeling, a machine vision algorithm, spectral angle mapping or a deep learning algorithm.
4. The method for extracting spectral information of a substance under test according to claim 1, wherein, step S2 comprises: S21: extracting the specular reflection region A.sub.q and the diffuse reflection region A.sub.r from the pixel region A(x, y) by using a second region selection method; S22: obtaining the representative spectrum I.sub.q(ω) according to the specular reflection region A.sub.q, and obtaining the representative spectrum I.sub.r(ω) according to the diffuse reflection region A.sub.r.
5. The method for extracting spectral information of a substance under test according to claim 4, wherein, the second region selection method comprises principal component analysis, a k-means method, orthogonal projection, or a region selection based on geometric shapes.
6. The method for extracting spectral information of a substance under test according to claim 4, wherein, a method for calculating the representative spectrum I.sub.q(ω) of the specular reflection region A.sub.q and the representative spectrum I.sub.r(ω) of the diffuse reflection region A.sub.r comprises taking the average, taking a brightness weighted average, or a gray world algorithm.
7. The method for extracting spectral information of a substance under test according to claim 6, wherein, average spectra of all pixels in the specular reflection region A.sub.q and the diffuse reflection region A.sub.r are calculated respectively according to the specular reflection region A.sub.q and the diffuse reflection region A.sub.r as the representative spectrum I.sub.q(ω) and the representative spectrum I.sub.r(ω): I.sub.q(ω)=(1/N.sub.q)Σ.sub.(x.sub.a, y.sub.a)∈A.sub.q i(x.sub.a, y.sub.a, ω), I.sub.r(ω)=(1/N.sub.r)Σ.sub.(x.sub.a, y.sub.a)∈A.sub.r i(x.sub.a, y.sub.a, ω), wherein N.sub.q and N.sub.r respectively represent the numbers of pixels in the specular reflection region A.sub.q and the diffuse reflection region A.sub.r, and i(x.sub.a, y.sub.a, ω) represents the spectrum of the pixel at the position (x.sub.a, y.sub.a).
8. The method for extracting spectral information of a substance under test according to claim 1, wherein, a method for calculating the first spectral invariant C(ω) in step S3 comprises finite element decomposition, spectral angle separation or division.
9. The method for extracting spectral information of a substance under test according to claim 8, wherein, each element in the representative spectrum I.sub.q(ω) of the specular reflection region A.sub.q is divided by each element in the representative spectrum I.sub.r(ω) of the diffuse reflection region A.sub.r to obtain the first spectral invariant C(ω).
10. The method for extracting spectral information of a substance under test according to claim 2, wherein, step S4 comprises: S41: performing standard normal variate transformation on the first spectral invariant C(ω) to obtain the second spectral invariant R(ω): R(ω)=(C(ω)−⟨C(ω)⟩.sub.ω)/σ.sub.ω, wherein ⟨C(ω)⟩.sub.ω represents an average value of C(ω) in a wavelength dimension and σ.sub.ω represents a standard deviation of C(ω) in the wavelength dimension; S42: using the second spectral invariant R(ω) as an input of a chemometric model for spectral analysis of the substance.
11. The method for extracting spectral information of a substance under test according to claim 10, wherein, the chemometric model comprises a partial least-square regression, an artificial neural network or a support vector machine.
12. The method for extracting spectral information of a substance under test according to claim 1, wherein, in the hyperspectral image, the pixel region A(x, y) occupied by the object under test remains unchanged in each wavelength band during photographing, and the object under test occupies a certain proportion of the hyperspectral image.
13. A spectral camera, characterized by comprising: a lens, a wavelength splitter, an imaging device, and a data storage and processing device, light emitted from a light source being reflected back from a surface of a substance under test, reaching the imaging device after passing through the lens and the wavelength splitter, and being converted by the data storage and processing device into an electrical signal and a digital signal at different wavelengths, the digital signal being spectral image data comprising spectral information of the light source and substance spectral information of the object on the surface of the object under test, the spectral image data being processed by the method for extracting spectral information of a substance under test according to claim 1 so as to obtain substance properties of the object under test.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0035] The drawings are included to provide a further understanding of the embodiments, and are incorporated into and constitute a part of this specification. The drawings illustrate the embodiments and, together with the description, serve to explain the principles of the present application. Other embodiments and many of the expected advantages of the embodiments will be readily recognized as they become better understood by reference to the following detailed description. The elements in the drawings are not necessarily to scale relative to each other. Like reference numerals designate corresponding similar components.
DETAILED DESCRIPTION
[0041] The present application will be further described in detail below with reference to the drawings and embodiments. It can be understood that the specific embodiments described here are only used to explain the relevant application, but not to limit the application. In addition, it is to be further noted that only portions related to the relevant application are shown in the drawings to facilitate description.
[0042] It is to be noted that the embodiments in the present application and the features in the embodiments can be combined with each other in the case of causing no conflict. The present application will be described in detail below with reference to the drawings and in conjunction with the embodiments.
[0043] As shown in the accompanying drawings, the method for extracting spectral information of a substance under test comprises the following steps:
[0044] S1: obtaining the pixel region A(x, y) occupied by an object under test from a hyperspectral image acquired;
[0045] S2: extracting a specular reflection region A.sub.q and a diffuse reflection region A.sub.r from the pixel region A(x, y), and calculating a representative spectrum I.sub.q(ω) of the specular reflection region A.sub.q and a representative spectrum I.sub.r(ω) of the diffuse reflection region A.sub.r, respectively;
[0046] S3: by comparing each element in the representative spectrum I.sub.q(ω) of the specular reflection region A.sub.q with each element in the representative spectrum I.sub.r(ω) of the diffuse reflection region A.sub.r, separating information of the light source from substance spectral information of the object to obtain a first spectral invariant C(ω).
[0047] The representative spectrum I.sub.q(ω) of the specular reflection region A.sub.q contains the spectral information of the substance and the light source information from specular reflection, while the representative spectrum I.sub.r(ω) of the diffuse reflection region A.sub.r only contains the substance spectral information of the object.
[0048] The aforementioned first spectral invariant C(ω) eliminates the influence of the spectrum of the light source by taking advantage of the characteristic that the specular reflection region and the diffuse reflection region contain the same diffuse reflection component but different specular reflection components (i.e. light source components). C(ω) does not change as long as the distance of photographing and the position of the light source do not change. In some engineering scenes, C(ω) can be used directly as the basis of subsequent spectral analysis, thereby effectively eliminating dependence on the light source information.
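The light-source independence described above can be checked numerically. The following NumPy sketch is a non-limiting illustration, not part of the claimed method: it assumes a simple dichromatic-style model in which the diffuse region measures m.sub.d·L(ω)·S(ω) and the specular region additionally receives an m.sub.s·L(ω) light-source term, with all spectra and coefficients synthetic.

```python
import numpy as np

# Synthetic sketch (dichromatic-style model, an assumption for illustration):
# diffuse intensity  I_r(w) = m_d * L(w) * S(w)
# specular intensity I_q(w) = m_d * L(w) * S(w) + m_s * L(w)
S = np.array([0.3, 0.5, 0.7, 0.6])        # substance reflectance spectrum
m_d, m_s = 1.0, 0.4                       # diffuse / specular coefficients

def invariant(L):
    """First spectral invariant C(w) = I_q(w) / I_r(w) for light source L(w)."""
    I_r = m_d * L * S
    I_q = m_d * L * S + m_s * L
    return I_q / I_r

L1 = np.array([1.0, 1.2, 0.9, 1.1])       # one light source spectrum
L2 = 3.0 * np.array([0.8, 1.0, 1.3, 0.7]) # a very different light source

# C(w) is identical under both light sources: L(w) cancels in the ratio.
assert np.allclose(invariant(L1), invariant(L2))
```

Under this model the ratio reduces to 1 + m.sub.s/(m.sub.d·S(ω)), which depends on the substance spectrum but not on the light source.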
[0049] Hereinafter, analysis of an apple is taken as an example to describe an embodiment of the present application. In this embodiment, hyperspectral imaging technology is used to quickly predict the sweetness, acidity, hardness, etc. of the apple.
[0050] The first step is data collection: obtaining the hyperspectral image data of an apple under test. The second step is acquisition of the substance spectral information of the object, that is, extracting the substance spectral information of the apple from the hyperspectral data. The third step is analysis of the acquired spectrum of the substance to obtain information such as the sweetness, acidity and hardness of the apple, which is finally presented to the user. The method of the embodiment of the present application is mainly applied in the second step.
[0051] In a specific embodiment, in step S1, the object under test is identified and the pixel region A(x, y) is selected by a first region selection method. The first region selection method includes manual labeling, a machine vision algorithm, spectral angle mapping or a deep learning algorithm. In other optional embodiments, other methods may also be used to identify the object under test. The acquired hyperspectral image is denoted as I(x, y, ω), where x, y and ω respectively represent the width, height and wavelength of the hyperspectral image.
[0052] Firstly, the acquired hyperspectral image needs to meet two requirements: the pixel region A(x, y) occupied by the object under test remains unchanged in each wavelength band during photographing, and the object under test occupies a certain proportion of the hyperspectral image.
[0053] Mathematically, a three-dimensional matrix I(x, y, ω) is used to represent the HSI, where x, y and ω represent the width, height and wavelength of the hyperspectral image respectively, and each element i(x.sub.a, y.sub.a, ω.sub.b) in the matrix represents the light intensity obtained by the pixel at the position (x.sub.a, y.sub.a) of the image at the wavelength ω.sub.b. Therefore, the spectrum can be presented as the vector composed of light intensity data at different wavelengths, for example, i(x.sub.a, y.sub.a, ω.sub.b) represents the spectrum of the pixel at (x.sub.a, y.sub.a).
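The matrix representation above maps directly onto array indexing. The following sketch (illustrative only; the height-by-width-by-wavelength axis order and the array sizes are assumptions) shows how a single intensity value i(x.sub.a, y.sub.a, ω.sub.b) and a per-pixel spectrum are read from the three-dimensional matrix:

```python
import numpy as np

# A hyperspectral image as a 3-D array I(x, y, w): here 4x5 pixels, 3 bands
# (synthetic data; axis order is an implementation assumption).
rng = np.random.default_rng(0)
I = rng.random((4, 5, 3))

# i(x_a, y_a, w_b): light intensity of the pixel at (2, 3) in band 1.
intensity = I[2, 3, 1]

# The spectrum of one pixel is the vector of intensities along the
# wavelength axis.
spectrum = I[2, 3, :]
assert spectrum.shape == (3,)
```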
[0054] The pixel region A(x, y) occupied by the object under test is selected in the acquired hyperspectral image by the first region selection method. In a preferred embodiment, the first region selection method includes manual labeling, machine vision, spectral angle mapping or deep learning; other feasible image recognition technologies may also be used. Image recognition technology is mature at present, so the object under test can be identified from the hyperspectral image conveniently and accurately; this is also a relatively mature part of current hyperspectral imaging analysis technology. In an embodiment of the present application, object recognition is performed through deep learning, so as to identify the apple under test in the hyperspectral image.
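As a toy stand-in for the first region selection method (the embodiments use manual labeling, machine vision, spectral angle mapping or deep learning; the threshold rule below is purely illustrative and not one of them), a pixel can be marked as belonging to A(x, y) when its band-averaged intensity exceeds a threshold:

```python
import numpy as np

def select_object_region(cube, thresh=0.1):
    """Toy region selection: a boolean mask A(x, y) marking pixels whose
    band-averaged intensity exceeds `thresh` (hypothetical parameter).

    cube: (H, W, B) hyperspectral image array.
    """
    return cube.mean(axis=2) > thresh

# Usage on a synthetic image: a bright 2x2 "object" on a dark background.
cube = np.zeros((4, 4, 3))
cube[1:3, 1:3, :] = 0.8
mask = select_object_region(cube)
assert mask.sum() == 4
```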
[0055] In a specific embodiment, step S2 comprises the following steps:
[0056] S21: extracting a specular reflection region A.sub.q and a diffuse reflection region A.sub.r from the pixel region A(x, y) by a second region selection method;
[0057] S22: obtaining a representative spectrum I.sub.q(ω) according to the specular reflection region A.sub.q, and obtaining a representative spectrum I.sub.r(ω) according to the diffuse reflection region A.sub.r.
[0058] The second region selection method may include principal component analysis, k-means clustering, orthogonal projection, or region selection based on geometric shapes. In a preferred embodiment, the k-means clustering method is used. With two cluster centers specified, the pixels in A(x, y) are grouped into two categories according to their spectral lineshapes. Since the apple surface is spherical and has a low average reflectance, the average brightness of the specular reflection region is relatively high. Therefore, the category with the higher average brightness is marked as the specular reflection region A.sub.q, and the category with the lower average brightness is marked as the diffuse reflection region A.sub.r.
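The preferred two-cluster grouping can be sketched as a minimal k-means loop over pixel spectra, with the brighter cluster labeled as A.sub.q. This is an illustrative implementation, not the claimed method itself; seeding the centers at the darkest and brightest pixels is an implementation choice made here only to keep the sketch deterministic.

```python
import numpy as np

def split_specular_diffuse(pixels, iters=20):
    """Minimal 2-means clustering of pixel spectra.

    pixels: (N, bands) array, one spectrum per pixel of A(x, y).
    Returns boolean masks (specular, diffuse): the cluster with the higher
    average brightness is labeled as the specular reflection region A_q.
    """
    brightness = pixels.sum(axis=1)
    # Deterministic seeding: darkest and brightest pixels as initial centers.
    centers = pixels[[brightness.argmin(), brightness.argmax()]].astype(float)
    for _ in range(iters):
        # Assign each spectrum to the nearest center (spectral similarity).
        dist = np.linalg.norm(pixels[:, None, :] - centers[None, :, :], axis=2)
        labels = dist.argmin(axis=1)
        for k in range(2):
            if np.any(labels == k):
                centers[k] = pixels[labels == k].mean(axis=0)
    # Label the cluster with the higher average brightness as specular.
    mean_bright = [brightness[labels == k].mean() for k in (0, 1)]
    q = int(np.argmax(mean_bright))
    return labels == q, labels != q

# Usage on synthetic spectra: 50 dim pixels and 50 bright pixels.
rng = np.random.default_rng(1)
dark = 0.2 + 0.02 * rng.standard_normal((50, 3))
bright = 0.9 + 0.02 * rng.standard_normal((50, 3))
mask_q, mask_r = split_specular_diffuse(np.vstack([dark, bright]))
```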
[0059] The method for extracting the representative spectra I.sub.q(ω) and I.sub.r(ω) from A.sub.q and A.sub.r may include taking the average, taking a brightness-weighted average, or a gray world algorithm, etc. In a preferred embodiment, the average spectra of all pixels in the specular reflection region A.sub.q and the diffuse reflection region A.sub.r are calculated respectively as the representative spectrum I.sub.q(ω) and the representative spectrum I.sub.r(ω): I.sub.q(ω)=(1/N.sub.q)Σ.sub.(x.sub.a, y.sub.a)∈A.sub.q i(x.sub.a, y.sub.a, ω), I.sub.r(ω)=(1/N.sub.r)Σ.sub.(x.sub.a, y.sub.a)∈A.sub.r i(x.sub.a, y.sub.a, ω),
[0060] wherein N.sub.q and N.sub.r respectively represent the numbers of pixels in the specular reflection region A.sub.q and the diffuse reflection region A.sub.r, and i(x.sub.a, y.sub.a, ω) represents the spectrum of the pixel at the position (x.sub.a, y.sub.a).
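The averaging just described amounts to a masked mean over each region. A minimal sketch (array shapes and names are assumptions; `mask_q` and `mask_r` are boolean images of A.sub.q and A.sub.r):

```python
import numpy as np

def representative_spectra(cube, mask_q, mask_r):
    """Average spectra of all pixels in A_q and A_r.

    cube: (H, W, B) hyperspectral image; mask_q, mask_r: (H, W) booleans.
    Returns (I_q, I_r), each a length-B representative spectrum.
    """
    N_q = mask_q.sum()                       # number of pixels in A_q
    N_r = mask_r.sum()                       # number of pixels in A_r
    I_q = cube[mask_q].sum(axis=0) / N_q     # (1/N_q) * sum of spectra in A_q
    I_r = cube[mask_r].sum(axis=0) / N_r     # (1/N_r) * sum of spectra in A_r
    return I_q, I_r
```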
[0061] Finally, each element in the representative spectrum I.sub.q(ω) of the specular reflection region A.sub.q is divided by each element in the representative spectrum I.sub.r(ω) of the diffuse reflection region A.sub.r to obtain the first spectral invariant C(ω).
[0062] In a specific embodiment, the method for calculating the first spectral invariant C(ω) in step S3 includes finite element decomposition, spectral angle separation or division. In other optional embodiments, other suitable calculation methods may also be used.
[0063] In a preferred example, each element in the representative spectrum I.sub.q(ω) of the specular reflection region A.sub.q is divided by each element in the representative spectrum I.sub.r(ω) of the diffuse reflection region A.sub.r to obtain the first spectral invariant C(ω):C(ω)=I.sub.q(ω)/I.sub.r(ω).
[0064] In a specific embodiment, the following steps are further included:
[0065] S4: performing linear transformation processing on the first spectral invariant C(ω) to obtain a second spectral invariant R(ω), the second spectral invariant R(ω) being used for spectral analysis.
[0066] In a preferred embodiment, step S4 comprises the following steps:
[0067] S41: performing standard normal variate transformation on the first spectral invariant C(ω) to obtain a second spectral invariant R(ω): R(ω)=(C(ω)−⟨C(ω)⟩.sub.ω)/σ.sub.ω, wherein ⟨C(ω)⟩.sub.ω represents the average of C(ω) in the wavelength dimension and σ.sub.ω represents the standard deviation of C(ω) in the wavelength dimension;
[0068] S42: using the second spectral invariant R(ω) as an input of a chemometric model for spectral analysis of the substance.
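The transformation of step S41 is a one-line operation on the invariant vector. An illustrative sketch (not part of the claims), using the population standard deviation along the wavelength dimension:

```python
import numpy as np

def snv(C):
    """Standard normal variate transform of the first spectral invariant:
    R(w) = (C(w) - mean_w C) / std_w C, so R has zero mean and unit
    standard deviation across wavelengths."""
    return (C - C.mean()) / C.std()

R = snv(np.array([1.0, 2.0, 3.0, 4.0]))
```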
[0069] In this step, the chemometric model includes a partial least-squares regression, an artificial neural network or a support vector machine. Therefore, a trained chemometric model such as a partial least-squares regression (PLS) model, an artificial neural network (ANN) or a support vector machine (SVM) can be used to predict the contents of the components of an apple and feed them back to the user. The specific steps of this part are not the focus of the present application and hence will not be described in detail. The above method simplifies the hyperspectral analysis process and the hardware structure, making the hardware of related products simpler and more compact. It can be carried out by means of a single hyperspectral image, which avoids errors resulting from, for example, changes in the spectrum of the light source and the baseline drift of the acquisition device. As a result, the accuracy of component analysis can be increased.
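To illustrate how R(ω) vectors feed a chemometric model, the sketch below substitutes a plain least-squares linear fit for the PLS, ANN or SVM models named above; the training data, the "sweetness" target and all names are synthetic assumptions, not data from the application.

```python
import numpy as np

# Hypothetical training data: rows are SNV-transformed invariants R(w) of
# reference samples, y_train holds their measured property (synthetic).
rng = np.random.default_rng(0)
R_train = rng.standard_normal((30, 5))
true_w = np.array([0.5, -0.2, 0.1, 0.3, -0.4])
y_train = R_train @ true_w               # noiseless synthetic targets

# Least-squares fit as a stand-in for a trained chemometric model
# (a real system would use PLS regression, an ANN, or an SVM).
w, *_ = np.linalg.lstsq(R_train, y_train, rcond=None)

def predict_property(R):
    """Predict the property (e.g. sweetness) from one R(w) vector."""
    return R @ w

R_new = rng.standard_normal(5)
prediction = predict_property(R_new)
```

On noiseless data with more samples than wavelengths the fit recovers the underlying linear map exactly, which is all the sketch is meant to show.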
[0070] An embodiment of the present application further proposes a hyperspectral imager comprising a lens, a wavelength splitter, an imaging device, and a data storage and processing device.
[0071] An embodiment of the present application discloses a method for extracting spectral information of a substance under test. The method extracts a specular reflection region and a diffuse reflection region from the pixel region of the object under test, and calculates representative spectra of the two regions respectively, so as to calculate a light source-independent first spectral invariant and a second spectral invariant that is independent of the spectrum of the light source, the scene, etc. Since no additional light source spectral information is needed, the step of collecting a reference spectrum can be omitted, which simplifies the analysis process and reduces the data collection time, thus improving the analysis efficiency. Meanwhile, since there is no need to collect a reference spectrum, the corresponding optical-electromechanical components can be omitted when designing the hardware, leaving the hardware of the related product simpler and more compact. Implementation of this method requires only a single hyperspectral image, and therefore avoids errors resulting from many sources such as changes in the spectrum of the light source and the baseline drift of the acquisition device, increasing the accuracy of analysis.
[0072] What has been described above are only implementations of the present application or explanations thereof, but the protection scope of the present application is not so limited. Any variation or substitution that can be easily conceived, within the technical scope revealed by the present application, by a person skilled in this technical field shall be encompassed within the protection scope of the present application. Therefore, the protection scope of the present application shall be based on the protection scope of the claims.