HIGH-THROUGHPUT OPTICAL SECTIONING THREE-DIMENSIONAL IMAGING SYSTEM
20210333536 · 2021-10-28
Inventors
- Qingming LUO (Suzhou Jiangsu, CN)
- Jing YUAN (Suzhou Jiangsu, CN)
- Qiuyuan ZHONG (Suzhou Jiangsu, CN)
- Rui JIN (Suzhou Jiangsu, CN)
- Hui GONG (Suzhou Jiangsu, CN)
CPC classification
G02B21/008
PHYSICS
G02B21/0032
PHYSICS
H04N13/221
ELECTRICITY
G01N1/286
PHYSICS
G01N2021/1787
PHYSICS
G02B21/367
PHYSICS
G06T11/006
PHYSICS
International classification
G02B21/36
PHYSICS
G02B27/09
PHYSICS
Abstract
A high-throughput optical sectioning three-dimensional imaging system which includes: a light beam modulation module configured to modulate a light beam into a modulated light beam capable of being focused on a focal plane of an objective lens and being defocused on a defocusing plane of the objective lens; an imaging module configured to employ a camera to image, in different rows of pixels, a sample under illumination of the modulated light beam; a cutting module configured to cut off an imaged surface layer of the sample; a demodulation module configured to demodulate a sample image of one sample strip of one surface layer into an optical sectioning image, and reconstruct the optical sectioning image of each sample strip of each surface layer into a three-dimensional image. The present disclosure achieves imaging of a whole sample by dividing the sample into at least one surface layer, dividing the at least one surface layer into at least one sample strip, and imaging each sample strip. When a multi-layer imaging cannot be performed, the imaged part can be cut off by the cutting module to realize imaging of any layer of the sample, thereby improving the imaging speed and efficiency.
Claims
1. A high-throughput optical sectioning three-dimensional imaging system, comprising: a light beam modulation module configured to modulate a light beam into a modulated light beam capable of being focused on a focal plane of an objective lens and being defocused on a defocusing plane of the objective lens, the modulated light beam having incompletely identical modulated intensities on the focal plane of the objective lens; an imaging module configured to employ a camera to image, in different rows of pixels, at least one sample strip of at least one surface layer of a same sample under illumination of the modulated light beam; a cutting module configured to cut off an imaged surface layer of the sample; a demodulation module configured to demodulate a sample image of one sample strip of one surface layer into an optical sectioning image, and reconstruct the optical sectioning image of each sample strip of each surface layer into a three-dimensional image.
2. The high-throughput optical sectioning three-dimensional imaging system according to claim 1, wherein a formula expression of the sample image of the sample strip in the imaging module is:
I(i)=I.sup.in f(i)+I.sup.out, where I(i) is a sample image formed in an i.sup.th row of pixels, f(i) is a modulation intensity corresponding to the sample image I(i), I.sup.in is a focal plane image of the sample image, and I.sup.out is a defocusing plane image of the sample image; and a demodulation formula in the demodulation module is:
I.sup.in=c×|βI.sub.1−αI.sub.2| where α and β are positive integers, c is a constant greater than 0, I.sub.1 is an accumulated sum of sample images acquired in α pixels, and I.sub.2 is an accumulated sum of sample images acquired in β pixels, and an accumulated value of modulation intensities corresponding to the sample images in the α pixels is different from an accumulated value of modulation intensities corresponding to the sample images in the β pixels.
3. The high-throughput optical sectioning three-dimensional imaging system according to claim 2, wherein the imaging module comprises: a driving unit configured to drive the light beam modulation module and the sample to move relative to each other in three directions perpendicular to one another; and an imaging unit configured to perform continuous imaging along a lengthwise direction of the sample strip, the lengthwise direction of the sample strip being the same as one of the directions along which the light beam modulation module and the sample move relative to each other.
4. The high-throughput optical sectioning three-dimensional imaging system according to claim 3, wherein an imaging area of the camera in the imaging module has N rows of pixels, where N≥2; two directions X and Y perpendicular to each other are formed on a plane parallel to an imaging plane of the sample, and the modulated light beam has following characteristics in the X and Y directions respectively: the modulated light beam having incompletely identical modulated intensities along the X direction on the N rows of pixels, and the modulated light beam having a same modulated intensity along the Y direction on each row of the N rows of pixels; the pixel being a row pixel, and the sample image being a strip image.
5. The high-throughput optical sectioning three-dimensional imaging system according to claim 4, further comprising an image block acquisition unit and a stitching unit, wherein the image block acquisition unit is configured to acquire a strip image block of an i.sup.th row of pixels in each image frame of a sample strip obtained in a chronological order, and a formula expression of the strip image block is I.sub.t(i)=I.sub.m.sup.in f(i)+I.sub.m.sup.out, where I.sub.t(i) is a strip image block corresponding to the i.sup.th row of pixels in a t.sup.th image frame, I.sub.m.sup.in is a focal plane image of the strip image block corresponding to I.sub.t(i), that is, I.sub.m.sup.in is a focal plane image of an m.sup.th strip image block in a complete strip image, I.sub.m.sup.out is a defocusing plane image of the strip image block corresponding to I.sub.t(i), and f(i) is a modulation intensity corresponding to the i.sup.th row of pixels; and the stitching unit is configured to successively stitch strip image blocks of the i.sup.th row of pixels in each image frame of the sample strip to obtain a strip image of the i.sup.th row of pixels according to the formula of I(i)=Σ.sub.t=i.sup.M+i−1I.sub.t(i), where M is a number of strip image blocks corresponding to the complete strip image, and m≤M.
6. The high-throughput optical sectioning three-dimensional imaging system according to claim 5, wherein the demodulation module comprises an image accumulation unit, a demodulation unit and a reconstruction unit, the image accumulation unit is configured to accumulate strip images of at least one row of pixels of one sample strip to form a first strip image, and accumulate strip images of at least one row of pixels of the one sample strip to form a second strip image, the demodulation unit is configured to demodulate the first strip image and the second strip image into the optical sectioning image of the strip image according to the demodulation formula, then I.sup.in=ΣI.sub.m.sup.in, and the reconstruction unit is configured to reconstruct optical sectioning images of a plurality of sample strips into the three-dimensional image.
7. The high-throughput optical sectioning three-dimensional imaging system according to claim 6, wherein a single frame exposure duration of imaging by the imaging unit is equal to a duration spent by the light beam modulation module and the sample moving by one row of pixels relative to each other along the lengthwise direction of the sample strip, and a distribution direction and width of the N rows of pixels are the same as and in an object-image conjugate relationship with a distribution direction and width of the modulated light beam respectively.
8. The high-throughput optical sectioning three-dimensional imaging system according to claim 7, wherein the light beam modulation module comprises a shaping optical path for shaping illumination light into a linear light beam and a modulation optical path for modulating the linear light beam into the modulated light beam of linear light illumination.
9. The high-throughput optical sectioning three-dimensional imaging system according to claim 8, wherein the shaping optical path comprises a laser light source, a first lens, a second lens, and a cylindrical lens which are sequentially arranged along a travel direction of the illumination light; and the modulation optical path comprises a third lens configured to modulate divergent light of the linear light beam into parallel light, a dichroic mirror configured to modulate an incident direction of the linear light beam, and an objective lens arranged coaxially with the linear light beam the incident direction of which has been modulated.
10. The high-throughput optical sectioning three-dimensional imaging system according to claim 9, wherein the driving unit is a translation stage configured to drive the sample to move in three directions perpendicular to one another, the translation stage is located at a side of the objective lens away from the dichroic mirror, and the translation stage is perpendicular to an optical axis of the modulated light beam; and the cutting module comprises one or more of a vibrating blade cutter, a diamond cutter, and a hard alloy cutter.
Description
BRIEF DESCRIPTION OF DRAWINGS
[0012]
[0013]
[0014]
[0015]
[0016]
DETAILED DESCRIPTION
[0017] In order to make objects, technical solutions, and advantages of the present disclosure more apparent, the present disclosure will be further described in detail below with reference to the accompanying drawings and embodiments. It should be appreciated that the specific embodiments described herein are merely intended to explain the present disclosure and are not intended to limit the present disclosure.
[0018] As shown in
[0019] The light beam modulation module 11 is configured to modulate a light beam into a modulated light beam 11b capable of being focused on a focal plane of an objective lens 117 and capable of being defocused on a defocusing plane of the objective lens 117, and the modulated light beam 11b has incompletely identical modulated intensities on the focal plane of the objective lens 117. The light beam modulation module 11 includes a shaping optical path for shaping illumination light into a linear light beam 11a and a modulation optical path for modulating the linear light beam 11a into the modulated light beam 11b for linear light illumination.
[0020] The light beam modulation module 11 may be composed of a laser light source 111, a first lens 112, a second lens 113, a cylindrical lens 114, a third lens 115, a dichroic mirror 116 and an objective lens 117, which are sequentially arranged along the travel direction of the light. The laser light source 111, the first lens 112, the second lens 113 and the cylindrical lens 114 form the shaping optical path, and the third lens 115, the dichroic mirror 116 and the objective lens 117 form the modulation optical path. During the light shaping, the laser light source 111 emits illumination light which is sequentially processed by the first lens 112 and the second lens 113 so as to become an expanded light beam. The expanded light beam is modulated by the cylindrical lens 114 to form the linear light beam 11a, which is divergent. Then, the linear light beam 11a is modulated by the third lens 115 into parallel light. Then, the dichroic mirror 116 modulates an incident direction of the linear light beam 11a, and the linear light beam 11a enters the objective lens 117 to form the modulated light beam 11b for linear light illumination, which can be focused on the focal plane of the objective lens 117 and can diverge on the defocusing plane of the objective lens 117. In order to facilitate subsequent imaging, an optical axis of the modulated light beam 11b is perpendicular to an optical axis of the illumination light and an optical axis of the linear light beam 11a which has not been reflected; that is, the first lens 112, the second lens 113, the cylindrical lens 114 and the third lens 115 are arranged coaxially, and central axes of the first lens 112, the second lens 113, the cylindrical lens 114 and the third lens 115 are arranged perpendicular to a central axis of the objective lens 117.
Furthermore, the angle between the dichroic mirror 116 and the optical axis of the modulated light beam 11b is 45 degrees, ensuring that the width of the linear light beam 11a after being reflected by the dichroic mirror 116 does not change.
[0021] In the present embodiment, the illumination light is firstly shaped into a linear light beam 11a, and then the linear light beam 11a is modulated into the modulated light beam 11b for linear illumination. In the present embodiment, a sample 20 is illuminated by the linear modulated light beam 11b that can be focused on the focal plane of the objective lens 117 and can diverge on the defocusing plane of the objective lens 117, which can facilitate exciting the sample 20 to emit fluorescence, thereby facilitating subsequent imaging.
[0022] Here, the above-mentioned modulated light beam 11b in the focal plane of the objective lens is specifically subject to a waveform modulation with incompletely identical modulation intensities, for example, Gaussian modulation, sinusoidal modulation, triangular modulation, or the like. Since the illumination light of the present embodiment adopts a Gaussian beam, the modulated light beam 11b of the present embodiment is formed by Gaussian modulation. This embodiment may also use other waveform modulations with incompletely identical modulation intensities as needed.
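As a concrete illustration of such a profile, the sketch below generates Gaussian modulation intensities f(i) over the pixel rows; the number of rows, the width sigma, and the peak position are illustrative assumptions, not values taken from this disclosure.

```python
import numpy as np

# Hypothetical sketch of a Gaussian modulation profile f(i) across N pixel rows.
# n_rows and sigma are illustrative choices, not values from this disclosure.
def gaussian_modulation(n_rows: int, sigma: float) -> np.ndarray:
    """Return modulation intensities f(i), i = 0..n_rows-1, peaked at the center row."""
    i = np.arange(n_rows)
    center = (n_rows - 1) / 2.0
    return np.exp(-((i - center) ** 2) / (2.0 * sigma ** 2))

f = gaussian_modulation(4, sigma=1.0)
# "Incompletely identical": at least two rows see different intensities,
# which is the property the later demodulation step relies on.
assert not np.allclose(f, f[0])
```

Any profile would serve as long as the accumulated intensities selected during demodulation differ, per the condition recited in claim 2.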
[0023] The imaging module 12 is configured to image, in different rows of pixels, at least one sample strip of at least one surface layer of the sample 20 under illumination of the modulated light beam 11b. The imaging module 12 includes a driving unit 121, an imaging unit 122, an image block acquisition unit 123 and a stitching unit 124. The driving unit 121 is configured to drive the light beam modulation module 11 and the sample 20 to move relative to each other in three directions perpendicular to one another. The imaging unit 122 is configured to perform continuous imaging along a lengthwise direction of the sample strip, and the lengthwise direction of the sample strip is the same as one of the directions along which the light beam modulation module 11 and the sample 20 move relative to each other.
[0024] In order to cooperate with the light beam modulation module 11, the driving unit 121 in this embodiment may adopt a three-dimensional motorized translation stage. The sample 20 may be placed on the three-dimensional motorized translation stage. The three-dimensional motorized translation stage can drive the sample 20 to move laterally and longitudinally in a horizontal plane, and can drive the sample 20 to move up and down in a vertical plane, thereby realizing driving the light beam modulation module 11 and the sample 20 to move relative to each other in the three directions perpendicular to one another. It can be appreciated that the driving unit 121 of the present embodiment is not limited to drive the sample 20 to move in three directions perpendicular to one another, and may also drive the light beam modulation module 11 to move in three directions perpendicular to one another.
[0025] When specifically arranged, the three-dimensional motorized translation stage may be located directly below the objective lens 117, and an upper surface of the three-dimensional motorized translation stage is in a horizontal state, and the central axis of the objective lens 117 is perpendicular to the upper surface of the three-dimensional motorized translation stage.
[0026] The imaging unit 122 is constituted by an imaging optical path composed of an emission filter 122a, a tube lens 122b and an imaging camera 122c which are located directly above the objective lens 117. The fluorescence from the sample 20 excited under the action of the modulated light beam 11b passes through the objective lens 117, the dichroic mirror 116, the emission filter 122a and the tube lens 122b sequentially, and then is detected and imaged by the imaging camera 122c. Here, the imaging camera 122c of the present embodiment may be a planar array CCD (Charge-coupled device) or planar array CMOS (Complementary Metal Oxide Semiconductor) camera having a function of Sub-array or ROI (Region of interest), or may be a linear array CCD or linear array CMOS camera having an array mode. In order to facilitate subsequent reconstruction of an optical sectioning image, an imaging area of the imaging camera 122c in this embodiment has N rows of pixels, where N≥2, and the distribution direction and width of the N rows of pixels are the same as and in an object-image conjugate relationship with the distribution direction and width of the modulated light beam 11b for linear light illumination, respectively.
[0027] For the convenience of imaging, the sample 20 of the present embodiment may be in a rectangular block shape. Therefore, when three-dimensional imaging is performed, the sample 20 may be provided to be composed of a sample body and a solid medium wrapped around the sample body, and the solid medium is generally agar, paraffin or resin. Here, the sample 20 may be divided into a plurality of surface layers uniformly arranged from top to bottom, which are respectively a first surface layer, a second surface layer, a third surface layer, etc. Each surface layer is divided into a plurality of sample strips arranged uniformly in the longitudinal direction, which are respectively a first sample strip, a second sample strip, a third sample strip, etc. The width of the sample strip may be set to be the same as the width of the N rows of pixels of the imaging camera 122c.
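The division into surface layers and sample strips can be sketched as a simple acquisition plan. All dimensions below are made-up example values, and the helper name is hypothetical; in the actual system the strip width is set by the N pixel rows projected onto the sample and the layer thickness by the cutting module.

```python
import math

# Hypothetical sketch: plan how many surface layers and strips per layer are
# needed to cover a block-shaped sample. All dimensions are made-up examples;
# strip_w would correspond to the width of the camera's N pixel rows projected
# onto the sample, and layer_t to the thickness of one surface layer.
def plan_acquisition(sample_w: float, sample_d: float,
                     strip_w: float, layer_t: float) -> tuple:
    n_strips = math.ceil(sample_w / strip_w)   # sample strips per surface layer
    n_layers = math.ceil(sample_d / layer_t)   # surface layers to image (and cut off)
    return n_layers, n_strips

layers, strips = plan_acquisition(sample_w=5.0, sample_d=2.0, strip_w=0.8, layer_t=0.5)
assert (layers, strips) == (4, 7)
```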
[0028] As shown in
[0029] In the imaging process of each sample strip, sample images formed in different rows of pixels are expressed by formula
I(i)=I.sup.in f(i)+I.sup.out,
[0030] where I(i) is a sample image formed in an i.sup.th row of pixels, f(i) is a modulation intensity corresponding to the sample image I(i), I.sup.in is a focal plane image of the sample image, and I.sup.out is a defocusing plane image of the sample image.
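A minimal numeric sketch of this image formation model follows; the focal-plane image, defocused background, and modulation profile are synthetic assumptions (in the real system they are set by the optics, not chosen freely).

```python
import numpy as np

# Minimal sketch of the per-row image formation model I(i) = I_in * f(i) + I_out.
# I_in (focal-plane image), I_out (defocused background) and f (modulation
# intensity per pixel row) are all synthetic assumptions.
rng = np.random.default_rng(0)
I_in = rng.random((8, 8))            # in-focus (focal plane) component
I_out = 0.5 * rng.random((8, 8))     # defocused component, independent of i
f = np.array([1.0, 0.8, 0.8, 1.0])  # modulation intensity per pixel row

def sample_image(i: int) -> np.ndarray:
    """Image recorded by the i-th row of pixels."""
    return I_in * f[i] + I_out

# Only the in-focus part is scaled by f(i); the defocused part is common to
# all rows, which is why it can later be cancelled by demodulation.
assert np.allclose(sample_image(0) - sample_image(1), I_in * (f[0] - f[1]))
```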
[0031] The N rows of pixels of the imaging camera 122c are arranged in a lateral direction which is the same as the movement direction of the sample strip, so as to facilitate imaging of the sample strip of the sample 20 in different rows of pixels respectively. When imaging one sample strip, a single frame exposure duration of the imaging camera 122c is equal to a duration spent by the sample 20 moving by one row of pixels. If an image corresponding to any row of pixels in one image frame is defined as one strip image block, a plurality of strip image blocks corresponding to any row of pixels in multiple image frames are formed by continuous and sequential imaging of each part of the sample 20 and may be stitched into one strip image, and the N rows of pixels may form N strip images. As shown in
[0032] As shown in
[0033] The image block acquisition unit 123 in this embodiment is configured to acquire a strip image block of an i.sup.th row of pixels in each image frame of a sample strip obtained in a chronological order, and the strip image block is expressed by the formula:
I.sub.t(i)=I.sub.m.sup.in f(i)+I.sub.m.sup.out
[0034] where I.sub.t(i) is a strip image block corresponding to the i.sup.th row of pixels in the t.sup.th image frame, I.sub.m.sup.in is a focal plane image of the strip image block corresponding to I.sub.t(i), that is, I.sub.m.sup.in is a focal plane image of the m.sup.th strip image block in a complete strip image, I.sub.m.sup.out is a defocusing plane image of the strip image block corresponding to I.sub.t(i), and f(i) is a modulation intensity corresponding to the i.sup.th row of pixels.
[0035] The stitching unit 124 is configured to successively stitch strip image blocks of the i.sup.th row of pixels in each image frame of the sample strip to obtain a strip image of the i.sup.th row of pixels according to the formula of:
I(i)=Σ.sub.t=i.sup.M+i−1I.sub.t(i),
[0036] where M is a number of strip image blocks corresponding to the complete strip image, and specifically, the strip image is formed by stitching M strip image blocks, where I.sub.m.sup.in is a focal plane image corresponding to the m.sup.th strip image block in the strip image, and m≤M.
[0037] It should be noted that a strip image is formed by shifting and stitching a plurality of strip image blocks corresponding to one row of pixels; that is, the N rows of pixels may respectively be stitched to form N strip images.
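The block-wise stitching above can be sketched as follows. The 1-based frame indexing mirrors the disclosure's notation; the block shapes, M, and the accessor function are arbitrary illustrative assumptions, with blocks concatenated along the scan direction.

```python
import numpy as np

# Hedged sketch of the stitching step: the strip image of pixel row i is
# assembled from the blocks that row recorded in frames t = i .. M+i-1
# (1-based). Block shapes and M are arbitrary example values.
def stitch_strip(block, i: int, M: int) -> np.ndarray:
    """Concatenate the M strip image blocks seen by row i along the scan axis.

    block(t, i) returns the strip image block of row i in frame t.
    """
    return np.concatenate([block(t, i) for t in range(i, M + i)], axis=1)

# Toy acquisition: each block is 1 pixel row high and 4 pixels wide, and its
# value encodes (frame, row) so the stitched order is easy to inspect.
M = 5
blocks = lambda t, i: np.full((1, 4), fill_value=10 * t + i, dtype=float)
strip = stitch_strip(blocks, i=2, M=M)
assert strip.shape == (1, 4 * M)
```

Row i starts contributing at frame t=i because the sample moves by one pixel row per frame, so each row sees the sample with a one-frame delay relative to the previous row.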
[0038] The demodulation module 14 is configured to demodulate the strip image of one sample strip of one surface layer into an optical sectioning image, and reconstruct the optical sectioning image of each sample strip of each surface layer into a three-dimensional image.
[0039] The demodulation module 14 may include an image accumulation unit 141, a demodulation unit 142, and a reconstruction unit 143. The image accumulation unit 141 is configured to accumulate strip images of at least one row of pixels of one sample strip to form a first strip image, and accumulate strip images of at least one row of pixels of the one sample strip to form a second strip image. The demodulation unit 142 is configured to demodulate the first strip image and the second strip image into an optical sectioning image. The reconstruction unit 143 is configured to reconstruct optical sectioning images of a plurality of sample strips into a three-dimensional image.
[0040] When the N strip images are acquired, one, two, or more of the strip images may be arbitrarily selected and accumulated to form the first strip image. Then, the second strip image is obtained by accumulation in the same manner. In order to avoid the optical sectioning image acquired by the demodulation algorithm described below being zero, in this embodiment, an accumulated value of the modulation intensities corresponding to the strip images in the α pixels may be different from an accumulated value of the modulation intensities corresponding to the strip images in the β pixels.
[0041] After the accumulation, the demodulation unit 142 may obtain a focal plane image (that is, an optical sectioning image) of the corresponding sample strip according to the following demodulation algorithm, and the demodulation formula of the demodulation algorithm adopted by the demodulation unit 142 is
I.sup.in=c×|βI.sub.1−αI.sub.2|,
[0042] where α and β are positive integers, c is a constant greater than 0, I.sub.1 is an accumulated sum of strip images acquired in α pixels, and I.sub.2 is an accumulated sum of strip images acquired in β pixels; an accumulated value of modulation intensities corresponding to the strip images in the α pixels is different from an accumulated value of modulation intensities corresponding to the strip images in the β pixels.
[0043] Since each strip image is formed by stitching a plurality of strip image blocks, I.sup.in=ΣI.sub.m.sup.in.
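Putting the image formation model and the demodulation formula together, a hedged end-to-end sketch on synthetic data follows. The choice c = 1/|βF.sub.1−αF.sub.2| (with F.sub.1, F.sub.2 the accumulated modulation intensities) is an assumption consistent with the worked example in Embodiment 2, not a value the disclosure fixes.

```python
import numpy as np

# Hedged sketch of the demodulation I_in = c * |beta*I1 - alpha*I2| on
# synthetic data. strips[i] = I_in * f[i] + I_out for each aligned row i;
# rows1/rows2 select which rows are accumulated into I1 and I2. The constant
# c = 1/|beta*F1 - alpha*F2| is an assumption (it makes the recovered image
# equal I_in exactly under this model).
def demodulate(strips: np.ndarray, f: np.ndarray, rows1, rows2) -> np.ndarray:
    alpha, beta = len(rows1), len(rows2)
    I1 = strips[list(rows1)].sum(axis=0)   # accumulated over alpha rows
    I2 = strips[list(rows2)].sum(axis=0)   # accumulated over beta rows
    F1 = f[list(rows1)].sum()
    F2 = f[list(rows2)].sum()
    # Requires the accumulated modulation intensities to differ (claim 2):
    # beta*I1 - alpha*I2 = I_in*(beta*F1 - alpha*F2), the I_out terms cancel.
    c = 1.0 / abs(beta * F1 - alpha * F2)
    return c * np.abs(beta * I1 - alpha * I2)

rng = np.random.default_rng(2)
I_in, I_out = rng.random((6, 6)), rng.random((6, 6))
f = np.array([1.0, 0.6, 0.6, 1.0])
strips = np.stack([I_in * fi + I_out for fi in f])
recovered = demodulate(strips, f, rows1=(0, 1, 2), rows2=(3,))
assert np.allclose(recovered, I_in)   # defocused background I_out cancels
```

With α rows in I.sub.1 and β rows in I.sub.2, the defocused terms enter as αI.sup.out and βI.sup.out, so βI.sub.1−αI.sub.2 removes them exactly, leaving a scaled copy of the focal-plane image.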
[0044] For the convenience of explanation of the acquisition process of the strip image of the present embodiment, the following embodiments will be described.
Embodiment 1
[0045] As shown in
[0046] As shown in
Embodiment 2
[0047] As shown in
[0048] If I.sub.1 is an accumulated sum of the sample images acquired in the first, second and third rows of pixels, that is, I.sub.1=Σ.sub.t=1.sup.MI.sub.t(1)+Σ.sub.t=2.sup.M+1I.sub.t(2)+Σ.sub.t=3.sup.M+2I.sub.t(3), and I.sub.2 is an accumulated sum of the sample images acquired in the fourth row of pixels, that is, I.sub.2=Σ.sub.t=4.sup.M+3I.sub.t(4), then correspondingly the value of α should be selected as 3 and the value of β as 1. From the demodulation formula, |(I(1)+I(2)+I(3))−3I(4)|=|(Σ.sub.t=1.sup.MI.sub.t(1)+Σ.sub.t=2.sup.M+1I.sub.t(2)+Σ.sub.t=3.sup.M+2I.sub.t(3))−3Σ.sub.t=4.sup.M+3I.sub.t(4)|=|(f(1)+f(2)+f(3))−3f(4)|ΣI.sub.m.sup.in can be obtained; therefore I.sup.in=ΣI.sub.m.sup.in=|(Σ.sub.t=1.sup.MI.sub.t(1)+Σ.sub.t=2.sup.M+1I.sub.t(2)+Σ.sub.t=3.sup.M+2I.sub.t(3))−3Σ.sub.t=4.sup.M+3I.sub.t(4)|/|(f(1)+f(2)+f(3))−3f(4)|=|I.sub.1−3I.sub.2|/|(f(1)+f(2)+f(3))−3f(4)|.
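The α=3, β=1 combination above can be checked numerically; the modulation values below are arbitrary assumptions, and rows are zero-based in the code while the disclosure counts from 1.

```python
import numpy as np

# Numeric check of the Embodiment 2 combination (alpha = 3, beta = 1): with
# I(i) = I_in * f(i) + I_out, the defocused term cancels in
# (I(1)+I(2)+I(3)) - 3*I(4). The f values are arbitrary assumptions.
rng = np.random.default_rng(1)
I_in, I_out = rng.random(16), rng.random(16)
f = np.array([0.9, 1.0, 0.9, 0.5])
I = [I_in * fi + I_out for fi in f]   # I(1)..I(4), zero-based list

lhs = np.abs((I[0] + I[1] + I[2]) - 3 * I[3])
scale = abs((f[0] + f[1] + f[2]) - 3 * f[3])   # |f(1)+f(2)+f(3) - 3 f(4)|
assert np.allclose(lhs / scale, I_in)          # recovers the focal-plane image
```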
[0049] The optical sectioning images of individual sample strips may be sequentially obtained by the demodulation algorithm, and the reconstruction unit 143 may stitch all the optical sectioning images to form a stereoscopic three-dimensional image.
[0050] It should be noted that when the longitudinal width of the sample 20 is smaller than the width of the imaging region of the N rows of pixels of the imaging camera 122c, each surface layer has only one sample strip, and the sample 20 does not need to move longitudinally during the imaging process. When the longitudinal width of the sample 20 is smaller than the width of the imaging area of the N rows of pixels of the imaging camera 122c and the thickness of the sample 20 is smaller than the depth to which the imaging camera 122c can perform imaging, for example, when the sample has only two surface layers, the sample 20 only needs to move back and forth once in the lateral direction, and it is not necessary for the cutting module 13 to cut off any surface layer. When the width of the sample 20 is smaller than the width of the imaging area of the N rows of pixels of the imaging camera 122c and the thickness of the sample 20 is smaller than the set thickness of one surface layer, the sample 20 needs to be subject to scanning imaging only once, which may be considered as two-dimensional imaging. It can be seen from the above that, in this embodiment, a three-dimensional image is formed by superimposing a plurality of two-dimensional images.
[0051] The specific embodiments of the present disclosure described above do not constitute a limitation to the scope of protection of the present disclosure. Various other corresponding changes and modifications made in accordance with the technical idea of the present disclosure should be included within the scope of protection of the claims of the present disclosure.