FAST VOLUMETRIC IMAGING SYSTEM AND PROCESS FOR FLUORESCENT TISSUE STRUCTURES AND ACTIVITIES
20220122284 · 2022-04-21
Assignee
Inventors
Cpc classification
G02B21/365
PHYSICS
H04N23/671
ELECTRICITY
G01N21/6428
PHYSICS
International classification
G02B21/36
PHYSICS
Abstract
A microscopic technique for generating high-clarity, large-volume 3D images of fluorescent tissue structures at subcellular resolution and capturing transient activities. The technique includes capturing two orthogonal 2D projections of the sample volume by performing a projection scan with an excitation laser sweeping through the volume at up to 100 vps; tracking the scan depth using an electrically tunable lens to keep the emission image in focus and generate an xy plane volume projection image at the camera; placing a PMT behind the excitation lens to collect emission passed through the excitation lens, wherein signals from the PMT form a focus scan projection at the yz plane; and then merging the xy and yz projections.
Claims
1. A method for fast volumetric imaging of fluorescent tissue structures and activities, comprising the steps of: a. acquiring two orthogonal 2D projections of the sample volume, comprising: i. performing a projection scan with an excitation laser sweeping through the volume at a predetermined rate; ii. tracking the scan depth using a focus tuning device to keep the emission image in focus and generate an xy plane volume projection image at the camera; and iii. placing a photomultiplier tube (PMT) behind the excitation lens to collect emission passed through the excitation lens, wherein signals from the PMT form a focus scan projection at the yz plane; and b. merging the xy and yz projections to locate positions of fluorescence emitters in 3D.
2. The method according to claim 1, wherein the predetermined rate is up to 100 vps.
3. The method according to claim 1, wherein the focus tuning device is an electrically tunable lens (ETL).
4. The method according to claim 1, further comprising the step of labeling cell structure tissue of the sample volume with a first fluorescent emitting marker.
5. The method according to claim 4, further comprising the step of labeling cell function tissue of the sample volume with a second fluorescent emitting marker.
6. The method according to claim 1, further comprising the step of tracing the structures seen in 2D projections.
7. The method according to claim 6, further comprising the step of reconstructing the 3D structure map by: i. pairing the xy projection of a trace with its yz projection; ii. looping through all xy plane pixels defined by the xy trace and assigning a z-value according to the corresponding yz projection trace result; iii. searching for pixels on the yz projection trace that have the same y-value as a given xy pixel on the trace; iv. assigning an xy pixel with a single match the z-value of the matched yz pixel; v. comparing potential z-values with previously assigned z-values of adjacent xy pixels and assigning the xy pixel the z-value that is closest to those of adjacent pixels; and vi. repeating (steps i.-v.) on all traces.
8. The method according to claim 7, further comprising the step of looping through all pixels in the xy projection image and assigning pixels that have an observable intensity to the z-value of its nearest trace pixel.
9. The method according to claim 1, further comprising the step of segmenting the structures seen in 2D projections.
10. The method according to claim 9, further comprising the step of reconstructing the 3D structure map by: vii. pairing the xy projection of a segment with its yz projection; viii. looping through all xy plane pixels defined by the xy segment and assigning a z-value according to the corresponding yz projection segment result; ix. searching for pixels on the yz projection segment that have the same y-value as a given xy pixel on the segment; x. assigning an xy pixel with a single match the z-value of the matched yz pixel; xi. comparing potential z-values with previously assigned z-values of adjacent xy pixels, and assigning the xy pixel the z-value that is closest to those of adjacent pixels; and xii. repeating (steps vii.-xi.) on all segments.
11. The method according to claim 10, further comprising the step of looping through all pixels in the xy projection image and assigning pixels that have an observable intensity to the z-value of its nearest segment pixel.
12. A system for fast volumetric imaging of fluorescent tissue structure and activities, comprising: a. first and second lenses each orthogonally positioned relative to the fluorescent tissue structure for acquiring two orthogonal 2D projections of the sample volume; b. an excitation laser for performing a projection scan by sweeping through the volume at a predetermined rate; c. a focus tuning device to track the scan depth and keep the emission image in focus and generate an xy plane volume projection image; and d. a photomultiplier tube positioned behind the first lens.
Description
BRIEF DESCRIPTION OF THE DRAWINGS
[0030] The present invention will be more fully understood and appreciated by reading the following Detailed Description in conjunction with the accompanying drawings.
DETAILED DESCRIPTION OF EMBODIMENTS
[0039] The present disclosure describes fast volumetric imaging of fluorescent tissue structures and activities.
[0040] To speed up 3D imaging in a light sheet instrument, the present invention breaks away from the traditional plane-scanning approach and implements volumetric projection imaging instead. The two-lens framework provides the unique opportunity of acquiring two orthogonal 2D projections of the sample volume (see the accompanying drawings).
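Not part of the patent text, but as a numerical illustration of the two-orthogonal-projection idea: a sample volume can be modeled as a 3D intensity array, its xy projection as a sum along the depth axis, and its yz projection as a sum along x; merging the two projections then localizes a sparse emitter in 3D, as in the merging step of claim 1. The array layout (indexed [z, y, x]) and the function names are illustrative assumptions, sketched in Python with numpy:

```python
import numpy as np

def orthogonal_projections(volume):
    """Compute the two orthogonal 2D projections of a volume indexed [z, y, x].

    Assumed model (not from the patent): the camera-side xy image sums
    intensity over depth z; the PMT-side yz image sums intensity over x.
    """
    xy = volume.sum(axis=0)      # shape (ny, nx): depth collapsed
    yz = volume.sum(axis=2).T    # shape (ny, nz): x collapsed, rows indexed by y
    return xy, yz

def locate_emitter(xy, yz):
    """Locate a single bright emitter by merging the two projections:
    (y, x) comes from the xy image, z from the matching y-row of the yz image."""
    y, x = np.unravel_index(np.argmax(xy), xy.shape)
    z = int(np.argmax(yz[y]))    # search z only along the matched y value
    return z, y, x
```

For example, a volume with one bright voxel yields projections from which that voxel's (z, y, x) position is recovered exactly; with many overlapping emitters, ambiguity resolution such as the trace-based assignment described later is needed.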
[0041] The technique combines Bessel focus scanning with depth-tuned light sheet imaging. The combination gives the present method intrinsic 3D resolution. Compared to vTwINS, which generates two views at a small angle, the two projections in the present method are oriented at an ideal angle for 3D imaging. The resolution of the present method is expected to match that of traditional two-photon point-scanning imaging. Unlike vTwINS, in which the two views are entangled in a single image and require complex image processing, the present method generates two separate projection images that are easy to process. With simple 2D tracing and a MATLAB program to merge the xy and yz traced paths, it is possible to produce 3D traces of the dendrite branch seen in the pair of projections.
[0042] The dual-projection method collects emission from both lenses and therefore captures twice as many photons as existing methods. The extra photon efficiency puts the present method in a better position for fast volumetric imaging.
[0043] Volumetric projection imaging is faster than true 3D imaging, but it may have difficulty resolving different layers when the 3D structure is complex. To overcome this potential problem and expand imaging ability in complex signal networks, high-resolution 3D imaging can be combined with fast volumetric projection imaging. Tissue is labeled with two fluorescent emitting markers, one for cell structure and one for function. The high-resolution 3D imaging is performed first on the structure marker. The volumetric projection imaging is captured on both markers. Projection images of the structure marker assist in aligning the functional projection images with the high-resolution 3D images and in correcting any sample movement. Observed activities in the functional projection images are then cast onto the aligned 3D structure. Since in many cases activities are sparse, such an approach enables studying the function of structurally complex networks with projection imaging.
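The patent does not specify how the structure-marker projection images are aligned to the high-resolution 3D reference. As one illustrative assumption, a rigid in-plane shift caused by sample movement can be estimated by phase correlation, sketched here with numpy's FFT routines (the function name and sign convention are assumptions, not the patent's method):

```python
import numpy as np

def estimate_shift(reference, image):
    """Estimate the integer (dy, dx) circular shift that maps `reference`
    onto `image` via phase correlation of their 2D FFTs.

    Illustrative registration sketch; real data would need windowing and
    subpixel refinement, which are omitted here.
    """
    R = np.fft.fft2(reference)
    I = np.fft.fft2(image)
    cross = np.conj(R) * I
    cross /= np.abs(cross) + 1e-12           # keep phase only -> sharp peak
    corr = np.fft.ifft2(cross).real
    dy, dx = np.unravel_index(np.argmax(corr), corr.shape)
    # fold large positive indices back to negative shifts
    if dy > reference.shape[0] // 2:
        dy -= reference.shape[0]
    if dx > reference.shape[1] // 2:
        dx -= reference.shape[1]
    return dy, dx
```

Once the shift of the structure-marker projection relative to the 3D reference is known, the same correction can be applied to the simultaneously acquired functional projection images before casting activities onto the 3D structure.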
[0044] The computer program for assigning 3D structures imaged by two orthogonal projections comprises program instructions stored in a non-transitory memory of a computer and run on a processor. Before running the program, the user traces or segments the structures seen in the two 2D projections using existing image-processing software and saves the tracing or segmentation results. The program uses these results to reconstruct the 3D structure map according to the following method:
[0045] First, load in tracing or segmenting results.
[0046] Second, use a user defined list to pair the x-y projection of a trace or segment with its y-z projection.
[0047] Third, loop through all x-y plane pixels defined by the x-y trace or segment and assign a z (depth) value according to the corresponding y-z projection trace or segment result. The assignment is carried out as follows:
[0048] Fourth, at a given x-y pixel on the trace or the segment, search for pixels on the y-z projection trace or segment that have the same y value. These y-z pixels provide potential z-values for the x-y pixel.
[0049] If a single match is found, the x-y pixel is given the z-value of the matched y-z pixel.
[0050] If multiple matches are found, the program compares the potential z-values with the previously assigned z-values of adjacent x-y pixels and assigns the x-y pixel the z-value closest to those of its neighbors. This method relies on the trace or segment being continuous in space, so that the z-value difference between adjacent pixels is small.
[0051] Fifth, the processes described in the second through fourth steps are repeated on all traces or segments. This step generates trace or segment data fully mapped in 3D.
[0052] Last, the program loops through all pixels in the x-y projection image. Pixels that have observable intensity are assigned the z-value of their nearest trace (or segment) pixel. This approach is based on the fact that fine structures attached to a traced or segmented structure are roughly at the same depth as the main structure.
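The per-pixel z-assignment described in the second through fifth steps above can be sketched as follows. The data layout (an x-y trace as an ordered list of (x, y) pixels, a y-z trace as a list of (y, z) pixels) and the handling of unmatched pixels are assumptions for illustration; the patent describes a MATLAB program, while this sketch is in Python:

```python
def assign_depths(xy_trace, yz_trace):
    """Assign a z-value to each (x, y) pixel of an x-y projection trace
    using its paired y-z projection trace.

    For each x-y pixel, candidate z-values are the y-z trace pixels with
    the same y-value. A single candidate is taken directly; ambiguous
    matches are resolved by closeness to the previously assigned z,
    exploiting the spatial continuity of the trace. Pixels with no match
    receive None (an assumed convention, not specified in the patent).
    """
    result = []
    prev_z = None
    for x, y in xy_trace:
        candidates = [z for (yy, z) in yz_trace if yy == y]
        if not candidates:
            result.append((x, y, None))
            continue
        if len(candidates) == 1 or prev_z is None:
            z = candidates[0]
        else:
            z = min(candidates, key=lambda c: abs(c - prev_z))
        result.append((x, y, z))
        prev_z = z
    return result
```

For example, when a y-value appears twice in the y-z trace (two structures crossing at the same y), the pixel inherits the z closest to its already-assigned neighbor, as in the fourth step. Running the routine over all paired traces, then propagating each remaining bright x-y pixel to the z-value of its nearest traced pixel, reproduces the full mapping of the second through last steps.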
[0053] While various embodiments have been described and illustrated herein, those of ordinary skill in the art will readily envision a variety of other means and/or structures for performing the function and/or obtaining the results and/or one or more of the advantages described herein, and each of such variations and/or modifications is deemed to be within the scope of the embodiments described herein. More generally, those skilled in the art will readily appreciate that all parameters, dimensions, materials, and configurations described herein are meant to be exemplary and that the actual parameters, dimensions, materials, and/or configurations will depend upon the specific application or applications for which the teachings is/are used. Those skilled in the art will recognize or be able to ascertain using no more than routine experimentation, many equivalents to the specific embodiments described herein. It is, therefore, to be understood that the foregoing embodiments are presented by way of example only and that, within the scope of the appended claims and equivalents thereto, embodiments may be practiced otherwise than as specifically described and claimed. Embodiments of the present disclosure are directed to each individual feature, system, article, material, kit, and/or method described herein. In addition, any combination of two or more such features, systems, articles, materials, kits, and/or methods, if such features, systems, articles, materials, kits, and/or methods are not mutually inconsistent, is included within the scope of the present disclosure.
[0054] The above-described embodiments of the described subject matter can be implemented in any of numerous ways. For example, some embodiments may be implemented using hardware, software or a combination thereof. When any aspect of an embodiment is implemented at least in part in software, the software code can be executed on any suitable processor or collection of processors, whether provided in a single device or computer or distributed among multiple devices/computers.